Wednesday, December 11, 2013

Attributes, attributes, and attributes

A lot of thanks to the people that sent me some persistence files. It was a bit humbling that I got fewer than 10 replies, but those good people are highly appreciated! If you feel guilty now, then don't hesitate to still send me some files to my personal email!

Conclusion? Well, there is quite a variety out there: lots of diversity, lots of experimentation, and lots of struggling going on. Let me reflect on the most striking issue I noticed: metadata.

Persistence means your data is going to make a trip in a time capsule. Java was designed as an in-memory, in-process programming language, and the consequence is that you need to provide some extra information about how that trip should be made: what classes are persisted, where they are persisted, with what data types, under what name, etc. The most common approach is to use the javax.persistence annotations to mark up a persistent object.
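
For example, a minimal sketch of such a marked-up object (entity and column names invented):

   import javax.persistence.Entity;
   import javax.persistence.Id;
   import javax.persistence.Table;

   // The annotations carry the trip information: what is persisted
   // (the class), where (the table), and under what names (the fields).
   @Entity
   @Table(name = "BLOG_POST")
   public class BlogPost {
      @Id
      long id;
      String title;
   }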

For smaller projects this seems to work well, but I noticed that the larger projects went outside Java and escaped to text files. I have seen Hibernate XML mapping files, proprietary XML files, and one project that uses a Domain Specific Language (DSL) developed with the Eclipse Modeling Framework (EMF).

The advantages of annotations should be clear:

  1. Minimizes redundancy since they interact with the Java type information
  2. Can be safely refactored in all IDEs
  3. Can use small names because of Java's package scoping
  4. Type safe
However, one of the projects that used XML did not restrict itself to the persistence problem alone. The attributes of the persistent entities are not only used in Java to persist, they are also reflected as forms in the user interfaces and must be validated before they are stored. This metadata is used in the GUI (potentially in another language), the core code, and the database (SQL schema). For each domain attribute, there is quite a large number of properties that must be defined by the developer. Using a proprietary file format or a DSL obviously provides ample space to capture this information. Generators can then be used to create the annotated data objects, SQL Data Definition Language (DDL) files, HTML forms, Javascript validation, etc. This, of course, has the big disadvantage of requiring an extra step in the build.

I must admit that I find this model quite attractive, considering that most enterprise applications must manage a very large number of attributes in a large number of entities; having a single definition of the domain model can prevent a lot of redundancy. The surprise is of course that there is no dominant syntax for this information. The existing data description languages (ASN.1 or XML Schema anyone?) look utterly unusable for this purpose; these languages allow such complex data to be specified that generating Java, Javascript, HTML, etc. from them would be awfully complicated.

So what do you think? Just annotations on the Java entity objects (potentially with extra validation annotations that can be used by the HTML5 GUI), or a special domain object specification file?

Peter Kriens @pkriens

Monday, November 25, 2013

Problems with Persistence (begging)

I am still struggling with the OSGi persistence story. I am therefore doing some research on OSGi and persistence, and I still find the whole Java persistence story quite confusing and complex. Part of my problem is that I see lots of frameworks but it is quite hard to see code that really uses those frameworks. Virtually all tutorials and examples look highly contrived, seem to ignore issues like caching and security, and look rather lax with transactions.

I wonder if people reading this blog could share with me a typical production source or class file showing:
  • How entities are defined
  • The persistent use of the entity, i.e. the places where the SQL will be generated and where the PersistenceManager or EntityManager is used
  • How results are cached

A single source or class file per issue is best. Adding a small description of how you use persistence (Aries, JPA, JDO, JDBC, etc.), the primary issues you face, and your environment is highly appreciated.

I know from my own experience that there is often a feeling that your own code is not fit for showing to others, but please send me the raw unadulterated code; I need to see how it is today, not how you think it should be. Obviously I am not interested in what the code does or where it is used, so feel free to remove comments (if any!) and change names (or just send class files). I am just looking for a couple of hundred real-world samples to extract the patterns that are actually popular in our industry. Obviously I will not share that code with anyone and will treat it as fully confidential.

So in this case, do not ask what the OSGi can do for you, but for once, ask what you can do for the OSGi! ;-)

Please? Come on, it only takes 3 minutes. Send your 4 files to: Peter.Kriens@aQute.biz Thanks!

If I get some interesting results I promise to share the conclusions with you in this blog. Deal?

Peter Kriens @pkriens



Thursday, November 21, 2013

OSGi Services versus Extenders

In the OSGi community there are two patterns that are widely used but, judging by a recent Stackoverflow question, not so widely understood. So, an attempt to elucidate.

The best metaphor I can find for bundles, services, and extenders is our day-to-day society. In a society we interact with others to achieve our goals. In this interaction we often play a role in a scenario. For example, in a commercial transaction we have the seller and the buyer roles, and the contract that governs the transaction is the scenario. In the OSGi µservice model, a µservice is one of the actors in the scenario described by the service package. The µservice model is eminently suitable for this role playing since, unlike other service models, it is based from the ground up on dynamics, just like the real world. For example, the existence of a Bluetooth service is not a means to interact with any Bluetooth device, nope, each Bluetooth service signals that there is a Bluetooth device available in the area and provides the means to talk to that device. It is hard to overestimate the value of this tool. Though there is a place for registering your service in the Bundle's activator, OSGi becomes really powerful when you make this registration dynamic, fully taking dependencies into account, which is trivial to do with Declarative Services. Services are therefore a powerful domain design tool that solves untold ordering, boilerplate, and concurrency problems.
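
A minimal sketch of such a dynamic registration with the Declarative Services annotations (the MessageBoard interface is invented for illustration):

   import org.osgi.service.component.annotations.Component;
   import org.osgi.service.component.annotations.Reference;
   import org.osgi.service.log.LogService;

   interface MessageBoard { void post(String msg); }

   // DS registers this component as a MessageBoard service once a
   // LogService is available and withdraws it when the LogService
   // goes away; the ordering and concurrency chores are DS's problem.
   @Component
   public class MessageBoardImpl implements MessageBoard {
      LogService log;

      @Reference
      void setLogService(LogService log) { this.log = log; }

      public void post(String msg) {
         log.log(LogService.LOG_INFO, msg);
      }
   }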

In our society metaphor, bundles are the people. They control the behavior and choose what scenarios to play in, what services to consume, and what services to provide. Though µservices are a great tool for many scenarios we have to participate in, there are also many less interesting scenarios that are repetitive, boring, and not in our prime interest. An example: our family has two cars. In one car I can always open the door since it detects that I have a key; with the other car I cannot. Since I always forget that I actually have to get the key, insert it in the lock, and open the car, I dislike this stupid car. By detecting my presence through the key, the smart car saves me from looking foolish.

We can mimic this behavior in OSGi because we can find out what bundles are installed and get notifications when new ones are installed. Just as our smart car can detect the key in my pocket, so can a bundle detect specific cargo in other bundles and react accordingly. Just yesterday I discussed database migration in OSGi, a perfect example. A bundle that interacts with a database is written to expect a specific schema in the database it uses. When a new version comes along, the database needs to be updated to match this new schema. Instead of the domain bundle updating the schema itself, a chore, the bundle could contain a description of the new schema. A database schema extender bundle (for example based on Liquibase) could detect the presence of this schema description and, if necessary, adjust the database. Extender bundles can take information from an active bundle and ensure that certain chores are done on the bundle's behalf. An extender like Declarative Services is controlled from an XML file in your bundle and provides all the chores of setting up your components, handling dependencies, etc. In general, any information you can provide declaratively (i.e. in resources in your bundle) should use extenders so that the bundle can focus on its domain work.
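
The heart of such an extender is only a few lines with a BundleTracker (a sketch; the changelog location is an invented convention, not a standard):

   import java.net.URL;
   import org.osgi.framework.Bundle;
   import org.osgi.framework.BundleContext;
   import org.osgi.framework.BundleEvent;
   import org.osgi.util.tracker.BundleTracker;

   // Tracks active bundles and reacts when one carries a schema
   // description; bundles without the entry are simply ignored.
   public class SchemaExtender extends BundleTracker<URL> {

      public SchemaExtender(BundleContext context) {
         super(context, Bundle.ACTIVE, null);
      }

      @Override
      public URL addingBundle(Bundle bundle, BundleEvent event) {
         URL changelog = bundle.getEntry("OSGI-INF/liquibase/changelog.xml");
         if (changelog == null)
            return null; // not an extendee for us
         // ... apply the database migration on the bundle's behalf ...
         return changelog;
      }
   }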

Extenders react to the life cycle of the bundles that provide the parameters. This makes them a very powerful way to deliver information as well. Paremus' packager packs native executables inside bundles and manages the life cycle of the native application according to the extendee bundle's life cycle. That is, starting the bundle will start the application and stopping it will stop it. Providing a single model for deploying code, static content, and native executables is surprisingly powerful. I have a bundle that can map the static directory in a bundle to a web server, ensuring proper cache control, range requests, and version control. I even use this model to deploy configuration data. You should see these extenders as humble servants that are ready for you and provide lots of value as long as you carry their key.

So when should you use the service pattern and when the extender pattern? 

The service model is usually about the core domain stuff your bundle is supposed to do; it is generally right where the primary focus of your bundle is, where your code should be. The extender model is about moving chores and boilerplate out of the domain bundles into specialized bundles. And the trick of it all is to understand how the dynamics of services and bundles unexpectedly make things easier.

Peter Kriens @pkriens

P.S. The book Antifragile by Taleb describes how unreliable dynamic components can make for extremely reliable systems.

Wednesday, November 6, 2013

The Transaction Composability Problem

Entering the enterprise world from an embedded background feels a bit like Alice must have felt when she entered Wonderland. Sometimes you feel very big, other times you feel awfully small. Worse, you often do not know your relative size in that new and wondrous world. One of these areas for me is persistence and its associated transaction model.
The reason I have to understand this area better is that for the enRoute project we will have to provide a persistence model that makes sense in a world built out of reusable components. If these components are closely tied to specific database, JPA, and transaction manager implementations then reuse will be minimal, forfeiting the purpose. There are many issues to solve, but analyzing this landscape one thing keeps popping up: the transaction composability problem. A problem quite severe in a reusable component model like OSGi.
Transactions provide the Atomic, Consistent, Isolated, and Durable (ACID) properties to a group of operations. This grouping is in general tied to a thread: a method starts a transaction and subsequent calls on that thread are part of the grouping until the transaction is either committed or rolled back. The easy solution is to start a transaction at the edge of the system (RMI, servlet, queue manager, etc.) and rollback/commit when the call into the application code returns. However, since transactions are related to locks in the database (and other resource managers), it is crucial to minimize the number of grouped operations to increase throughput and minimize deadlocks. Generating HTML inside a transaction can seriously reduce throughput. Therefore, application developers need to handle transactions in their code.
One of the issues these developers must handle is how to treat an existing transaction. Should they join it or suspend it? Is being called outside a transaction allowed? Since methods can be called in many different orders and from many different places, it is very hard to make assumptions about the current transaction state in a method. For example, if methods foo() and bar() each begin a transaction then foo() cannot call bar(), nor vice versa.
The corresponding complexity and associated boilerplate code resulted in declarative transactions. Annotations provide the suspend and join strategy, and something outside the application takes care of the details. EJB and Spring containers provide this functionality by proxying the instances or weaving the classes.
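
For illustration, this is roughly what the declarative style looks like with the JTA 1.2 annotations (the service class and methods are invented; EJB and Spring have equivalent annotations):

   import javax.transaction.Transactional;
   import javax.transaction.Transactional.TxType;

   // The container (or weaving extender) wraps these methods with the
   // declared suspend/join strategy; no transaction boilerplate needed.
   public class OrderService {

      @Transactional(TxType.REQUIRED) // join the caller's transaction, or start one
      public void foo() {
         bar(); // safe: bar() suspends foo()'s transaction and runs its own
      }

      @Transactional(TxType.REQUIRES_NEW) // always suspend the caller, start fresh
      public void bar() {
         // ...
      }
   }
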
Back to OSGi. Since transactions will cross cut through all components we need to define a single transaction model that can be shared by all, just like the service model.
Obviously, we will need to register a Transaction Manager service so the different components can at least manage transactions together. However, do we need to prescribe a declarative model, since this seems to be best practice? Do we then pick EJB or Spring annotations? Support both? Or make something new? Or are we moot on this and allow others to provide this support with extenders? This would be similar to the service model: Blueprint, iPOJO, Dependency Manager, and DS are all in business to make life easier for code using services; a similar model could be followed for transactions.
I am very interested in hearing feedback on this.

Tuesday, October 22, 2013

OSGi BOF at OSGi Community Event

The OSGi Alliance will be hosting an OSGi BOF at the OSGi Community Event next week.  The BOF is scheduled for 19.00hrs on Weds 30 Oct in the Silchersaal room at the conference which is located at Forum am Schlosspark in Ludwigsburg.

Peter Kriens will provide a short introduction to "OSGi en Route" a new initiative from the Alliance which is intended to make it easier for developers to get started with using OSGi.  Peter will be keen to answer any questions and get your feedback and suggestions on this initiative.

As with any BOF the goal is for everyone to participate, so if you have any burning questions, problems, or 'itches you need to scratch' then please join us for the BOF to share them with us.  These can be anything you want to discuss, from a specific problem you are facing with OSGi development, through to more general community or OSGi futures questions.

We had a great BOF at the Community Event last year with fantastic participation and we are looking forward to repeating it again this year.

If you unfortunately aren't able to join us but have something you would like raised feel free to submit a comment to this post or email me and we will try and cover this during the session and will feed back to you.


Monday, October 21, 2013

bndtools 2.2.2!

There is a new release out for bndtools! What is it? Well, bndtools is an Eclipse plugin that provides some very nice tools to develop OSGi bundles; it is also used by the OSGi Alliance itself to build the RIs and run the test suites. Since it is based on bndlib, it provides lots of functions that work well together with the maven bundle plugin, ant, sbt, gradle, and many other build environments. The bndtools plugin provides a friendly environment to develop bundles that can then also be built automatically in a continuous integration setup, using one of the aforementioned plugins.

The 2.2.0 release was a major release in which we added baselining. This 2.2.2 release brings many smaller features but definitely improves a lot of things.

The easiest way to get started (or update) with this Eclipse plugin is through the Eclipse Marketplace (In the rather unexpected Help menu). You can also install it manually from its update site: http://bndtools-updates.s3.amazonaws.com

Enjoy, and let the team know how they are doing ...

   Peter Kriens @pkriens

Are you coming to Ludwigsburg next week?

The OSGi Community Event 2013 is taking place next week in Ludwigsburg, Germany between Tues 29 and Thurs 31 Oct.

We have a schedule that is overflowing with OSGi goodness. The conference kicks off with a "Mastering OSGi with Ease" tutorial on Tuesday morning. From Tuesday afternoon to the close on Thursday there are 26 OSGi talks ranging from embedded, to enterprise, to cloud, and covering topics from development hints, tips, and best practices, to case studies on how OSGi is being used in the world. In addition, Thursday morning sees an OSGi keynote from Ian Robinson of IBM called Travelling Light for the Long Haul.

The full schedule is available online.

There will also be an OSGi BOF on one of the evenings offering the opportunity to find out more about some of the current OSGi Alliance initiatives and to discuss OSGi opportunities and challenges that you see.  Further details on the BOF will be announced soon.

The Community Event is co-located with EclipseCon Europe and there is still time to register to secure your place.  All attendees also get full access to the EclipseCon Europe conference. Details on registration, location and hotels are available on the Community Event homepage.

There are of course plenty of social activities and opportunities to network with your friends and peers, and to eat and drink plenty of German food and beer! Tuesday evening sees a Stammtisch and Cirque d'Eclipse, and a reception will be held on Wednesday evening.

If you are joining us for the Community Event you may also be interested in the popular full-day Code Camp that the OSGi Users' Forum Germany is running the day before the Community Event (Monday 28 Oct) - Building OSGi based HTML5 Web Applications with Peter Kriens and Neil Bartlett. Full details of the agenda and how to register are available here.

If you have any questions please email the OSGi Community Event Program Committee.

We hope you can join us and look forward to seeing you next week.

Tuesday, October 8, 2013

Breaking Bad

In my quest to get OSGi and JPA working together purely through services I ran head-on into an API that broke my code badly: java(x).sql. After making things work on Java 6, BJ Hargrave (IBM) set up a continuous integration build on CloudBees. Since Java 6 is end of life, he rightly picked the Java 7 JDK. Unfortunately, this broke my program since I was proxying the DataSource and Connection classes from javax.sql and java.sql respectively. On inspection, it turned out that Java 7 had added a number of methods to javax.sql.DataSource and java.sql.Connection, effectively killing every driver and delegating proxy in existence since they did not implement these methods in their implementation classes.

The problem is not breaking backward compatibility. JDBC moves on and if you want those features drivers will have to upgrade, fair enough. The problem is that decision to upgrade your database drivers is now tightly connected with the choice of the VM. If you want to run on Java 7, JDBC 4.1 is forced upon you since every previous driver will fail to run on Java 7.

It is illustrative to look at the hoops a database vendor has to jump through to allow its drivers to run on both Java 6 and Java 7. It must compile on Java 7 to see the new methods but generate Java 1.6 byte codes to make the code run on Java 6. However, the vendor must be extremely careful not to pick up any methods or classes that are only available on Java 7. An awful illustration of how painful a type-safe language becomes when you do it wrong.
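
Interestingly, a delegating proxy (unlike a driver's concrete implementation classes) can sidestep the problem altogether by forwarding reflectively; a minimal sketch:

   import java.lang.reflect.InvocationHandler;
   import java.lang.reflect.Method;
   import java.lang.reflect.Proxy;
   import java.sql.Connection;

   // Forwards every method reflectively, so it keeps working when a
   // new JDK adds methods to java.sql.Connection, unlike a hand
   // written delegating class that must implement them all to compile.
   public class ConnectionProxy {
      public static Connection wrap(final Connection delegate) {
         return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] { Connection.class },
            new InvocationHandler() {
               public Object invoke(Object proxy, Method m, Object[] args)
                throws Throwable {
                  return m.invoke(delegate, args);
               }
            });
      }
   }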

In OSGi this would all be no problem*. There would be a semantically versioned javax.sql package that contains the SQL API. Consumers of this package (in general you) would get a lot of backward compatibility, and providers of this package (the database vendors) would have to provide new releases for each new API. Since at runtime multiple releases can coexist in the same VM, the choice of VM would not unnecessarily constrain the choices of a database vendor. It is kind of odd that Oracle, a database vendor, makes such a mess of the API to their databases ...

The java(x).sql disarray is a fine illustration of how aggregation creates constraints between its constituents. This insight is at the heart of the OSGi package dependency model. The siren song of one huge library that contains everything one could ever need (and much more) should be resisted in favor of modularity since, over the long run, super-aggregation creates more problems than it solves.

   Peter Kriens (@pkriens)

* If it was not for the nasty detail that javax.sql is badly intertwined with java.sql and java.sql can only be loaded from the VM's boot classpath because it is a java package. Sigh.

Monday, September 30, 2013

Baselining, Semantic Versioning Made Easy

Versioning is one of those things where everybody has a general idea but few really understand it well, resulting in many different and sometimes bizarre practices. The semantic versioning movement put the version syntax on a more solid footing, creating a version Domain Specific Language (DSL) to signal backward compatibility. It uses a 3-part version, where the first part (MAJOR) signals breaking changes, the second part (MINOR) signals backward compatible changes, and the third part (MICRO/PATCH) signals bug fixes not visible in the public API. For example, an artifact with version 1.2.3 has the same API as 1.2.4, will be backward compatible with 1.3.0, and will break with 2.0.0. By using semantic versioning you pledge that in the future you will use this DSL to signal backward compatibility so that tools can point out breakage or select compatible components. Semantic versions are a big step in software engineering.

So any decent software engineer will agree that semantic versioning is good; being able to watch Maven Central close up, I can also see that it has become widely used over the past 2 years. That said, how much work is it for a developer to maintain these versions? Developers are rightly lazy people, and versions are quite error prone and complicated to maintain without tool support. To minimize the work, I've used the OSGi semantic version rules extensively in bnd. If you compile against an API then you are bound to a range of versions. For example, if you compile against an API with version 1.2.3 then bnd will calculate the corresponding import range: [1.2.3,2). (Actually it is a bit more subtle, see the OSGi semantic version paper.)
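
In the manifest this materializes as a version range on the import; for example (package name invented):

   Import-Package: com.example.api;version="[1.2.3,2)"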

Though bnd had the tools to maintain your semantic versions and therefore pledge how things would be updated, it never checked whether those pledges were actually kept. If you forgot to change a version after a code change then all bets were off. Since humans are really bad at versions and developers rarely know all the compatibility rules, there were many errors.

Meet baselining

When you enable baselining, bnd will baseline the new bundle against the last released non-snapshot bundle, a.k.a. the baseline. That is, it compares the public exported API of the new bundle with the baseline. If there are any changes it will use the OSGi semantic version rules to calculate the minimum new version. If the new bundle has a lower version, a number of errors are generated.
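
Enabling it is a single instruction in your bnd.bnd (a minimal sketch, with a wildcard to baseline everything):

bnd.bnd:
  -baseline: *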

The first error is on the place where the version is defined. The other errors are on the API changes that caused the version change. Since bndtools runs bnd continuously, you get the uncanny effect that adding a method to an interface suddenly generates errors in different places, pointing out that you are trying to make an incompatible change. Quick fixes are then available to bump the version or to remove the offending API change. Detecting errors early is the hallmark of Eclipse and a great boon to productivity. We all know how much time it saves when you find these bugs while they are being made.

Baselining teaches the actual developers a lot about backward compatibility. After enabling baselining on bnd this weekend I was actually shocked to find that some of the (expected to be) tiny changes I had made in the three weeks since we froze 2.2 were not as compatible as I thought. (This is another way of saying I had not bumped the appropriate versions.) They were not just bug fixes but actually had API repercussions I had not foreseen. Humbling.

Peter Kriens @pkriens

Tuesday, September 24, 2013

The Magic of Modularity

Anybody that has taken some computer classes over the last 30 years has learned about modularity and should know it is good. However, describing why it is good is always a bit harder. Maybe it is not that hard to understand the benefits of encapsulation, because we have all been in situations where we could not change something because it was exposed. However, for me the magic actually appears during design, when you pick the modules and decide about their responsibilities. This is reflected in the seminal paper of David Parnas [1971] called "On the criteria to be used in decomposing systems into modules".

Last week I was designing a function for bnd and ran into an example that illustrates very nicely why the decomposition is so important. The problem was the communication with remote repositories. Obviously, one uses a URL for this purpose since it is extremely well supported in Java, it supports many protocols, and in OSGi it can easily be extended by registering a Stream Handler service. However, security and other details often require some pre-processing of the connection request. For example, basic HTTP authentication requires a request header with the encoded user id and password. Anybody that has ever touched security standards knows this is a vast area that cannot be covered out of the box; it requires a plugin model. This was the reason we already had a very convenient URLConnector interface in bnd that could translate a URL into an Input Stream:

   public interface URLConnector {
      InputStream connect( URL url) throws Exception;
   }

Even more convenient, there were already several implementations: one that disabled HTTPS certificate verification and one for basic authentication. Always so nice when you find you can reuse something.

However, after starting to use this abstraction I found that I was repeating a lot of code in the different URLConnector implementations. I first solved this problem with a base class, but then it required extra parameters to select which of the options should be used. And the basic design did not support output (did you know you can even send a mail with just a URL?). So after some struggling I decided to change the design and leverage the URLConnection class instead. Though the common use for a URL is to call openStream(), you can actually first get a URLConnection, parameterize it, and then actually open the connection. So instead of a URLConnector interface I devised a URLConnectionHandler interface. This interface has a single method:

   public interface URLConnectionHandler {
      void handle( URLConnection connection) throws Exception;
   }

Since this interface specifies a transformation it can be called multiple times, unlike the URLConnector interface. This enabled me to write a number of tiny adapters that only did one thing and were therefore much simpler and actually more powerful. The user can now specify a number of URLConnectionHandlers for a matching URL. For example, Basic Authentication should in general not be used without HTTPS since it shows the user id and password in clear text. Instead of building this verification into the Basic Authentication plugin, it can now just be selected by the user, so that for another URL it can be used in a different combination.
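
Such an adapter can indeed be tiny. A sketch of what a Basic Authentication handler could look like (the class is illustrative, not bnd's actual implementation; Base64 encoding here via JAXB's DatatypeConverter):

   import java.net.URLConnection;
   import javax.xml.bind.DatatypeConverter;

   // Does exactly one thing: add a Basic Authentication request header.
   // Pairing it with an HTTPS check is left to the user's combination.
   public class BasicAuthentication implements URLConnectionHandler {
      private final String user, password;

      public BasicAuthentication(String user, String password) {
         this.user = user;
         this.password = password;
      }

      public void handle(URLConnection connection) throws Exception {
         String credentials = user + ":" + password;
         String encoded = DatatypeConverter.printBase64Binary(
            credentials.getBytes("UTF-8"));
         connection.setRequestProperty("Authorization", "Basic " + encoded);
      }
   }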

After porting the existing functionality of the URLConnector implementations I ended up with significantly less code and much more power, only because the structure was different. That is what I call the magic of modularity.

Peter Kriens @pkriens

P.S. Registered for the OSGi Community Event in Ludwigsburg? I will be giving a talk about Developing Web Apps with OSGi. For Germans, there is also an OSGi bootcamp from the OSGi Users' Forum Germany. Advance registration ends Oct 1.

Wednesday, September 18, 2013

OSGi's Popularity in Numbers

Just some interesting statistics that I found by scanning Maven Central. I've got all the metadata in a Mongo database, so it is easy to analyze. The current database consists of over 426 thousand JARs organized in more than 46 thousand projects. I have been scanning Maven Central since last year and these numbers seem to have almost doubled, which is a scary thought if this continues at that exponential rate (especially for Sonatype, who seem to pay the bandwidth and storage costs of Maven Central).

Almost 10% of the 46,000 projects in Maven Central today are OSGi bundles. The most surprising part for me was that the official OSGi Core JAR actually comes in at #36, with more than 24 thousand transitively inbound projects. It is more popular than dom4j (#43) or Apache commons collections (#45).

So what does this ranking number mean? It uses an algorithm similar to Google PageRank: a project is more important when it has more inbound Maven dependencies (compile and runtime scope), based on the latest revision of a project. A staggering more than half of the 46 thousand projects have a transitive dependency on the OSGi JAR. Staggering because it probably says more about the infectious nature of the Maven dependency model than about OSGi's popularity.

There are over 50 projects that contain the package org.osgi.framework. In the overall ranking, Eclipse Equinox comes in at #78 and Apache Felix at #216. That said, Apache Felix also provides a compile-only JAR with the OSGi packages that comes in at #100. Looking at this list it turns out that the OSGi JARs appear in several incarnations; I even found the release 3 JARs: 300k for core and compendium combined. The compendium, a JAR that is only used by projects that really are OSGi, comes in at #50.

The numbers look very good for OSGi and I think it indicates that we will see more and more projects providing OSGi metadata. As David Bosschaert wrote earlier this year, if you need help adding this metadata then let us know.

So which project is #1? I know you've been dying to ask. Well, the top 5 is:


  1. org.hamcrest : hamcrest-core
  2. junit : junit
  3. javax.activation : activation
  4. javax.mail : mail
  5. org.apache.geronimo.genesis.config : logging-config

Peter Kriens @pkriens



Friday, September 13, 2013

Babysteps, the RFP for the Application Framework

The first official step has been taken for the OSGi Application Framework! In the past weeks I've followed the OSGi process and written a Request for Proposal (RFP). Last week we discussed it in lovely southern England at IBM's Hursley premises. Since the OSGi Alliance recently made the specification process fully open, this RFP is publicly available.

At the combined CPEG/EEG/REG meeting yesterday I spent almost 4 hours mostly talking about this. I first demonstrated the system I developed last year and then segued into lessons learned. Since this was my 'sabbatical' I could do something that is a lot harder when you work for a company: I developed this system from the ground up to be a no-compromise µservice based system. This was fun and proved that µservices work as advertised for real-life applications. However, this work also made me aware how hard it is to find the right components for your system. Though popular open source projects have adopted the OSGi headers (thank you! You know who you are.), few projects actually support the µservice model as it was intended. I was therefore forced to develop a lot of base components that should just have been widely available. And even if those components had been there, there is actually too little information about how to architect an OSGi system.

After I had bored everybody for 2.5 hours we went through the RFP. The RFP is very ambitious (and quite large for an RFP). It outlines the scope, which is much, much bigger than we can do in a short time. We will actually try to provide developers with a complete solution, integrating many best practices. Ambitious, and it will take time, but it is supposed to guide us in the coming years as we work on this project.

I'd love to get feedback, and since this RFP is public you can actually read it while it progresses through the organization. So please read it and let me know. You can either react on the blog, mail me, or create issues in the OSGi Bugzilla.

Peter Kriens



Monday, September 2, 2013

Why has OSGi a dynamic model?

OSGi was derived from Sun's Java Embedded Server, a product that had dynamics at its heart. It consisted of a dynamic µservice model with updatable modules. So why did we adopt this seemingly complex model? Well, we could, and at the time we were heavily frustrated with Windows 98, which seemed to require a reboot for every third mouse move. It seemed utterly stupid to build a static system that required a reboot to update a module or a configuration.

What I had not realized at the time is what a powerful software primitive the µservice model actually was. Once you accept that a service can come and go, you need to make it easy to handle this. So we did, with Declarative Services (DS). Once you have a primitive that models dynamic behavior, you start to see how dynamic the real world actually is. You also notice that highly complex middleware is built to shield the application developer from the facts of life, because they are not deemed clever enough to handle dynamics.

Bill Joy once told us (at Ericsson Research) a very inspiring story about the development of the Internet that opened my eyes: how you can get much better quality, for a much lower price, by just accepting failure. Initially, he told us, the Internet was developed with routers that were not supposed to lose a packet, ever. Despite these expensive and highly complex routers, the desired quality of the network was not achieved because there were still too many failure modes. The key insight was to accept that it is ok for routers to fail. This brought us TCP, the protocol that provides a reliable connection over an unreliable, much simpler, underlying network.

Once you accept that µservices are frail, you must handle their frailty in your code. If you have DS, this is little to no work for a component; DS acts in a similar vein for services as TCP does for packets. Systems built out of such resilient components are (much) simpler and thus more reliable. Read Antifragile by Taleb if you want to see how nature uses this model pervasively.

Once you accept µservices as a primitive, they can be used in an amazing number of use cases. In its most basic form a µservice can just be a service abstracting a platform function, for example logging, that is not likely to go away. It can represent the availability of something, e.g. a Bluetooth device in the neighborhood (of which you can have many). It can represent a configured database connection, a servlet, etc. And the cherry on top is of course that you can now remote a service, since the middleware can reliably signal failures, voiding several of the arguments in the Fallacies of Distributed Computing.

When it is easy to work with these dynamics, you start to see more and more use cases. After wading through a very popular open source project last week, I noticed myriad places where µservices could have saved tons of code and would have added functionality. Virtually all software I write today consists of a sometimes small and sometimes sizable module, but invariably a module that provides a single service and depends on a handful of services.

So it is cool to update a module on the fly. However, I find it much cooler how the outside world can change while your system adapts. While I am developing, days can pass without reboots, updating components and configurations all the time. Not only is this a wonderfully fluid way to develop, it also ensures your software becomes highly resilient.

Therefore, for me the real innovation of OSGi is the µservices model and paradoxically accepting their low quality of service.

Monday, August 26, 2013

Dear Prudence: Can't we achieve the modularity through normal jars?

Dear Prudence, I've some doubts related to OSGi. I am new to the OSGi framework. I was going through the sites and read about the OSGi framework. Frankly speaking I did not understand anything. Following are my doubts:
  • OSGi is supposed to provide modularity. Can't we achieve that modularity through normal jars?
  • What does it mean that OSGi has a dynamic component model?
  • Bundles can be installed, started, stopped, updated, etc. Why do we want to install the bundles? Why can't we access them directly like we access other normal jars?
I am totally confused. Can somebody answer me? If possible, give some examples also?
Confused

Dear Confused,
Your question first puzzled me a bit since there is so much documentation on the Internet today and there are plenty of books that take you from minute detail to broad overview, not to mention the hundreds of 'hello world' tutorial blogs. Then it dawned on me that many of these tutorials seem to start with explaining why the author felt compelled to write yet another tutorial, because OSGi was so much easier and more powerful than the other 99 blogs that were read before OSGi was understood ... Maybe there is something in OSGi that makes it really hard to understand before you know it.

I guess everybody has a bubble of knowledge that makes it hard to learn/understand anything outside that bubble. I know this first hand: last year I really learned Javascript and found myself balking at seemingly bizarre and complex patterns until they became obvious. Your question seems to indicate that your knowledge bubble does not intersect with the bubbles of the people advocating OSGi. So let's design a module system based on normal JARs.

I guess we should start with defining what a module is and why we need it. A software module is characterized by having a public API that provides access to a private implementation. By separating the API from the implementation we can simplify the usage of our module, since an API is conceptually smaller than the API plus implementation and therefore easier to understand. However, the greatest benefit of modules comes when we have to release a new revision. Since we know that no other module can depend on private implementation code, we are free to change the private code at will. Modules restrict changes from rippling through the system, the same way firewalls restrict fires.

Let's make a framework that can use a JAR as a module. Best practice in our industry is to make code private by default, that is, no other module can access the inside. This is the standard Java default: without the public keyword, fields, methods, and classes revert to being only locally accessible.

However, if nobody outside the JAR can see anything of the inside then this code can never be called. Like Java, we could look for a class with a public static main(String args[]) method in this module to start the module. Since we do not want to search all classes in a module to find this main class (which also means we could end up finding multiple), we need a way to designate the JAR's main class. Such a mechanism is already defined by Java in the JAR specification: the Main-Class header in the JAR's manifest (a text file with information about the JAR). So we could call such a designated main method in each module to start the module; it can then run its private code. However, the main method does not allow us to stop the module. So let's create a new header for this purpose and call it Module-Activator. The class named in the Module-Activator header must then implement the ModuleActivator interface. This interface has a start and stop method, allowing the framework to start and stop each module.
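
In code, the thought experiment's interface would be no more than this (in OSGi, this role is played by org.osgi.framework.BundleActivator):

   public interface ModuleActivator {
      void start() throws Exception;
      void stop() throws Exception;
   }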

If the private code executes, it will likely need other classes that are not in the JAR. Our framework could search the other modules for such a referred class if we knew which part of a module was private and which part was public. Since Java already has a namespace/accessibility/encapsulation hierarchy, we should try to stay close to these concepts. This Java hierarchy is field/method, class, package, and the package is therefore the best candidate to designate private or public. Since there is currently no Java keyword defined to make a package private or public, we could use annotations on a package. However, annotations mean that we need to parse the JAR before we can run it, which is in general a really bad idea for performance and other reasons. Since we already defined a header for activation, why not define another header: Module-Export? This header would enumerate the packages in the JAR that provide the public API.
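
Our module's manifest would now look something like this (class and package names invented):

   Manifest-Version: 1.0
   Module-Activator: com.example.impl.Activator
   Module-Export: com.example.api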

This minimalistic module system (only two headers) is very attractive but there is a catch that becomes apparent when you build larger systems.

What happens when you have multiple modules in your framework that both provide a given class but their revisions differ? The standard Java approach is to do a linear search of the modules, and the first module that declares a class wins. Obviously you should never design a system that has multiple revisions of the same class. However, larger systems tend to run into this problem because dependencies are transitive. JARs depend on other JARs, which depend on further JARs, ad nauseam. If, for example, many of these JARs depend on Log4j, it is easy to see that not all of them will use the same version of Log4j. For a simple library like Log4j that is generally backward compatible you want the latest revision, but for other libraries there is no guarantee that revisions are backward compatible.

Basically ignoring this erroneous situation like Java (and Maven) does is not very Java-ish. We can't have a system that fails in mysterious ways long after it was started; in Java we generally like to see our errors as early as possible. For example, the Java compiler is really good at telling you about name clashes, so why should we settle for less on the class path?

Since we already export the packages that are shared with the Module-Export manifest header, why not also specify the imported packages in the manifest, with a Module-Import header? If the import and export headers also define a version for the package then the framework can check a priori whether the modules are compatible with each other before any module is started, giving us an early error when things are not compatible. We could do even better: we could also make sure that each module can only see its required version of a package, allowing multiple revisions of the same module in one system (this is generally considered the holy grail against JAR hell).
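
With versions added, the Log4j example could be written like this (the syntax is, of course, our invented one; OSGi spells these Export-Package and Import-Package):

   Module-Export: com.example.api;version=1.2.0
   Module-Import: org.apache.log4j;version=[1.2,2)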

So dear Confused, we've just designed a simple module system based on plain JARs and three manifest headers to describe the module. Fortunately we do not have to struggle to mature this simple design for lots and lots of very subtle and complex use cases, because such a module system already exists: it's actually called OSGi.

Let me answer your dynamics questions next week,

Peter Kriens
@pkriens

Wednesday, August 21, 2013

The Perfect OSGi Persistence Model?

After having such good experiences with MongoDB last year it is a tad frustrating to have to dive into the messy and fuzzy Java persistence world. Ok, I did miss transactions in MongoDB a lot, but for the rest it was a dream. However, there are good cases to be made for relational databases, not least because of their popularity and therefore widespread support. What I found so far is that virtually no implementations are as easy to use as the OSGi specifications imply. I've now got a configuration that works, but it is a far cry from the pick-and-choose model that OSGi promises. I've created a small blog application that consists of a Web based GUI and a Blog Manager service. The Blog Manager service is then implemented in multiple ways:
  • Dummy database
  • JDBC based on the OSGi DataSourceFactory
  • JPA
  • JDO
The intention is to then have configurations of different implementations for each of the persistence standards. So far I have the dummy version running as well as an OpenJPA/H2 based one. Though H2 worked out of the box, OpenJPA was much harder; I could actually only get it to work by using a number of Aries bundles. Another struggle was to get the Transaction Manager working. After trying out different versions I selected the Jonas Transaction Manager (JOTM), but even this transaction manager, designed for OSGi, required glue code.

The whole purpose of specifications is that you can pick and choose and not have to spend time writing silly glue code. However, today the implementations are clearly not properly supporting the OSGi specifications, even though in most cases latent support is present. I also see that people are struggling with implementations, often doing things in much more complicated ways than required.

So what is the ideal OSGi model? The ideal OSGi model is that an application depends on an Entity Manager service. This Entity Manager is configured by the Configuration Admin service and uses an OSGi Data Source Factory service from the registry as the JDBC driver. If a Transaction Manager service is registered, then this manager must be used by the JPA and database implementations. Just selecting different bundles should allow you to experiment with different configurations. Such a model is highly decoupled and allows for a lot of flexibility.
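
In code, a component could then look like this sketch (the osgi.unit.name service property comes from the OSGi JPA specification; the rest of the names are invented):

   import javax.persistence.EntityManagerFactory;
   import org.osgi.service.component.annotations.Component;
   import org.osgi.service.component.annotations.Reference;

   // Depends only on services: the persistence unit, its database, and
   // its transaction manager are all chosen by deployment, not by code.
   @Component
   public class BlogManagerImpl {
      EntityManagerFactory emf;

      @Reference(target = "(osgi.unit.name=blog)")
      void setEntityManagerFactory(EntityManagerFactory emf) {
         this.emf = emf;
      }
   }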

We need a community effort to fulfill the persistence promise of OSGi: plug & play. So, to move this forward: if you have a persistence configuration using JTA, JDBC, JPA, or JDO that works well under OSGi, please send this configuration to me or point me to a public project. If you're a committer (open or closed source) on a JTA, JPA, JDO, or JDBC implementation and want to work together to make your project work out of the box in OSGi, then send me a mail and I will see how I can help. I will report through this blog and twitter.

Peter Kriens (@pkriens)

Thursday, August 8, 2013

OSGi RFPs and RFCs now accessible live to everyone

The bulk of the work done in the OSGi Alliance is around the creation of specifications. There are the OSGi Core specifications, which define the Framework itself; the Residential specifications, which focus on the residential and embedded context; and the Enterprise specifications, which define service APIs and programming models for enterprise systems. Every OSGi specification is developed in an RFC document in one of the OSGi expert groups: CPEG, REG or EEG. Before an RFC is developed, however, the requirements are gathered in an RFP, which is also discussed in one of the EGs.
It is interesting that there is a lot of cross-pollination happening. For example, work done in the EEG is often applicable to the residential context as well, and vice-versa. And the nice thing about the work done in the EGs is that it's really all quite democratic and consensus oriented. No single company or organization dominates the decisions.

One thing that was really missing here was the general visibility of all the work happening on the RFPs and RFCs. The OSGi Alliance published Early Access Drafts at certain points in time, but this provided no live visibility on how the specifications were taking shape.

This has now been changed. From now on technical work done by the OSGi Alliance on RFPs and RFCs is visible live in the OSGi design repository at Github: https://github.com/osgi/design
This is great news for people who want to implement a spec that is still in development, for people who want to follow the specification process in general, or simply for people who want to comment on or otherwise reference an active OSGi RFP or RFC!
And if you have any feedback, this can be provided through the usual feedback channels: the OSGi Bugzilla system.

For more background see also the press release.

And to get an idea of what OSGi is currently working on, here's the current list of active documents as available via Github (click on the link to download):

Tuesday, August 6, 2013

OSGi Contracts (wonkish)

Let's talk about versions ... again. Though OSGi has a very elegant package version model, there are still many that think it is too much work. They do not want to be bothered by the niceties of semantic versions and just want to use, let's say, Servlet 3.0. For those people (seemingly not interested in minimizing dependencies) the OSGi Alliance came up with contracts in OSGi Core, Release 5.0.0. A contract allows you to:
  1. Declare that a bundle provides the API of the given specification
  2. Require that the API comes from a bundle that made the declaration
This very common pattern is called the Capability/Requirement (C/R) model in OSGi; it underlies all of its dependency concepts like Import/Export Package and others, and it forms the foundation of the OSGi Bundle Repository. If you ever want to know what is happening deep down inside a framework, then look at the Wiring API and you will see the requirements and capabilities in their most fundamental form.
Capabilities declare a set of properties that describe something that a bundle can provide. A requirement in a bundle has a filter that must match a capability before this bundle can be resolved. To prevent requirements matching completely unrelated capabilities, both must be defined in the same namespace, where the namespace defines the semantics of the properties. Using the C/R model we were able to describe most of the OSGi dependencies with surprisingly few additional concepts. For a modern OSGi resolver there is very little difference between the Import-Package and Require-Bundle headers.
So how do those contracts work? Well, the bundle that provides the API for the contract has a contract capability. What this means is that it provides a Provide-Capability clause in the osgi.contract namespace, for example:

Bundle P:
  Provide-Capability: 
     osgi.contract;
      osgi.contract=Servlet;
      uses:="javax.servlet,javax.servlet.http";
      version="3.0"
  Export-Package: javax.servlet, javax.servlet.http

This contract defines two properties, the contract name (by convention this is the namespace name as property key) and the version. A bundle that wants to rely on this API can add the following requirement to its manifest:
Bundle R:
  Require-Capability: osgi.contract;
    filter:="(&(osgi.contract=Servlet)(version=3.0))"
  Import-Package: javax.servlet, javax.servlet.http

Experienced OSGi users will have cringed at these versionless packages; cringing becomes a gut reaction at the sight of versionless packages. However, in this case it actually cannot harm. The previous example will ensure that Bundle P will be the class loader for Bundle R for the packages javax.servlet and javax.servlet.http. The magic is in the uses: directive; if the Require-Capability in bundle R resolves to the Provide-Capability in bundle P, then bundle R must import these packages from bundle P.
Obviously bnd has support for this (well, since today, i.e. version osgi:biz.aQute.bndlib@2.2.0.20130806-071947 or later). First, bnd can make it easier to create the Provide-Capability header, since the involved packages appear in the Export-Package as well as in the Provide-Capability headers. The do-not-repeat-yourself mantra dictated an ${exports} macro, which is replaced by the exported packages of the bundle, for example:
Bundle P:
  Provide-Capability: 
    osgi.contract;
      osgi.contract=Servlet;
      uses:="${exports}";
      version="3.0"
  Export-Package: javax.servlet, javax.servlet.http

That said, the most extensive help you get from bnd is for requiring contracts. Providing a contract is not so cumbersome; after all, you're the provider, so you have all the knowledge and the interest in providing the metadata. Consuming a contract is less interesting and it is much harder to get the metadata right. In a similar vein, bnd analyzes your classes to find the dependencies for the Import-Package statement; doing this by hand is really hard (as other OSGi development environments can testify!).
So to activate the use of contracts, add the -contract instruction:
bnd.bnd:
  -contract: *

This instruction will give bnd permission to scan the build path for contracts, i.e. Provide-Capability clauses in the osgi.contract namespace. These declared contracts cause a corresponding requirement in the bundle when the bundle imports packages listed in the uses clause. In the example with Bundle R, bnd will automatically insert the Require-Capability header and remove any versions on the imported packages.
The wildcard in the -contract instruction can be narrowed to limit the contracts that are considered. Sometimes you want a specific contract but not others; other times you want to skip a specific contract. The following example skips the 'Servlet' contract:
bnd.bnd:
  -contract: !Servlet,*

The tests provide some examples for people that want to have a deeper understanding: https://github.com/bndtools/bnd/blob/next/biz.aQute.bndlib.tests/src/test/ContractTest.java

Contracts will be part of the bnd(tools) 2.2 release (hopefully) at the end of this summer; until then they are experimental. Enjoy.

Peter Kriens @pkriens

Update: The last example, to skip the 'Servlet' contract, was reversed; updated the text to show a reverse example (anything BUT Servlet).

Monday, July 29, 2013

Thanks H2!

After struggling with Java persistence I was getting a tad desperate. It is not that hard to get any of the tutorials to work, but virtually all tutorials create unwanted couplings and often have reams of dependencies (often hidden behind inconspicuous looking maven poms). In an OSGi environment you need a persistence solution that works well with components. This means that in general components should not be bound to a specific brand of database, because this would make them not very reusable; they would work only in environments that had the same database brand. I was getting a bit desperate because it was so hard to find a working combination that did it the OSGi way.

The OSGi way is that the database is provided as a service. By providing it as a service you allow the deployer (the person that provides your application a place to run, which is often yourself but it makes sense to separate these roles) to pick a database brand at runtime by deploying the proper bundle and configuring it. That is, as a user of the database this is the code in the OSGi way:

@Component
public class DatabaseUser {
   DataSource ds;

   @Reference
   void setDataSource( DataSource ds) {
     this.ds = ds;
   }
}


This model is very elegant; it uncouples a lot of details from the component. The component does not have to worry about database configuration, transaction model, passwords, pooling, or even its brand. Obviously, a component can have special dependencies because it uses proprietary features of certain databases (SQL is treated by vendors more like a recommendation than a specification); these can always be reflected in Require-Capability headers. That said, careful components can restrict themselves to a lowest common denominator so that they can work with any or most SQL databases.

This leaves us the problem of how to get the DataSource object without specifying its brand (or password!) in our code/xml. In a perfect OSGi world we would have JDBC drivers that use Configuration Admin to specify the desired Data Sources; the driver bundle would just register these configured Data Sources. This requires a trivial amount of code since JDBC drivers already configure themselves with properties, a good match for Configuration Admin.

We did not choose this model for the OSGi Enterprise specification because we thought it might be too much work for the driver vendors. I guess that was the correct assessment, since the even-easier-to-implement DataSourceFactory service that we did specify is still rare. The Data Source Factory service provides methods to create configured ConnectionPoolDataSource, DataSource, Driver, and XADataSource objects. With this interface it is fortunately not very hard to make a generic component that registers our desired Data Source services:


@Component(configurationPolicy = ConfigurationPolicy.REQUIRE)
public class ConfiguredDataSource implements DataSource {
   DataSourceFactory dsf;
   DataSource ds;

   @Activate
   void activate(Map<String, Object> map) throws Exception {
     // pass the Configuration Admin properties (url, user, password,
     // etc.) straight on to the JDBC driver
     Properties p = new Properties();
     p.putAll(map);
     ds = dsf.createDataSource(p);
   }

   public Connection getConnection() throws SQLException {
     return ds.getConnection();
   }

   public Connection getConnection(String username, String password)
    throws SQLException {
     return ds.getConnection(username, password);
   }

   // remaining DataSource methods delegate to ds (omitted for brevity)

   @Reference
   void setDataSourceFactory(DataSourceFactory dsf) {
     this.dsf = dsf;
   }
}

Realize that this code is much more flexible than it looks at first sight because it leverages the features of Declarative Services. Through Configuration Admin you can bind it to different brands of databases, set passwords and configuration options, and instantiate multiple DataSource services. However, the hard part, as is sadly still too often the case with OSGi, is finding a bundle that implements this specification. Without much hope I looked at the H2 database that I was already using in my prototypes. It is a fine example of a Java library. Rarely have I seen a product with so many features, so small (1Mb), and so few dependencies (0). It is delivered out of the box as an OSGi bundle. And it actually registers a DataSourceFactory service! I also got H2 to work with OpenJPA the OSGi way, since the OSGi specifications mandate that JPA use the OSGi way for JDBC, though this did not work out of the box yet. If only more code was delivered like this ...
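
To illustrate how a deployer binds this component to a concrete database, here is roughly what the configuration side looks like; a minimal sketch, assuming the ConfiguredDataSource component above with its default factory PID "ConfiguredDataSource" (the PID, URL, and credentials are examples):

import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;
import org.osgi.service.jdbc.DataSourceFactory;

public class DataSourceConfigurer {
   // Creates one DataSource instance; call again to instantiate more databases
   void configure(ConfigurationAdmin cm) throws Exception {
     Configuration cfg = cm.createFactoryConfiguration("ConfiguredDataSource");
     Hashtable<String, Object> props = new Hashtable<String, Object>();
     // The property names are constants defined on DataSourceFactory
     props.put(DataSourceFactory.JDBC_URL, "jdbc:h2:mem:test");
     props.put(DataSourceFactory.JDBC_USER, "sa");
     props.put(DataSourceFactory.JDBC_PASSWORD, "");
     cfg.update(props);
   }
}

In practice these properties would of course live in a configuration file rather than in code, but the principle stands: the brand, URL, and credentials are configuration, not code.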

To conclude, Java persistence is still a mess in my personal opinion; it will require a lot of work to make it all work the OSGi way. However, at least there is light at the end of the tunnel. Thanks, H2 guys, for this good out-of-the-box component experience; I hope you set a widely followed precedent. You definitely gave me hope.

Peter Kriens @pkriens

P.S. If you want to experiment with this approach, look at OPS4j Pax JDBC, which created adapters for the laggards in this area.

Monday, July 22, 2013

Accidental Complexity

I am currently writing an RFP for the adoption work I will be doing for the OSGi Alliance. To understand the landscape, I interviewed a number of software companies. The first interview was with a Spring/Maven shop, representing mainstream Java development. However, I've seen that PHP is very popular in the lower segments of web applications. Since I am a strong believer in the powers of Java, I always wondered why those people 'struggled' with PHP, so I decided to interview a small PHP shop to find out why they used it.

The story they told me over lunch was quite interesting. In the early 2000s they had some people take a course at IBM to learn about Java EE. Many thousands of dollars and some weeks later, they had all become utterly confused. In their eyes, Java was just way too complicated for what they wanted to do. PHP gave them a simple platform that was more than sufficient for their needs. And not only was it much easier to learn, it was also much easier to find libraries and snippets that they could use in their daily work.

When I tried to defend Java (neutrality is not my strong suit) I found that most of the really good things in Java fell on deaf ears. For example, refactoring is one of my top features of Java, but they had no idea what it entailed. So I found myself desperately trying to explain refactoring in terms of a type-safe search and replace, but it was clear that their eyes glazed over as they wondered what the big deal was (OK, the rosé might not have helped). It was clear to me that much of the accidental complexity we have in Java was not understood to have sound software engineering reasons; for them it was just painful and overly complex. PHP was so much easier to get started with. I must admit that when a friend, a non-technical sociology professor, built a professional-looking PHP website, including large forms and payments, I was duly impressed.

After some more rosé my lunch partners did actually come up with problems. Problems that I think could be addressed, except that they saw these 'problems' as facts of life, not as something that could be fixed.

Therefore, the primary problem we should try to address is how to cross the steep threshold that Java puts between an uninitiated developer and a web app. I think a framework for web apps that provides a skeleton for other applications is the starting point that will make a difference. However, to also attract non-Java developers we will need to minimize the accidental complexity of Java and OSGi. With DS annotations and bndtools I think OSGi's accidental complexity is acceptable for the power it offers. However, after researching how to handle persistence in Java, I find it hard to apply the Java EE APIs in OSGi in a way that provides an out-of-the-box experience like PHP's. At the same time, I find some other libraries that are not Java EE but seem a lot easier to use. It will be interesting to find out how we will weigh the compatibility requirements against the simplicity requirements.
Peter Kriens (@pkriens)

Tuesday, July 16, 2013

Real Men Don't Use DS

In general, watching Stackoverflow is a pleasant way to spend some spare time. Helping other people out feels good and it is nice to see how the number of OSGi questions, and thus adoption, is increasing. However, it can also hurt to watch people struggling because of their own choices. There is this idea in our industry that real men start with the command line and Bundle Activators. Bare metal! Somehow this is considered a better way for real men to learn a technology than using a sissy IDE. Just like real men do not learn how to drive a car until they can clean and tune a carburetor! The fact that carburetors have been absent from cars since the late seventies seems irrelevant to them.
I've written forewords for numerous books about OSGi, and virtually all made the same mistake: that to understand the technology you should first prove your worth by struggling with a Bundle Activator and a Service Tracker. Just like cars today have electronic injection, so does OSGi have dependency injection. Use it. My strong recommendation is that if you want to learn OSGi, use bndtools with a Declarative Services (DS) example project. Make sure you understand the life cycle of declarative services and their immensely powerful integration with Configuration Admin. Realize that any class can be made a service with a simple annotation and any service can be made a dependency with another annotation, as shown in the sketch below. If you need something to initialize, add an activate or start method to your service. Check your imports in the content pane and adjust them with drag and drop to make cohesive modules that are simple to use. Visualize what is running, and how, with Apache Felix Webconsole and Xray. Enjoy the sub-second edit-debug cycle. That is what you should learn and fall in love with, because that is what makes OSGi so powerful.
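
For instance, this is roughly what that looks like; a minimal sketch using the standard DS annotations (the Greeter names are made up for the example):

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

// The service interface (an example name)
interface Greeter {
   String greet(String who);
}

// One annotation makes the class a service ...
@Component
public class GreeterImpl implements Greeter {
   LogService log;

   // ... another makes a service a dependency ...
   @Reference
   void setLog(LogService log) {
     this.log = log;
   }

   // ... and an activate method handles initialization
   @Activate
   void activate() {
     log.log(LogService.LOG_INFO, "Greeter ready");
   }

   public String greet(String who) {
     return "Hello " + who;
   }
}
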
Only after you have mastered this initial level should you start looking under the hood, where you will find that all the goodies that real men need are really there: continuous integration, command line stuff, low level service access, weaving, proxying, bundle activators, bundle trackers, service trackers, whatever. It is wonderful to know it is all there when you need it, but it is not what OSGi is all about.


Monday, July 15, 2013

The Trouble with Objects

When I discovered objects in the early '80s my mind was blown away. This was a technology that felt right, it gave me a way to think about my software that I utterly missed in structured programming. My career in the 90's centered around helping companies getting started with objects. Objects had clearly won, it was a run race. Lately, however, I have been getting more and more doubts.
The unique selling point of objects is that the data can change while the behavior, or at least its public interface, remains backward compatible. If the world is a single process then this is a great model. However, with 10Gb Ethernet and faster networks, larger problems, and cheaper computers, the world of a program is no longer one process: you are building distributed systems.
Distributed systems exchange information. In Java an attempt was made to preserve the object oriented semantics by communicating not only the data but also the classes. However, ensuring that each system used the correct class version while maintaining security quickly turned into a quagmire for all but the simplest of systems. And even when it did work, it excluded any non-Java participant from collaborating. It is not that the OO community did not get warning signals. The object oriented impedance mismatch with relational databases shows, at least in retrospect, that hiding your data while persisting it is not an option. At the time, we thought this was a problem that would solve itself when object oriented databases were ready (any day now!). Alas, they largely ran into the same problem: ensuring that the classes that read and wrote the data were of the same or a compatible version was hard, if not impossible, in many cases.
Looking at Java (and non-Java) persistence solutions, I ran into jOOQ, a library that declares peace between Java and SQL. Instead of abstracting the database away, it provides a type safe, fluent-builder way to use relational databases (see the sketch below). It gave up on abstracting the database, but it seems to have won a lot in simplicity by exploiting the powerful relational model. I cannot help but wonder if all those abstractions intended to make the application developer's life easier are in the end worth the increased complexity. Would it be worthwhile to create a small, pet-clinic-like OSGi application that uses all these different persistence models?
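
This is roughly the style jOOQ offers; a minimal sketch, assuming a jOOQ code generation step produced the AUTHOR and BOOK table classes (the schema and package names are made up):

import static com.example.db.Tables.AUTHOR;
import static com.example.db.Tables.BOOK;

import java.sql.Connection;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

public class BookReport {
   // Count the books per author; column references are checked at compile time
   public void booksPerAuthor(Connection connection) {
     DSLContext create = DSL.using(connection, SQLDialect.H2);
     System.out.println(
         create.select(AUTHOR.LAST_NAME, DSL.count())
               .from(AUTHOR)
               .join(BOOK).on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
               .groupBy(AUTHOR.LAST_NAME)
               .fetch());
   }
}

The SQL is still plainly visible in the code; nothing is abstracted away, which is exactly the point.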

Monday, July 8, 2013

Persistence

My assignment for the OSGi Alliance is to increase adoption by making it easier to get started with OSGi. So I am currently writing a Request For Proposal (RFP), the standard requirements document of the OSGi Alliance. One of its primary parts is the Application Domain. In this section you neutrally describe current practice; it is basically used to scope the problem domain and provide a vocabulary for the subsequent sections and documents.
So last week I started on the persistence section. It is an area I have rather little experience with, so I welcomed the chance to work with it during my sabbatical. I picked the document oriented database Mongodb because it felt much easier to work with in an object oriented environment than a relational database, and I must admit that choice has made me quite happy (except for the lack of transactions). However, it is clear that relational databases are the bread and butter of web applications. I therefore had to look deeper into what's happening in this area.
I then stumbled upon the debate around Java Data Objects (JDO) and the Java Persistence API (JPA). I had the privilege of working with Mike Keith (Oracle, JPA spec lead) on the OSGi JPA specifications, so I knew something about JPA. However, so far I had never seriously looked at JDO; I actually thought JPA was replacing it. Unfortunately, life turned out not to be that simple.
Both JDO and JPA define metadata to store normal Java objects in a database (a sketch of that shared style follows below). JDO targets any type of persistent store (even S3 seems to be supported) while JPA is only used for relational databases. JDO seems to be a real workhorse that got a new lease on life when Google selected it for its Google App Engine. Though JPA is the new kid on the block, its decision to limit itself to relational databases seems a severe limitation given the increasing popularity of NoSQL databases like Mongodb, Cassandra, Neo4j, etc. So far, it also looks like JDO is more portable and flexible while JPA seems faster. However, many of the reported experiences seem to come from toy or evaluation projects. Now let's not start a flamewar, but I would love to hear (neutral) experiences with these technologies in real-life-sized projects. Obviously I am extremely interested in how they work in OSGi.
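
For reference, a minimal JPA sketch of that metadata style (the Person class is made up; JDO would use @PersistenceCapable and @PrimaryKey from javax.jdo.annotations instead):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// The metadata tells the persistence provider what to store and how to identify it
@Entity
public class Person {
   @Id @GeneratedValue
   long id;

   String name;
   String email;
}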

Peter Kriens


Monday, July 1, 2013

The Architecture

Last year, when I started with jpm4j, I spent quite a bit of time exploring the state of the art. Even before that starting point I had decided that the world had changed: most Java web frameworks looked very outdated with the advent of HTML5. Now, at 55, I guess I am an old geezer, so I remember time sharing (punch cards, ASR/Teletypes, ending in intelligent terminals), the swing to PC applications that gave us 'fat clients' in the eighties and early nineties, then the pendulum going back to 'thin' browser based clients from 1995 onwards; and now the swing back to fat Javascript based clients. For many Java developers that started in the late nineties, no longer being a rebel but being treated as an incumbent will come as a nasty surprise, since most Java web frameworks like JSP, JSF, Vaadin, Struts, Spring MVC, Wicket, and hundreds of others will be relegated to the dustbin of history over the next decade.

What changed? HTML5, combined with Moore's Law. For the first time there is a software platform that allows you to develop a single application code base that will run on virtually any user facing machine, from smartphones to high-end workstations. Though the rudiments have been available for a long time, it was not until the browser vendors and organizations rejected the w3c's lead and went their own way in the WHATWG workgroup that we got a pretty decent, fairly wide specification for web applications, one that was quickly supported by virtually all browsers. We are back to running our application code on the client, but this time we have no deployment problem, nor do we have to worry about the client's operating system.

This is of course a game changer for Java web frameworks. In a Java web framework the UI is managed on the server. This model requires a round trip from the browser to the server whenever the GUI needs an update; a slow process that not only places a heavy demand on the server but can also feel sluggish. With HTML5, the server can focus on the data and becomes oblivious to the GUI. The client worries about the graphics and the interaction. The biggest advantage is scaling: no longer is the server required to maintain the often large state of the GUI; instead it provides a REST or JSON-RPC interface (a sketch of such a data-only endpoint follows below). So not only is the footprint reduced, distributing the workload is also easier since these protocols can easily be made stateless. Another advantage is simplicity; in my experience the server code becomes considerably smaller and therefore much simpler.
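
To make that concrete, the server side can shrink to little more than this; a minimal sketch, assuming the OSGi HttpService (the alias and the hard-coded JSON are made up, and a real component would also unregister the servlet on deactivation):

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.http.HttpService;

// The server only serves data; the HTML5 client renders it
@Component
public class DataServlet extends HttpServlet {

   @Reference
   void setHttpService(HttpService http) throws Exception {
     http.registerServlet("/rest/orders", this, null, null);
   }

   @Override
   protected void doGet(HttpServletRequest rq, HttpServletResponse rsp)
       throws IOException {
     rsp.setContentType("application/json");
     // Stateless: every request carries all the information it needs
     rsp.getWriter().write("[{\"id\":1,\"status\":\"shipped\"}]");
   }
}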

So why am I so excited about HTML5 when it seems irrelevant to OSGi? As Rahm Emanuel said: "Never let a serious crisis go to waste". I believe that HTML5 will be a disruptive change, causing a lot of web applications to be rewritten as customers demand the responsiveness of local applications. This offers an opportunity for OSGi: an environment that in almost all respects seems to have been designed for this paradigm shift.

Peter Kriens

Thursday, June 27, 2013

Distributed Eventing RFP 158 now available

It has been a while since the OSGi Remote Services specifications were released. The Remote Services and Remote Service Admin specifications were part of the first Enterprise OSGi release, which has been available since early 2010. The Remote Services specs focus on using the OSGi service model to create and consume remotely accessible services. A number of Remote Services implementations have since been created. They use a variety of wire protocols, some of them based on industry standards such as SOAP/HTTP or JSON/REST, which allows them to interact with clients written in other languages.

The Remote Services specifications focus however on synchronous interactions. While asynchronous models were indeed mentioned in the original Remote Service RFP, they were not addressed in the Remote Services specifications.

Supporting asynchronous distributed eventing in OSGi through a standard, technology independent API has been on the wishlist of many for quite some time. For certain distributed event based use cases, people have successfully used the OSGi Event Admin service (see the sketch below). While the Event Admin API can be connected to a remote distribution or eventing system, this only addresses a certain type of use case. Specifically, reliability and queue-based semantics are not supported in a 'Distributed Event Admin' solution. For more information on the evaluation of distributing the Event Admin service, see Marc Schaaf's Master Thesis.
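
For reference, this is the Event Admin style in question; a minimal sketch (the topic and properties are made up):

import java.util.HashMap;
import java.util.Map;

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

@Component
public class StockReporter {
   EventAdmin eventAdmin;

   @Reference
   void setEventAdmin(EventAdmin eventAdmin) {
     this.eventAdmin = eventAdmin;
   }

   public void quote(String symbol, double price) {
     Map<String, Object> props = new HashMap<String, Object>();
     props.put("symbol", symbol);
     props.put("price", price);
     // postEvent is asynchronous fire-and-forget; a distributed Event Admin
     // could forward such events to other frameworks, but without the
     // reliability and queueing guarantees discussed above
     eventAdmin.postEvent(new Event("com/example/stock/QUOTE", props));
   }
}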

Work has now finally started on a more general Distributed Eventing RFP at the OSGi Alliance. This RFP looks at introducing a general distributed eventing solution into OSGi. You can find RFP 158 in the OSGi bugzilla system here: https://www.osgi.org/bugzilla/show_bug.cgi?id=168
If you have any feedback, just leave a comment.

Oh, and just a reminder that this is an RFP, which is a requirements document. No design is captured in this document yet. The design will be part of the RFC work that starts after the RFP is done.

Monday, June 24, 2013

I am back ...

Since last week, I am back, working for the OSGi Alliance. To some this may come as a surprise but talking to people it seems that many actually had expected this.

So what happened? In the past year I've developed a repository (https://www.jpm4j.org) from scratch. Partly because I think such a repo is absolutely necessary, partly because I felt I needed to build an enterprise-like app so I would know better what I was talking about. Both succeeded. What did not work out as hoped was commercializing it. I guess business aspects have never been my strongest point :-( and there was only so much I could spend on this sabbatical.

However, this was not the only aspect that made me return. Though I wanted to keep some distance from OSGi in the beginning, I developed jpm the way I have always advocated: services first. The good news was that, to my great relief, this actually worked even better than I had been telling people it would. I am now more convinced than ever that the OSGi service model is a true innovation that is highly undervalued in the industry. The bad news was that it required the development of too many basic components just to get started. Though there are an amazing number of libraries and bundles out there, they often look awkward because they do not leverage the unique advantages of OSGi: services plus Configuration Admin. For me it became clear that the threshold to start with OSGi for a real web application was way too high.

After talking to some OSGi Alliance people, we proposed to create a 'starter kit' that would enable developers to start at a much higher level than a servlet. It should provide a foundation like Ruby on Rails, but based on the much more scalable OSGi technology. On top of this foundation we plan to add a real application, demonstrating all of OSGi's best practices. This was proposed to the board, and they actually wanted me to start weeks earlier than I had planned. So here I am!

I will therefore be working on a foundation for OSGi web applications. If you have ideas, let me know. I am sure to keep you posted via this blog and Twitter.

Peter Kriens

Tuesday, May 21, 2013

RFP 154 Network Interface Information Service now publicly available

The standard Java APIs (e.g. java.net.NetworkInterface, java.net.InetAddress) provide functions that allow IP network interface information, such as the IP address and MAC address, to be obtained.

However, a bundle that wants network interface information currently has to poll to detect whether that information has changed. If changes in a network interface were instead pushed to the bundles concerned, the need for polling would be eliminated. In addition, some information cannot be obtained via the standard Java APIs at all.

This RFP describes the need for a mechanism that notifies concerned parties of changes in the network interfaces, and for a new API that provides information not obtainable from the standard Java APIs, as well as the corresponding requirements.

The RFP is now publicly available and we invite you to send us your comments and questions before we finalize it and start working on the RFC. The RFP can be found at: RFP 154 

Tuesday, March 26, 2013

OSGi DevCon 2013 - Whats Coming Up This Week

Wow, Day 1 of OSGi DevCon is already over. The location here in Boston this year is excellent: plenty of space and pretty good wifi, although they seem to be selling out of beer too fast at the hotel bar! If you want to keep up with the latest activity, be sure to follow the OSGi Alliance Twitter feed (@osgialliance).

Day 1 was tutorials day, and Neil Bartlett and Tim Ward both had good attendance at their OSGi tutorials. There were also two OSGi BOFs in the evening: Jamie Goodyear hosted a Karaf users BOF, and Glyn Normington and Chris Frost hosted a Virgo BOF.

But Day 1 was just the warm-up... There is a really full schedule of talks and activities planned over the next three days.

You can find the full conference schedule for the talks online.

A few activities that aren't in the main schedule and which may be of interest include:

OSGi BOF this evening - 7pm to 9pm in Seaport Ballroom B - You can find full details of the BOF here.
  • Highlights include an update on the latest OSGi specifications and a sneak peek of what's coming up in the future
  • OSGi Tooling discussion

Also there will be a free prize draw to win one of several OSGi Books, so plenty of swag too!

OSGi and JavaScript Workshop - Thurs March 28, 1.30pm to 3.30pm 
Simon Kaegi, who is presenting at the conference on the same topic, will be hosting this workshop to look at how OSGi concepts might map to JavaScript, with the aim of gathering requirements and fostering discussion around producing an OSGi Alliance RFP for this. So this is your chance to get involved and have input and influence on a potential future OSGi specification.

The workshop is free for anyone to attend, both conference delegates and non-delegates. However, you do need to register to ensure you get access. Click here for full details and to register.

OSGi Surgery
There is an OSGi Surgery being run where you can book a free 30 minute one-on-one meeting with Neil Bartlett to discuss anything OSGi.

Whether you want to review code or an issue you are having, explore how OSGi might fit in your environment, or you are brand new to OSGi and just want to ask some newbie questions that you don't feel comfortable asking in one of the talks, this is a great opportunity. Slots are limited and you need to book in advance. Details and how to book can be found here.

Hope you have been able to join us at the conference and enjoy Day 2! If you have any questions, please contact us by email.

OSGi DevCon 2013 Program Committee
BJ Hargrave & Mike Francis