Monday, August 26, 2013

Dear Prudence: Can't we achieve modularity through normal JARs?

Dear Prudence, I have some doubts related to OSGi; I am new to the OSGi framework. I was going through various sites and read about the OSGi framework. Frankly speaking, I did not understand anything. Following are my doubts:
  • OSGi is supposed to provide modularity. Can't we achieve modularity through normal JARs?
  • What does it mean that OSGi has a dynamic component model?
  • Bundles can be installed, started, stopped, updated, etc. Why do we want to install bundles? Why can't we access them directly, the way we access other normal JARs?
I am totally confused. Can somebody answer me? And, if possible, give some examples?
Confused

Dear Confused,
Your question first puzzled me a bit since there is so much documentation on the Internet today, and there are plenty of books that take you from minute detail to broad overview. Not to mention the hundreds of 'hello world' tutorial blogs. Then it dawned on me that many of these tutorials seem to start by explaining why the author felt compelled to write yet another one: OSGi only turned out to be easy and powerful after the 99 other blogs had been read and OSGi was finally understood ... Maybe there is something in OSGi that makes it really hard to understand before you know it.

I guess everybody has a bubble of knowledge that makes it hard to learn/understand anything outside that bubble. I know this first hand: last year I really learned JavaScript and found myself balking at seemingly bizarre and complex patterns until they became obvious. Your question seems to indicate that your knowledge bubble does not intersect with the bubbles of the people advocating OSGi. So let's design a module system based on normal JARs.

I guess we should start by defining what a module is and why we need it. A software module is characterized by having a public API that provides access to a private implementation. By separating the API from the implementation we can simplify the usage of our module, since an API is conceptually smaller than the API plus implementation and therefore easier to understand. However, the greatest benefit of modules comes when we have to release a new revision. Since we know that no other module can depend on private implementation code, we are free to change the private code at will. Modules restrict changes from rippling through the system, much as firewalls restrict fires.
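To make this concrete, here is a minimal sketch (all names are invented for illustration): the api package is everything a consumer may rely on, while the impl package can change freely between revisions.

  // com/example/greeter/api/Greeter.java -- the public API
  package com.example.greeter.api;

  public interface Greeter {
    String greet(String name);
  }

  // com/example/greeter/impl/FormalGreeter.java -- the private implementation,
  // free to change between revisions since nobody outside may depend on it
  package com.example.greeter.impl;

  import com.example.greeter.api.Greeter;

  public class FormalGreeter implements Greeter {
    public String greet(String name) {
      return "Good day, " + name + ".";
    }
  }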

Let's make a framework that can use a JAR as a module. Best practice in our industry is to make code private by default; that is, no other module can access the inside. This is close to the standard Java default: without the public keyword, fields, methods, and classes are accessible only within their own package.

However, if nobody outside the JAR can see anything of the inside then this code can never be called. Like Java, we could look for a class with a public static void main(String[] args) method in this module to start the module. Since we do not want to search all classes in a module to find this main class (which also means we could end up finding multiple), we need a way to designate the JAR's main class. Such a mechanism is already defined by Java in the JAR specification: the Main-Class header in the JAR's manifest (a text file with information about the JAR). So we could call such a designated main method in each module to start the module; it can then run its private code. However, the main method does not allow us to stop the module. So let's create a new header for this purpose and call it Module-Activator. The class named in the Module-Activator header must then implement the ModuleActivator interface. This interface has a start and a stop method, allowing the framework to start and stop each module.
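In Java, that hypothetical ModuleActivator interface could look like the sketch below. To be clear: these names come from our thought experiment, not from an existing API (OSGi's real counterpart is BundleActivator).

  // The contract between the framework and each module.
  public interface ModuleActivator {
    void start() throws Exception; // called by the framework to start the module
    void stop() throws Exception;  // called by the framework to stop the module
  }

  // A module's activator, designated in its manifest with:
  //   Module-Activator: com.example.greeter.impl.Activator
  public class Activator implements ModuleActivator {
    public void start() { System.out.println("module started"); }
    public void stop()  { System.out.println("module stopped"); }
  }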

If the private code executes it will likely need other classes that are not in the JAR. Our framework could search the other modules for such a referenced class if we knew which part of a module was private and which part was public. Since Java already has a namespace/accessibility/encapsulation hierarchy, we should try to stay close to these concepts. This Java hierarchy is field/method, class, and package; the package is therefore the best candidate to designate as private or public. Since there is currently no Java keyword to make a package private or public, we could use annotations on a package. However, annotations mean that we need to parse the JAR before we can run it, which is in general a really bad idea for performance and other reasons. Since we already defined a header for activation, why not define another header: Module-Export? This header would enumerate the packages in the JAR that provide the public API.
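The manifest of our greeter module would then look something like this (the header names are the ones we just invented; the packages come from the earlier sketch):

  Manifest-Version: 1.0
  Module-Activator: com.example.greeter.impl.Activator
  Module-Export: com.example.greeter.api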

This minimalistic module system (only two headers) is very attractive but there is a catch that becomes apparent when you build larger systems.

What happens when you have multiple modules in your framework that each provide a given class, but at different revisions? The standard Java approach is to do a linear search of the modules; the first module that declares a class wins. Obviously you should never design a system that has multiple revisions of the same class. However, larger systems tend to run into this problem because dependencies are transitive. JARs depend on other JARs, which depend on further JARs, ad nauseam. If, for example, many of these JARs depend on Log4j, it is easy to see that not all of them will use the same version of Log4j. For a simple, generally backward compatible library like Log4j you want the latest revision, but for other libraries there is no guarantee that revisions are backward compatible.

Basically ignoring this erroneous situation, as Java (and Maven) do, is not very Java-ish. We can't have a system that fails in mysterious ways long after it was started; in Java we generally like to see our errors as early as possible. For example, the Java compiler is really good at telling you about name clashes, so why should we settle for less on the class path?

Since we already export the packages that are shared with the Module-Export manifest header, why not also specify the imported packages in the manifest with a Module-Import header? If the import and export headers also define a version for the package, then the framework can check a priori whether the modules are compatible with each other before any module is started, giving us an early error when things are not compatible. We could do even better: we could also make sure that each module can only see its required version of a package, allowing multiple revisions of the same module in one system (this is generally considered the holy grail against JAR hell).
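In our invented syntax, the a priori check could work from manifests like these: the framework would refuse to wire the consumer to a provider whose exported version does not match (OSGi's real headers for this are Export-Package and Import-Package, with version ranges).

  Provider manifest:
    Module-Export: com.example.greeter.api; version=1.2

  Consumer manifest:
    Module-Import: com.example.greeter.api; version=1.2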

So dear Confused, we've just designed a simple module system based on plain JARs and three manifest headers that describe the module. Fortunately we do not have to struggle to mature this simple design for lots and lots of very subtle and complex use cases, because such a module system already exists: it's actually called OSGi.

Let me answer your dynamics questions next week,

Peter Kriens
@pkriens

Wednesday, August 21, 2013

The Perfect OSGi Persistence Model?

After having such good experiences with MongoDB last year, it is a tad frustrating to have to dive into the messy and fuzzy Java persistence world. OK, I did miss transactions in MongoDB a lot, but for the rest it was a dream. However, there are good cases to be made for relational databases, not least because of their popularity and therefore widespread support. What I have found so far is that virtually no implementations are as easy to use as the OSGi specifications imply. I now have a configuration that works, but it is a far cry from the pick-and-choose model that OSGi promises. I've created a small blog application that consists of a Web-based GUI and a Blog Manager service. The Blog Manager service is then implemented in multiple ways:
  • Dummy database
  • JDBC based on the OSGi DataSourceFactory
  • JPA
  • JDO
The intention is to have configurations of different implementations for each of the persistence standards. So far I have the dummy version running, as well as an OpenJPA/H2-based one. Though H2 worked out of the box, OpenJPA was much harder; I could only get it to work by using a number of Aries bundles. Another struggle was getting the Transaction Manager to work. After trying out different versions I selected the Jonas Transaction Manager (JOTM), but even this transaction manager, designed for OSGi, required glue code.

The whole purpose of specifications is that you can pick and choose and not have to spend time writing silly glue code. However, today the implementations clearly do not properly support the OSGi specifications, even though in most cases latent support is present. I also see that people are struggling with the implementations, often doing things in a much more complicated way than required.

So what is the ideal OSGi model? The ideal model is that an application depends on an Entity Manager service. This Entity Manager is configured by the Configuration Admin service and uses an OSGi Data Source Factory service from the registry as the JDBC driver. If a Transaction Manager service is registered, then this manager must be used by the JPA and database implementations. Just selecting different bundles should allow you to experiment with different configurations. Such a model is highly decoupled and allows for a lot of flexibility.
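As a sketch of what the application side could then look like, assuming Declarative Services and a JPA provider that registers an EntityManagerFactory service (the BlogManager and BlogPost types are made up for this example):

  import javax.persistence.EntityManager;
  import javax.persistence.EntityManagerFactory;
  import org.osgi.service.component.annotations.Component;
  import org.osgi.service.component.annotations.Reference;

  @Component
  public class JpaBlogManager implements BlogManager {
    private EntityManagerFactory emf;

    @Reference
    void setEntityManagerFactory(EntityManagerFactory emf) {
      this.emf = emf; // injected from the service registry, whatever JPA provider is deployed
    }

    public void store(BlogPost post) {
      EntityManager em = emf.createEntityManager();
      try {
        em.getTransaction().begin(); // with a registered JTA Transaction Manager service this would be delegated
        em.persist(post);
        em.getTransaction().commit();
      } finally {
        em.close();
      }
    }
  }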

We need a community effort to fulfill the persistence promise of OSGi: plug and play. So, to move this forward: if you have a persistence configuration using JTA, JDBC, JPA, or JDO that works well under OSGi, please send the configuration to me or point me to a public project. If you're a committer (open or closed source) on a JTA, JPA, JDO, or JDBC implementation and want to work together to make your project work out of the box in OSGi, send me a mail and I will see how I can help. I will report progress through this blog and Twitter.

Peter Kriens (@pkriens)

Thursday, August 8, 2013

OSGi RFPs and RFCs now accessible live to everyone

The bulk of the work done in the OSGi Alliance is around the creation of specifications. There are the OSGi Core specifications, which define the Framework itself; the Residential specifications, which focus on the residential and embedded context; and the Enterprise specifications, which define service APIs and programming models for enterprise systems. Every OSGi specification is developed in an RFC document in one of the OSGi expert groups: CPEG, REG, or EEG. Before an RFC is developed, however, the requirements are gathered in an RFP, which is also discussed in one of the EGs.
Interestingly, there is a lot of cross-pollination happening. For example, work done in the EEG is often applicable to the residential context as well, and vice versa. And the nice thing about the work done in the EGs is that it's really all quite democratic and consensus-oriented: no single company or organization dominates the decisions.

One thing that was really missing here was the general visibility of all the work happening on the RFPs and RFCs. The OSGi Alliance published Early Access Drafts at certain points in time, but this provided no live visibility on how the specifications were taking shape.

This has now changed. From now on, technical work done by the OSGi Alliance on RFPs and RFCs is visible live in the OSGi design repository on GitHub: https://github.com/osgi/design
This is great news for people who want to implement a spec that is still in development, for people who want to follow the specification process in general, or simply for people who want to comment on or otherwise reference an active OSGi RFP or RFC!
And if you have any feedback, this can be provided through the usual feedback channels: the OSGi Bugzilla system.

For more background see also the press release.

And to get an idea of what the OSGi Alliance is currently working on, the list of currently active documents is available in the GitHub repository linked above.

Tuesday, August 6, 2013

OSGi Contracts (wonkish)

Let's talk about versions ... again. Though OSGi has a very elegant package version model, there are still many who think it is too much work. They do not want to be bothered by the niceties of semantic versions and just want to use, let's say, Servlet 3.0. For those people (seemingly not interested in minimizing dependencies) the OSGi Alliance came up with contracts in OSGi Core, Release 5.0.0. A contract allows you to:
  1. Declare that a bundle provides the API of the given specification
  2. Require that the API comes from a bundle that made the declaration
This very common pattern is called the Capability/Requirement (C/R) model in OSGi; it underlies all of OSGi's dependency concepts, like Import/Export-Package and others, and it forms the foundation of the OSGi Bundle Repository. If you ever want to know what is happening deep down inside a framework, then look at the Wiring API and you will see requirements and capabilities in their most fundamental form.
Capabilities declare a set of properties that describe something a bundle can provide. A requirement in a bundle has a filter that must match a capability before that bundle can be resolved. To prevent requirements from matching completely unrelated capabilities, both must be defined in the same namespace, where the namespace defines the semantics of the properties. Using the C/R model we were able to describe most of the OSGi dependencies with surprisingly few additional concepts. For a modern OSGi resolver there is very little difference between the Import-Package and Require-Bundle headers.
So how do those contracts work? Well, the bundle that provides the API for the contract has a contract capability. What this means is that it provides a Provide-Capability clause in the osgi.contract namespace, for example:

Bundle P:
  Provide-Capability: 
     osgi.contract;
      osgi.contract=Servlet;
      uses:="javax.servlet,javax.servlet.http";
      version="3.0"
  Export-Package: javax.servlet, javax.servlet.http

This contract defines two properties, the contract name (by convention this is the namespace name as property key) and the version. A bundle that wants to rely on this API can add the following requirement to its manifest:
Bundle R:
  Require-Capability: osgi.contract;
    filter:="(&(osgi.contract=Servlet)(version=3.0))"
  Import-Package: javax.servlet, javax.servlet.http

Experienced OSGi users will have cringed at those versionless packages; cringing at the sight of versionless packages becomes a gut reaction over time. However, in this case it actually cannot do harm. The previous example will ensure that Bundle P will be the class loader for Bundle R for the packages javax.servlet and javax.servlet.http. The magic is in the uses: directive: if the Require-Capability in bundle R is resolved to the Provide-Capability in bundle P, then bundle R must import these packages from bundle P.
Obviously bnd has support for this (well, since today, i.e. version osgi:biz.aQute.bndlib@2.2.0.20130806-071947 or later). First, bnd can make it easier to create the Provide-Capability header, since the involved packages appear in both the Export-Package and the Provide-Capability headers. The don't-repeat-yourself mantra dictated an ${exports} macro. The ${exports} macro is replaced by the exported packages of the bundle, for example:
Bundle P:
  Provide-Capability: 
    osgi.contract;
      osgi.contract=Servlet;
      uses:="${exports}";
      version="3.0"
  Export-Package: javax.servlet, javax.servlet.http

That said, the most extensive help you get from bnd is for requiring contracts. Providing a contract is not so cumbersome; after all, you're the provider, so you have all the knowledge and the interest in providing the metadata. Consuming a contract is less interesting, and it is much harder to get the metadata right. In a similar vein, bnd analyzes your classes to find the dependencies and create the Import-Package statement; doing this by hand is really hard (as other OSGi development environments can testify!).
So to activate the use of contracts, add the -contract instruction:
bnd.bnd:
  -contract: *

This instruction will give bnd permission to scan the build path for contracts, i.e. Provide-Capability clauses in the osgi.contract namespace. These declared contracts cause a corresponding requirement in the bundle when the bundle imports packages listed in the uses clause. In the example with Bundle R, bnd will automatically insert the Require-Capability header and remove any versions on the imported packages.
The wildcard in the -contract instruction can be qualified to limit the contracts that are considered: sometimes you want a specific contract but not others, and other times you want to skip a specific contract. The following example skips the 'Servlet' contract:
bnd.bnd:
  -contract: !Servlet,*

The tests provide some examples for people who want a deeper understanding: https://github.com/bndtools/bnd/blob/next/biz.aQute.bndlib.tests/src/test/ContractTest.java

Contracts will be part of the bnd(tools) 2.2 release, hopefully at the end of this summer; until then they are experimental. Enjoy.

Peter Kriens
@pkriens

Update: The last example, skipping the 'Servlet' contract, was originally reversed; the text has been updated to show a reverse example (anything BUT Servlet).