Thursday, December 18, 2008

Project Jigsaw #2

Alex Buckley gave a presentation at Devoxx in Antwerp last week (December 2008) that represents the next step in the design of Java modularity. Alex has sent me the link to his presentation; you can find it on his blog.

This presentation is the next step in the saga of Java modularity. Sun’s history of Java modularity is littered with blogs and presentations; it shows no proper requirements and design documentation whatsoever. Nor have any of these blogs and presentations been discussed in, or presented to, the relevant JSR 294 or JSR 277 expert groups. It therefore feels awkward to react to a design that has neither visible requirements nor a proper design document. How can one judge it, except highly subjectively? By its very nature, a presentation gives an overview of an underlying construct, leaving out all the details that are so important to understand and evaluate a proper design. One can only hope that these design documents exist somewhere in Sun's offices.

However, getting modularity right in the Java platform can make a tremendous difference for hundreds of thousands of companies and millions of individuals. And clearly there is self-interest as well: it is of paramount importance to the OSGi Alliance to get it right. Because there are no discussions on the proper mailing lists, I feel forced to react with the same ill-matched tools of blogs and presentations. My apologies.

The Devoxx presentation begins with sketching the problems that JSR 294 addresses. These can be summarized as:
  • JAR Hell
  • Platform Fragmentation
  • Startup Performance
However, in the remaining parts of the presentation, where the solution is sketched, many implicit requirements rear their heads. In the following sections I discuss the problems that I think Sun is trying to address and then the areas that I think are not on their horizon.

JAR Hell is composed of the following problems (derived from Dependency Hell, DLL Hell, and JAR Hell):
  • (Too) Many Transitive Dependencies. This is related to the dreadful feeling you get when you want to use A, which needs B and C, which need D, E, F, G, ad nauseam. Though a certain amount of coupling is required to enable reuse and extension mechanisms, it often turns out that many dependencies are not really necessary. However, in Java there is no way to express this optionality.
    Many popular (open source) libraries have a staggering number of dependencies. For example, use Maven to compile "Hello World" for the first time: it downloads an impressive amount of libraries because it has no mechanism to manage unnecessary transitive dependencies. OSGi handles this problem by using the service model, where only the packages with service interfaces are shared, and by picking the package, the smallest possible granularity of sharing in the Java VM model, as the unit of sharing.
  • Dependency on multiple versions. Java applications cannot have multiple versions of the same component in a single application. Well, why would you want this? Assume you rely on JAR A and JAR B. Both A and B require JAR C. Unfortunately, A requires version 1 of C and B requires version 2 of C. In Java you are out of luck. The reason is that Java only has a linear class path, so either C;version=1 comes first or C;version=2 comes first. During run time, either A or B is bound to the wrong version, wreaking havoc at unexpected times and places. In OSGi, this problem is addressed by precisely wiring up bundles based on metadata in the JAR manifest.
  • Unmanaged Dependencies. Java has no way to specify dependencies on other JARs; dependencies are resolved through the class loader hierarchy, the class path, JARs in folders, and magic. This is all done "blind": there is no verification that it matches the assumptions in the JAR files. The effect is that errors happen (too) late instead of early. Making a set of JARs work together can therefore be cumbersome and very hard to get right. In OSGi, this is addressed with the manifest in each bundle, which specifies the assumptions of the code. This allows the framework to verify the bundle's dependencies before the code gets started.
  • Use of private code. All public code in a JAR file is visible to all other JAR files, and public is pervasive because it is required to implement an interface. It is therefore easy to use code that is supposed to be an implementation detail, which can easily break a client in a later version or unnecessarily restrict the provider. In OSGi, this is addressed by explicitly exporting packages. All other packages are private.
  • Stomping. Stomping is the problem that you overwrite one JAR with another because the name is the same though the version is different. This can have very subtle, and much less subtle, unpleasant effects while everything looks OK. OSGi addresses this problem with a clear install phase that separates the JAR's file name from its internal identity. That is, two JAR files can have the same name but can both be installed in the same VM if they have different versions. This install phase also connects very easily with all kinds of repositories and management systems.
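To make this concrete, here is a sketch of the kind of manifest metadata OSGi uses for this wiring (the bundle and package names are made up; the headers are the standard OSGi ones). The version range on the import lets the framework choose a compatible exporter, and only the explicitly exported package is visible to other bundles:

```
Bundle-SymbolicName: com.example.client
Bundle-Version: 1.2.0
Import-Package: org.example.api;version="[1.0,2.0)"
Export-Package: com.example.client.spi;version="1.2.0"
```

Everything not listed in Export-Package stays private to the bundle, which addresses the private-code problem above at the same time.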
Platform fragmentation is about the sad story of how Java fragmented into an incompatible set of profiles and configurations at the beginning of this millennium. Today we have Java ME and Java SE, which have grown largely incompatible over time. The argument for this fragmentation is that a mobile phone or embedded device could not be expected to run the same VM as a desktop. This is a valid argument, but unfortunately the solution of profiles and configurations was not well thought out. The different profiles are not upward compatible, which requires that a programmer target a specific profile (for example Foundation Profile). Such code will likely run into problems on Java SE because the profile contains packages and classes that do not occur on Java SE, and of course vice versa. Worse, packages often have different contents in different profiles, and there are even differences as small as visibility or added and removed methods and fields. And worst of all, there is no metadata describing what a JAR expects of its environment, so code can start running only to throw an exception somewhere in the future.

The key mistake made with the profiles and configurations was caused by the lack of modularity. If Java had been re-factored into a core VM (including java.lang, because everybody must share the Class and Object classes), then sets of packages could have extended this core VM. However, for some bizarre reason it was often not even legal to run a package developed for Java ME on Java SE. The most bizarre JSR I was ever involved with was JSR 197, which made it legal to run such a package on Java SE. It is hard to believe that it took us a year to accomplish this minor feat, which was nevertheless important for one of the OSGi specifications.

In OSGi we came from the embedded world, so this was a serious problem for us from day one. Obviously, we could not change Java itself, but at least we could address the sub-setting problem and the unmanaged aspect of it. We came up with execution environments. An execution environment looks a lot like a profile/configuration, but it is not intended to be the final word. It is a description of a set of classes that is a proper subset of all feasible Java environments, that is, the common denominator. Compiling against this subset (we provide JARs that contain only the public API for this purpose) ensures that you are not coupled to anything outside the execution environment and, as a consequence, outside any of the sub-setted profiles. For example, we have ee.minimum, which runs on all known profiles, from CDC/FP to Java SE 7. We use this execution environment to target all our APIs. We also have an execution environment that is aligned with the Java ME Foundation Profile. These execution environments are used by Equinox, Felix, and Knopflerfish to allow their implementations to run on the widest possible set of VMs.

However, we did not stop there. We also designed metadata in the bundle's manifest that indicates the assumptions of the bundle about its environment. A bundle cannot resolve unless the framework can establish that the VM implements at least one of the required execution environments.
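In manifest terms this is a single header (the header name is the standard OSGi one; the execution environment names shown are examples of commonly used values). The bundle below only resolves on a VM that provides at least one of the listed environments:

```
Bundle-RequiredExecutionEnvironment: OSGi/Minimum-1.1, CDC-1.0/Foundation-1.0, J2SE-1.4
```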

Performance. This problem is related to the work done in Java 6, where the JRE can be incrementally loaded to reduce start-up time. This speedup was necessary because slow start-up had basically made applets impossible to use: the page froze for up to a minute the first time an applet got started. There are two strategies involved in improving performance: lazy and eager. In a lazy model, the code is not activated until there is a direct need. This model works very well when you have a large application where many parts are rarely used and loading code is relatively fast (local disk). Eclipse is a good example, where lazy loading is used heavily to minimize start-up time and footprint. Eager loading is better when it is clear you will need the code soon and loading it is slow (over the net). A good example is the average applet. There are many variations, and anybody who has ever written a serious cache manager knows the trickiness. The performance of a module system therefore depends on the available metadata and on the initialization/activation model.

This problem is further clarified later in the presentation, where it is required that a module be able to contain partial packages to speed up downloads: java.lang is split into 3 different modules. This requirement is begging for more complexity, and the payoff seems very slim. Splitting packages along performance boundaries requires great foresight to have any performance effect and will in almost all cases conflict with minimizing coupling. It is a classic example of conflating two unrelated concepts (performance and low coupling) into a single design concept (the module). In practice, this always results in systems that are good at neither performance nor decreasing coupling.

Though performance is a crucial aspect of the VM (and some fantastic work has been done in HotSpot, even without modularity), it is important not to mix concepts that have very different optimization axes. Every time I see that happening, both axes end up compromised.

Integration with native packaging systems. One of the not-well-defined concepts in the presentation is the integration with native packaging systems. A typical native packaging system is rpm. Packaging systems use dependency graphs and scripts to move an operating system to a new state where it provides new functionality. There is tremendous experience in packaging systems, and nowadays they are quite impressive in how reliably they work.

However, take one step back. The absolute number one value of Java is its platform independence. Native packaging systems should be able to reliably provide modules to the Java platform, but I am fairly confident that inside Java we do not want to see any of these native systems. It is crucial that the Java module system be well defined in Java itself, without having to defer to a platform-dependent module system. That would be anathema to Java. However, the presentation seems to assume that native packaging systems would integrate with the VM? For example, the module version syntax is left undefined so it can leverage formats from packaging systems. Alas, if only we had a design document and not a presentation ...

Update 1: Alex Buckley has told me that neither Project Jigsaw nor JSR 294 will prepare for native packaging systems ... So my fear is unjustified.

Package granularity. Package granularity has always been a hot potato in Java. Though packages look like first-class citizens, there is a lot of fudging going on to ignore them. On the one hand, we have a Package class representing a package, we import packages in our source code, and there are clear access visibility rules for classes in the same package. However, in the class file format a package is not visible, and this has led many (including the JRE) to treat packages as second-class citizens.

In the presentation, it is argued that libraries often consist of multiple packages. Though the OSGi service model shows that constraining an interface to a single package works well for highly decoupled designs, it does make sense to be able to think about a group of packages as a set that belongs together. This is for me the conceptual advantage of a module: a group of packages that tightly belong together. Superpackages, anybody?

Multiple module systems. There seems to be a strong implicit requirement that there will be multiple module systems in the VM. These module systems are somehow supposed to handle run-time class loading in different ways. Sun will provide a "simple" module system for the JDK, but it will allow others at the application level. They are even polite enough not to specify a version syntax, so that OSGi can use its own version syntax while Sun can continue with versions that always start with 1.

Multiple implementations of a specification sounds good, doesn't it? Hmm. Let's look at what it means. It means that programmers will have to choose a deployment format because none is specified. As a programmer I will have to choose one or more formats to support because I cannot waste the resources to support them all. Do I gain anything as a programmer by having a "choice"? Nope; every choice I make renders my code incompatible with modules from the other systems.

Specifications are supposed to simplify the life of programmers, not make it harder. By not having an open discussion about deployment formats and creating consensus around one format, the problem is dumped in the laps of millions of programmers, creating confusion and chaos where none should be necessary. Even if interoperability is supported, there will be lots of small problems for no obvious reason.

And there is of course a political aspect. Sun moved the deployment aspects to an OpenJDK project called Jigsaw. The scope of Jigsaw is to create a module system for the JDK and applications. Though I am fairly sure it will not match OSGi's capabilities, it will be part of the JDK. It is hard to ignore the similarity with Microsoft including Internet Explorer in their operating system because they could not compete on functionality with Netscape.

Missing Aspects
The following section details problems that I thought were an intrinsic part of a module system but that are not discussed in the presentation. I think these areas are very important and closely related to Java modularity.

Class Space Consistency. One of the hardest parts of the OSGi R4 specification was class space consistency. Once you allow multiple versions of the same package in a VM, you must ensure that the different modules use the right class loaders, or you get hard-to-diagnose class cast exceptions. That is, a class X from class loader A is not compatible with a class X from class loader B. Confusing but true. In OSGi we have the concept of a class space, and we maintain consistency in this class space using the "uses" directive, which provides information about what implementation dependencies a package has. With this directive, a framework can assign bundles to different class spaces and thereby ensure no collisions happen. The presentation explicitly acknowledges that this problem can occur in the proposed model, but does not propose to fix it.
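As an illustration with made-up package names, the "uses" directive below tells the framework that anybody importing org.example.client must be wired to the same org.example.http packages that the exporter was compiled against, so two incompatible class spaces can never be mixed:

```
Export-Package: org.example.client;version="1.0.0";uses:="org.example.http"
```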

This might not be a major problem for a JRE, where it is likely that all modules are only a bug fix away from each other. By definition, a JRE depends only on itself. It is not very likely that a JRE will have the problem of multiple versions of the same packages. However, application programmers that use a large number of open source libraries are rarely that lucky.

Supporting multiple versions in one application is one of the core aspects of JAR hell. Enabling this is therefore good. Not guaranteeing class space consistency will only create module hell.

Plugins/Extensions. The largest missing area in the presentation is a plugin/extension model. One of the primary reasons to choose OSGi is its extension model, but the presentation assumes all modules are statically wired. Implicit in the model is that we'll be stuck with class path scanning (OK, module scanning) and class loading hacks to build today's applications.
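For contrast, this is roughly what an extension looks like with OSGi Declarative Services instead of class path scanning: a small XML descriptor in the bundle (all names here are made up) registers the implementation as a service, and interested bundles simply bind to the SpellChecker interface without any scanning or reflective hacks:

```
<component name="example.spellchecker">
  <implementation class="com.example.SpellCheckerImpl"/>
  <service>
    <provide interface="com.example.SpellChecker"/>
  </service>
</component>
```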

Compatibility. Java is clearly backward compatible, and I applaud Sun for the remarkable feat that Java 1 code can still run on a Java 6 VM. However, there is also forward compatibility, and that is largely lacking from the JDK perspective. Most javax packages could easily run on earlier VMs. For example, the javax.script package defines how script engines can make themselves available to application code. There is nothing in this package that would make it impossible to run on a Java 1.2 VM. However, it is only available in Java 6. If you can always run the latest VM (I guess like when you are a Sun employee), this is hardly a problem. However, for the rest of us it does pose a problem to move our code bases to the next version, which usually entails a lag of 2-4 years.

Project Jigsaw in OpenJDK and JSR 277 target Java 7, which is supposed to be out in 2010. How could people on older VMs take advantage of some of the features? Is modularity really only needed on future VMs? Looking at OSGi, it seems unnecessary to focus only on future VMs; it runs as well on Java 7 as on Java 1.2.

Preliminary Conclusion
The presentation unfortunately shows all the signs of too few eyes from too few perspectives. The interests of the VM implementers play a huge role in the presentation. However, the application programmer's perspective has been more or less ignored.

And then there is the most important aspect of all: multiple module systems. Java is in the extremely fortunate situation of having only one modularity standard today, a standard that is well adopted and highly mature. Analyzing the problems as stated in the presentation, I have not seen anything that OSGi could not do better today. Though in almost any other area competition is good, a module system is exactly the technology that allows parties to compete on better implementations without technological friction. A second module system would only reduce the ease with which one can adopt open source libraries or commercial components. The world really does not need more than one module system in Java.

I am hoping that Mark Reinhold and Alex Buckley will bring their requirements to the OSGi Core Platform Expert Group, where we could discuss the problems and have more (and some very experienced) eyes on what is really needed. I am pretty sure we can find a consensus. I actually hope we can do this soon: there is an OSGi Expert Group meeting in Boston in January, and Sun is a member.

I am fairly sure there is only one requirement we can unfortunately not address: "There shall be no OSGi technology in the solution".

Peter Kriens

Monday, December 15, 2008

Project Jigsaw

As many of you know, there is a new phase in the history of Java modularity. A few weeks ago Alex Buckley contacted me for an urgent phone conference. Through the grapevine I had already picked up that a change was in the air; the exit of Stanley Ho (the poor JSR 277 spec lead) had also been a sign on the wall. Unfortunately, the phone conference was canceled just as it was supposed to take place, and I had to live in suspense for another two weeks. Last week, while I was in Stockholm chasing a room at 1 am because none had been booked, I finally spoke with Alex and Mark Reinhold. This week, at Devoxx, I got the chance to talk to Mark and Alex in person, though I was severely handicapped by having lost my voice.

A bit of historic context for the uninitiated.

1998 Nov. The OSGi Alliance has since 1998 worked on the development of a formal standard for modularity in Java based on the Java class loader model. This standard works on all Java 2 VMs. The specifications are currently at release 4.2. The standard is very mature and respected, and its adoption in the last few years has been growing exponentially.

2003 Oct. JSR 232 Mobile Operational Management, which was really OSGi R4 with mobile services, was approved in the J2ME Executive Committee.

2005 Jun. In spring 2005 Sun filed JSR 277 Java Module System. This JSR clearly covered the area that OSGi did so well, and it was therefore not clear to many why this JSR did not start with OSGi. A discussion followed, and the JSR was accepted by the JCP Executive Committee in June 2005, albeit several members remarked in their vote that OSGi should be taken into account. As a highly interested party I tried to get into this JSR, but I was denied because the expert group was full ... Unfortunately, the discussions were also closed to observers.

2005 Oct. JSR 232 (OSGi R4.0.1) published its EDR.

2006 Feb. In response, with the backing of the OSGi Alliance, IBM filed JSR 291 Dynamic Component Support for Java™ SE, led by Glyn Normington. This JSR focused on bringing OSGi into the JCP for Java 2 SE, as had been done earlier for Java 2 ME. Unfortunately, Java had split a number of years earlier into two, more and more incompatible, branches. This JSR was run fully in the open.

2006 Apr. Sun decided to extract the language aspects of modularity from JSR 277 into a separate JSR. This became JSR 294; the intentions were declared in a short (and cryptic) personal blog of Gilad Bracha (the spec lead). At JavaOne in 2006 he explained that the deployment guys could not be trusted with the language. Point taken.

2006 Aug. JSR 232 went final and JSR 291 published its Early Draft Review (EDR).

2006 Oct. JSR 277 publishes its early draft. This draft was unfortunately quite bad and was extensively discussed; read my blog for more information on why it was not very good. Interestingly, Gilad Bracha, the spec lead for JSR 294, leaves Sun. Alex Buckley takes over as spec lead.

2007 Apr. A commendable decision is made to allow observers on the JSR 277 and 294 mailing lists. However, traffic is quite minimal.

2007 Aug. JSR 291 (OSGi 4.1) goes final.

2007 Nov. JSR 294 publishes its first Early Draft Review. This proposal was based on the original superpackages concept and was, ehh, dreadfully complex. I analyzed the proposal in a blog.

2008 Jan. Glyn Normington (JSR 291 Spec lead) files a bug on JSR 277 to ask it to consider interoperability with OSGi. After visible and invisible pressure, Sun accepts this requirement and starts to take a deeper look.

2008 Mar. Alex Buckley posts a message on the 294 mailing list indicating that he has given up on the original superpackages concept, agreeing that it had gotten too complex. He proposed to introduce a new keyword "module". This significantly changed the whole 294 story and was applauded by me in a blog. The module keyword showed a lot of promise.

2008 Apr. Stanley Ho publishes an OSGi interoperability model on the JSR 277 mailing list. Unfortunately, this model heavily relies on the second Early Draft of JSR 277, which has not been made public so far. This makes it very hard to judge.

2008 Apr. JSR 294 is folded back into JSR 277, and Alex and Stanley act together as spec leads.

2008 May. Stanley Ho proposes a model for versioning in JSR 277. This is met with a lot of resistance because it gratuitously differs from the OSGi versioning scheme, as explained in my blog. Other blogs reacted rather violently to this proposal.

2008 Oct. Stanley Ho leaves Sun.

2008 Dec. Mark Reinhold announces project Jigsaw, JSR 277 is put on hold, and JSR 294 is resurrected.

It is kind of interesting to see how other bloggers have taken the news of Project Jigsaw. The bloggers that like OSGi still show a lot of distrust of Sun's motives. And though they might be right, I think Sun genuinely wants to get modularity into Java 7 and has concluded that OSGi will play a role in Java 7, regardless of what they do. Since Alex Buckley took over some of the specification work in 277 and 294, there has been a real conversation going on, and I feel trust was built. Where in the past I often disliked what Sun was doing, I never felt it was evil. Most of it could better be explained by their lack of knowledge of what OSGi really was and an understandable desire to keep things their way.

The difference today is that some of the key people, like Mark Reinhold and Alex Buckley, have now made an effort to talk to us. If things turn out very differently from what I expect today, it will be a personal issue, something it never was before. So I am actually quite positive, now that they have promised to participate more actively in the OSGi Alliance to work on modularity.

That said, I obviously have my rather simple wish list for project Jigsaw of only three prioritized items:
  1. Requirements
  2. Requirements
  3. Requirements

This whole mess of the last three and a half years was caused by a process in the JCP that allows people to get started on solutions before they have negotiated the requirements with the stakeholders. Requirements make everything in development go smoother because they scope the work and make it possible to explore different solutions without falling back on the childish "mine is better." Requirements expose the different perspectives the stakeholders have and allow the negotiation to take place before any investment is made.

If Sun wants to do Project Jigsaw really well, they should take the next few months to create an OSGi Request for Proposal (RFP). This is not a complex document with a long list of common-sense (and therefore useless) requirements numbered in a complex scheme. On the contrary, it is a story that describes how things are done today, establishes the span and scope, and elucidates the problem that needs to be solved.

An RFP establishes a common vocabulary and understanding of the problem domain. Developing such an RFP allows the expert group to understand the perspectives of others before anybody has committed to a solution. This lack of commitment defuses disagreements. Even better, once the project gets into the solution space, the group can explore alternatives that can be judged against the requirements instead of against subjective opinions.

During one of our conversations Mark indicated that he liked simple solutions. It is hard to disagree with that statement. However, simplicity is in the eye of the beholder. What is simple for one can be simplistic for another. Without requirements, it is very hard to decide whether a solution is simple or simplistic.

I do understand that the time frame is short; Java 7 is supposed to be out in early 2010. However, if there is one lesson I learned the hard way, it is this: if it is not worth doing right, it is not worth doing at all. So Mark, Alex, I am more than willing to help do it right this time!

Peter Kriens

P.S. This blog was first posted to the EclipseCon blog due to a combination of a heavy cold and a confusing GUI.

Tuesday, December 2, 2008

OSGi DevCon/EclipseCon Submissions

Last Friday the submission system closed for new submissions, fortunately. I had 53 submissions at closing time, all vying for a very limited number of slots. The fun task that befell me was to make that choice. Well, at least it looks like we will have a very strong program in March 2009!

If you want to see the submissions, go to the OSGi submissions page. Feel free to add comments or send me an offline mail about them. Any feedback is appreciated.

It is good to see the enthusiasm in the market for OSGi. Looking at all the submissions at EclipseCon, it looks like OSGi is hot!

Peter Kriens


Monday, November 17, 2008

Why OSGi Services Rock!

Today, OSGi's primary attraction is its class loading model, sometimes called "class loaders on steroids." The reason for this focus is that the Java class loading model had a completely different design goal than the one for which it is used today. Class loaders, an abstraction of where your code comes from, were added to support applets. Applets were loaded from the net, and the original designers came up with the idea of a class loader. Very clever idea. However, over time applications on Java became bigger, and designers saw how you could (ab)use class loaders to create extension mechanisms. With class loaders, you could load a set of JAR files from a directory and then use Class.forName or ClassLoader.loadClass to create objects implementing a shared interface. This model has become so prevalent that too many designers think it is a normal way to program, and not a hack. Many flock to OSGi in the hope that it solves the many problems this hack is causing. Though OSGi can solve many problems, several of those problems are inherent in the hack of dynamic class loading. Where OSGi provides proper metadata and strong modularity, dynamic class loading often requires visibility of all possible JARs; clearly the opposite of modularity. In traditional Java, these problems also occurred and were addressed with other hacks: context loaders.
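For illustration, here is the hack in its minimal form. To keep the sketch runnable it "discovers" a JDK class, but in a real application the class name would typically come from a configuration file or a directory scan, and nothing verifies up front that the class exists, is visible, or implements the expected interface:

```java
import java.util.List;

public class PluginHack {
    // The "plugin" is located by name at run time; a typo or a missing
    // JAR only shows up here, as a ClassNotFoundException.
    @SuppressWarnings("unchecked")
    static List<String> loadPlugin(String className) throws Exception {
        Class<?> c = Class.forName(className);
        return (List<String>) c.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        List<String> plugin = loadPlugin("java.util.ArrayList");
        plugin.add("hello");
        System.out.println(plugin.size()); // prints 1
    }
}
```

Note that the cast is unchecked: if the named class does not implement the interface, the failure surfaces later as a ClassCastException, exactly the kind of late error described above.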

Though OSGi definitely provides an improvement over plain Java, the class loading model is not the primary productivity booster. Using the OSGi class loading model to alleviate problems with the existing dynamic loading hacks is at most a band-aid. The most interesting aspect of OSGi is its service layer.

The OSGi modularity layer provides you with

Thursday, October 30, 2008

OSGi DevCon 2009

It is that time of the year again. While the rain (at least in the northern hemisphere) rattles on the windows and the fireplace is crackling, one has to think about that sunny place Santa Clara, where EclipseCon and OSGi DevCon will be held next year. Yes, it is again time for submissions.

In the last year the OSGi Alliance has seen an accelerating adoption of its technology. We therefore expect an avalanche of interesting papers, presentations, demos, and tutorials. If you use OSGi technology in an interesting way, if you have done research in an OSGi related area, if you can teach others about OSGi, then get your virtual pen and submit it to the submission system. You have until November 24, but as we all know: Time flies.

I am looking forward to lots of exciting submissions!

Peter Kriens

Monday, October 20, 2008


As an old Smalltalker, I find the addition of closures to the Java language awfully attractive. Not a coding day goes by without a tinge of frustration when I have to do extraneous typing that would have been unnecessary in Smalltalk, where closures are the bread and butter of programming. So you would expect me to be thrilled. However, I have some reservations because of a very non-functional aspect of the two main proposals (BGGA and FCM): class loading. Let me elaborate.

About two years ago I played with Groovy and very much liked the builder pattern. With the builder pattern you can create code that looks like HTML but has access to all of Groovy's extensive features. This was great! A friend and I created some examples from an existing web application and it looked wonderful. Unfortunately, it turned out that there was a huge penalty because every closure was compiled into a class. With thousands of closures, class loading quickly became a performance issue. Just imagine heap spaces of many gigabytes (most of them strings) and perm spaces that grow to hundreds of megabytes or more.

Knowing closures, I am convinced that untold developers will enthusiastically use them and create thousands of closures in an average application. This will add significantly to the number of classes in an application. And though I am the first to admit that class loading is fast (and even faster in OSGi), the cost is absolutely not zero. And this worries me, because when I look at the big vendors in the J2EE application server market, I know that their code bases are already creaking at the seams. You would be impressed if you knew the tricks being played to keep these systems from grinding to a halt.
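The effect is easy to demonstrate with anonymous inner classes, the closest analogue Java has today: every syntactic occurrence becomes a separate class that the VM must load and verify.

```java
public class ClosureCost {
    public static void main(String[] args) {
        // Two textually distinct "closures" over the same interface ...
        Runnable a = new Runnable() { public void run() { } };
        Runnable b = new Runnable() { public void run() { } };

        // ... compile to two distinct generated classes (ClosureCost$1 and
        // ClosureCost$2), each with its own loading and verification cost.
        System.out.println(a.getClass() == b.getClass()); // prints false
        System.out.println(a.getClass().getName());
    }
}
```

A thousand closures written this way means a thousand extra class files on disk and a thousand extra entries in the VM's class tables.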

Until now, we have been more or less saved by computers becoming faster and faster, and disks growing bigger and bigger. However, the news I read today tells me those good days are over. Yes, we will get more and more cores, but each core will become slower. Though you can do some parallelization of class loading, there are many serial bottlenecks that are unavoidable.

The worst thing of all is that I do not fully understand why the proposals could not compile closures to methods. Methods are much cheaper than classes because the overhead of class loading is amortized over all the methods in a class.
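To make the cost concrete, here is a minimal sketch (the names Demo and Block are mine, not from any proposal) showing that every anonymous inner class, the closest pre-closures Java equivalent of a closure, already compiles to its own class file that must be loaded and verified separately:

```java
// Each anonymous class below is compiled to a separate class file
// (Demo$1.class, Demo$2.class, ...), each loaded and verified at
// runtime. Closures compiled the same way multiply this cost.
public class Demo {
    public interface Block {
        void run();
    }

    public static String nameOf(Block b) {
        return b.getClass().getName();
    }

    public static void main(String[] args) {
        Block first = new Block() { public void run() {} };
        Block second = new Block() { public void run() {} };
        // Two syntactic "closures", two distinct generated classes.
        System.out.println(nameOf(first));
        System.out.println(nameOf(second));
    }
}
```

Compiling closures to methods instead would avoid one generated class, and thus one class load, per closure.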

To make one thing clear: OSGi class loading is much more efficient than other class loading models because it uses a network of class loaders instead of a hierarchy. An OSGi class loader knows authoritatively whom to consult for a class. OSGi based systems will therefore have the fewest problems.

I do realize that it is usually not a good idea to let a high level design be influenced by low level details like class loading. However, I am ringing the alarm bell in this case. I see too many problems at the bottom of the system caused by people at the top who seem oblivious of the problems they cause downstream. When you share resources, which is what you do in an application server, you must accept some responsibility for the commons or in the end face reduced performance.

Both JRuby and Groovy now compile closures to methods. Hey! If the dynamic guys can do it, shouldn't we be able to do this in Java as well?

Peter Kriens

Wednesday, October 15, 2008

OSGi BundleFest

We are now three days into the BundleFest and spirits are still high after all the long hours of work put in. The group is pretty much settled in and we all know the routine. We start at 9 am (though the first ones walk in around 8 am). The first thing we do is hold a meeting where we report where we are, what we want to do, and any roadblocks. This is supposed to be a general (short) meeting, but this morning I had to switch off the network, which I could easily do because my Mac provides the routing functionality. However, it cost me some serious points with some of the attendees.

We then work until around 11, when we break for coffee. It is interesting to see how small groups form, people work together, and then continue working highly concentrated. It is a perfect forum to learn; every time I sit next to a person I learn new shortcuts in Eclipse. Though Roman Roelofsen (ProSyst) gave me a hard time because he typed faster than I could read.

Today I worked mostly on bnd; it had a rather nasty bug where it got confused when the Eclipse workspace differed from the bnd workspace. Anyway, with Tim Diekmann's (TIBCO) and Victor Ransmayr's help I figured out the problem, and after that the solution was easy. It is amazing how many easily invalidated assumptions one makes in code.

I almost missed lunch (which is excellent here) because I had to pick up Andreas Kraft (Deutsche Telekom) from the airport while dropping off Sergey Beryozkin (IONA), but the airport is so close that some fries and mussels were left (not many though!).

In the afternoon we had an interesting discussion about the TCK for RFC 119 distributed computing. Should this TCK test for real distribution (i.e. between VMs or even machines), or do we only have to test the assertions that are mentioned in the specification? This was quite a heated discussion that was loudly continued over tea.

I even got some help with the older TCKs. By moving the OSGi build to bnd, we had the problem that all the test cases needed to be redone. OSGi test cases are quite complicated because they have to install and uninstall hundreds of bundles. In our build, we have about 110 projects but we generate over 900 bundles; many of those bundles are embedded. Stoyan Boshev (ProSyst) is actually moving very fast through one of the most complicated test suites we have. I am happy I do not have to move that one, though it means he pushes me hard to get the functionality in the OSGi build right.

Probably the coolest thing about this meeting is that you can shout a question and there are three people with a well-informed opinion and sometimes even factual information. For me it takes a bit of getting used to: most of the time I work alone, and having so many people around is sometimes overwhelming. But it is fun.

The worst thing, for me, is the food. This place serves copious lunches that are of excellent quality but contain way too many calories. Oh well, next week no food I guess :-(

Well, off to the next item ...

Peter Kriens

Monday, October 13, 2008

OSGi BundleFest

This week we are doing a really interesting experiment. We've organized a week of coding around the next release of OSGi. So, this morning (Monday 13/10/2008) we all gathered in the Mercure hotel in La Grande Motte (on the Mediterranean coast). Though the weather wasn't too good (we actually got some rain), the temperature is nice and we can work with the glass doors open. And tomorrow looks very nice.

It is kind of hectic for me because we are using a new build based on OSGi. This of course affects everybody, so I am running around trying to solve problems. It is amazing how many variations you can have in Eclipse! But it is a fantastic sight to see this very experienced group hacking away at bundles. It takes some mental gymnastics to go from a bnd problem, to an RFC 119 property, a new method on Deployment Admin, a network sharing problem, getting the right food for our vegetarian, and a JUnit problem, all within 10 minutes. But man, it is fun to see the enthusiasm and energy.

One of the key goals is to kick start the different Reference Implementations and Compatibility Test suites. So, today we saw lots of discussions and concentrated faces. It looks like this week is going to be highly productive for OSGi; it is a pity that we are missing some really important people!

Peter Kriens

Tuesday, September 30, 2008

OSGi Commons: OSAMI?

Last year I was asked to participate in the OSAMI project, as part of the French equipe (I happen to live in France). So what is OSAMI?

OSAMI stands for Open Source AMbient Intelligence. It is (or will be) a European funded project that will develop a large set of components targeting applications outside the PC and server world. That is, applications that directly affect the environment in which you live. We can all testify that computers play a larger and larger role in our lives, but making those devices really work together and share relevant information is not easy. What is missing are components that can be used in a wide variety of devices to make this possible. The OSAMI project will develop these components in a collaboration between industry, universities, local governments, and the European Union.

The good news is that one of the few decided parts of this project is the choice for OSGi. Looking at the problem domain this project is attacking, this is an obvious choice. The specifications are a standard in the market, adoption is high and growing. They give you device- and OS neutrality through Java. And last but not least, the OSAMI domain is close, if not identical, to the domain the OSGi was created for ten years ago.

Obviously, the problem of assembling larger applications out of smaller components is very hard, especially in a dynamic environment like the ones we live in. However, we have gained a lot of experience since those early days and I think we are ready today. The biggest threat I see is the size of the project and its organization. Each country has a local organization that handles the subsidies. Trying to develop components that work together and share some kind of architectural consistency will be a major challenge. My (admittedly limited) experience with European projects makes me a bit worried that local champions will focus on their own needs instead of developing a commons that benefits us all. One of the key failures in a previous project was quite unexpected: it turned out that the participants never arranged the rights to the material that was created. At the end of the project, it was completely unclear who could do what with the results.

Another challenge will be the standardization of the services. It is quite easy to create an OSGi component that performs some function. However, to generalize and abstract that function in an OSGi service is hard work, I learned.

And not to forget, there are the always significant administrative duties involved in European projects.

So overall, this project is a big challenge, but it is also a tremendous opportunity for the OSGi Alliance. In the next couple of years, a very large number of people will be developing an even larger number of components in several open source projects. This will be a significant boon to OSGi in almost any computing field.

I do hope the OSAMI project(s) will take the effort to align their activities with the OSGi Alliance and work to turn their efforts into OSGi Compendium services. This will require a significant effort from the OSAMI participants, because the only way to get a service into the specification is to work on it in an OSGi Expert Group.

Anyway, I will closely follow the OSAMI project and report on its progress. I will do my best to liaise between the OSAMI organization and the OSGi Alliance so we can make this project a great success.

Peter Kriens

P.S. If you want some more information about OSAMI, read the short overview of the German branch of the OSAMI project.

Wednesday, September 17, 2008

Impressive Press Release

If I am honest, I usually find press releases boring (sorry Alisa). However, this time I must admit that the latest press release of the OSGi Alliance is surprisingly impressive. If you work with something on a daily basis, you often lose track of the progress that is being made. I was therefore pleasantly surprised when I saw the acclamations made about OSGi by all the leading application server vendors. I think one can call it progress when all the leading Java Enterprise Edition application servers have used OSGi as their foundation. In alphabetical order:
  1. IBM Websphere. They started in 2006 and aggressively moved their code onto OSGi. Obviously it helped that IBM has been committed to OSGi since day -1.
  2. Oracle WebLogic. Formerly known as BEA WebLogic. BEA was one of the first companies publicly touting the advantages of OSGi, clearing the road for others.
  3. Paremus Infiniflow. Paremus has pioneered the use of OSGi in highly distributed application servers, allowing the system to scale to unprecedented heights.
  4. ProSyst ModuleFusion. ProSyst is the key provider of OSGi technology in the embedded worlds but originated from the development of a J2EE server. They are now back in this market with an offering based completely on OSGi.
  5. Redhat JBoss. JBoss already worked with a microkernel approach but recognized the advantages of a standard two years ago.
  6. SAP Netweaver. Well, they are not yet based on OSGi, but they clearly see their future based on the OSGi specifications and are becoming more and more active in the OSGi Alliance.
  7. SpringSource Application Platform. The company that simplified Enterprise development with their Spring Framework decided to create their own offering in the application server market completely based on the OSGi specifications.
  8. Sun Glassfish. And last, but definitely not least, Sun decided to use OSGi in the JEE reference implementation Glassfish. They clearly take OSGi extremely seriously nowadays, since they also hired Richard S. Hall. It is so good to see Sun back at the OSGi Alliance.
Though not mentioned in the press release because they are not a member, there is also JOnAS, an open source JEE implementation that was arguably the first JEE implementation completely on OSGi. I guess that only leaves Geronimo struggling with modularity, despite the interesting work that Glyn Normington did in creating an OSGi prototype two years ago.

As said, this list is impressive by any measure. It is a clear indication that the OSGi specifications are mature and robust. Application servers are highly strategic products for companies; no Fortune 500 company bets the house on something that is not highly reliable. Even better, most people know how painful it can be to move non-modular code to the strong modularity that the OSGi Framework enforces at runtime. The fact that the key software firms in our industry have made this move signals that the advantages of strong modularity are more than worth the pain.

What does this mean for application developers? Interestingly enough, several application platforms based on OSGi do not expose the OSGi APIs to application developers. The companies that really embrace OSGi are SpringSource, ProSyst, Paremus, and JOnAS. IBM, Oracle, and Redhat use the advantages themselves but do not (yet?) allow their customers to use them. However, I expect (and hope) that this will change over time. Why? Because for the first time you can now create middleware libraries that can be deployed on all major application servers without having to worry about implementation differences. I expect this possibility to become too attractive to ignore in the next few years, but today some of the major vendors exclude this possibility. We'll see what happens.

It is kind of bizarre that a technology developed for home automation ten years ago now ends up as the state of the art foundation of the servers that run the web. However, there is no time to sit on our laurels. This is a major milestone on the road to building applications from components, the vision I have been chasing all my working life.

Peter Kriens

Monday, September 8, 2008


If you search for OSGi on the net you will find lots of references to class loading problems. Many, if not most, of those problems are related to SPIs, or Service Provider Interfaces, a.k.a. plugins, a.k.a. extensions. A developer provides a collaboration mechanism for other developers with whom he has no direct connection. An SPI usually consists of some interface that needs to be implemented by a service provider. The hard problem is linking that service provider's code into the main code. Because the developers have no clue who will provide extensions, they cannot just new an implementation class. They need some mechanism that provides them the name of the implementation class so they can create an instance with Class.forName and newInstance.

One popular mechanism in the Java VM is the services model. In this model, a JAR can contain a file in META-INF/services whose name is the fully qualified name of the SPI class, usually a factory. When you call the getDefault, or getInstance, method on the SPI class, it reads that file and gets the name of the implementation class. It then tries to load that class. However, this process is not well supported with an API, and it is therefore sometimes hard to find that class, causing developers to resort to class loader hacks. Hacks that tend to work in the anarchistic class path of the VM but do not work in OSGi because of the modularity restrictions. (How can you load the proper class if all you have is a class name and no version information?) In several blogs I complained about doing these hacks to get extensibility, but I never showed how you could do an SPI model in OSGi.
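As a sketch of that mechanism (the SPI name com.example.spi.Codec and the SpiLoader class below are hypothetical, not from any particular library), the classic META-INF/services lookup boils down to reading a class name from a resource and loading it reflectively. Note that a bare class name is all the information there is:

```java
import java.io.*;
import java.net.URL;

// Hypothetical sketch of the classic META-INF/services lookup.
// A provider JAR ships a text file, e.g. META-INF/services/com.example.spi.Codec,
// whose single line names the implementation class.
public class SpiLoader {
    public static Object load(String spiName, ClassLoader cl) throws Exception {
        URL url = cl.getResource("META-INF/services/" + spiName);
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"))) {
            String implName = r.readLine().trim();
            // The only information available is a class name: no version,
            // no originating module -- exactly the weakness noted above.
            return Class.forName(implName, true, cl).newInstance();
        }
    }
}
```

In OSGi this breaks down because the loading class loader may not be able to see the provider's class at all, which is what pushes people toward the class loader hacks mentioned above.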

A very popular way to do SPIs in OSGi is the whiteboard pattern: providers just register a service, and the SPI code simply enumerates all these services. As an example, we create a little portal. This is a servlet that lists a number of HtmlFragment services on the left side of the page; when you click on a fragment's link, its HTML is shown on the right side.

Disclaimer: the following code ignores error checking to focus on the extension mechanism. Please do not code like this in production systems ...

We start with the service definition. In OSGi, a service is usually defined in an interface. You can use a class, but an interface is more flexible. In this case we define the HtmlFragment interface.

package my.service;

import java.io.*;
import javax.servlet.http.*;

public interface HtmlFragment {
    String NAME = "fragment.name";
    String TITLE = "fragment.title";

    void write(HttpServletRequest request, PrintWriter pw) throws Exception;
}

This interface defines service properties for the name and the title of the HTML fragment. The name is the last part of the URL; the title is human readable. The write method is called when the fragment should write its content to the page; the servlet request and a print writer are passed as parameters. With this interface we can write an example fragment. In good style, we create a Hello World fragment.

package my.fragment.hello;

import java.io.*;
import javax.servlet.http.*;
import my.service.*;

public class Component implements HtmlFragment {
    public void write(HttpServletRequest request,
            PrintWriter pw) throws Exception {
        pw.println("Hello World");
    }
}
The bnd code to run this (assuming you use declarative services) is:

Private-Package: my.fragment.hello
Service-Component: my.fragment.hello.Component; \
  provide:=my.service.HtmlFragment

(The component should also register the fragment.name and fragment.title service properties so the portal can find and list it.)

It should be obvious that you could create many different HtmlFragment services, each providing its own content.

The next code we need is the portal code. This can be a servlet. You register a servlet with the Http Service; declarative services makes this quite trivial. So this is all there is to the portal code:

package my.portal;

import org.osgi.framework.*;
import org.osgi.service.component.*;
import org.osgi.service.http.*;

public class Portal {
    BundleContext context;

    protected void activate(ComponentContext context) {
        this.context = context.getBundleContext();
    }

    public void setHttp(HttpService http) throws Exception {
        http.registerServlet("/portal", new PortalServlet(context), null, null);
    }

    public void unsetHttp(HttpService http) {
        http.unregister("/portal");
    }
}

Due to the needed HTML, the Portal Servlet is a bit bigger:

package my.portal;

import java.io.*;
import javax.servlet.http.*;
import org.osgi.framework.*;
import my.service.*;

class PortalServlet extends HttpServlet {
    final BundleContext context;

    PortalServlet(BundleContext context) {
        this.context = context;
    }

    public void doGet(HttpServletRequest request,
            HttpServletResponse rsp) throws IOException {
        PrintWriter pw = rsp.getWriter();
        try {
            String path = request.getPathInfo();
            if (path != null && path.startsWith("/"))
                path = path.substring(1);

            pw.println("<html><body><table><tr><td><ul>");
            ServiceReference found = null;
            ServiceReference all[] = context.getServiceReferences(
                    HtmlFragment.class.getName(),
                    "(&(fragment.name=*)(fragment.title=*))");
            for (ServiceReference ref : all) {
                String name = (String) ref.getProperty(HtmlFragment.NAME);
                String title = (String) ref.getProperty(HtmlFragment.TITLE);
                pw.printf("<li><a href='%s'>%s</a></li>%n", name, title);
                if (name.equals(path))
                    found = ref;
            }
            pw.println("</ul></td><td>");
            if (found != null) {
                HtmlFragment fragment = (HtmlFragment) context.getService(found);
                fragment.write(request, pw);
            }
            pw.println("</td></tr></table></body></html>");
        } catch (Exception e) {
            // no error handling, see the disclaimer
        }
    }
}
That is all! Again, obviously no error checking but it demonstrates rather well how you can use the OSGi Service Registry to implement SPIs.

And did you realize this code was 100% dynamic?

Peter Kriens

Monday, August 25, 2008

Why Services are Important

Many people choose OSGi because it provides them with far superior class loading functions compared to plain Java. Some even say that OSGi is class loaders on steroids. However, viewing OSGi from this perspective is like building a car that looks like a horse carriage with an engine. Taking full advantage of OSGi requires the use of services, yet the service model is much more enigmatic for most people than class loading. A good friend of mine, Niclas Nilsson, who visited me last week, told me a story about how people view programming languages in a very subjective way. If the languages they are familiar with do not have a specific feature, they tend to view this feature as unimportant and consider it (unnecessary) syntactic sugar. For example, few Basic programmers missed objects in the nineties.

I think services are a bit like this. There are millions of very experienced Java programmers that have built impressive libraries and applications using class loaders as extension mechanisms. There is a wealth of knowledge and experience. In contrast, OSGi services implement a concept for which one cannot find a comparable mechanism in plain Java, nor in other languages and environments.

Why are services then so important if so many applications can be built without them? Well, services are the best known way to decouple software components from each other. And, I assume that you are aware of the many advantages that decoupling gives you in almost any aspect of software development.
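To illustrate the idea without dragging in the OSGi API, here is a toy sketch of a service registry (MiniRegistry and its methods are my own invented names, not OSGi's): providers register live instances under an interface, and consumers look up those instances rather than class names.

```java
import java.util.*;

// Toy illustration of the service idea: providers register live objects
// under an interface type; consumers receive provider-created instances
// and never deal in class names or Class.forName.
public class MiniRegistry {
    private final Map<Class<?>, List<Object>> services = new HashMap<>();

    // A provider registers an instance it created itself.
    public <S> void register(Class<S> type, S instance) {
        services.computeIfAbsent(type, k -> new ArrayList<>()).add(instance);
    }

    // A consumer only needs to see the service's interface package.
    public <S> List<S> lookup(Class<S> type) {
        List<Object> found = services.getOrDefault(type, Collections.emptyList());
        List<S> result = new ArrayList<>();
        for (Object o : found)
            result.add(type.cast(o));
        return result;
    }
}
```

The real OSGi service registry adds dynamics, service properties, and filters on top of this, but the decoupling benefit is already visible here: the consumer never names, loads, or instantiates the implementation class.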

One of the most important aspects of services is that they significantly reduce class loading problems, because they work with instances of objects, not with class names: instances that are created by the provider, not the consumer. The reduction in complexity is quite surprising. A simple example is Hibernate: most of the class loading problems in Hibernate are caused by the fact that the configuration is done in XML files. Working with Class objects and instances of those classes makes most problems go away.

What kind of problems do you have with class names? Well, first you have to configure them. This means that if you want to change the implementation, you must not only edit some (hard to read) XML or properties file, you must also make sure that the right JAR files are on your class path. In a service based system, you only install the proper bundle. Look at the 'interesting' world of logging in Java. Most logging subsystems handle their extension needs with class names. This requires having property or XML files in the right place and requires you to have the right logger implementation on your class path. In an OSGi world, the only thing you have to do is install an implementation of the log service and everybody uses that log service. As Richard Hall always says: 'The set of installed bundles is your configuration.'

Not only do services minimize configuration, they also significantly reduce the number of shared packages. In a Class.forName based system you need to be able to see all possible implementation classes. In a service based system, you only need to see the package in which the service is defined. And the best package is one that is not shared.

To be honest, after working for many years with OSGi I really had to get used to the idea that one could also share packages with implementation code; before R4 OSGi was only intended to share service packages. It seemed such a bad idea.

Sharing implementation classes is almost always bad because it makes bundles extremely dependent on each other. The reason we added all those powerful class loading mechanisms was not for them to be used in new designs, their primary purpose was to ease conversion of existing applications.

And last but not least: versioning. Class.forName's most fundamental flaw is that its only parameter is a string for the class name. In complex systems there will be multiple classes with the same name. This confusion is completely absent in a service based system, because service objects are already correctly wired up.

However, I guess there is no escape from Niclas' story. If you have never built a real system with services, it is hard to see the advantages and easy to consider them an extra without much value. But maybe this blog can give you the push to start using services in real designs and find out how surprisingly powerful this very simple concept is. Let me know!

Peter Kriens

Tuesday, August 5, 2008

Classy Solutions to Tricky Proxies

Though the quest for reusable systems is the guiding principle in my life, I have only recently started to understand the reason for some of the mechanisms in Spring. For example, I am starting to see that the whole Aspect Oriented Programming (AOP) business is not really about aspect oriented programming; it is about hammering an existing code base into its intended slot. Unfortunately, some of the slots are square while the code bases are quite round. Proxying is therefore an important technique for developing a system: it makes a round object look square, or vice versa.

The bad news is that OSGi is not making this easier. The OSGi framework was designed with green field applications in mind that would follow proper modularity rules and communicate through services. In a perfect OSGi world, bundles are highly cohesive and are coupled through their services. In the real world, class loaders are (ab)used for configuration and ad-hoc plugin systems. Some of those problems are quite complex.

Today, I was asked about the following problem.

A company is using Spring. One of their bundles referred to a service that another bundle had registered. Spring has decided that services are too jumpy, so it proxies the service object for its clients and adds some mixins in the proxy that handle the situation when a service goes away (Spring also adds some other mixins).

Creating a proxy is class loader magic. You take a set of interfaces (the mixins) and generate the byte codes for a class that implements all those interfaces. The implementation of the interface methods forwards each call to a handler object, passing the proxy object, the method that was called, and the arguments.

If you generate the byte codes for a class, you need to define this class in a class loader, and this is where modularity comes into play. In a traditional Java application you can just take the application class loader and define the new class there; the application class loader usually has full visibility. In a modular system, however, visibility is restricted to what you need to see. In OSGi, the imported packages are all a bundle can see (at least, when you want to work modularly and not resort to dynamic imports or buddy class loading).

The catch is that the proxy requires access to all the mixin classes. However, some of these mixin classes come from one bundle and some come from another. There is likely no class loader that can see both bundle class spaces unless a bundle specifically imports the correct interfaces. However, if a client bundle gets one of its objects proxied, it has no knowledge of the mixins (that is the whole idea). It would be fundamentally wrong for it to import the interface classes while it is oblivious of them. The bundle that generates the proxy might know the mixin classes, but it is unlikely to know the classes of the target object. If the proxy generator were implemented as a service (as it should be), there would even be three bundles involved: the proxy generator, the bundle needing the proxy, and the client bundle.

Spring solved this problem with a special class loader that chained the class loader from the client bundle together with the class loader of the Spring bundle. They made the assumption that the Spring bundle would have the proper imports for the mixin classes it would provide and the client bundle would have knowledge of the proxied object's classes.

So all should work fine? Well obviously not or I would not write this blog.

The generated proxy class can see the Spring bundle with the interface classes, and it can see all the classes that the client bundle can see. Suppose the client bundle imports the service interface from package p, but does not import package q, which is used by package p. If you create a proxy for a class in package p, the proxy will need access to package q, and the client bundle cannot provide it.

For example, assume the following case:

javax.swing.event.DocumentEvent de =
    (DocumentEvent) Proxy.newProxyInstance(
        getClass().getClassLoader(),
        new Class<?>[] { DocumentEvent.class },
        this);

The DocumentEvent is from the javax.swing.event package. However, it uses classes from the javax.swing.text package. Obviously the bnd tool correctly calculates the import for the DocumentEvent class. However, when the generated proxy class is instantiated, it needs access to all the classes that DocumentEvent refers to: the super class, classes used as parameters in method calls, exceptions thrown, etc. These auxiliary classes are crucial for the DocumentEvent class, but they are irrelevant for the client bundle, unless they are actually used in the client bundle, in which case bnd would pick them up.

So, if the client bundle does not import javax.swing.text, you will get a NoClassDefFoundError when you try to instantiate the proxy class. This error is generated when the VM sees the reference to a class in the javax.swing.text package and tries to load it through the proxy class's class loader (that is why this is an Error: it happens deep down in the VM).

To be specific, this is exactly as it should be in a proper modular system. The client should not be required to import classes that it does not need. The onus is on the proxy generating code to do it right.

Fortunately, the solution is not that hard but it highlights how easy it is to make the wrong assumption, believe me, these Spring guys are quite clever.

Just take a step back and look at the problem.

Do we need visibility at the bundle level here? Look at the information given to the proxy generator: a set of mixin classes. It does not really matter which bundles these interface classes come from; the class loader we need must be able to see each of the interface classes' class loaders. By definition, those class loaders can see the right class space.

So instead of chaining the bundle class loaders, the problem can be solved by having a class loader that searches each of the class loaders of the used mixin classes.

For the Advanced

The sketched model is too simplistic because there is the cost of a class loader per generated proxy. Even though enterprise developers seem to live in a wonderful world where memory is abundant, as an old embedded geezer such abuse of class loaders worries me. Though it is possible to use a single combined class loader by continually adding the required class loaders, this raises the question of life cycle. Such an über class loader would pin an awful lot of bundle class loaders in memory, and it would be very hard not to run into class space consistency problems because, over time, this class loader would see all class spaces. That is, this approach brings us back to all the problems of today's application servers.

I have not tried this, but the solution is likely to make the proxy generator service based. Not only will this make it easier to plug in different generators, it also means that the generator is bundle aware. It can then create a combining class loader for each bundle that uses the Proxy Generator service. This class loader will then be associated with the class space of that bundle and should therefore not be polluted with classes from other class spaces. In the Spring case, this would be the client bundle, not the Spring bundle, because the Spring bundle could be used from different class spaces.

However, the proxy generator must track any bundle that it depends on. That is, it must find out which bundles export the interface classes and track their life cycle. If any of these bundles stops, it must discard the client's combining class loader and create a new one when needed. This allows the classes from the stopped bundle to be garbage collected.

Peter Kriens

For the hard core aficionados, some code that demonstrates the problem:

First a base class that acts as activator. This base class calls a createProxy method in the start method. This is then later extended with a proxy generation method that fails and one that works ok.

package aQute.bugs.proxyloaderror;

import java.lang.reflect.*;
import javax.swing.event.*;
import org.osgi.framework.*;

public abstract class ProxyVisibility implements BundleActivator,
        InvocationHandler {

    public void start(BundleContext context) throws Exception {
        try {
            DocumentEvent de = createProxy(this, DocumentEvent.class);
            System.out.println("Successfully created proxy " + de.getClass());
        } catch (Throwable e) {
            System.out.println("Failed to create proxy " + e);
        }
    }

    public void stop(BundleContext context) throws Exception {
    }

    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        return null;
    }

    abstract protected <T> T createProxy(InvocationHandler handler,
            Class<T> primary, Class<?>... remainder);
}

The following class extends the base activator but uses only the client bundle's class loader. Trying to create the proxy will therefore fail because this class loader cannot see the javax.swing.event package.

package aQute.bugs.proxyloaderror;

import java.lang.reflect.*;

public class ProxyLoadError extends ProxyVisibility {

    @SuppressWarnings("unchecked")
    public <T> T createProxy(InvocationHandler h, Class<T> primary,
            Class<?>... others) {

        Class<?> parms[] = new Class<?>[1 + others.length];
        parms[0] = primary;
        System.arraycopy(others, 0, parms, 1, others.length);

        // Uses only this bundle's class loader, which cannot see the
        // packages of the mixin interfaces
        return (T) Proxy.newProxyInstance(getClass().getClassLoader(), parms, h);
    }
}

And at last, the solution. The following class extends the base activator and creates a proxy that uses a Combined Class Loader. This loader traverses all the class loaders of the mixin classes.

package aQute.bugs.proxyloaderror;

import java.lang.reflect.*;
import java.net.*;
import java.util.*;

public class ProxyLoadOk extends ProxyVisibility {

    @SuppressWarnings("unchecked")
    public <T> T createProxy(InvocationHandler h, Class<T> primary,
            Class<?>... others) {
        CombinedLoader loader = new CombinedLoader();

        Class<?> parms[] = new Class<?>[1 + others.length];
        parms[0] = primary;
        System.arraycopy(others, 0, parms, 1, others.length);
        for (Class<?> c : parms)
            loader.addLoader(c);

        return (T) Proxy.newProxyInstance(loader, parms, h);
    }

    static class CombinedLoader extends ClassLoader {
        Set<ClassLoader> loaders = new HashSet<ClassLoader>();

        public void addLoader(ClassLoader loader) {
            loaders.add(loader);
        }

        public void addLoader(Class<?> clazz) {
            addLoader(clazz.getClassLoader());
        }

        public Class<?> findClass(String name) throws ClassNotFoundException {
            for (ClassLoader loader : loaders) {
                try {
                    return loader.loadClass(name);
                } catch (ClassNotFoundException cnfe) {
                    // Try the next loader
                }
            }
            throw new ClassNotFoundException(name);
        }

        public URL getResource(String name) {
            for (ClassLoader loader : loaders) {
                URL url = loader.getResource(name);
                if (url != null)
                    return url;
            }
            return null;
        }
    }
}

Monday, August 4, 2008

Sun Hires Richard Hall

It took some time, but yesterday it became official: Sun has hired Richard Hall into the Glassfish team! This is good news for Richard, who was looking for just such a position; for Sun, which can clearly use the expertise that Richard, one of the foremost OSGi experts, provides; for Apache Felix, because Richard will be able to spend more time on that project; and also for the OSGi Alliance, which will have one of its key expert group members represent Sun. Rarely is a change positive in so many ways.

I look forward to continuing to work with Richard in his new role. I hope that Richard's role means that Glassfish will become one of the key players in the next OSGi release; they clearly have the experience to match. It will be interesting to see what happens in JSR 277, because Richard is currently an independent member. Will he continue to participate (though participate is a big word given the activity in the last year), or will he be able to actively work in that group (and get the long overdue EDR2 out)? That is still a question mark.

I wish Sun and Richard a very happy and fruitful cooperation and hope to be able to work with Richard for a long time.

Peter Kriens

Thursday, July 17, 2008

OSGi Application Grouping

Several people in and outside the OSGi want application grouping. They want to be able to deploy a set of bundles as an "application". These "applications" should have some scoping of class loading and service issues. I am opposed to this but, as Eric Newcomer pointed out to me last week in Dublin, I never really made my case or shown a solution to some of the valid requirements. So, here it is ...

Back to basics. In the beginning, OSGi was about providing functionality to a device. Adding bundles would add more functions and removing bundles would get rid of those functions. Key was that these bundles would collaborate, they would not just be started and running in some silo container; ignorant of their siblings around them. However, these bundles would not have a priori knowledge of each other because the business idea was that different service providers would supply them independently.

The only way you can make this work is if you have a communication model that allows you to very loosely couple the bundles. This was the service model. The service model is very simple: the producer registers some object under an interface and the consumer gets it through the interface. The implementation is safely hidden; the consumer has no clue. Unfortunately, this service model required rules about class loading because those service interfaces involve classes, and you had better make sure that the producer and the consumer both share the same class loader for that interface/class. It was never the original intention to share implementation classes between bundles because bundles were supposed to communicate through services with a minimal interface to promote loose coupling. Sharing implementation classes was more or less an unintended side effect. Prior to R4 we could not even support multiple versions of the same exported package because that need is not very acute in a service oriented world.
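The register/lookup idea can be sketched with a toy registry. This is a conceptual illustration only, not the real OSGi API (which uses BundleContext.registerService and ServiceReference); the names here are made up:

```java
import java.util.*;

// Toy registry illustrating the service model: the producer registers an
// implementation under an interface, the consumer looks it up by that
// interface and never sees the implementation class.
public class ToyRegistry {
    private final Map<Class<?>, Object> services = new HashMap<Class<?>, Object>();

    public <T> void register(Class<T> type, T impl) {
        services.put(type, impl);
    }

    public <T> T get(Class<T> type) {
        return type.cast(services.get(type));
    }

    // Example service interface, shared by producer and consumer
    interface Greeter { String greet(String who); }

    public static void main(String[] args) {
        ToyRegistry registry = new ToyRegistry();

        // Producer side: the implementation class stays hidden
        registry.register(Greeter.class, new Greeter() {
            public String greet(String who) { return "Hello " + who; }
        });

        // Consumer side: only the interface is known
        Greeter g = registry.get(Greeter.class);
        System.out.println(g.greet("OSGi"));
    }
}
```

The real service registry adds dynamics (services come and go) and class space checks, but the decoupling principle is the same.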

Meanwhile, in plain Java, the poor man's extension model became Class.forName. Where the OSGi model offers services to the consumer, in the poor man's model the consumer creates the services through dynamic class loading, albeit with the indirection of some XML or other text file. This, alas, requires implementation visibility into the provider bundle.
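The poor man's model fits in a few lines. In this sketch the implementation class name is hard-coded; normally it would come from an XML or properties file, which is exactly why the consumer needs visibility into the provider's implementation classes:

```java
// Minimal sketch of the Class.forName extension model: the consumer
// instantiates the provider by its implementation class name, so the
// implementation class must be visible to the consumer.
public class ForNameExtension {
    public interface Codec { String encode(String s); }

    // A provider implementation; in this model its NAME, not a registered
    // service, is the contract between provider and consumer.
    public static class ReverseCodec implements Codec {
        public String encode(String s) {
            return new StringBuilder(s).reverse().toString();
        }
    }

    public static void main(String[] args) throws Exception {
        // Normally read from a text/XML file; hard-coded here for brevity
        String implName = "ForNameExtension$ReverseCodec";
        Codec codec = (Codec) Class.forName(implName).newInstance();
        System.out.println(codec.encode("abc")); // prints "cba"
    }
}
```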

The popularity of this Class.forName extension model makes the OSGi class loaders on steroids the most attractive aspect of OSGi for many people. However, having to export implementation classes from a bundle obviously opens huge holes in the modularity story, suddenly you are exporting implementation classes to the world. This makes the coupling a lot less loose.

My personal feeling is that application models are only trying to patch up these modularity holes with an emergency bandage. This is like drinking heavily and then taking aspirin against the headache. Maybe less drinking would be healthier?

This said, there is of course a concept of applicationness, otherwise not so many people would demand it. However, I do feel that the bundle is the right granularity for an application. A good bundle provides one or more functions and expresses dependencies on other bundles through services. That is all; no other primitives are needed (the services imply one or more Java packages). If I want to run an application, I pick a bundle that provides me with the right functionality. Systems like OBR can then be used to find bundles that provide the needed services.

Dynamic dependency resolving is needed to create flexible systems. One of the key problems with an application model is that it fixes the application to a specific set of bundles; making it applicable to only one environment and thereby limiting severely the overall reusability.

One of the key issues is of course testing. Many companies want to be able to send a set of bundles to QA and be sure that that is exactly the set that will be deployed on the target system. I do understand this issue. However, I think that the best solution to this requirement is to run these bundles in a framework, or even a VM, of their own. In certain cases, it is even wise to just wrap up all the bundles into one so you are sure they can not be separated. From a modularity perspective, if these bundles are so closely coupled together, why separate them?

Fortunately, the EGs have come up with a model that seems to fit the need to scope a set of bundles: nested frameworks. They are currently investigating a solution where you can create new frameworks inside an existing framework at relatively little cost. This solution seems to be very unintrusive to the spec because it adds a feature but does not (I hope) influence the existing features, at least not in a major way. Nested frameworks allow a managing framework to create a nested framework for each "application". Services can be shared between frameworks by using the existing notification mechanisms and registering the service in the other frameworks. Even packages could be shared with the nested frameworks.

To conclude. I think the demand for an application model is largely driven by problems that are caused by the Class.forName extension model and the tight coupling that caused developers to share implementation classes. Even after ten years I think that the bundle-service model of OSGi is the cleanest solution to software development that I know. I do hope that we will not dilute this model by patching OSGi to support the lesser implementation class sharing model. Then again, with nested frameworks it seems we can all get what we want!

Peter Kriens

Wednesday, July 2, 2008


We are seeing more and more outlines appearing for the next OSGi release. One of the major issues is legacy code. Not only inside the OSGi; if you go to the web you see a lot of people struggling to get old code to work inside OSGi frameworks. Obviously, we want to mitigate the issues around legacy code as much as possible: the more people that use OSGi the better. However, lately I have had some (personal, this is absolutely not an OSGi standpoint!) musings about how to attack the issue of legacy code.

A short story to illustrate my musings. In the eighties, I worked on a page-makeup terminal for the newspaper industry. Petr van Blokland, a graphic designer turned computer specialist, introduced me to the layout grid. This grid had columns and between the columns there was a small gutter. Text and pictures were placed on this grid, usually encompassing multiple columns and gutters. Like:

The OSGi always reminds me of this grid. Why? Because they both restrict you severely but in return they provide simplicity. Instead of having infinite freedom to do whatever you feel like, you must obey some pretty basic rules, which some people find upsetting. But what you get back is that the elements work together as a whole, instead of fighting with each other.

Layouts done with this grid almost invariably look good with no effort (try working with the average layout manager in Swing or SWT!). The advantage is that elements always line up and there is always the same space between elements. Without a grid, it is very hard to avoid unwanted visual effects.

Genuine OSGi bundles almost invariably collaborate with each other without much effort (anybody saw the combination Eclipse and Spring coming?) because modules are self-contained and can only export packages and communicate via services instead of the myriad of ways people have devised in Java.

Interestingly, both are achieved by restricting one's freedom, the opposite of providing more features. But neither OSGi nor this grid is simplistic. A simplistic grid would be a square 8x8 grid, and those just do not work. A simplistic OSGi would be some Class.forName based system without handling of versions and dependencies. Both OSGi and the grid seem to be in a sweet spot: simple but not simplistic, providing maximum bang for the buck.

However, legacy code seems to be forcing us to add more and more mechanisms to the OSGi specification. Unfortunately, these mechanisms are often also then used for new OSGi applications because the legacy concepts they represent feel familiar to people. See how many people use Require-Bundle and fragments.

If we add all these freedoms to the next generation, will we not pollute the original model and become in the end much less attractive? Or, if we do not make it easier to use legacy code, will people turn away because they feel affronted that their direct needs are not addressed? Should we leave these issues to framework implementations making legacy code not really portable?

The current popularity of OSGi seems to allow the OSGi to make a stand. What do you think?

Peter Kriens

Tuesday, June 17, 2008

JSR 277 and Import-Package

It looks like Sun is not convinced that Import-Package is a good idea. On the JSR 277 mailing list there was an interesting discussion about the dependency model: Does Import-Package have value over Require-Bundle (to use the OSGi terms)?

Require-Bundle has an intuitive appeal that is hard to deny. These are the artifacts on the class path at compile time, and there is a clear advantage to having these same artifacts during deployment. Import-Package is cumbersome to maintain by hand and basically requires tools like bnd. Besides, the whole world seems to use the Require-Bundle model (Maven, Ivy, NetBeans, ...). So, who would ever want to use Import-Package?

Well, have you ever had chewing gum in your hair?

The harder you try to remove it, the more entangled it gets? This is what Require-Bundle gives you. Import-Package is more like Play-Doh. It is still better not to have it in your hair, but you can ask any kindergarten teacher what she prefers!

So, now we have the right mental image, what is the technical reason for this stickiness? I see two reasons:
  • Require-Bundle gives you much more than you actually use because your class files just do not refer to it, a key advantage of a statically typed language. Now, before you jump to the conclusion that you do not care because it is free, realize that there is rent to pay for these unused parts. These unused parts have their own dependencies that need to be satisfied, so they have just become your dependencies. Visualize the chewing gum in that strand of hair? Dependencies are annoying, and unnecessary dependencies are worse, but these kinds of tied dependencies can be a death knell if they turn out not to be compatible with yours when your code is combined with other bundles to create a system.

  • You depend on the whims of the bundle's author. He usually has no clue which parts of his bundle are used and is already busy working on the next version. He will make decisions based on his needs, not yours. It is therefore likely that the constituents of the bundle will change over time. How can you be sure that this migration happens in a way that is compatible with your bundles? Not only can the next version bring quite unnecessary and unexpected new dependencies, it might actually remove the very packages you depend on. Unfortunately, your bundle still resolves because the bundle you require is present, but its changes were just not compatible with your expectations, so you get a Class Not Found Exception in the middle of shutting down this nuclear reactor.
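The difference between the two models is visible directly in the manifest. Here are two hypothetical fragments expressing the same dependency both ways (the bundle and package names are made up for illustration):

```text
Require-Bundle: org.example.derby;bundle-version="[10.4,11)"

Import-Package: org.example.io.vfs;version="[1.0,2)"
```

The first drags in the whole artifact, whatever it happens to contain now or in future versions; the second names only the packages your class files actually use, and any bundle that exports them can satisfy the dependency.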

The underlying reason for this problem is lack of cohesion. Cohesion is the measure of how much the code in a JAR is related. Utility packages or bundles often have quite low cohesion. For example, the Apache Derby SQL database contains a very interesting file system abstraction that is highly usable without Derby. That is, the cohesion between the core SQL database code and this abstraction is very low: core Derby uses it, but it is not pertinent to being a SQL database. This is common practice because we programmers really dislike having lots of puny little artifacts.

It is a fact of life that the cohesion in most JARs is quite low while the cohesion in packages is very high. As an example, the Apache Derby file abstraction is nicely packaged in a specification package and an implementation package. Why not extract it one day? Well, I would be fine with this improvement and it would not affect any of my bundles at all: Import-Package does not care which bundle provides the package. If I had not warned them so often, I would feel sorry for all my colleagues who just had their bundles broken by a lone programmer somewhere in the world ...

Two very practical use cases illustrate these problems. Eclipse, which relies heavily on Require-Bundle, needed to support SWT (the graphics library) on the embedded platform and on the SE platform. The embedded platform needed to be a subset of the SE platform. Unfortunately, all users of SWT had been using Require-Bundle. This made the simple solution, refactoring the SWT bundle into two bundles, impossible because it would break each and every Eclipse plugin. Bundles that used Import-Package would have been oblivious to this change.

The other use case is described in a recent blog: Catch-22 Logging with OSGI Frameworks. It is a rather long story, but it boils down to the author being unable to combine two libraries due to unnecessary constraints on logging caused by Require-Bundle. If you have the masochistic desire for the same sensation as having chewing gum in your hair, I recommend reading this blog.

Peter Kriens

Thursday, June 5, 2008

Community Event

You did not forget to register yet, did you?

See you next week in Berlin!

Peter Kriens

P.S. As an extra bonus we will host an SE Radio interview! I shouldn't miss it if I were you!

Friday, May 30, 2008

Is 9903520300447984150353281023 Too Small?

JSR 277's Stanley Ho published a rationale for why (in the so far unpublished EDR2) JAva Modules (JAMs) invent a brand new version scheme. A rationale that needs a lot of text. I could go into painstaking detail, but I think the rationale derails in one of the first paragraphs, where the requirements are (implicitly) described. I highlighted the part that describes in what kind of situations you use the major, minor, micro, update, and qualifier version parts:
  • Major version number should be incremented for making changes that are not backward-compatible. The minor and the micro numbers should then be reset to zero and the update number omitted.
  • Minor version number should be incremented for making medium or minor changes where the software remains largely backward-compatible (although minor incompatibilities might be possible); the micro number should then be reset to zero and the update number omitted.
  • Micro version number should be incremented for changing implementation details where the software remains largely backward compatible (although minor incompatibilities might be possible); the update number should then be omitted.
  • Update version number should be incremented for adding bug fixes or performance improvements in a highly compatible fashion.
  • Qualifier should be changed when the build number or milestone is changed.

Can you spot the difference between minor and micro? I can't, and that is the reason that OSGi proposes the following convention: incompatible (major), backward compatible (minor), no API change/bugfix (micro), and builds or variations (qualifier). That is, our micro is Sun's update because Sun's minor and micro appear to have an identical purpose.
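The OSGi convention orders versions numerically on major, minor, and micro, and lexicographically on the qualifier. The following is a simplified sketch of that comparison (org.osgi.framework.Version implements the real rules; this sketch skips the spec's full parsing and validation):

```java
// Simplified OSGi-style version: major.minor.micro.qualifier, compared
// numerically on the three numbers and lexicographically on the qualifier.
public class SimpleVersion implements Comparable<SimpleVersion> {
    final int major, minor, micro;
    final String qualifier;

    public SimpleVersion(String v) {
        String[] parts = v.split("\\.", 4);
        major = Integer.parseInt(parts[0]);
        minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        micro = parts.length > 2 ? Integer.parseInt(parts[2]) : 0;
        qualifier = parts.length > 3 ? parts[3] : "";
    }

    public int compareTo(SimpleVersion o) {
        if (major != o.major) return major - o.major;   // incompatible change
        if (minor != o.minor) return minor - o.minor;   // backward compatible change
        if (micro != o.micro) return micro - o.micro;   // bug fix, no API change
        return qualifier.compareTo(o.qualifier);        // build or variation
    }

    public static void main(String[] args) {
        // Numeric, not lexical: 1.2.10 is newer than 1.2.3
        System.out.println(
            new SimpleVersion("1.2.3").compareTo(new SimpleVersion("1.2.10")) < 0);
    }
}
```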

And have you ever thought about the concept of largely backward compatible? Isn't that something like being a little bit pregnant?

There are always reasons to improve on existing schemes, but any improvement should be balanced against the interests of the existing and future audiences. I am not claiming that the OSGi version scheme is the mother of all version schemes; we are as fallible as anyone. Maybe we were too rationalistic: we looked at a lot of schemes and saw how people were looking for more room at the low end of the version scheme and hardly ever incremented the first numbers. If you standardize, there is always a tension between allowing as much freedom as possible and minimizing the complexity of the implementations that depend on the variations you offer. We chose simplicity.

Is the OSGi scheme usable? Well, we have no outstanding requirements or bugs in this area, nor were any proposed in JSR 291, while at the same time the scheme is heavily used. Jason van Zyl of Maven told me they were thinking of adopting the scheme as well. SpringSource converted almost 400 open source projects to bundles and did not complain. There seems to be a lot of practical usage out there.

Wouldn't it be a lot more productive for all of us if Sun would just adopt the OSGi scheme? There is lots of work to do on the module API; why not reuse existing specifications where you can? And if the OSGi scheme has burning issues, why not report them as a bug or change request? After all, Sun is a distinguished OSGi member.

With an infinite number of builds or variations, 9903520300447984150353281023 possible bug fixes (80.000 fixes per microsecond until the Sun implodes), 4611686014132420609 backward compatible changes, and 2147483647 incompatible releases, the OSGi spec seems to have enough room to breathe for mere mortals.

Peter Kriens

P.S. You did register for the OSGi Community Event in Berlin, June 10-11? Please do so, we have a very strong program prepared with lots of cool demos. Hope to see you there!