Saturday, December 12, 2009


Looking at the raging debate about versions again I can't help feeling that there are many people out there who do not understand what OSGi versions are, or what versions are intended to achieve in general. It is not the syntax that is important; it is the standardization of the semantics.

Versions are a language designed to let two parties communicate over the barrier of time. It is like a Domain Specific Language between artifacts that evolve over time. By having fixed version syntax and semantics, an importer can express how it feels about future changes in the exporter. These semantics tell the exporter what version to use when it evolves. The version indicates to tools how different importers and exporters can be combined in a system.

The simplest solution to versioning is to use a fixed identity. A imports B version 1. If B changes, it becomes version 2 and A must be recompiled, but this changes A so it must also increment its version, ad nauseam. Such systems only work when all the software is in a single build integrated with the deployment. Every deployment is then based upon a full new build; you're basically always at the latest version. However, when you use third-party packages or you sell your software, such latest-version systems tend to be unworkable because minute changes ripple through the whole system.

We need some oil to ease the friction: meet version ranges. A version range allows A to import B from version 1 up to, but not including, version 2. This way we can allow B to increase its version to 1.1, 1.2, 1.3, etc. without requiring A to be recompiled, stopping any rippling effects dead in their tracks. By specifying an import range of [1,2), A relies on B to properly version future releases. If a change is backward compatible, B can increase the minor part of the version (the second number) but does not have to increase the major number (the first number). If a change is made that would break existing code, then B increases the major number and resets the minor number. Such a breaking change would get version 2.0.
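The range semantics can be sketched in plain Java. This is a simplified illustration of my own, not the framework's implementation; real frameworks use org.osgi.framework.Version, which also handles the micro part and the qualifier:

```java
// Simplified illustration of OSGi version-range matching.
// Only major.minor is modeled; the floor is inclusive and the
// ceiling is exclusive, as in a range like [1,2).
public class RangeCheck {
    // Does version major.minor fall within [lowMajor.lowMinor, highMajor.0)?
    static boolean inRange(int major, int minor,
                           int lowMajor, int lowMinor, int highMajor) {
        boolean aboveFloor = major > lowMajor
                || (major == lowMajor && minor >= lowMinor);
        boolean belowCeiling = major < highMajor; // ceiling is exclusive
        return aboveFloor && belowCeiling;
    }

    public static void main(String[] args) {
        // A imports B with range [1,2): 1.1 and 1.3 match, 2.0 does not.
        System.out.println(inRange(1, 1, 1, 0, 2)); // true
        System.out.println(inRange(1, 3, 1, 0, 2)); // true
        System.out.println(inRange(2, 0, 1, 0, 2)); // false
    }
}
```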

We skimped a bit on backward compatibility; it is not a well-defined concept and is partly in the eye of the beholder. A new method on an interface is backward compatible for code using that interface but breaks an implementer of that interface. How is this difference handled? Well, we can put the semantics on the minor part of the version. Implementers use a range over a single minor version instead of a whole major version. So if A implemented interfaces from B it would not use [1,2) but [1.0,1.1); if it only used the interfaces in B it would use [1,2).

Backward compatible does not mean forward compatible: if A was compiled against 1.4, all bets are off when it ends up bound to an earlier version such as 1.3. It is therefore important that A requires the base version it was compiled against as a minimum. That is, [1.4,2).

However, we make many changes that we want to deploy without reflecting them in the dependency. If we force all dependencies to be the latest version, we end up in the same situation we had with a single number: any change in version ripples through the whole system again. That is why the OSGi version has a micro number, the last part. It indicates that you made a change but that this change does not affect any clients; it is a small bug fix or a minute functional enhancement. Obviously deployments should have the latest fixes installed, but this is not enforced, to keep things manageable in the field. So even though A might be recompiled against 1.4.2, it would publish a version range that does not include the micro part: [1.4,2).
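To make these rules concrete, here is a hypothetical sketch of the resulting manifest headers. The package name is invented and the parenthesized notes are explanatory, not manifest syntax:

```
Import-Package: com.acme.api;version="[1.4,2)"    (API client, compiled against 1.4.2)
Import-Package: com.acme.api;version="[1.4,1.5)"  (implementer of the API's interfaces)
```

The client tolerates any backward compatible release; the implementer locks itself to a single minor version because even a compatible addition would break it.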

Last, and in this case also least, there is often a need to know which of two artifacts is the newest, or has a certain state. This is left to the qualifier.

From the previous description it is clear that a simple version syntax can have surprisingly rich semantics. Having these semantics specified and agreed upon allows tools to automate parts of the process of maintaining the versions of the artifacts. This is a good thing because users are quite bad at maintaining versions. For example, the bnd tool can automatically set the import range depending on the export range and takes implements versus uses into account, and I am working on automatically calculating export version changes based on a previous release. Eclipse PDE also has extensive versioning support, and I expect bundlor to do clever things as well. And last, but not least, it looks like Sonatype will also standardize on OSGi versions. Without fixed syntax and semantics, all these tools would have to use proprietary mechanisms.

So yes, it can be fun to use π for version 3.14, but it kills any hope of tools that understand the semantics and take the error-prone chore of maintaining versions out of our hands.

Peter Kriens

Tuesday, November 24, 2009

Nice in Nice

Last week I replaced Susan Schwarze (ProSyst) as a panel member at Net at Home. Due to all the Enterprise related work in the OSGi Alliance, there has been very little time to follow what are undeniably the roots of the OSGi.

When the OSGi Alliance got started, now more than 10 years ago, it had 100% focus on the residential gateway. In those days I visited lots of conferences, trying to sell OSGi to people making computers with 8MB flash and 16MB RAM. In those days, anybody presenting something with OSGi in it was reason for wild excitement. The situation has changed dramatically, though the primary success came from a very different industry.

So I drove the 300 kms to Nice without a lot of expectations. Due to traffic, I arrived a bit late, but when I entered the room I noticed OSGi on the first slide I saw. And then the second slide, and the second presentation, and the third presentation. Almost all projects that were presented at the conference were based on OSGi! And the most interesting aspect was that nobody was making it sound like a big deal; OSGi seems entrenched in this industry. During my 4-5 years of absence, the processors used in the embedded systems have grown large enough to support Java and thus OSGi. Adopting OSGi for residential projects has become a no-brainer; there are just too many projects to mention here.

It seems that also in this market OSGi has passed a threshold. However, I did notice that very few projects had members that also participate in the OSGi Alliance. Many projects have similarities, and it seems inevitable that similar bundles are invented over and over again. The reuse that is so successful in the enterprise world is not that prevalent in the embedded world. I guess the main reason is that the diversity of problems is smaller for enterprise than for embedded. It is amazing how complex the world becomes when you cannot remain in cyberspace. However, a model like OSGi can make sharing a lot simpler.

Is it therefore not time for the people who do projects in the residential area to collaborate more inside the OSGi Alliance? There seem to be lots of opportunities. Maybe it is time to organize a workshop about how OSGi could play a role in bringing these diverse projects together on the OSGi technology?

Peter Kriens

Thursday, October 29, 2009


Every time a new industry joins OSGi, a lot of new people enter the specification work. These people, inevitably, bring their own culture. Mixing cultures is not always without its problems. Having lived and worked in Holland, Sweden, and France, I learned the hard way that moving to another culture can be a tricky thing; it does take some time before you realize that your absolute truth is not really shared by your new countrymen.

So the Enterprise experts bring their prevailing culture. With respect to standards, this culture is heavily influenced by the JCP, specifically JEE, and the way things work in the open source world. This is not always aligned with the way OSGi has been run so far. Previous OSGi specs have always been rather thoroughly reviewed and picked apart. A running joke in the Alliance is that we start with a telegraph pole but we end up with a toothpick. But what a toothpick!

One of the key things I always liked about the final OSGi specs is that they are complete. Virtually all service specifications and the framework specification thoroughly specify edge cases, are fully introspective, and provide relevant events to track what is happening inside the service. In JEE, vendors have much more freedom; for example, the deployment aspects are left as an implementation detail. This completeness of OSGi specs is demonstrated by the fact that (so far) all our specs can be tested from a single test framework while only requiring vendors to have their to-be-tested bundles installed. It is possible to do the test setup in the test case because the specifications are sufficiently complete. For example, we can deploy a bundle in a framework without having to know the vendor of that framework.

However, some of the EEG members complain we're too specific and do not allow enough space for implementations to do it their way. By giving more leeway to implementations, you allow more innovation and vendors can more easily fit existing products under a specification. These are not invalid arguments.

So I am struggling a bit with this issue. One of my primary roles in the OSGi Alliance is to guard the consistency between the specifications. This sometimes makes me feel that I am fighting the whole group to guard this consistency, despite the fact that I am really trying to walk the fine line between being conservative and enabling new groups to do it their way. However, the hardest part is that I continuously have to challenge my own beliefs to try to see their point of view. Well, that is what mixing cultures does to you ...

Peter Kriens

Monday, October 19, 2009

OSGi on the Road

A few weeks ago I got a very nice thank you letter from Eric Gaignet, an employee of a regional bus company in the South-West of France called RDTL Voyages. A few years ago he asked me a few questions over email and I had replied to him with some architectural advice on how to use OSGi in a vehicle. I remember that at the time I thought this was a wonderful application for OSGi. He reported now that this project had gone very smoothly.

RDTL is a small regional bus carrier (150 buses) with low overhead. A few years ago they realized that new regulations, the impact on the environment, the complexity of the many isolated black box solutions in the bus, and new business opportunities were combining with an aging IT infrastructure in the bus. This created an interesting opportunity to design a new infrastructure virtually from the ground up, taking advantage of modern technology.

The number of IT solutions in a passenger bus is surprisingly large. Today, a bus interacts with the road systems, provides up-to-date information to the electronic displays, handles the ticketing, and reports on the state of the bus. Traditionally, different vendors provide their own isolated solutions.

Not so for RDTL. They started a collaboration with GeenSys, a French IT company specialized in embedded systems, to create the e-nove architecture. The brief was to create a system where an OSGi gateway in the bus communicates with an OSGi server in the back office. The gateway connects all the equipment in the bus, provides local information to the driver, and aggregates the information for the back-end. Instead of having a proprietary ticketing system, the OSGi gateway connects to a printer, reader, and display, all controlled by OSGi bundles. Over the past two years this architecture was developed and implemented, a remarkably short time. Today, there are more than 30 buses equipped with OSGi gateways.

After I queried Eric for some more information about how things had been going, he replied with the following quote:
The best positive experience we got is that we immediately needed an "small application" this summer. I asked a local software company to design this software. They spent 2 days training with Geensys engineers. Thanks to e-nove, they spent only 2 days to design this application. This is great because this was one of the key goals of the project. We can react, adapt and deploy bundles in a very short time.
The e-nove architecture is a very nice example of what you can do with OSGi. It has many of the use cases that drove the development of OSGi in the early years, and it is wonderful that it has not only been developed but has now also been proven to provide the expected advantages. Thanks Eric, for this update!

Peter Kriens

Friday, October 9, 2009

JavaBlend in Ljubljana and Belgrade

About 6 months ago Aleš Justin (JBoss/Redhat) asked me if I could come to Ljubljana for the JavaBlend conference. I told Aleš to talk to OSGi marketing, and after some mail exchanges we agreed that if they paid the trip I would come. After this was agreed, I was informed it also included a presentation in Belgrade. Well, from a time perspective this wasn't too much of a difference, so I agreed, under the condition that I could fly back from Belgrade on Friday, and then forgot about it.

Normally a conference trip means flying in the night before, giving your presentation, talking to a few people and then trying to get back home as quickly as possible. However, this trip turned out to be quite different; it felt more like a 3 day adventure than work.

The Ljubljana JavaBlend day was very interesting, with very good speakers. Everything helped: the weather was good, the location was excellent (a very nice castle overlooking Ljubljana), and there was an intriguing number of very young fashion models.

Even my presentation went well. I was fortunate that the speaker before me, Juha Lindfors (OpenRemote), spoke about an open source project doing home automation. Exactly the area that got OSGi started! Though they had taken Tomcat as their platform, I obviously worked on him to move to OSGi. I could honestly say OSGi was made for exactly their purpose.

I learned about Yugoslavia in school and I followed the news about Tito's death in 1980, the breakup, and the Balkan wars. However, the whole geographic situation was a tad fuzzy to me. On this first visit to former Yugoslavia, the republic of Slovenia definitely charmed me, especially because Aleš had taken me on a small sight-seeing tour. However, what I did not know was that Belgrade is more than 500 km away from Ljubljana. We would do the trip by bus. I had visions of a small executive bus with leather fauteuils and a lovely stewardess serving us coffee and snacks. Not too bad, I thought.

The next day we would leave at 2pm, so that at least gave me some time to get some work done in the morning, which I decided to do in the bar, enjoying a latte. However, this plan was disrupted by Dalibor Topic (Sun), but he more than made up for that by providing a worthwhile discussion about JSR 294, the meaning of life, OSGi, and Sun. It is always nice to talk to Dalibor, whom I seem to meet at more conferences than pure chance would predict.

At 2pm we gathered in the lobby and took a taxi to Parsek (the conference sponsor), where the bus would leave. There my visions of spacious leather seats were rather rudely shattered. Even Michael O'Leary would have blushed at the available leg room. However, the lack of space turned out not to matter much because the trip went amazingly fast. In the bus I was sitting next to Manik Surtani (Redhat) and we had a long and intense discussion about lots of very interesting subjects. Manik was interested in OSGi-fying his Infinispan (a data grid) project, so I might have made another OSGi convert. Hey, I am the evangelist!

Though Tomaž Cerar had forgotten to bring the promised bottle of whiskey, the hosts kept us well entertained through the evening driving through Croatia. We were told scary stories about the Serbians specifically and Balkan politics in general, creating a nice suspense. The eerie mood was amplified by farmers burning their land all along the highway, supervised by a slowly rising large blood-red moon. The crossing of the Croatian-Serbian border was therefore almost a disappointment when it took only 10 minutes with civilized officers instead of the 2.5 hours and brutal interrogations we were promised. Though the very modern (well, in 1971) 5-star (Serbian stars, that is) hotel turned a bit scary when they insisted on taking my passport and then pushed it through a hole in the wall. Fortunately, I got it back half an hour later, though it took some angry looks.

The next morning we had to travel through the middle of Belgrade to the conference hotel. I still remembered the scary stories of the night before, describing Serbian drivers and Belgrade traffic. Especially the queues over the bridges were said to be notorious. Now, it was kind of important to me that we started on time because the plan was that I would be the first speaker, then hop in a taxi and hopefully catch my flight back from Tesla Airport. After all the horror stories, leaving at 7.30 sounded, well, not unimportant. However, the relaxed nature that seems to be part of all ex-Yugoslavians made us leave well after 8. Traffic was not too bad, but I am pretty sure we blatantly violated the rules of the bridges of Königsberg by crossing the same bridge multiple times. And I am sure it was the same Samsung building I saw after about 20 minutes. Thinking about my taxi ride, it was also a bit disconcerting that we seemed to move farther and farther away from the airport. We did arrive before nine, well, just. Any hopes of also starting at nine were shattered by two introductory speakers who took their time, and more. Then, halfway through my presentation, I was told to stop because the taxi was waiting. I felt I needed at least another 10 minutes to finish and take some questions, only to discover the taxi had disappeared when I came down! Some nerve-wracking 15 minutes later, after several organizers had talked into their phones (still relaxed!), an equally relaxed taxi driver showed up. And actually they were right: the drive to the airport was a non-event, though I swear I saw cars on those roads that I had last seen when I was 5.

Thanks Aleš for inviting me, and thanks to all the others who made this a really wonderful trip!

Peter Kriens

Friday, September 25, 2009

Enterprise Expert Group

I am currently sitting in the Kempinski hotel at Munich airport waiting for my flight home. Very tired but even more satisfied. In the last few weeks we have been struggling in the Enterprise Expert Group with core issues. The last meeting, 8 weeks ago in Dublin, was actually quite heated and a bit depressing, but this meeting turned out highly productive!

The key issue was the focus of the work we're doing on the JEE APIs. Is the focus to make JEE applications run unchanged on OSGi, or do we provide a recommended way of using those APIs on OSGi? The problem is that JEE has many built-in assumptions that do not hold in an OSGi world. JEE containers have an application-thread association, a much simpler class loading model with most dependencies contained in the application, and no visible life cycle. Especially the creation of providers through static factories is very cumbersome in OSGi.

A good example that highlights these problems is the static nature of JNDI. An application performs a new InitialContext() call and expects that a proper context is returned. The environment can influence the returned context via a singleton that can only be set once.

The proposal on the table is that an OSGi bundle sets this singleton and then, when called, would get a Context service from the service registry. The real proposal is a bit more complicated, but this is the principle. At first sight, this sounds like a viable solution. It allows bundles to deliver new Context providers, and it is compatible with normal JSE/JEE code that uses new InitialContext().

Unfortunately, this static factory pattern ignores the dynamic life cycle of bundles. It raises the following concerns:
  • Ordering - When the application does new InitialContext(), is the singleton already set? Is the desired provider already registered?
  • Ownership - On whose behalf is the Context provider service checked out? It is notoriously hard to determine the caller's bundle context from the stack, if it is possible at all in a reliable way.
  • Staleness - What happens when the Context provider is stopped? The InitialContext then has a reference to a stale object.
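The set-once singleton at the heart of the ordering and ownership concerns can be sketched in plain Java. All names here are invented for illustration; this is not the JNDI API itself:

```java
import java.util.function.Supplier;

// Minimal sketch of the set-once static factory pattern that makes
// APIs like new InitialContext() awkward in OSGi.
public class StaticFactory {
    private static Supplier<Object> provider; // the global singleton

    // The provider may only be set once; a second bundle that tries
    // to set it is rejected -- the ownership problem.
    static synchronized void setProvider(Supplier<Object> p) {
        if (provider != null)
            throw new IllegalStateException("provider already set");
        provider = p;
    }

    // What the static factory call effectively does: consult the
    // globally installed provider, whoever the caller happens to be.
    // If the providing bundle has not started yet, the caller fails
    // -- the ordering problem.
    static synchronized Object create() {
        if (provider == null)
            throw new IllegalStateException("no provider installed yet");
        return provider.get();
    }

    public static void main(String[] args) {
        setProvider(() -> "a context");
        System.out.println(create());
    }
}
```

Nothing in this pattern knows when the provider appears, on whose behalf it is used, or when it goes away, which is exactly what the service registry is designed to track.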
These concerns are solvable if one lowers the bar that we have held so far. You can order bundles with start-level ordering, and in most cases a stale reference will not crash the system.

The problem we faced in the last weeks was that many, coming from the JEE world, felt that the portability aspect was more important than the issues raised by this solution. Others felt that though lowering the bar in a product might be acceptable, an OSGi specification should show the right way to do it in OSGi. They felt it was acceptable that a JSE/JEE application always needs some porting to make it work on OSGi. There seems to be no free lunch for modularity.

During this meeting it became clear that we had reached a consensus. The design was changed to be service based. In principle, a bundle sets the singleton and then registers a service, allowing bundles to properly synchronize with this singleton. It is then up to containers to implement the correct life cycle handling for OSGi-challenged applications. Additionally, the spec will recommend not to use the JEE/JSE static factories but will contain a section that outlines the consequences when they are used, giving a proper warning to the deployer.

It was kind of invigorating to see how we have been able to come to a consensus even though the debate was sometimes heated and messy. If we can continue in this spirit, then I think we will see some very interesting specifications coming out of this group in the coming years.

Peter Kriens

Friday, August 28, 2009

About Modularity

Despite common belief, splitting up an application into a number of bundles does not automatically introduce modularity. The almost magic benefits of modularity are caused by the particular decomposition. Many decompositions actually increase complexity; only the right decompositions reduce complexity. This was perfectly demonstrated by David Parnas, the father of modularity, in his seminal 1972 paper On the Criteria To Be Used in Decomposing Systems into Modules (PDF). Parnas takes a simple problem, a text indexer, and shows two different decompositions. He then adds the requirement that the source text files can be very large, no longer fitting in memory. The first decomposition required changing each module; the second decomposition had only one affected module.

Parnas' paper clearly shows why creating the correct decomposition is so hard: it is about predicting the future. With this disclaimer, there are some general rules that apply. These rules are in essence the same as we use daily in writing our object oriented code, the most important being information hiding. Information hiding works because if you do not know something, you cannot make false assumptions about it, assumptions that could break your code when violated at runtime.

With modularity, we have the fantastic tool that the rules inside the module are different from the rules outside the module. As a module author we can control what we expose from the inside and what is hidden from the outside. We can make sure that outsiders cannot make assumptions about our internal workings because they do not have access to them. By minimizing our dependencies on other modules we can reduce our own assumptions of the outside world. The less we assume, the more resilient our module will be to changes in the outside world.
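In OSGi this control is expressed through the bundle manifest. A sketch follows; the package names are invented, and with a tool like bnd such headers can be generated rather than written by hand:

```
Export-Package: com.acme.engine.api;version="1.0"
Import-Package: com.acme.logging;version="[1.2,2)"
```

Packages that do not appear in Export-Package stay inside the bundle, so outsiders simply cannot link against them, and every assumption about the outside world is spelled out in Import-Package.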

In OSGi we went quite far with this model. In the initial model, you could basically only export API, not implementation classes. Each bundle could only talk to other bundles through services. A bundle is not even allowed to make assumptions about its own or other modules' life cycle. Though for many this sounds extreme, it was put in place to minimize assumptions that are so easily broken.

But there is more to modularity than just robustness and resilience. The proper decomposition can also simplify by requiring much less code. In 1984 I was the designer of an object oriented window system used in a page-makeup terminal; this was my largest object oriented design so far. The design process was accompanied by an uneasy feeling that I was not designing, only postponing. Every abstraction seemed to just postpone the really hard problems. However, one day I realized that I was done without ever having felt that I solved those really hard problems; they seemed to have disappeared. In the small, this effect is visible with recursive algorithms. With the right decomposition you need very little code; with the wrong decomposition you wrestle with the special cases of starting and ending. The right decomposition somehow removes the need for much code.
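A tiny illustration of that effect, my own example rather than anything from the original design: with the right recursive decomposition, a single base case absorbs what would otherwise be start- and end-of-input special cases.

```java
// Depth of an arbitrarily nested array structure. The leaf base
// case handles empty input, deepest elements, and the top level
// uniformly -- no separate start or end handling is needed.
public class Flatten {
    static int depth(Object o) {
        if (!(o instanceof Object[])) return 0; // leaf: the only base case
        int max = 0;
        for (Object child : (Object[]) o)
            max = Math.max(max, depth(child));
        return 1 + max;
    }

    public static void main(String[] args) {
        Object nested = new Object[] { "a", new Object[] { new Object[] { "b" } } };
        System.out.println(depth(nested)); // prints 3
    }
}
```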

I do not have a good understanding of where the complexity goes when you have a proper modular structure, but I have seen it too many times to doubt that it happens. The problem is that once you have found a proper decomposition, it feels so natural that you cannot understand why you did not see it immediately.

Modularity provides benefits when it is used in the proper way; splitting an application into parts is not guaranteed to give you these benefits. If the cohesion is low and the coupling between modules is high, it is likely to give you more pain than gain. But there are more ways to kill the benefits. If you break through abstractions, you will likely be broken when the implementation behind those abstractions changes. Unfortunately, there are patterns in Java, mainly class loading related, that are fundamentally not modular and can easily kill the benefits of modularity.

However, if you refactor that application and you hit the right composition of modules, I can assure you that you will immediately know why it is worth it.

Peter Kriens

Wednesday, August 19, 2009

A Simple Module System

To progress the work in JSR 294, BJ Hargrave, Richard Hall, and the undersigned have submitted a proposal for a simple module system. The purpose of this module system is to be simple enough that it can be supported both by Jigsaw and OSGi. In reality, this will likely be sufficient for most people. More advanced features, for example some of the special requirements that Jigsaw has with regard to the JDK modularization, can then be supported in a module-system-specific way.

This proposal is based on the module accessibility keyword, as is the current proposal, and there are no additional visibility rules. Module boundaries are defined by the artifact in which the modules are delivered. Modules can require other modules, which must then become completely visible to the requiring module.

The approach of finding a common denominator looks very promising. I am looking forward to continuing to work with the JSR 294 EG to iron out all the details.

Peter Kriens

Monday, July 6, 2009

No Pain No Gain

Being in the spotlight for OSGi has many good sides, but it always hurts when you get heavily criticized for a presentation about OSGi & Java Modularity that you gave. However, you usually learn more from criticism than from praise. Let's see if we can digest the criticism in William's blog and learn something from it.

From the first part I take the criticism that OSGi is old technology and that we keep (re)inventing component models. I guess there is a fine line between old and mature, but it seems a tad unfair to hold age, without any further qualification, against us. On the contrary, for stability, robustness, and usability a proven technology seems hard to beat. Then again, a promise is always more alluring than hard reality, I guess.

With the criticism that OSGi has many component models I can only disagree. OSGi has had a single component model since day one, and nothing has changed since then. True, on top of the OSGi component model we have different programming models because the needs in that area are quite diverse. However, all these programming models work seamlessly together through the single OSGi bundle and service model.

So what's left? In the end, William Louth makes a valid and interesting point: I said in my presentation that going modular is not without pain, and class loaders are the major culprits in Java. However, I do not think this is a problem with OSGi; it is a fundamental problem when you modularize applications. Modules cannot see every class in the application; this global view is the antithesis of modularity. However, almost all class loader tricks I see that cause problems require this global view of the class path. Without addressing this problem, I do not think we can reap the fruits of going modular. How will Jigsaw be able to solve these problems without destroying the benefits of modularity, rather than moving us from class path hell into module path hell?

The problem we're facing is that class loaders have become a very successful way to extend applications, something they were never designed for. In OSGi, we started with an extension model that makes the coupling between the modules very explicit: services. The class loading architecture was designed to support these services. The OSGi service model allows modules (bundles) to be very self contained and, ideally, only to export packages that are used to define the service.
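The core idea, looking up an implementation by its contract instead of searching class loaders for it, can be sketched in plain Java. This is a toy illustration of my own, not the actual OSGi service registry API, which also tracks dynamics, properties, and permissions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy service registry: modules publish and consume implementations
// keyed by an interface, so the coupling between them is explicit
// and typed, and no module needs a global view of the class path.
public class ToyRegistry {
    private final Map<Class<?>, List<Object>> services = new ConcurrentHashMap<>();

    // A providing module registers an implementation under a contract.
    public <S> void register(Class<S> type, S service) {
        services.computeIfAbsent(type, t -> new ArrayList<>()).add(service);
    }

    // A consuming module asks for the contract; it never sees the
    // provider's implementation classes.
    @SuppressWarnings("unchecked")
    public <S> S get(Class<S> type) {
        List<Object> list = services.get(type);
        return list == null || list.isEmpty() ? null : (S) list.get(0);
    }

    public static void main(String[] args) {
        ToyRegistry registry = new ToyRegistry();
        registry.register(CharSequence.class, "a greeting service");
        System.out.println(registry.get(CharSequence.class));
    }
}
```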

So the big problem the OSGi faces is how to handle legacy code that heavily depends on class loaders for what OSGi does with services.

I spent considerable time thinking about this. Together with others, we created an RFP investigating all the problems we could lay our hands on. My personal conclusion is that an application that requires a global unrestricted view of the class path cannot be modular. Yes, there are hacks that allow you to find the class your class loader needs, but these hacks revert to class searching, which implies that valuable properties like version constraints and class space constraints are no longer taken into account. My fear is that providing easy "solutions" to these practices will destroy the modular value that OSGi provides. I have found that these visibility problems are very fundamental. Time will tell if Sun is able to solve this problem with their "... much improved replacement."

Peter Kriens

Tuesday, June 23, 2009

Hi, We're OSGi. We mean no harm

Dear James,
I read your interesting interview with eWeek. There were many parts where we agreed, but I was also slightly puzzled by your observations about OSGi. "OSGi is this thing that kind of came from a different universe that's being used for modularity." I do agree that many people see the quality of the OSGi specs as out of this world, but that seems a bit exaggerated. I do agree we did not start out in the enterprise application space, we started in the embedded world where space and performance constraints are pervasive, but after all I thought we both lived in the Java universe. However, this seems at odds with your remarks like "So we needed something that was a lot lighter weight." and "... OSGi's just too much fat."

Hmm, the OSGi core API is 27 classes. That is all. Security, Module layer, Life cycle layer, and Service layer. Exceptions, permissions, and interfaces. And one of them is even deprecated! Just the module layer in Jigsaw seems to have more classes, and they just got started ... Or did you mean the implementations? With Concierge at around 80k for an R3 implementation and Felix at 350k, it seems a stretch to call us fat. OSGi is even deployed in smart cards.

It's true, our documentation is a bit fat. The core is described in 300 pages. Though we have lots of pictures! And we have virtually no errata, despite the fact that OSGi has been used in tens of thousands of applications over the last decade.

I'd like to tell you a little anecdote. In 1997 I tried to convince Ralph Johnson about Java. My key argument was that Java was so nicely small and therefore easy to understand. Only 11 packages! Ralph, a famous Smalltalker, looked at me wearily and said: "Just wait." Oh boy, was he right. That lesson still drives me every day to keep OSGi lean and mean, annoying many people along the way, but I guess that is the price one needs to pay.

If you think project Jigsaw will be leaner than OSGi, well, modularity is a problem where size does matter. You cannot demonstrate modularity with a Hello World because modularity solves the problem of large evolving code bases. [deleted]

So, James, I think there are lots of details where we did not get it perfect, but OSGi's weight is not one of them. The misconceptions about OSGi at Sun stand to cost our industry a lot of money and pain in the coming years. Project Jigsaw's simplicity is a fallacy, hidden by the fact that it does not address the hard issues that OSGi has now been working on for over a decade. All major application servers today are based on OSGi, not because it was fun or a hype, but because they had no choice. These are applications in themselves that have a scale where modularity is not an option but a necessity. The success of open source will move many Enterprise applications into this same realm.

If you believe that a simplistic solution can address the needed scale, well, then indeed we do live in another universe. However, please remember that we're here as friends, to help, and mean no harm.

Yours sincerely,

Peter Kriens

Friday, June 12, 2009

Classpath hell just froze over?

Other blogs seem to drive this blog nowadays. I just read Classpath hell just froze over. This raises the question of where OSGi stands in relation to JSR 294. I am not speaking from an official OSGi point of view, but I can of course give my personal opinion. I therefore posted a comment on the original blog, but it turned out to be its own blog ...

JSR 294 contains 2 parts:
  • the module keyword
  • the module-info file, with dependencies
I like the module keyword. We discussed this in 294 and we have some tentative agreement of the model, but the details still need to be worked out. However, agreement here is desirable and achievable.

The module-info file? I have my severe doubts and have expressed them in the expert group. I do not think we fully understand the interactions with IDEs like Eclipse, IntelliJ, Netbeans, JDeveloper, etc. Currently, the command line with javac is completely driving the design, while imho the IDE will be the common case for anyone needing modularity. Nobody needs modularity for a Hello World program, and letting this use case drive the design seems plain wrong.

One of the problems I see with this approach is that a lot of dependencies are already specified in the sources (import package anyone?) and that a smart IDE can help find the proper modules to import those packages from. However, selecting packages for import requires a wide range of modules, because when you're developing you want completion support in the IDE. Once you have compiled your code, however, the compiler knows exactly which module was selected out of this wide scope it was compiled against.

And not to forget the build tools: they will have to start interpreting the module-info file and link to the appropriate module system to find their class path. Today, a build tool tells the compiler its class path; in the future it would first have to compile or interpret the Java file. This alone will probably kill 90% of the ant scripts, because the class path is used in other places than compiling. Maven will also have to start interacting with this.

This is not all. The current approach is creating a meta-module system. Jigsaw, OSGi, and any other module system can put their metadata in the file. This is the famous design-by-committee problem: let's each have it our own way and make the other optional. Good for vendors, bad for users. This is something that I am always fighting inside the OSGi. Having a meta-module system will cause severe fragmentation on the module layer. Some people hope for runtime interaction between module systems. Well, this will be very hard, if not impossible. Java module systems are just too complex to be able to map one to another without causing some severe pain.

Then about Jigsaw. It is very focused on breaking up the JDK into modules. I like the native installers, but I dislike the fact that they put this native packaging stuff in my face. Java should abstract the platform so I can write code for any platform and distribute my code in a singular form. It is Java's original promise to deploy and manage this code on all the variety of VMs/OSs/packagers out there. There are hundreds of VMs and even more VM-package manager combinations. There is no way anybody can support all of these combinations. Write once, deploy everywhere, and then run anywhere? This all goes against the original promise and architecture of Java.

Jigsaw is too simple for the kind of applications in the enterprise space. Most of the class path problems are still in the module path: split packages, no real hiding of classes (they will be protected by the module keyword only, not, like in OSGi, invisible to other bundles), no multiple versions of the same JAR to solve hard dependency problems in large applications. Looking at Mark Reinhold's slides, I think he agrees: Enterprise applications should be built on OSGi. However, small applications do not have to bother with modularity, so why allow Jigsaw to be used for applications at all? Unfortunately, if Jigsaw becomes part of the JDK delivery, people will start using it, causing immediate, irreversible fragmentation.

Ok, to summarize. The module keyword is heavily supported by me and other OSGi people I know. It would allow us to put the bundles in the accessibility check of the Java VM. We're not there yet, but it looks good.

From a module system point of view, I think we're moving in the direction of a meta-module system, where one of the two users of this meta-module system has an awful lot of homework left to do. After ten years of working on OSGi I would not yet dare to say that I completely understand Java module systems; I therefore shudder at the thought of a meta-module system ...

Hope this helps to enlighten some issues.

Peter Kriens

Wednesday, June 10, 2009

OSGi Case Studies == Pain?

During JavaOne, Atlassian presented a case study of OSGi and offered Atlassian Plugins. The presentation was very dualistic. On one side it was very positive towards OSGi, but it was also (overly) critical, mostly because they ran into many problems with legacy code. This caused reactions, like the blog OSGi Case Studies == Pain.

As the OSGi evangelist I would probably put it a bit more sweetly, but in principle the blog is right: don't use OSGi unless the benefit offsets the cost. Which should of course be true for everything you do. If you have a legacy application of a couple of thousand lines of code, don't bother with OSGi; in its current incarnation it will probably just be in your way.

The reason for the pain is not some unnecessary complexity in OSGi; on the contrary, with less than 30 well documented classes, OSGi is actually quite simple in contrast with almost any other API I know. However, enforced modularity is painful because it confronts you with all the entanglements in your code, and even worse, the hacks and shortcuts in the libraries you use. Strong modularity puts your code in a straitjacket, and often that is no fun when you have legacy code that enjoys the anarchy of the Java class path.

Is that pain worth it? Well, both industry and science seem to have a clear consensus that modularity provides lots of benefits for large applications. The major Java app servers are not based on OSGi because they think it is fancy (notice that most do not provide the API to application developers); the sheer size of these systems simply requires strong modularity to survive their evolution.

Eli Whitney was one of the key drivers of the industrial revolution because he had the idea of interchangeable parts for guns. However, it took the effort of many others and long years before this simple idea could finally be put into practice. During that transition, the skilled gun making craftsmen of those days poked fun at the whole idea and told each other jokes about how interchangeable parts weren't. I think OSGi is that simple idea of interchangeable parts for software. It is a very, very hard problem, but I am convinced it is worth pursuing. Once you have experienced how you can mix and match bundles to construct a large part of an application, it is impossible to ever go back.

OSGi is by far the furthest along this idea, but I am the first to admit we're not there yet. It will require more time and effort from many, including you, to modularize our codebases and develop the tools to simplify that process. So yes, it can be a pain to move legacy code towards strong modularity, but I am sure that when your applications are big enough, the gain will more than compensate for the pain. As many can already testify, including Atlassian.

Peter Kriens


Monday, May 18, 2009

Process Hell

The last few weeks have been quite hectic, closing the Core specifications and working hard on the Compendium. The Compendium contains two major new specifications: Remote Services and Blueprint. The process of creating these specifications seems to have surprised many, and angered some. There is even a blog that makes us look like fools, just making up a process as we go along. Maybe that is understandable; we are trying to run the specification process on a trust and consensus basis, and that goes better in an informal atmosphere. People that trust each other simply work more effectively and are more productive.

However, we do have a formal process, documented in RFC 75, and we try to follow it meticulously. This process document has two purposes. First, it defines for newcomers the model of how the work will be executed. Second, it provides clear rules for the worst case, when consensus cannot be reached. Unfortunately, it is of course one of the hundreds of things we have to do, and most people assumed that the OSGi process was like that of most other standards organizations. They have never read the process document, really understood the early presentations about the process, or read one of Eric Newcomer's blogs.

The (seemingly) unique aspect of the OSGi is that it recognizes the role of a Specification Document Editor (SDE). The SDE is paid by the OSGi; the SDE is not an employee of a member company. So far, I have had the honor to play this role for releases 2, 3, and 4.

This makes the RFCs what their abbreviation says they are: Requests For Comments; they are not the final specification. RFCs are input to the specification writing process, a document to reach consensus about a technical design between disparate parties. They can be compared to a design document. And don't we all know very well how the final product changes during development? The vote for the RFC indicates a technical agreement among the EG members, not a vote for the final specification. And frankly, I am a bit disappointed that there is confusion, because the OSGi specification documents look, in my eyes, very different from the RFCs ...

So let me quote the applicable chapter that describes the current phase (TSC is the technical steering committee that consists of the EG chairs and the Technical Director (me)):

5.6.1 Input
The input to this process shall be the RFC(s) and RFP(s) and any supporting documentation from the EG. This shall be provided to the SDE by the TSC. It is possible that a single Specification may incorporate more than a single RFC and RFP. This integration shall be at the direction of the TSC.

5.6.2 Actions
The SDE shall create the appropriate documents based on the content of one or more RFCs. Under ideal circumstances the formulation of the Specification would be a mechanical process but it is expected that the SDE will uncover inconsistencies or other issues in the RFC(s) which require clarification by the appropriate EG. In this case the SDE shall liaise with the appropriate EGs directly to resolve the issue. The SDE shall have at least one and preferably several review cycles with the appropriate EGs to ensure accuracy prior to completion of the Specification.

5.6.3 Output
When a Specification has been completed it shall be electronically signed using the OSGi Alliance certificate and then voted upon by the EG as stated in section If the document is rejected then it is returned to the SDE together with a written explanation as to the problems with it. The SDE, in conjunction with the EG, shall then modify the document as needed to address the EGs concerns before the document re-enters the formal process.

I think this phase of the process is for a large part the reason that our specs have so few errata. The process of taking documents from an EG and explaining the contents in a consistent manner tends to uncover a lot of issues: inconsistencies with other parts of the OSGi specs, hidden compromises that do not make sense when looking at things as a whole, hidden assumptions and knowledge, overlaps with other parts of the spec, etc. However good the RFC editors are, it is hard to create a good technical design and at the same time understand, and take into consideration, the overall context in which it will be placed. RFC editing is a secondary responsibility, while the SDE is doing this as a primary responsibility. Then again, the SDE has no power whatsoever; any change, as well as the final specification, must be approved by the EG.
I do not think any RFC has come through this Specification Writing phase unscathed. However, nobody has ever denied that the specification was better than the input RFC(s) due to this phase. Well, so far at least.

The second, related frustration of last week was a bug report in Eclipse complaining about my schedule, because there are API changes in an RFC that they based their product on. About two years ago, we addressed public concerns that we were too closed by publishing interim drafts of the RFCs as well as of the specifications. Obviously, we made it crystal clear that there are no guarantees about the final specification. We had virtually no feedback on the RFCs so far, and worse, we are now being banged on the head for fixing issues that come up late during specification writing. It ain't over 'til the fat lady sings ...

That said, I am not denying we have a problem. Though the core went out on the planned date, the compendium is 4 weeks delayed, and it is not completely clear that Remote Services and Blueprint will be finished in that time frame. Part of the problem is that the OSGi Alliance has only one SDE, and that person (me) is only being paid part-time to work on it. The EGs have been very active lately and this has significantly increased the workload. We need to fix this somehow. However, I do think we should keep the SDE role for the sake of the final quality.

However, in my opinion, a specification is not cost free for a community; a bad specification can actually be quite expensive. I will not use any names here. I pride myself on working for an organization that actually wants to publish high quality specifications and is willing to pay the (sometimes) steep price. Tim Diekmann's (co-chair of the Enterprise EG) mail signature is:
"There is never enough time to do it right, but there is always enough time to do it over" -- Murphy's Law
Well, so far, we actually always tried to do it right because specs cannot be done over. Even if it sometimes really hurts.

Peter Kriens

P.S. Processes rate slightly above licenses on my list of favorite subjects ... Back to RFC 119 and RFC 124.



Wednesday, March 18, 2009

OSGi DevCon 2009 BOF

One of the OSGi activities we are organizing during OSGi DevCon is an OSGi BOF. The BOF is on the first night, March 23. It starts at 19.30 in room 209. We invite anybody who has an interest in OSGi. There will be many of the key OSGi people, like board members, experts, and just people that are interested. And ... the OSGi Alliance will provide drinks for the participants!

Though we plan to have an informal gathering, we have a short program planned:
  • 19.30 Welcome
  • 19.40 What to See at OSGi DevCon 2009
  • 20.00 What's up next with OSGi?
  • 20.20 Tell us what you think ...
  • 20.45 Close
I really hope to meet you there!

Peter Kriens

Friday, March 6, 2009

Project Jigsaw #3

We are currently working hard in JSR 294; there are lots of interesting discussions and the mood in the group is very good. I think we can do some interesting things there. It is very interesting, for once, to have the option to change the VM!

However, I see in articles and discussions with people that Sun's project Jigsaw is equated with JSR 294. This is not true; project Jigsaw is a puzzle to me. I personally think it is a bad idea to create a completely new module system from scratch when OSGi has been around for a decade and is the foundation of all major app servers, proving that OSGi works, because that class of applications is as complex as it gets.

And right in this spot where we could create a sound component model in Java, we fragment the world.

I do hope I can continue to work with Sun in JSR 294 as we do today, focusing on the language. I will, however, disagree about project Jigsaw. Just to make things clear.

You did register for EclipseCon, didn't you? This is going to be so interesting this year!

Peter Kriens

Wednesday, February 18, 2009

OSGi Service Hooks

I made sure not to have any conferences or meetings in February so I could focus on the writing of the specifications. It is now halfway through the month and it has been a crazy month, even worse than usual. Though I have not been able to devote the 25% of my time to the specifications that I intended, it looks like the Service Hooks specifications are basically finished.

So what are these Service Hooks? Service hooks complement the OSGi service programming model. The current model is based on the following primitives:
  • Register a service (Publish)
  • Find services (Find)
  • Get a service (Bind)
  • Listen for service events
The service registry acts as a mediator between the publishing and the finding/binding party. Due to RFC 119 Distributed OSGi, we discovered that we were missing primitives in the service registry. One of the key requirements in RFC 119 was to be able to provide services on demand. However, you can only provide services on demand when you know what services bundles are waiting for. The beauty of the OSGi service registry is that this information is available through the internal registry of Service Listeners: Service Listeners are registered with a filter that contains the information about their interest. The Listener Hook provides this information. Registering a Listener Hook service provides the caller with an initial list of all registered Service Listeners, and the hook is then kept informed of additions and removals.
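To make the mediator idea concrete, here is a minimal, self-contained sketch. All the types below (Listener, ServiceRegistry, the string-based "services") are simplified stand-ins invented for illustration; they are not the real OSGi API, which works with ServiceReference objects, LDAP-style filters, and richer listener information:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Toy registry mediating between service publishers and listeners.
// A "listener hook" learns which filters bundles are waiting for.
public class RegistrySketch {

    // Stand-in for a Service Listener: a filter plus a callback.
    public record Listener(String filter, Consumer<String> callback) {}

    public static class ServiceRegistry {
        private final List<Listener> listeners = new ArrayList<>();
        private final List<Consumer<Listener>> hooks = new ArrayList<>();

        // A new hook first receives the initial list of existing
        // listeners, then is kept informed of later additions.
        public void addListenerHook(Consumer<Listener> hook) {
            listeners.forEach(hook);
            hooks.add(hook);
        }

        public void addServiceListener(Listener l) {
            listeners.add(l);
            hooks.forEach(h -> h.accept(l));
        }

        // Publish: deliver the service to each matching listener.
        public void register(String serviceName) {
            for (Listener l : listeners)
                if (l.filter().equals(serviceName)) l.callback().accept(serviceName);
        }
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        List<String> wanted = new ArrayList<>();

        // The hook sees what is being waited for ...
        registry.addListenerHook(l -> wanted.add(l.filter()));
        registry.addServiceListener(new Listener("log", s -> {}));

        // ... so a provider could now create the "log" service on demand.
        System.out.println(wanted); // prints [log]
    }
}
```

The point of the sketch is the shape of the interaction, not the types: the hook observes listener interest, which is exactly the information a distributed provider needs to create services on demand.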

During the write-up of the specification for the Listener Hook, a huge discussion erupted about ordering. Should the events of the adding/removing of Service Listeners be ordered or not? There are two sides to this debate. Ordering is much easier for the hook implementer, but it turns out that this is quite difficult for the Framework, because these events originate deep down in the bowels of the service registry. The occurrence of out of order delivery is also very rare, though the chance is finite. After long discussions we found the solution in an extra method on the event object that indicates the state of the event object. This allows the hook implementer to easily find out, inside a synchronized block, that a specific delivery is out of order, and ignore the event. The most scary part was that we also discovered that the same out of order case was possible with service events. Something none of us had ever realized.
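The fix can be sketched as follows. The event type here is an invented stand-in, not the spec's own, but it mirrors the idea: the delivered object exposes the listener's current state, so a hook that receives a stale "added" notification can detect this and discard it:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy illustration of detecting out-of-order delivery: the delivered
// object exposes the listener's *current* state, not the state at the
// time the event was generated.
public class OutOfOrderSketch {

    public record ListenerEvent(String filter, AtomicBoolean removed) {
        // State check: has this listener meanwhile been removed?
        public boolean isRemoved() { return removed.get(); }
    }

    public static void main(String[] args) {
        AtomicBoolean removedFlag = new AtomicBoolean(false);
        ListenerEvent added = new ListenerEvent("(objectClass=LogService)", removedFlag);

        // Delivery of the "added" notification is delayed, and the
        // listener is removed in the meantime.
        removedFlag.set(true);

        // At delivery time the hook checks the state and ignores the
        // stale "added" notification instead of tracking a dead listener.
        System.out.println(added.isRemoved() ? "ignore" : "track"); // prints ignore
    }
}
```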

The second use case for the hooks was proxying. With proxying, you want to hide the original service and provide an alternative. However, in current OSGi you cannot hide a service from another bundle. Meet the Event and Find Hooks! The Event Hook is a service that receives a service event before it is delivered to the bundles. As a bonus, it can remove bundles from the delivery list, effectively hiding events from those bundles. Its counterpart is the Find Hook, which gets a chance to look at, and reduce, the results of the getServiceReference and getServiceReferences methods. This hook can remove Service Reference objects from the result, also effectively hiding the service from the caller.
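The hiding mechanism can be sketched like this. Again, the types are simplified stand-ins invented for illustration (a real Find Hook operates on collections of Service Reference objects, not strings):

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the Find Hook idea: hooks may prune a lookup result
// before it reaches the calling bundle, effectively hiding services.
public class FindHookSketch {

    public interface FindHook {
        // May remove entries from 'results' for the requesting bundle.
        void find(String requestingBundle, List<String> results);
    }

    public static List<String> getServiceReferences(String bundle,
            List<String> registered, List<FindHook> hooks) {
        List<String> results = new ArrayList<>(registered);
        for (FindHook h : hooks) h.find(bundle, results);
        return results;
    }

    public static void main(String[] args) {
        List<String> registered = List.of("http.original", "http.proxy");

        // Hide the original service from everybody except the proxying bundle.
        FindHook hideOriginal = (bundle, results) -> {
            if (!bundle.equals("proxy.bundle")) results.remove("http.original");
        };

        // An ordinary client only sees the proxy; the proxy bundle sees both.
        System.out.println(getServiceReferences("client.bundle", registered,
                List.of(hideOriginal))); // prints [http.proxy]
        System.out.println(getServiceReferences("proxy.bundle", registered,
                List.of(hideOriginal))); // prints [http.original, http.proxy]
    }
}
```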

One of the issues we found was that these hooks need to be implemented very carefully because, depending on when they begin, they can partially hide events from a bundle, creating the wrong image for that bundle. For example, if a bundle already has a service before you hide future events, it is easy to hide the unregister event for that bundle. Fortunately, we found out that the service tracker could handle most of these partial events, but if you hide an unregister event for a service, the service tracker could hold on to that service forever.

Clearly, these hook services are not for the faint of heart. They are very close to the framework and need to be very well-behaved. That said, they definitely look like they will enable new software patterns for more symmetric service programming. It is likely that utilities like the Service Tracker, or extensions to the OSGi programming models, will be created on top of these hooks.

Peter Kriens

Thursday, February 12, 2009

OSGi and Second Hand Cars

An interesting aspect of the Internet is that you get so many examples of where software goes terribly wrong. Today I received a very nice mail from Sophy that warmed my heart.

My name is Sophy Jhonston. I've just visited your website ( ) and I was wondering if you'd be interested in exchanging links with my website?. I can offer you a link back from my Cars Guide website which is ( page rank 3. Your link will be placed here: (It`s a Cars Guide website with page rank 3)
I hope you have a nice day and thank you for your time,

Sophy Jhonston

Sophy, thanks for this wonderful example of the (sad) state of the software art. As thanks, here is your link, and please add us to your highly ranked car page. It might do wonders for our Vehicle Expert Group!

Peter Kriens

Monday, February 2, 2009

Ordering? Get Over It!

One of the ever recurring discussions in OSGi is about ordering during initialization. Ordering is the siren song we are so often lured by as a shortcut to handling the actual dependencies. Ordering can work if you're the omnipotent god in your project and you can oversee all the details. However, for mere mortals it creates very brittle software that is highly unlikely to work in other environments. Just the fact that in OSGi any bundle can be stopped wreaks havoc with any ordering model. However, many are strengthened in their pursuit of simplicity when they discover the start level service! The start level service seems to offer ordering! Yes, it is clearly one of the pretty sirens!

However, the start level service is a tool of the deployer. Writing your code with the assumption that the deployer will set up all the bundles properly and handle all edge cases will only work when you're the deployer and you will never get sick. And even then, you'll likely spend a lot of time chasing dependency issues that could easily have been prevented.

And the only thing you really need is the mindset that dependencies should be explicit and managed. In OSGi, dependencies are handled through services and packages. Package dependencies make sure that you can build code on solid ground by ensuring that all your required code dependencies are managed. That is, you won't run if the classes are not available. So when your code runs, it has a well defined environment. However, this model is not suitable for the myriad of dependencies that we should handle in modern software. Log service is up? Do I get configuration? Is my communication provider up? Is there a properly initialized database? You know what I am talking about.

In non-OSGi software, many of these dependencies are handled implicitly with a lazy initialization model. This is one of the key reasons that Spring became so successful: it provided a model where you could specify all this ordering and configuration in just one place. Though this works amazingly well for monolithic applications, it does not scale so well with lots of independently developed components, because all the ordering and configuration details must be handled centrally, creating the so-called "goddess" Spring file. In a large system, there is a huge advantage to allowing independent components to form the application. Independent components can be developed more easily, can be made more robust, and allow the final application more flexibility in deployment. However, this requires that each component explicitly states its dependencies.

It turns out that virtually any dependency can be mapped to the OSGi service model. It is a bit of a paradox, but because OSGi services are so unreliable, you can make extremely robust systems because real world dependencies are all too often dynamic. If you use OSGi services for all your dependencies, there is no need to use start level ordering to make the initialization work.
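As a sketch of this style (with invented stand-in types, not the OSGi ServiceTracker API): a component that tracks all of its dependencies as dynamic services activates only once they are all present, in whatever order they arrive, and deactivates when one goes away; no start levels required:

```java
import java.util.HashSet;
import java.util.Set;

// Toy component with explicit, dynamic service dependencies. It is
// active exactly when all required services are present, regardless of
// arrival order, and deactivates when any of them disappears.
public class DependentComponent {
    private final Set<String> required;
    private final Set<String> present = new HashSet<>();

    public DependentComponent(Set<String> required) {
        this.required = Set.copyOf(required);
    }

    public void serviceArrived(String name)  { present.add(name); }
    public void serviceDeparted(String name) { present.remove(name); }

    public boolean isActive() { return present.containsAll(required); }

    public static void main(String[] args) {
        DependentComponent c = new DependentComponent(Set.of("log", "config"));
        c.serviceArrived("config");              // order does not matter
        System.out.println(c.isActive());        // prints false
        c.serviceArrived("log");
        System.out.println(c.isActive());        // prints true
        c.serviceDeparted("log");                // dynamic departure handled too
        System.out.println(c.isActive());        // prints false
    }
}
```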

So what is the purpose of the start level service? Well, there are many cases where you can improve the performance or appearance of a system by ordering the initialization. For example, if one of your bundles shows a flash image for the system, it has its advantages that this image is visible during the first second when your system starts. There is also the issue that some orderings are smoother than others: starting some strategic bundles first can make a significant improvement. However, this is the toy of the deployer, and bundles that make assumptions about other bundles without explicitly handling them are just bad bundles ... however attractive start level number 2 might appear.

Peter Kriens

EclipseCon and OSGi DevCon Early Registration

Don't forget to register!

Wednesday, January 28, 2009

JSR 294

It seems we are starting work in JSR 294 on modularity for the Java language. If you're interested, you can follow the discussions via an observer list. This will be the primary place where Java modularity will be discussed between the different stakeholders. I think that there will be a lot of interesting arguments on this list in the next two months concerning Java and modularity.

I am looking forward to a productive EG; let's see what will come out of it!

Peter Kriens

Monday, January 5, 2009

Project Jigsaw #3: Multiple Module Systems

One of the strategic aspects of project Jigsaw is to allow multiple module systems. In Alex's presentation it is suggested that there will be a "simple" module system for the JDK, which is also usable for developers. However, it is envisioned that it will have a provider model that will allow others to provide their own favorite module system. Examples given are maven and OSGi. This is in line with the general philosophy of Java to allow multiple implementations.

In general this is a very good philosophy, because it promotes competition, which results in more and therefore better choices for the user. However, in the case of a module system it will do the opposite: it will stifle competition, which will cause less choice and therefore worse solutions. The reason for this effect is that a module system is the conduit for the market of implementations.