The end of the year is nearing, and that is a time to reflect. What did we achieve this year, and where are we heading? Bear with me.
Evangelism
For me, this year was special because, for the first time in my life, I spent much of my time on non-technical work. The OSGi Alliance board had appointed me their official evangelist. That took a bit of getting used to; suddenly I had to visit conferences, present OSGi technology, and just talk about it instead of doing it. The hardest part was learning to consistently use the word “OSGi” as an adjective, for trademark reasons.
This evangelism thing is a bit hectic: EclipseCon in Santa Clara, ApacheCon in San Diego, ApacheCon Europe in Dublin, Oredev in Malmö, Javapolis in Antwerp, CCNC in Las Vegas, OOPSLA in Portland, and BCC in Birmingham. Busy, but most of the conferences were very gratifying because I met so many people who are enthusiastic about OSGi technology. It never ceases to amaze me how many people can enthusiastically tell me that they have been using OSGi for many, many years, most of the time at companies I had never heard of. But I love those stories and they make my work very rewarding. Next year looks like it will have a similar schedule, so do not hesitate to talk to me when you see me in my “Ask me about OSGi” shirt.
Mobile Specifications
In 2006 we had to finalize the Mobile specifications, which involved some political hurdles. The project ran as a joint project between the JCP (JSR 232) and the OSGi Alliance. One of the key aspects has been device management: a specification that uses the same concepts as the OMA DM protocol. JSR 246 had developed a similar specification for MIDP that was originally based on the work done in the OSGi Alliance but unfortunately deviated significantly. Fortunately, the spec leads Jon Bostrom (Nokia) and Jens Petzold (BenQ) realized that two similar but different specifications in the JCP would not be such a good idea. It was therefore great that the companies worked together and aligned the APIs.
There is always a lot of work in finalizing specifications. Not only must the text be edited, reviewed, and reviewed, and reviewed … it is also necessary to ensure that the reference implementations and test suites are correct and nicely packaged. The mobile specification was truly an international effort, which did not make it easier. We had teams in Brazil, the US, Hungary, Finland, and Bulgaria all working together to finalize the required artifacts. Great work, and thanks to all for the cooperation and patience!
After the mobile specification was finalized, I received one of the first prototypes from Nokia, an E70, that offered an OSGi framework. I remember racing to the local UPS office when it arrived in Montpellier last June. The first versions needed some work, but if everything works out I should get a new version today! I hope to write about many demos with this phone in the next year.
Even after working with the mobile specification for a long time, I still believe that the vision of an OSGi platform in a mobile phone opens a huge market of both niche and mass market applications.
Automotive
From the outside there was not much news from the vehicle/automotive front. Part of the reason is that many members have been too actively involved in the EU-sponsored Global System for Telematics (GST) project to spend much time inside the OSGi Alliance. However, a number of companies have remained active in the OSGi vehicle expert group and are working on some very interesting projects. One good decision of this group was to align the vehicle management specification with the mobile device management specification, including diagnostics. The mobile OMA DM based device management architecture was slightly extended in cooperation with the mobile expert group and then included wholesale. Based on this specification, the vehicle group then developed an abstraction of the vehicle data. This abstraction allows applications to use vehicle data without having to concern themselves with actual models and car types. For example, an application can close all the doors on any compliant car or find out if it is raining.
However, the most important and interesting work has been the navigation model, driven by Olivier Pavé and Rob van den Berg (both Siemens VDO). This navigation model is my personal favorite. By providing a standardized API to the on-board navigation system the OSGi specifications will enable a very large number of interesting applications. A standard navigation system allows third parties to combine their unique data with position and guidance information. These applications can leverage a highly complex navigation system for their own goals, enabling applications that today are not feasible.
For example, the Guide Michelin could offer a bundle that helps you select a restaurant, make the reservations, and guide you to the selected restaurant. Or it could even notify you of a restaurant that is worth a detour. They could even push the envelope and suggest a trip to a restaurant that is worth the trip.
I hope that we will be able to finalize the vehicle work next year. I also have good hopes that some of the results of the GST project will be merged with the OSGi vehicle specifications. Next month (Jan 11) we will have a vehicle workshop in Detroit; looking at the guest list, I am sure this will be a very interesting meeting that will put the OSGi vehicle group back on track.
Enterprise
The new kid on the block is the OSGi Enterprise Group! The OSGi Alliance has been gaining a tremendous amount of steam in the enterprise/server world lately. Though the core technology is very useful as is, there are clearly many requirements that could improve the core and extend the set of available standardized services. Service distribution seems to be at the top of the list, but there are many more areas, as became visible in the workshop. The challenge will be to keep the core lean and mean so that its applicability to the other markets does not get lost. However, I have full confidence that we will succeed there; there are too many people involved who distinctly dislike bloat, and the success of POJOs is a clear sign that our industry is becoming acutely aware of the threats of coupling.
BJ Hargrave (IBM) and I arranged a very successful Enterprise Workshop in September in the San Francisco bay area. This workshop has resulted in an active group of members that are willing to run an OSGi EG. The board approved a new Enterprise Expert Group last month, appointing Tim Diekmann (Siemens) and Eric Newcomer (IONA) as co-chairs.
Spring-OSGi
Before 2006, OSGi technology was put on the map for most people by the Eclipse adoption of the OSGi Service Platform. This year, the visibility increased significantly because Interface21, the company behind Spring, decided to support OSGi frameworks. Looking at the market response, that was definitely a good idea. Adrian Colyer (Interface21) took the lead, but the work was shared by Hal Hildebrand (Oracle) and Andy Piper (BEA). The most interesting part of this process was the reaction of the participants to the OSGi technology. Initially the approach was to “support” OSGi frameworks; during the project, the participants grew more and more enthusiastic about the combination. It would surprise me if OSGi did not become a premier deployment platform for Spring. And vice versa. Spring has some very interesting techniques that overlap the Declarative Services from OSGi. I think it is definitely worth the effort to investigate how these two can work together. I hope Interface21 will become an OSGi member soon so we can explore the possibilities of specifying the Spring-OSGi collaboration in a future specification.
JSR 277 and JSR 294 module systems
The dramatic highlight of this year is obviously JSR 277 and its brother JSR 294. JSR 277 attempts to define a module system for the future Java 7 that has significant overlap with the functionality of the OSGi core framework; JSR 294 does the same but restricts itself to changes to the VM and the language.
I desperately tried to get onto the JSR’s Expert Group but was repeatedly denied, despite efforts from people like Doug Lea. Despite this rejection, I tried to evaluate the first draft of JSR 277 in my blog without too much bias. That post was very well read by the industry. In general the reaction was that Sun should align with other standards groups when they have achieved significant results in the market, and not try to reinvent the wheel. Last week I ran into Stanley Ho and Alex Buckley (the JSR 277 and JSR 294 spec leads from Sun) at Javapolis in Antwerp. We had a good and pleasant discussion and I hope it will bear some fruit. However, these JSRs will be an ongoing saga, so I am curious what next year will bring us.
JSR 291 OSGi Service Platform for J2SE
IBM filed JSR 291 this year, with Glyn Normington (IBM) as lead. Though the OSGi specifications were already part of the JCP in JSR 232 (Mobile), it was felt that the visibility of a JME specification to the JCP SE/EE Executive Committee was too low for comfort. JSR 291 was set up as an open expert group that provides requirements to the OSGi Alliance; all design work is done inside the OSGi Alliance by its members. Interestingly, this process was set up because of intellectual property rights, but separating the requirements-gathering process from the API design worked really well in JSR 232 and now in JSR 291. Maybe this separation of concerns should be tried more often in the JCP?
Anyway, JSR 291 is moving along and will produce a specification next year. This will come out as OSGi Release 4.1, likely in April.
Apache Felix
Last year I was asked to support an incubation project at Apache: Felix. This project was based on Oscar, the first popular open source implementation of the OSGi specifications. It was really nice to see how Richard S. Hall rallied a large group of people with enormous energy around this project. They have been working in incubation status during this year and are now ready to graduate to a top-level project. I hope, and expect, that this graduation will push many more Apache projects to first enable their artifacts to run on an OSGi platform and later commit themselves to Felix. Many of Apache’s software projects scream for an underlying OSGi platform that can unify the myriad of plugin systems and class loader abuse present in Apache and other open source code.
OSGi Popularity
The best part for me this year was the tremendous increase in outside awareness of the OSGi Alliance. I remember sadly staring at 8770 Google hits for OSGi in 2005; today we get between 1.5 and 2.5 million hits. I do not really know what that number means, and it seems to depend on where you ask, but the trend is to my liking. Same for Google Alerts. At the beginning of this year I received about one Google Alert per week on the OSGi keyword (my week was good when I received two). Today I receive one every day with 5-10 links. The statistics of this blog also look like we are at the base of the archetypical hockey stick curve. And best of all, in the last few weeks we received a flurry of requests for membership as well as invitations to speak at conferences. If this trend continues, next year will be very busy, but I will love it!
Future
So much more has happened this year but I am already impressed that certain readers got this far. Let us close with a short outlook for next year.
I expect that 2007 will be the year of the Enterprise. Much of the OSGi Alliance’s effort will go into creating a specification for server side/enterprise applications. I think that the key parties in this market will join the work so we can have a lean, mean, industry-wide platform for server side applications.
I also see many signs that the home automation market is making a revival. This market is where we came from, and it would give me great pleasure if we could increase our activities in it. For the mobile market I expect several vendors to provide OSGi capability. This will create a flurry of development work. However, most of this work will be in specialized applications; the popularity of the platform will then likely become visible in 2008/2009. For 2007, mobile will offer interesting opportunities that can pay off handsomely.
OSGi DevCon
Lastly, I’d like to put in a plug for the OSGi developers conference, which will be held jointly with EclipseCon 2007 in Santa Clara, 5-8 March. The OSGi track will focus on server side, embedded, desktop, mobile, and other areas where OSGi platforms are used. If you have an interest in these areas, please join us and meet all those other people who are involved in creating an eco-system for universal middleware.
OK, this blog is way too long. Let me finish by wishing all readers a fantastic 2007; I hope to meet many of you next year at a workshop, a member meeting, a conference, or online.
Peter Kriens
Friday, December 22, 2006
Monday, December 11, 2006
PHP on an OSGi Framework?
Last week (2006/12/08) the OSGi Alliance held a webinar for its members. Last month, Susan Schwarze from marketing had asked me very sweetly to provide a demonstration for this webinar. After some soul searching I proposed a demonstration using the OSGi Framework in a multi-language setting. The scenario of the demo was simple: show how to start a framework, get some basic bundles installed, install some applications from the OSGi Bundle Repository (OBR) to show it all works, and then, as the grand finale, install a PHP interpreter and port a PHP wiki to an OSGi bundle in real time. And all of this had to happen in 20 minutes.
The most interesting part of the demo was obviously the PHP part. I always thought that having a PHP interpreter on an OSGi framework would be really cool (however hard this is to explain to my kids and wife)! With an OSGi based PHP interpreter you can deploy PHP applications as OSGi bundles. This is attractive to companies that today use PHP on remote servers, as well as to device vendors that want to design their web sites in PHP.
The demo was not the first time PHP had come up. I had searched the web several times for a PHP interpreter written in Java, but I had never found one until I discussed this demo with BJ Hargrave. He mentioned that there was an open source project/company doing a Java based PHP interpreter: Quercus. The interpreter is part of the bigger Resin web server. With my usual optimism, I thought: “How hard can it be to port this interpreter to an OSGi bundle?”
Well, hard.
It all started very hopefully because there was a QuercusServlet class, and the OSGi Http Service supports servlets very well. Unfortunately, the QuercusServlet assumed it ran inside the Resin server, making it impossible to use with another web server. Writing my own servlet luckily took only a few hours. Unfortunately, I could not resist simplifying the QuercusServlet, which added a few unnecessary hours to undo these simplifications once I understood why the complications were there in the first place.
The next problem I ran into was the file system. Normally PHP assumes that all the sources are in the file system, but in our case the files had to come from a bundle. Quercus/Resin supports a virtual file system, but it took some time to find out how to provide a new source of files. It turned out you have to write your own Path subclass and then set the root path to an instance of your type; all other files are then resolved relative to that root. Reading was very easy to program: I just mapped all the read primitives to the bundle’s resource access. Writing, however, was a lot harder. At first, I decided to let the bundle that carries the PHP files specify its URI and a set of writable directories. That is, I designed the following headers:
PHP-Path: /mywiki
PHP-Data: config, data, wiki.d
I got this to work, but ran into the problem that some PHP programs actually rewrite their own source files. Having special writable directories forced me to modify several PHP programs, obviously not a desirable architecture. Then it suddenly hit me (under the shower!) that I could just use the rather standard technique of copy-on-write, so I got rid of the PHP-Data header altogether. A read would now always look first in the file system (actually the bundle’s data directory) and then in the bundle’s resources. A bit tricky was the handling of directories; they had to be created automatically when you wrote a file that also resided in the JAR. After this (awfully late) insight, it became quite easy and I could use several PHP programs from the net without any modifications. This worked so well that I could show off the model with a real functioning wiki that was wrapped in a bundle directly from the Internet!
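For the curious, the read/write mapping is small. Below is a minimal sketch of the copy-on-write lookup; PhpFiles is a made-up helper name (the real code subclasses the Quercus Path class and overrides many more operations), but the core idea is just this:

import java.io.*;
import java.net.URL;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;

// Minimal sketch of the copy-on-write file lookup.
public class PhpFiles {
    private final Bundle bundle; // the bundle carrying the PHP sources
    private final File dataDir;  // the bundle's private, writable data area

    public PhpFiles(BundleContext context, Bundle bundle) {
        this.bundle = bundle;
        this.dataDir = context.getDataFile("php");
        this.dataDir.mkdirs();
    }

    // Reads check the data directory first (the written copy), then fall
    // back to the static resource inside the bundle's JAR.
    public InputStream read(String path) throws IOException {
        File copy = new File(dataDir, path);
        if (copy.isFile())
            return new FileInputStream(copy);
        URL resource = bundle.getResource(path);
        if (resource != null)
            return resource.openStream();
        throw new FileNotFoundException(path);
    }

    // Writes always go to the data directory, creating on the file system
    // any directories that so far only existed inside the JAR.
    public OutputStream write(String path) throws IOException {
        File copy = new File(dataDir, path);
        copy.getParentFile().mkdirs();
        return new FileOutputStream(copy);
    }
}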
However, I wanted one more splash, and I found it in the Quercus modules. The Quercus interpreter was designed to be very modular; all its commands are written in separate modules which are loaded at startup. This model was too tantalizing not to use to show off the dynamic capabilities of the OSGi Framework. Would it not be impressive if you could install a new Quercus module on the fly as a bundle (again, an excitement my family rarely shares)? So I decided to create a Service Tracker that listened for Quercus module services. Those service objects were then added to, and removed from, the Quercus interpreter as modules. I had to write a new remove function inside the interpreter because, like so many programs, it had no concept of modules that could go away.
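In outline, the tracker looks like the sketch below. QuercusModule is the real Quercus interface; the PhpInterpreter facade and its addModule/removeModule methods stand in for the glue code and the remove function I added, so treat those names as assumptions:

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.util.tracker.ServiceTracker;
import com.caucho.quercus.module.QuercusModule;

// Tracks QuercusModule services and feeds them to the interpreter.
public class ModuleTracker extends ServiceTracker {
    // Hypothetical facade over the patched interpreter.
    public interface PhpInterpreter {
        void addModule(QuercusModule module);
        void removeModule(QuercusModule module);
    }

    private final PhpInterpreter interpreter;

    public ModuleTracker(BundleContext context, PhpInterpreter interpreter) {
        super(context, QuercusModule.class.getName(), null);
        this.interpreter = interpreter;
    }

    public Object addingService(ServiceReference reference) {
        QuercusModule module = (QuercusModule) context.getService(reference);
        interpreter.addModule(module); // the new PHP commands become available
        return module;
    }

    public void removedService(ServiceReference reference, Object service) {
        interpreter.removeModule((QuercusModule) service); // the part Quercus lacked
        context.ungetService(reference);
    }
}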
Quercus modules could now be written as OSGi bundles. This turned out to be very simple because bnd (a program/plugin that creates bundles from a recipe) supports Declarative Services as well. I could just create a small bnd file that builds the bundle and indicates which class should be registered under the QuercusModule interface. Like:
Private-Package: com.caucho.quercus.lib.zip
Import-Package: *
Service-Component: com.caucho.quercus.lib.zip.ZipModule; \
 provide:=com.caucho.quercus.module.QuercusModule
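For reference, a recipe like that makes bnd generate a Declarative Services descriptor roughly along these lines (DS 1.0 style; the exact output may differ):

<component name="com.caucho.quercus.lib.zip.ZipModule">
  <implementation class="com.caucho.quercus.lib.zip.ZipModule"/>
  <service>
    <provide interface="com.caucho.quercus.module.QuercusModule"/>
  </service>
</component>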
I had decided early on that I did not want to do the demo live. I guess I am becoming a chicken in my old age; the chance that something fails is just a bit too high. I therefore used Instant Demo. This application can record everything that happens in a window; you can edit the recording and add sound as well. Instant Demo then converts the whole set of files to a Flash file so you can play it in any browser. I must say, using such a program is a humbling experience, because every little error requires a restart. It was painful to repeat the same actions over and over again just to get them right.
Anyway, after a lot of hard work I finally succeeded in making four short movies about this demo. If you are interested, download the presentation and play it. Let me know if you think we should make more demos like this. Should we also do this for JRuby and run Ruby on Rails applications on OSGi frameworks? Hmm, sounds cool!
Peter Kriens
P.S. The Quercus interpreter bundle is not available because the code is not industrialized. It really worked for the demo (nothing was faked), but it requires more work to harden it for real use. However, I will contact the Resin people to see if they would like to modularize their web server. It screams for OSGi technology!
Monday, November 27, 2006
Unfaithful thoughts
Say “OSGi” and Java is not far behind; they have been married since the early days. When we started working on the specifications, Java was our obvious partner, and OSGi’s parents were well acquainted, which made the marriage quite convenient. We needed a portable and secure environment, and we were very smitten with Java. However, the world has moved on since then, and other interesting partners are popping up all the time. People tell me that the adoption of OSGi technology is often hindered by the overhead that is introduced by a Java Virtual Machine and its sometimes obese library (it was very slim when we started!). We should also not forget the enormous amount of legacy code in a diverse range of languages that many companies are unable to port. So far we had to deny them the wonderful features that OSGi offers, like modularization and (remote) management. Why not let our minds wander a bit and see what options there are?
A very interesting trend at this moment is the effort by many to port native environments to the Java VM. Last week at Oredev, a Ruby expert told me that JRuby (the Java implementation of the interpreter) has a significantly better thread model than Ruby itself. JRuby is getting awfully close to implementing the full Ruby language, which could create a drive to make JRuby a preferred environment, because it can painlessly integrate anything written in Java. A similar story can be told about Jython, a Java based implementation of Python.
Even PHP is possible; I am currently preparing a demonstration for the 7 December OSGi members-only webinar. We will show standard PHP applications running on an OSGi framework. The PHP interpreter (Quercus, from the Resin web server) is completely written in Java, and it was not hard to turn it into a bundle. I just needed an activator that tracks bundles with PHP pages and registers a servlet under the right namespace. It is really cool to see how you can now write a web site in PHP and deploy it to an OSGi framework as a bundle!
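The activator is genuinely small. Below is a condensed sketch of the idea: watch for bundles that declare PHP pages (here via a PHP-Path manifest header) and register a servlet for them with the Http Service. PhpServlet is a hypothetical name for the servlet that wraps the Quercus engine:

import org.osgi.framework.*;
import org.osgi.service.http.HttpService;
import org.osgi.util.tracker.ServiceTracker;

public class Activator implements BundleActivator, BundleListener {
    private ServiceTracker http;

    public void start(BundleContext context) {
        http = new ServiceTracker(context, HttpService.class.getName(), null);
        http.open();
        context.addBundleListener(this);
    }

    public void stop(BundleContext context) {
        context.removeBundleListener(this);
        http.close();
    }

    // Register/unregister a PHP servlet when a bundle with PHP pages
    // starts or stops.
    public void bundleChanged(BundleEvent event) {
        Bundle bundle = event.getBundle();
        String alias = (String) bundle.getHeaders().get("PHP-Path");
        HttpService service = (HttpService) http.getService();
        if (alias == null || service == null)
            return;
        try {
            if (event.getType() == BundleEvent.STARTED)
                service.registerServlet(alias, new PhpServlet(bundle), null, null);
            else if (event.getType() == BundleEvent.STOPPED)
                service.unregister(alias);
        } catch (Exception e) {
            e.printStackTrace(); // demo quality error handling
        }
    }
}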
The Java VM misses a few instructions needed to implement dynamically typed languages efficiently, but there are discussions going on about adding these instructions to the VM. VM based solutions retain almost all of the advantages of using Java because they use the same programming and security model. Using the Java VM as the common denominator is therefore a very nice model, and I expect to see many examples of it in the coming years.
The VM model still leaves out native applications, and there are some really cool native applications out there. The standard Java solution is JNI, which, well, ehh, sucks; JNI is extremely intrusive and pedantic. Porting a C application to run under a Java VM is a painful exercise. The alternative is to use some communication mechanism, for example sockets, and communicate over this medium. Doable, but hard, because it requires addressing mechanisms, marshalling (serializing) of parameters, and many more chores: see CORBA if you want to know how painful this can be.
One of the best features of the OSGi Framework is the service registry. The service registry really decouples bundles from each other. A bundle that needs to communicate with another bundle never directly addresses that other bundle but uses a shared service to communicate. A service is defined by the API it implements and a set of properties. Services are detected or found dynamically and then bound. The service registry arbitrates between providers and consumers and provides appropriate security and life cycle control.
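In code, the decoupling looks like this minimal sketch; DictionaryService is a made-up API used only for illustration:

import java.util.Properties;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class RegistryExample {
    // A made-up service API: the contract between provider and consumer.
    public interface DictionaryService {
        boolean check(String word);
    }

    // Provider side: publish an implementation under the API name,
    // qualified with properties.
    static void provide(BundleContext context, DictionaryService impl) {
        Properties props = new Properties();
        props.put("language", "nl");
        context.registerService(DictionaryService.class.getName(), impl, props);
    }

    // Consumer side: select on the API and a property filter; the consumer
    // never names the providing bundle.
    static DictionaryService consume(BundleContext context) throws Exception {
        ServiceReference[] refs = context.getServiceReferences(
            DictionaryService.class.getName(), "(language=nl)");
        return refs == null ? null : (DictionaryService) context.getService(refs[0]);
    }
}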
The most elegant solution to the legacy and native problem is an implementation of the service registry in C. Most languages besides Java support very easy integration with C libraries. On the PC and on Linux it is quite easy to call DLLs or shared libraries directly from most dynamic languages. Writing a service registry in C that has the same semantics as the OSGi service registry should not be rocket science. Using a fast communication medium like shared memory would minimize the overhead. Key problems are garbage collection, leasing issues, and marshalling. You could optimize the registry for tightly coupled clusters, though the technology obviously enables more loosely coupled solutions.
There are of course many interesting questions remaining. Should the management of the bundles remain in Java, or should these aspects be moved to a C library as well, allowing fully Java-free implementations? Obviously Java will be the best supported environment for a long time to come; however, living apart together might not be a bad solution to satisfy the requirements that I am hearing more and more.
Peter Kriens
Friday, November 3, 2006
Domain Specific Languages
Last week I visited OOPSLA in Portland to promote OSGi. During that time I also attended many different sessions. Interestingly, only this week did I start to see a trend in the conference, strangely enough not while I was there. Trends at OOPSLA are important; this is the conference where objects, patterns, aspects, and many other innovations were first discussed.
The trend is the focus on domain specific languages. We seem to be entering a time when Java is not the one size that fits all we sometimes think it is. Interestingly, one of the key persons in the Java world (Guy Steele) gave a presentation about Fortress, a language that is intended to replace Fortran. The syntax is highly optimized for mathematics. A key aspect is that the language should be readable by domain experts. This means it has operator overloading, because there are many mathematical operators with formalized definitions. It allows the overloading of many Unicode characters, solving one of the problems of C++, where there were too few operators to work with.
However, there were many other presentations that addressed the language issue. A company called Intentional Software presented an editor that could display and edit the same abstract syntax tree in many different forms. This ranged from mathematical expressions with all the nice looking formatting, to Java code, all the way to the graphically displayed menu structure of a mobile phone. Again, the interesting aspect is that syntax does matter. But there was more. Another presentation told the story of a company that had 400 pages of requirements and domain knowledge. The estimate was that the system was roughly going to be 20,000 pages: a 50x increase in size. The presenter’s solution was to model the domain in a domain specific language and generate the system from the program written in that language. The expectation was that the system could be written in around 4,000 pages of code. During the rest of OOPSLA there were many more Domain Specific Language (DSL) presentations.
You can see the shimmers of Domain Specific Languages in things like Ant and Maven. The XML files that parameterize Ant and Maven define a language. Unfortunately, they picked XML, which causes much of the clarity of a domain specific language to be lost in a sea of tags. Many developers have attempted to design domain specific languages using XML; the art of real language design seems lacking in today’s computer science classes. And it is so easy today with javacc, antlr, and other compiler compilers.
What does this mean for the OSGi Service Platform? The OSGi Service Platform is currently highly Java oriented; do we need other languages?
First, there is already a domain specific language for Eclipse plugins called eScript, written as an example for the Eclipse FAQ book. Unfortunately it does not run as-is on Eclipse 3.3, but it is an interesting approach. Instead of mucking around with MANIFEST.MF, plugin.xml, resources, and Java code, everything is specified in one concise file. I think this approach is promising. An environment like Eclipse contains so much functionality; it would be very nice if one could tap into that functionality with a short script instead of all the work involved today.
I think the OSGi Service Platform is a great runtime for Domain Specific Languages. OSGi provides wonderful plumbing infrastructure for the generators that map the DSL programs. The nice thing about DSLs is that they only specify domain related issues and leave the plumbing and other chores to the environment. This implies that any overhead the OSGi Service Platform requires can easily be created by the generators. This will make it very easy for people to write highly productive and specialized code while not being bothered with plumbing.
Look for DSL and you will find more than just modems. After a long period in which very few languages were developed, I expect many domain languages to see the light in the coming years. I hope the Java community can embrace this opportunity by allowing these languages to run on Java VMs rather than opposing these innovations. Domain Specific Languages could be a great boon for our productivity, and could make the OSGi Service Platform attractive to a larger audience.
Peter Kriens
Friday, October 20, 2006
OSGi Developer Conference
The OSGi Alliance has teamed up with EclipseCon 2007 (5-8 March, Santa Clara, California) to organize the 2007 OSGi developer conference. This will be the premier conference for OSGi developers to attend in 2007.
We are currently looking for submissions! Long talks, short talks, tutorials, demos, and panels. Please submit a proposal to the Eclipse submissions site in the OSGi track. Proposals will be reviewed by the program committee, on which we have a seat.
Some of the submission deadlines are near, please submit your proposal as soon as possible. The submissions must be registered at the EclipseCon Submissions site. Do not forget to register at the OSGi track.
I know there are a large number of interesting projects under way that are heavily using OSGi technologies. If you are working on one of those projects, do not hesitate to submit a proposal, we have quite a few slots.
We have a proposal on the table to add an OSGi user group meeting at the end of the conference. We will let you know more soon.
Peter Kriens
Thursday, October 19, 2006
JSR 277 Review
The JSR 277 Java Module System is an important JSR for the OSGi Alliance and for me. Though we have been working on Java modularization since 1998, I did not get a chair at the table; the table was already full with 14 seats. Last week, the current 20-member Expert Group dispatched its Early Draft. So this is a personal account of my reading of that draft.
JSR 277 defines a static module system that is similar to the OSGi Require-Bundle approach. Modules define their metadata in the MODULE-INF/METADATA.module resource. Dependencies are specified in the import clause with the module’s symbolic name and version range. Other clauses in the module’s metadata list the resources and classes that are visible to other modules. When a JSR 277 application gets started, it should have a main class with a static main method; however, a module must be resolved, as well as its required modules, before the main method is called.
Module definitions can be obtained from repositories. Repositories contain module definitions that provide metadata to guide the resolving process. Name and version ranges are supported from the metadata; however, an import policy class can be used for more complex resolving strategies. Repositories can be file or URL based, allowing for remote repositories. Repositories are hierarchically linked; if a repository cannot find a module definition, it delegates to its parent. At the top of the chain is a system repository, and above that a boot repository. Module definitions are instantiated before they can be used; this associates a class loader with a module. When all the modules are resolved and instantiated, the main class is called and the program can execute. There is no API for unloading a module.
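The delegation model is easy to picture in code. The sketch below only illustrates the behavior described above; the class and method names are made up and are not the draft’s actual API:

// Illustration only; ModuleDefinition and VersionRange are placeholders.
class ModuleDefinition { }
class VersionRange { }

abstract class Repository {
    private final Repository parent; // chain ends at the system, then boot repository

    protected Repository(Repository parent) {
        this.parent = parent;
    }

    // Search locally first; if the module definition is not found,
    // delegate up the chain until the boot repository gives up.
    public ModuleDefinition find(String name, VersionRange range) {
        ModuleDefinition local = findLocally(name, range);
        if (local != null)
            return local;
        return parent == null ? null : parent.find(name, range);
    }

    protected abstract ModuleDefinition findLocally(String name, VersionRange range);
}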
The ambition level of JSR 277 is significantly lower than where the OSGi specifications are today, even way lower than where we set out in 1998. In a way, the JSR 277 proposal feels rather toyish to me. The Expert Group took a simplistic module loading model and ignored many of the lessons that we learned over the past 8 years: no built-in consistency, no unloading, no package based sharing. Maybe we wasted a lot of time or solved unimportant problems, but I really do not think so. It would be OK if they had found a much cleaner model, well supported by the VM, but they clearly have not. Their model is based on delegated class loaders, just like the OSGi framework minus 8 years of experience. OK, I am biased, so let us take a closer look. Just a warning, though: the following text is utterly boring unless you are handicapped with a deep interest in Java class loaders. Much of the work over the past 8 years went into letting bundle developers worry about application functionality while the OSGi framework worries about class loaders. This stuff should be under the covers. Alas.
The key disappointment in JSR 277 is the lack of dynamics. There is no dynamic loading and unloading of modules/bundles; any change requires a reboot of the VM. The model is geared to the traditional application model of starting an application, running it, and then killing the VM. Fortunately, JSR 291 Dynamic Component Support addresses dynamics and more comprehensive module loading, so let’s hope that will become the Java standard.
The most surprising feature of JSR 277 is a total lack of consistency checking. Modules are loaded by name and version only; there is no verification that the graph of class loaders forms a consistent class space. Take the example in paragraph 8.3.3.3. There are four modules: A, B, C, and D (-> means import):
A -> B v1.0, C v1.0
B -> D v1.1
C -> D v1.0
Modules resolve independently, implying that module B will resolve to module D v1.1 and module C will resolve to module D v1.0. The result is that module A can see the same class coming from module D v1.0 and module D v1.1, likely resulting in class cast exceptions. Or, more concretely, assume module A is your application, module B and module C implement some web framework, and module D is javax.servlet 2.1 and 2.4. If you use a Servlet, do you get it through module B or C? Errors resulting from this inconsistency can be very hard to trace. There is a method in the Module class called deepValidate() that uses a brute force approach, loading all the classes in all modules to see if class cast or class not found exceptions occur. Obviously this is very expensive and not even guaranteed to find most consistency problems, just loading problems!
A key problem with the Require-Bundle/Module approach is ordering. Though the EDR implies that classes are searched in a specific order, this still depends on what other modules do. Assume we have:
A -> B, C, D
B -> D
The order for module A is not B, C, D but instead B, D, C, because module B also imports module D (assuming classes are re-exported). This is one of the reasons I do not like Require-Bundle, though in practice we have much less of a problem with it than JSR 277 will have. The issue is split packages.
Split packages are packages that come partly from one module and partly from another. Split packages are nasty because package-level access security does not work, and it is also easy to unexpectedly shadow classes and resources.
Security problems occur with split packages when you load class com.p.A from module A and com.p.B from module B. The different class loaders make it impossible for A to see package private methods in B, despite the fact that they reside in the same package. This is not a theoretical problem, especially if multiple versions of the same module are supported. It is easy to pick up a new class from a new version but then load the auxiliary classes from a module that is listed earlier.
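To make the package private problem concrete, here is a miniature example (classes made up for illustration):

// In module A, class com.p.A; in module B, class com.p.B.
package com.p;

public class A {
    public void run() {
        // helper() is package private. With A and B loaded by the same
        // class loader this compiles and runs fine. In different modules
        // they get different class loaders, the VM then treats the two
        // com.p packages as different runtime packages, and this call
        // fails with an IllegalAccessError at run time.
        new B().helper();
    }
}

// Elsewhere, in module B:
// package com.p;
// public class B {
//     void helper() { } // package private, only visible inside com.p
// }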
Worst of all, none of this is under the control of the importer. The importer just imports a name and version, and then receives whatever the exporter decides to export. Obviously, this is a brittle connection. For example, module A contains packages a and b. One day module A is refactored and package b is moved to module B. This is not uncommon, because big modules are not easy to work with (more about that later). Unfortunately, all clients of module A must now be changed to add module B to their import lists, even though nothing has changed except the packaging. Why is this not a problem with OSGi imports? Well, OSGi imports packages: in the previous example, package b is simply imported as before, and the framework wires the import to whichever bundle exports the package, so the repackaging is invisible to the client.
Another problem with modules is their fan-out. Typical projects create JARs that are useful in a number of situations. For example, a program like bnd (which creates bundle manifest headers from the class files) can be used as an Ant task, an Eclipse plugin, a Maven plugin, and from the command line. It therefore has to import packages from Ant, Eclipse, and Maven. Those dependencies are needed and are therefore more or less OK (though it would be nice to make them optional, a feature missing from JSR 277). However, Ant will import another (large) set of modules; similarly for Maven and Eclipse, ad nauseam. The bnd program is a terrifying example, but the problem is that the usual fan-out of a module is large, and the fact that dependencies are transitive worsens this effect. If you want to see the effect of module-like imports, check out Maven: a simple hello world program can drag in hundreds of modules due to the transitive dependencies. Modules (and Require-Bundle as well) are a typical example of creating a construct to solve a problem while simultaneously creating problems on the next level, problems which are usually ignored in the first version only to bite the early adopters.
A puzzling aspect of JSR 277 is the dependency on super packages from JSR 294. Super packages are shrouded in a veil of mystery; the only public information so far came from Gilad Bracha’s blog. JSR 277 unveils a bit more, but many questions are left unanswered. Super packages list all their member classes, and a class can only be a member of a single module. In JSR 277, it is implied that this module file maps closely to the module metadata. If this is true, then this is a huge constraint: it means that the deployment format is rigidly bound to the development time modules. Each development module must be deployed separately.
I have found that managing the packaging is a powerful tool for the deployer. I have written many bundles that mix and match packages from different projects. This flexibility is needed because there are two counteracting forces. On one side, you want to simplify deployment by deploying the minimum number of bundles; anybody who has had to chase dependencies knows how annoying the need for more and more bundles is. On the other hand, bigger bundles are likely to create more dependencies because they contain more (sometimes unnecessary) code, so you also want to minimize the number of external dependencies. Sometimes the best solution is to include the code of other JARs in a deployment JAR. The proposed super packages will put a stop to that model: it will be impossible to manage the deployment packaging because the programmer has decided the packaging a priori. Bad idea.
The biggest regret the EG members will have within 2 years is the import policy. Why? Isn’t it a nice idea that the programmer can participate in the resolving? Well, in Java the solution to such problems looks deceptively simple: use a class to abstract the required functionality. An import policy is a class in your module that gets called during the resolving of the module. It can inspect the module metadata, query the repositories, and bind other modules.
In the OSGi Alliance, we inherited a similar idea from the Java Embedded Server (JES), the archetypical OSGi framework developed by Sun. After countless hours of talking and testing, we decided that it was a bad idea.
Wasn’t there anything I liked? Well, I like the idea of the repositories. The OSGi framework maintains its repository internally with the installed bundles; it left external repositories outside the specification by standardizing only the installation API. Repositories allow the framework to download or find modules just before activation. This is an interesting model, pursued by Maven and by the OSGi Alliance with OBR.
However, I think the JSR 277 repository model is too simplistic. It codifies the current Maven repository model, which is still immature and will likely change over time. For example, the current model takes only two extra constraints into account: OS and platform. Unfortunately, life is seriously more complicated. OSs have their own versioning schemes, processors have compatibility matrices (e.g. x86 code runs on an i586; win32 is compatible with WinXP but not vice versa), and the OS is often a combination of OS and window system. Encoding these constraints in the file name is obviously bound to collide with reality one day. In contrast, the OSGi OBR uses a generic requirement-capability model, derived from JSR 124, that is much better suited to finding the right module.
Well, the paper document I reviewed is heavily covered with my marker and I could continue for many more pages. However, that is too much for this blog. Well, OK, one last complaint: the syntax for version ranges; it is too tempting to leave alone (though it is not that important). The industry has more or less standardized on versions with major, minor, and micro numbers and a string qualifier. JSR 277 adds an update number between minor and qualifier. (Maybe one day someone can explain to me why we need all those numbers when the only thing being signaled is whether a change is backward compatible or not, but developers seem to like having all those numbers.) However, I have no problem adding another number beneath minor. I do have a problem with a version range syntax that is obtuse and non-standard.
Intervals have a mathematical notation that is easy to understand: parentheses are non-inclusive and brackets are inclusive. The interval 1 < x <= 5 is therefore represented as (1,5]. [2.3,5.1) indicates any version that is greater than or equal to 2.3 and less than 5.1. Simple, elegant, and it has been around since Pythagoras. Choosing this notation for the OSGi specifications was a no-brainer.
Now let us take a look at what JSR 277 brewed. They use the same terminals as regular expressions, but with a very different meaning. A partial version can be suffixed with a + (anything later) or a * (anything with the same prefix). So 1+ means [1,∞), and 1* means [1,2). The JSR uses square brackets to fix the first positions while floating the last. For example, 1.[2.3+] is [1.2.3,∞) and 1.[2.3*] is [1.2.3,1.2.4). The JSR 277 syntax has no easy formulation for the not uncommon case [1,5); the only way to express it is a concatenation of version ranges: 1*;2*;3*;4*. OK, enough about this strange syntax.
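Restating the examples above side by side makes the difference plain:

OSGi [1.2.3,∞)        JSR 277 1.[2.3+]
OSGi [1.2.3,1.2.4)    JSR 277 1.[2.3*]
OSGi [1,2)            JSR 277 1*
OSGi [1,5)            JSR 277 1*;2*;3*;4*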
Conclusion: JSR 277 takes a simplistic view of the world, ignoring many real life problems: consistency, optionality, split packages, etc. The only way this EG can pass its final review is through a lack of attention to detail on the part of the Executive Committee or, alternatively, serious muscle power from Sun. JSR 277 is a missed opportunity for Java. I do not doubt that this specification will end up in Java 7, but it will further fragment the Java world for no technical reason. Not only is this specification impossible to implement on J2ME any time soon, it will also leave the many OSGi adopters out in the cold. Why?
Peter Kriens
JSR 277 defines a static module system that is similar to the OSGi Require-Bundle approach. Modules define their metadata in the
MODULE-INF/METADATA
.module
resource. Dependencies are pecified in the import clause with the module’s symbolic name and version range. Other clauses in the module’s metadata list the resources and classes that are visible to other modules. When a JSR 277 application gets started, it should have a main class with a static main
method. However, a module must be resolved, as well as its required modules, before the main method is called.Module definitions can be obtained from repositories. Repositories contain module definitions that provide metadata to guide the resolving process. Name and version ranges are supported from the metadata. However, an import policy class can be used to do more complex resolving strategies. Repositories can be file or URL based, allowing for remote repositories. Repositories are hierarchically linked; if a repository can not find a module definition, it will delegate to its parent. At the top of the chain is a system repository and above that a boot repository. Module definitions are instantiated before they can be used; this associates a class loader with a module. When all the modules are resolved and instantiated, the main class is called and the program can execute. There is no API for unloading a module.
The ambition level of JSR 277 is significantly lower than where the OSGi specifications are today, even way lower than we set out in 1998. In a way, for me the JSR 277 proposal feels rather toyish. The Expert Group took a simplistic module loading model and ignored many of the lessons that we learned over the past 8 years. No built in consistency, no unloading, no package based sharing. Maybe we wasted a lot of time or solved unimportant problems, but I really do not think so. It would be ok if they had found a much cleaner model, well supported by the VM, but it is clearly not. Their model is based on delegated class loaders, just like the OSGi framework minus 8 years of experience. Ok, I am biased, so let us take a closer look. Though just a warning, the following text is utterly boring unless you are handicapped with a deep interested in Java class loaders. Much of the work over the past 8 years was making bundle developers worry about application functionality and let the OSGi framework worry about class loaders. This stuff should be under the covers. Alas.
The key disappointment in JSR 277 is the lack of dynamics. There is no dynamic loading and unloading of modules/bundles; any change requires a reboot of the VM. The model is geared to the traditional application model of starting an application, running it, and then killing the VM. Fortunately, JSR 291 (Dynamic Component Support) addresses dynamics and more comprehensive module loading, so let us hope that will become the Java standard.
The most surprising feature of JSR 277 is a total lack of consistency checking. Modules are loaded by name and version only; there is no verification that the graph of class loaders forms a consistent class space. Take the example in paragraph 8.3.3.3. There are four modules: A, B, C, and D (-> means import):
A -> B v1.0, C v1.0
B -> D v1.1
C -> D v1.0
Modules resolve independently, implying that module B will resolve to module D v1.1 and module C will resolve to module D v1.0. The result is that module A can see the same class coming from both module D v1.0 and module D v1.1, likely resulting in class cast exceptions. Or more concretely: assume module A is your application, modules B and C implement some web framework, and module D is javax.servlet 2.1 and 2.4. If you use a Servlet, do you get it through module B or C? Errors resulting from this inconsistency can be very hard to trace. There is a method in the Module class called deepValidate() that uses brute force: it loads all the classes in all modules to see if class cast or class not found exceptions occur. Obviously this is very expensive and not even guaranteed to find most consistency problems, just loading problems!
A key problem with the Require-Bundle/Module approach is ordering. Though the EDR implies that classes are searched in a specific order, this still depends on what other modules do. Assume we have:
A -> B, C, D
B -> D
The order for module A is not B, C, D but instead B, D, C, because module B also imports module D (assuming classes are re-exported). This is one of the reasons I do not like Require-Bundle, though in practice we have much less of a problem with it than JSR 277 will have. The issue is split packages.
Split packages are packages that come partly from one module and partly from another module. Split packages are nasty because package private access does not work across modules and it is also easy to unexpectedly shadow classes and resources.
Security problems occur with split packages when you load class com.p.A from module A and com.p.B from module B. The different class loaders make it impossible for com.p.A to see package private members of com.p.B, despite the fact that they reside in the same package. This is not a theoretical problem, especially if multiple versions of the same module are supported: it is easy to pick up a new class from a new version but then load the auxiliary classes from a module that is listed earlier.
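A minimal sketch of how this fails at run time (the file layout and method name are invented for illustration). The JVM defines a runtime package per class loader and package name pair, so two classes that share a package name but come from different modules do not get package private access to each other:

// File com/p/A.java, shipped in module A
package com.p;

public class A {
    void hello() {                         // package private method
        System.out.println("hello from A");
    }
}

// File com/p/B.java, shipped in module B
package com.p;

public class B {
    public void call(A a) {
        // Compiles fine, because the compiler only sees the shared
        // package name. At run time, however, A and B live in different
        // runtime packages (different class loaders), so this call
        // throws java.lang.IllegalAccessError.
        a.hello();
    }
}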
Worst of all, none of this is under the control of the importer. The importer just imports a name and version, and then receives whatever the exporter decides to export. Obviously, this is a brittle connection. For example, module A contains packages a and b. One day module A is refactored and package b is moved to module B. This is not uncommon, because big modules are not easy to work with (more about that later). Unfortunately, all clients of module A must now be changed to add module B to their import lists, even though nothing has changed except the packaging. Why is this not a problem with OSGi imports? Well, OSGi imports packages; in the previous example package b will simply be obtained from module B without any change. It might not seem like a big deal, but for large systems this can amount to major work.
Another problem with modules is their fan out. Typical projects create JARs that are useful in a number of situations. For example, a program like bnd (which creates bundle manifest headers from the class files) can be used as an Ant task, an Eclipse plugin, from the command line, and as a Maven plugin. It will therefore have to import packages from Ant, Eclipse, and Maven. Those dependencies are needed and are therefore more or less ok (though it is nice to make them optional, a feature missing from JSR 277). However, Ant will import another (large) set of modules. Similarly for Maven and Eclipse, ad nauseam. The bnd program is a terrifying example, but the point is that the usual fan out of a module is large, and the fact that dependencies are transitive worsens this effect. If you want to see the effect of module-like imports, check out Maven: a simple hello world program can drag in hundreds of modules due to the transitive dependencies. Modules (and Require-Bundle as well) are a typical example of creating a construct to solve a problem while simultaneously creating problems on the next level, problems which are usually ignored in the first version only to bite the early adopters.
A puzzling aspect of JSR 277 is the dependency on super packages from JSR 294. Super packages are shrouded in a veil of mystery; the only public information so far came from Gilad Bracha’s blog. JSR 277 unveils a bit more, but many questions are left unanswered. Super packages list all their member classes, and a class can only be a member of a single module. In 277, it is implied that this super package maps closely to the module metadata. If this is true, then this is a huge constraint: it means that the deployment format is rigidly bound to the development time modules. Each development module must be deployed separately.
I’ve found that managing the packaging is a powerful tool for the deployer. I have written many bundles that mix and match packages from different projects. This flexibility is needed because there are two counteracting forces. On one side you want to simplify deployment by deploying the minimum number of bundles; anybody who has had to chase dependencies knows how annoying the need for more and more bundles is. On the other hand, bigger bundles are likely to create more dependencies because they contain more (sometimes unnecessary) code, so you also want to minimize the number of external dependencies. Sometimes the best solution is to include the code of other JARs in a deployment JAR. The proposed super packages will put a stop to that model: it will be impossible to manage the deployment packaging because the programmer has decided the packaging a priori. Bad idea.
The biggest regret the EG members will have within 2 years is the import policy. Why? Isn’t it a nice idea that the programmer can participate in the resolving? Well, in Java the solution to those problems looks deceptively simple: use a class to abstract the required functionality. An import policy is a class in your module that gets called during the resolving of the module. It can inspect the module metadata, query the repositories and bind other modules.
In the OSGi Alliance, we inherited a similar idea from Java Embedded Server (JES), the archetypical OSGi framework developed by SUN. After countless hours of talking and testing we decided that it was a bad idea for two key reasons:
- By definition, you do not have a valid Java environment before you have resolved all the required modules. Code executed in a module that is not resolved is in limbo and is bound to run into problems. There are also several security related issues.
- A procedural solution prevents management systems from predicting which module will resolve to which module. Within the OSGi specifications, we spent countless hours making the systems predictable for exactly this reason. This is necessary because in the future more and more systems will be managed (I can’t wait for that day!). Inserting a user class into the resolution process leaves the management system in the dark. Interestingly, Java made the same mistake with security permissions. Abstracting the permissions as a class is good OO practice, but it kills any possible optimization. A declarative approach could have significantly reduced the 20%-30% overhead of Java security, while the flexibility that the model offers is rarely used.
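To make the point concrete, here is a sketch of what an import policy could look like. All of these names are invented, not the EDR's actual API, but the shape of the problem is the same: arbitrary user code runs in the middle of resolution, before the module has a valid class space of its own.

interface ModuleDefinition {
    String getName();
    String getVersion();
}

interface Repository {
    java.util.List<ModuleDefinition> find(String name, String versionRange);
}

public class MyImportPolicy {
    // Called by the resolver for each import of the module, before the
    // module itself has been resolved.
    public ModuleDefinition resolveImport(String name, String versionRange,
            Repository repository) {
        for (ModuleDefinition candidate : repository.find(name, versionRange)) {
            // Any logic can live here: read a file, check a license,
            // call the network. A management system cannot predict
            // which module this will bind to.
            if (!candidate.getVersion().endsWith("-beta"))
                return candidate;
        }
        return null; // resolution fails
    }
}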
Wasn’t there anything I liked? Well, I like the idea of the repositories. The OSGi framework maintains its repository internally with the installed bundles; it left external repositories outside the specification by standardizing the installation API. Repositories allow the framework to download or find modules just before activation. This is an interesting model, pursued by Maven and by the OSGi Alliance with OBR.
However, I think the JSR 277 repository model is too simplistic. It codifies the current Maven repository model, which is still immature and will likely change over time. For example, the current model takes only two extra constraints into account: OS and platform. Unfortunately, life is seriously more complicated. Operating systems have their own versioning schemes, processors have compatibility matrices (e.g. x86 code runs on an i586, win32 is compatible with WinXP but not vice versa), and what is called the OS is often a combination of OS and window system. Encoding these constraints in the file name is obviously bound to collide with reality one day. In contrast, the OSGi OBR uses a generic requirement-capability model derived from JSR 124 that is much better suited for finding the right module.
Well, the paper document I reviewed is heavily covered with my marker and I could continue for more pages. However, it is too much for this blog. Well, ok, one last complaint: the syntax for version ranges; it is too tempting a target to leave alone (though it is not that important). The industry has more or less standardized on versions with a major, minor, micro number and a string qualifier. JSR 277 adds an update number between micro and qualifier. (Maybe one day someone can explain to me why we need all those numbers when the only thing that is signaled is whether a change is backward compatible or not, but developers seem to like to have all those numbers.) However, I have no problem adding another number beneath micro. I do have a problem with a version range syntax that is obtuse and non-standard.
Intervals have a mathematical notation that is easy to understand: parentheses are non-inclusive and brackets are inclusive. The interval 1 < x <= 5 is therefore represented as (1,5]. [2.3,5.1) indicates any version that is greater than or equal to 2.3 and less than 5.1. Simple, elegant, and it has been around since Pythagoras. Choosing this notation for the OSGi specifications was a no-brainer.
Now, let us take a look at what JSR 277 brewed. They use the same terminals as regular expressions, but with a very different meaning. A partial version can be suffixed with a + (anything later) or a * (anything with the same prefix). So 1+ means [1,∞) and 1* means [1,2). The JSR uses square brackets to fix the first positions while floating the last. For example, 1.[2.3+] is [1.2.3,∞) and 1.[2.3*] is [1.2.3,1.2.4). The JSR 277 syntax has no easy formulation for the not uncommon case [1,5); the only way to express this is with a concatenation of version ranges: 1*;2*;3*;4*. Ok, enough about this strange syntax.
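Side by side, the examples above come down to this:

interval notation     JSR 277 syntax
[1,∞)                 1+
[1,2)                 1*
[1.2.3,∞)             1.[2.3+]
[1.2.3,1.2.4)         1.[2.3*]
[1,5)                 1*;2*;3*;4*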
Conclusion: JSR 277 takes a simplistic view of the world, ignoring many real life problems: consistency, optionality, split packages, etc. The only way this EG can pass its final review is through lack of attention to detail on the part of the Executive Committee or, alternatively, serious muscle power from SUN. JSR 277 is a missed opportunity for Java. I do not doubt that this specification will end up in Java 7, but it will further fragment the Java world for no technical reason. Not only is this specification impossible to implement on J2ME any time soon, it will also leave the many OSGi adopters out in the cold. Why?
Peter Kriens
Friday, September 29, 2006
Alternative OSGi Styles
When you spend most of your daily life worrying about extremely detailed Java class loader peculiarities, you tend to start thinking that what you’re doing is really important. This feeling is amplified when you see all those important people taking it really seriously. And then you suddenly get confronted by a group of people that completely violates our specification but still does something very interesting.
I am talking about osxa. They looked at our specification and decided that the goal was lofty but that it contained too much unnecessary cruft for them. They just took the interesting 80% (well, interesting for them) and ignored the rest. I almost took it personally: they just put all bundles on the class path! Ok, it meant that they did not need class loaders anymore, and that opened a larger range of deployment scenarios. For example, they can deploy their applications in a JNLP web start without the need for signing.
Obviously this implies they have no proper isolation between bundles; bundles can conflict in their packages. However, this isolation is not always needed, especially when the set of bundles that gets deployed together is fixed. These conflicts can then be resolved ahead of time.
And that is the different, surprising perspective they took. They love the service registry and do not care about the module layer. They needed a model where they could use many different components, bunch them together, and deploy them as a whole in a JNLP deployment or inside a web app server. What, me worry about bundle life cycles?
Actually, when I realized how much they liked the service registry, I could not stop myself from warming up to them. I am often told that the really important part of OSGi is the module layer; the service registry is just a hanger-on. In JSR 291 we actually tried to remove the service registry to simplify the specification. We did not, because the module layer and the service registry were very hard to separate. However, I always liked the services most; for me, the module layer was the necessity but the services were the interesting part.
Obviously the osxa people trample over our carefully crafted specifications. However, I think their perspective is interesting. It demonstrates the power of the specification that you can write bundles that will run in a compliant framework but also in something that is wildly different. I have always felt that bundles are the important parts; as long as bundles can be written against a good specification, framework developers should be allowed a lot of leeway. It is interesting to see how they use quite a few Equinox bundles.
Anyway, osxa will not receive compliance anytime soon because they completely discard the module layer and take a lot of liberty with other aspects. They can therefore obviously not call themselves an OSGi framework implementation, and developers should be very careful not to target any of the peculiarities of this framework. Still, it is interesting to see that the success of the OSGi specifications is enabling these initiatives; initiatives that can have value in certain circumstances and clearly prove the viability of the OSGi model.
Now, if only they could get their demo working again …
Peter Kriens
Tuesday, September 26, 2006
OSGi UIs and the Web
The OSGi user interface has been a problematic aspect from the beginning, despite the fact that there are so many to choose from: awt, swt, lcdui, swing, thinlet, flash, svg, p3ml, proprietary, and html based. We tried many, many times to come up with a model that all could live with, so far to no avail.
However, one model that seems to emerge is web based UIs. The advent of DHTML, Javascript, and SVG provides an extremely rich environment for application development. Unfortunately, Javascript is not a nice language for handling large and complex problems. Many companies solve this by using web services to distribute the work between the graphical user interface (GUI) and a server. Web services are remote procedure calls done over the HTTP(S) protocol. This model is reasonably well supported by Javascript using the XMLHttpRequest object.
The web services model maps very well to an OSGi service platform. In a way, you just need a way to easily call the services in the OSGi service registry from Javascript. So what would such an application look like?
Let us first lay out the bundles; I like to think in bundles because it is nicely concrete and down to earth. We obviously need a web server, which is easy to fulfill with the OSGi Http Service. We then require a bundle to handle the grunt work of the remote procedure calling and another bundle to prove that it works.
The rpc calling bundle has the responsibility to handle the remote procedure calls from Javascript and map them to calls on an OSGi service. I’ve called this bundle webrpc. The second bundle is the demo application. This summer I emotionally blackmailed my son Thomas into learning how to write a program; he caved in and wrote a working Sudoku game in Java after a false start in C++. This exercise gave me more experience with the Sudoku domain than I ever wanted to have, so the demo will be a sudoku bundle.
The webrpc bundle should not indiscriminately export services; there are obvious security issues, and not all services are well suited to be exported. However, we’d like to keep the service as simple as possible. The choice was made to look for a public property on the service registration. This is easy to set with declarative services, and it means we can use a POJO as our service; the service itself is not involved with the remote procedure calling. The value of the property is the name under which the service should be known to the outside world.
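For illustration, this is roughly what setting that property looks like when registering a service programmatically. Sudoku and SudokuImpl are hypothetical names; the actual bundles declare the property through declarative services instead.

import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    public void start(BundleContext context) {
        Hashtable properties = new Hashtable();
        properties.put("public", "sudoku"); // external name of the service
        context.registerService(Sudoku.class.getName(), // hypothetical service interface
                new SudokuImpl(), properties);
    }

    public void stop(BundleContext context) {
        // the framework unregisters our services when the bundle stops
    }
}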
I could have used a complete web services stack; there is an amazing number of those available, almost all written in Java. However, I wanted to run the demo on my Nokia E70 phone that runs an OSGi service platform. Memory is only abundant on PCs and servers, so a complete web services stack can add up. It also adds up on the Javascript side, because a compliant web services stack is non-trivial. I therefore decided to optimize for space and implement a very lightweight model of RPC over HTTP. Requests are done using the HTTP protocol: the path provides the service and its method, while the query parameters provide the arguments. I used 0 for the first argument, 1 for the next, and so on. For example:
http://localhost/rpc/sudoku/newGame?0=simple
This request will invoke the newGame method on the service that is registered with the property public set to “sudoku”.
The webrpc bundle depends on the Http Service. When there is an Http Service present, it registers itself under the /rpc alias. The servlet will get called for all requests that start with /rpc. For such a request, the webrpc bundle looks up the service name (in the example sudoku) in the service registry, using a simple query with the OSGi filter language. The next part is calling the right method on this service; this is non-trivial because it is necessary to coerce all the arguments from strings into the method argument types. I wrote a separate Invoker class to handle this coercion. Read it and weep; most of the work is handling Java’s silly primitives.
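A rough sketch of those two steps, the lookup and the coercion. This is not the actual webrpc source; error handling and most primitive types are elided.

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class RpcSketch {
    // Find the service registered with the given public name, e.g.
    // the filter (public=sudoku) for a request to /rpc/sudoku/...
    Object lookup(BundleContext context, String name) throws Exception {
        ServiceReference[] refs = context.getServiceReferences(null,
                "(public=" + name + ")");
        return refs == null ? null : context.getService(refs[0]);
    }

    // Coerce one query parameter to the type the method parameter expects.
    Object coerce(String value, Class type) {
        if (type == String.class) return value;
        if (type == int.class || type == Integer.class)
            return Integer.valueOf(value);
        if (type == long.class || type == Long.class)
            return Long.valueOf(value);
        if (type == boolean.class || type == Boolean.class)
            return Boolean.valueOf(value);
        if (type == double.class || type == Double.class)
            return Double.valueOf(value);
        // ... the remaining primitives work the same way
        throw new IllegalArgumentException("Cannot coerce to " + type);
    }
}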
When the method has been called it returns an Object. This object must be shipped back to the Javascript in the browser. Now, the gut reaction of most programmers is to think XML. Not me, this time: there is another standard called JSON that is so much easier for Javascript. It is a representation of strings, numbers, arrays, and maps that is compatible with the Javascript syntax. Converting an object into JSON is quite straightforward for values like numbers, strings, maps, and arrays; references to objects are a bit harder, but that is a nice subject for another article. The JSON value is then returned as the body of the HTTP response. The Javascript will receive this text body and compile it into Javascript arrays, values, and dictionaries. From the Javascript point of view it cannot be simpler.
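The encoding itself fits on a page. A rough sketch, not the actual webrpc source, with string escaping and object references elided:

import java.util.Collection;
import java.util.Iterator;
import java.util.Map;

public class JsonSketch {
    String toJson(Object o) {
        if (o == null) return "null";
        if (o instanceof Number || o instanceof Boolean) return o.toString();
        if (o instanceof String) return '"' + (String) o + '"'; // escaping elided
        if (o instanceof int[]) {               // e.g. the sudoku board
            StringBuffer sb = new StringBuffer("[");
            int[] array = (int[]) o;
            for (int i = 0; i < array.length; i++) {
                if (i > 0) sb.append(',');
                sb.append(array[i]);
            }
            return sb.append(']').toString();
        }
        if (o instanceof Collection) {
            StringBuffer sb = new StringBuffer("[");
            for (Iterator i = ((Collection) o).iterator(); i.hasNext();) {
                sb.append(toJson(i.next()));
                if (i.hasNext()) sb.append(',');
            }
            return sb.append(']').toString();
        }
        if (o instanceof Map) {
            StringBuffer sb = new StringBuffer("{");
            for (Iterator i = ((Map) o).entrySet().iterator(); i.hasNext();) {
                Map.Entry e = (Map.Entry) i.next();
                sb.append(toJson(e.getKey().toString()));
                sb.append(':').append(toJson(e.getValue()));
                if (i.hasNext()) sb.append(',');
            }
            return sb.append('}').toString();
        }
        throw new IllegalArgumentException("Cannot encode " + o.getClass());
    }
}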
The webrpc bundle maps requests that do not look like a method name to a getResource() call on the service object’s class. A service implementer can place all its web resources in the www directory and they are automatically mapped. That is, if someone requests /rpc/sudoku/prototype.js, it will load the resource from aQute/sudoku/www/prototype.js, assuming the implementation class is in the aQute.sudoku package.
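That mapping is essentially one line, because Class.getResource() resolves a relative path against the class’s own package (fileName here is the hypothetical remainder of the request path, e.g. "prototype.js"):

// For a service implemented by aQute.sudoku.SudokuImpl this finds
// aQute/sudoku/www/prototype.js inside the bundle.
java.net.URL url = service.getClass().getResource("www/" + fileName);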
All this effort in the webrpc bundle makes the sudoku bundle almost trivial. Well, I guess it was trivial in the first place, because the server only calculates the board and this is not rocket science. The Sudoku service only implements a newGame method. This method returns an array of 81 integers (remember, the board is 9x9). Negative integers mark positions that are shown from the beginning; positive integers are to be guessed by the player. For the HTML side I decided to make a splash screen in SVG. Hey! We now have those fantastic tools, so let us use them. Though splash screen sounds more dramatic than it turned out; I admit, I am a lousy artist.
For the handling of the HTML and the HTTP request calls I am using prototype.js. The Javascript and HTML are actually quite small. The Sudoku board is completely drawn in HTML and formatted using CSS. When the Javascript starts up, it requests a new game from the Sudoku service. When this information (asynchronously) arrives, it formats the board with the received information. Rather cool in this model is the use of Cascading Style Sheets (CSS): CSS makes the mundane work of formatting all the little details nicely manageable. However, remember I am not an artist, so the looks can be improved.
Both the webrpc and sudoku bundles used declarative services. The webrpc bundle was hard to test standalone because it relies heavily on the OSGi service registry. The Sudoku service was however a POJO and could therefore easily be tested.
Try it out! If you think it looks too big, just make the window smaller. Notice how nicely it all scales because CSS and SVG are used to adjust all parameters to the window size.
You can download the bundles from the OSGi Bundle Repository (OBR). The JARs contain the source code of the bundles in the OPT-INF/src directory. See how extremely little code is required to achieve the goals. Enjoy!
Peter Kriens
Wednesday, September 13, 2006
Enterprise Workshop
I am typing this on my way back home from a very successful enterprise workshop. Over 30 people from many different companies had taken the trouble to show up for this requirements gathering session; many of them OSGi members but also a number of new companies that are interested in using the OSGi service platform in less resource constrained environments than home gateways, cars, or mobile phones. After this busy day I can safely say that there is sufficient interest to start working on enterprise OSGi! We will create the charter for the EG in the coming weeks.
How do you gather the requirements from 30 people at the same time? Not an easy task. We decided to first get a short introduction from all participants and then let volunteers present their position statements. We got a few interesting presentations from Siemens, SAP, and others about where they see the opportunities for the OSGi specifications to play a role. We then spent some time doing a round table discussion to capture the ideas and requirements from the different participants. These ideas were categorized and then sorted by voting. After several failed voting system attempts we decided to vote with post-it notes glued to the projector screen, which reminded Jon Bork from Intel of “hanging chads”. Fortunately, we did not need the supreme court to intervene.
“Stay modular and small”. This was one of the first things Hal Hildebrand from Oracle said, and nobody disagreed. People like the OSGi technology because it was created for a constrained environment, which has given it a lean appearance. Unanimously, we wanted to keep it that way. The new specifications required for the enterprise group should not fragment the world like J2ME and J2SE did, and neither should they bloat like J2EE. New features should be added in a modular fashion so that the basic framework remains usable in embedded environments.
A key requirement we identified was distribution. The strength of the OSGi specifications is that they specify a very efficient model for multiple applications to share a single VM in a single process. However, when moving to the enterprise it becomes crucial to provide mechanisms to scale to multiple processes, multiple VMs, multiple machines, and multiple languages. So far, we have not addressed issues outside the VM because inter-process communication requires CPU intensive mechanisms that are just not viable in resource constrained environments. It is, however, clear that the enterprise world is more than willing to pay the extra price for the increased scaling, availability, reliability, and access to applications written in languages other than Java that a distributed model can provide. As was remarked, most new code is just the fur on a giant hairball of legacy applications. From an implementation point of view, the OSGi service registry seems the perfect candidate to implement these requirements.
Another requirement was more focus on the whole life cycle of applications. The OSGi Alliance provides good specifications for the service platform, but development, debugging, deployment, management, auditing, monitoring, patching, education, and certification of applications have so far been out of scope. These were considered crucial aspects for a “universal middleware” standard to succeed in the enterprise.
There was also a very interesting discussion about more comprehensive dependency management. Can we model the aspects that are outside the VM? More extensive metadata can go a long way to make deployments more accurate, and it offers possibilities for grid computing like models.
A number of new services were proposed. An interesting one was the network awareness service. I remember discussing such a service at least 5 years ago. Time flies. Also, people would like to see even more security mechanisms, an enhanced configuration admin, data synchronization, web services stacks, and several more.
There were many more topics raised and all of them deserve further discussion. However, the previously sketched areas seem to cover the most urgent needs. The next steps are to draft a charter for the expert group, select a chair (if you are interested, you should propose yourself as a candidate), get board approval, and then have our kick-off meeting so we can start working. Most companies volunteered people to work on different aspects of the specifications.
The general scope of the EG is clear: we want to make OSGi feasible in the more distributed enterprise world and address the non-Java oriented aspects of enterprise applications. If your company is working in this scope, I would seriously consider joining. Looking at the current adoption rate of OSGi in this area and the participating companies, I would say that we will probably develop some pretty important standards in the coming time; specifications that will impact the work of many. Though we have a sufficient number of companies to get started, we can always use more!
Peter Kriens
Tuesday, August 29, 2006
OSGi and Spring
Update: JSR 232 is approved in the final ballot!
Last week I visited the BEA offices for a meeting with the Spring-OSGi group. This project started a few months ago when Interface21 (the shepherds of Spring), specifically Adrian Colyer, contacted us to see if we could make the Spring and OSGi technologies compatible. At first I had to look a bit deeper into Spring because I had only heard of the technology. I quickly found out that it is highly popular in a diverse range of Java applications. So what does Spring do?
The core responsibility of Spring is configuration. One of the key software lessons our industry learned in the last decade is that you want your software modules to be as cohesive and uncoupled as possible: the more uncoupled software is, the easier it is to reuse. However, in the end you need some place where the modules are wired together and their options set. It is like squeezing a balloon: you can never get rid of the coupling, you can only concentrate it in as few places as possible. This is similar to the model that is promoted for OSGi applications, where your bundle activator should configure a number of POJOs that finally do the real work. Spring's reason for being is based on this software model.
The Spring core interprets an XML file that describes how a set of beans must be wired together. A bean is a POJO that is configured with get and set methods. Beans are accessed using reflection but can optionally participate by implementing a number of Spring interfaces. The Spring XML looks a lot like a scripting language (something XML is extremely ill suited for, but that is another story, and yes, I know there is an alternative) but actually creates a graph and can therefore correctly handle dependencies and ordering.
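For readers who have not seen Spring, a minimal sketch of such a wiring file (the bean names and classes are invented, and the header/schema declarations are omitted):

<beans>
  <bean id="board" class="com.example.BoardImpl"/>
  <bean id="game" class="com.example.GameImpl">
    <!-- injects the board bean through GameImpl.setBoard() -->
    <property name="board" ref="board"/>
    <property name="level" value="simple"/>
  </bean>
</beans>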
On top of this configuration core, the Spring library has a large number of nicely decoupled libraries that can be used to decorate the beans. Do you need JTA transaction support, but your POJO does not know anything about JTA? No problem: with Spring you can provide the required transaction calls around your methods without changing the source code. Spring uses two forms of aspect oriented programming to achieve the decoration: you can use events around the method calls defined in an interface, or you can use full blown AspectJ programming. There is also a library that provides a shell around JDBC, removing any implementation intricacies. And there is of course much, much more. Overall pretty cool stuff.
Oracle and BEA were also very interested in providing Spring capabilities on top of the OSGi Service Platform. Hal Hildebrand (Oracle) and Andy Piper (BEA) were part of the team doing the work. On the mailing list, Richard Hall and Jeff McAffer also tuned in. With these people, you are guaranteed lots of interesting discussions on the Spring-OSGi mailing list; I learned a lot about the features required to integrate larger enterprise applications. There is a lot of messy code out there! It is amazing how many home grown mechanisms application programmers have invented to get some kind of extensibility. Many of these mechanisms have peculiar ideas about class loading that do not always mesh well with the rather strict OSGi rules. One of the key focuses of OSGi R5 will likely be addressing how this legacy code can run unmodified on OSGi service platforms.
A couple of weeks ago I decided to plunge into the deep end and see what all the fuss was about, so I wrote a simple application using the Spring-OSGi prototype. Adrian had written a very nice draft of a spec so it was not hard to get started. After a couple of hours the application was running and I seemed to be able to use many of the facilities. From an OSGi perspective, I had some qualms: I thought some of the solutions were too cumbersome and there was too much choice in certain areas. I wrote a review report from my experiences, which I had to redo because it got lost in the mail ... I think the project was going very well, but I felt we needed to do a hack session together. I did not have enough Spring experience to change their code, and I think they could use some OSGi experience to improve their code. We therefore decided to meet in London. Olivier Gruber and BJ Hargrave from IBM also attended because of their OSGi expertise and interest in Spring.
The group had become a bit too big to do any real hacking but that turned out to be ok. We spent most of the first day going through Adrian's excellent specification text. We were incredibly productive that day, we killed so many features! With lots of intense discussions we could significantly simplify the model. It turned out that the OSGi service registry and the Spring application context could really work together very well. The tricky part is of course the dynamics, the OSGi specifications seem to remain unique in their focus on dynamicity. Interestingly, the end result looks like a specification that can take over or merge with our Declarative Services in the long run because it is now a super-set. Another item to discuss for OSGi R5.
Overall, this is a very interesting project that has a lot of industry attention: two very cool technologies that are being married. I hope that the group can turn the prototype into the mainstream Spring code. Adrian told me they were thinking of Spring 2.1. As a bonus, Spring 2.1 will deliver all its libraries as bundles, that is, all libraries will have the correct OSGi headers.
I am convinced that the OSGi Enterprise workshop will discuss this project; for example, should OSGi standardize the OSGi layer of Spring?
Peter Kriens
P.S. You did RSVP, didn't you? Realize that we will have limited seating based on the people that have RSVP'ed. Hope to see you there!
Wednesday, August 16, 2006
OSGi Enterprise Workshop
It is rare that a technology developed for embedded devices ends up at the core of enterprise applications. The constraints and optimization axes of these two worlds are, ehhh, worlds apart. Still, it looks like the OSGi Alliance is succeeding in this remarkable feat as demonstrated by the increasing interest of enterprise software vendors in OSGi technology. Initially, some vendors voiced concerns about potential patents held by the founding members, but the by now famous patent pledge made those worries go away.
Enterprise software is not a well defined area. We know it is usually big, requires lots of people to work on it, and it is invariably expensive. Web servers are of course a big aspect of Enterprise software as well as tight integration with databases. Java made big inroads in this world because it provided a uniform environment for applications. Its type safety reduced the number of errors during the integration phases and this made it smoother to integrate different applications.
J2EE was SUN’s attempt to capture the commonality of Enterprise applications in a framework. The idea behind a framework is that it reduces the application specific code and makes the resulting application more robust. It is a wonderfully appealing idea, and object oriented theory gave birth to far too many frameworks in the nineties. I even built one for Ericsson in '94! Frameworks are incredibly appealing but they are so hard to get right. The key problem is the natural focus on the advantages of the framework and the total disregard for the increased complexity and learning curve that a framework causes. For example, if we add a function for authentication, we needlessly increase the complexity for people that do not need authentication at all; just the increase in the API and manual is a negative. The battlefield of software is therefore littered with failed frameworks (alas, including mine). Only frameworks that restrict themselves to core functionality and focus fanatically on decoupling stand a chance to succeed.
I do not think SUN and compatriots heeded those lessons from the nineties when they started with J2EE. From the start, J2EE became a monster in deployment complexity and intrusiveness. Five years ago I bought a book about J2EE and spent a week developing applications, just to understand what the hype was about. Well, I utterly disliked the intrusiveness of the code and the many, seemingly unnecessary, restrictions. This dislike arose even before I found out about the XML sit-ups that one needs to perform before you get your “Hello World” running. Look at the funny comparison between web application development environments from JPL to see my dislike demonstrated. I understand that the EJB 3 JSR is providing more focus on simplicity, but I will hold my judgment until I have played with it.
The success of Spring is caused by the intrusiveness of J2EE into applications and its inclusion of many technologies that are not always necessary; in other words, the classic framework syndrome. Developers love frameworks but they are not very faithful; this is where the POJOs are coming from. POJOs are successful because the coupling between the units of design is minimized, but more importantly, they are coupled in the right direction: let the framework call you, not you calling the framework. The result is that the developers can focus on their domain area, not on learning Yet Another Framework.
The OSGi Framework is successful because it was designed to decouple, which provides many benefits during development and deployment. The OSGi Framework could not care less about what your components do and how they do it. It only provides the primitives to deploy your components and to let them collaborate in a dynamic way. Collaboration is done through service interfaces, which makes the components easy to test without complicated harnesses. Components can be built on top of POJOs; very little code has to be coupled to the OSGi Framework. Standardizing this component layer will create universal middleware: middleware developed in different industries that can be deployed on any platform. In the end, application development will be mixing and matching universal middleware.
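A trivial illustration of that testability, with invented names: the component is a POJO behind a service interface, so a unit test never needs a running framework.

public interface Greeter {
    String greet(String who);
}

class GreeterImpl implements Greeter {
    public String greet(String who) {
        return "Hello " + who;
    }
}

// In a plain unit test, no OSGi framework required:
// assertEquals("Hello world", new GreeterImpl().greet("world"));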
The OSGi development model is attractive to the Enterprise world because it is amazing how often I see wheels being reinvented there, often just because the existing wheels were connected to axles the developer had no use for. Though the OSGi Framework is quite mature for the embedded, vehicle, home automation, and mobile markets, we need to work on the issues that are crucial for the enterprise markets. What functionality is missing in R4 that is important for enterprise applications?
Next month I will lead an OSGi workshop in the Bay Area where we will discuss these issues with a number of experts. This is an open workshop, but seats are limited. If your company is into making Enterprise applications, please attend (register ASAP with rranck@inventures.com; we have a limited number of seats). The signs are that OSGi technologies will play a major role in server-side applications over the coming years. If you cannot attend the workshop, do not hesitate to add comments to this blog or just email me.
Peter Kriens
Thursday, August 3, 2006
Why is Software So Brittle?
The past few days I acted as a software user instead of a software programmer: I needed a wiki for an OSGi project. How hard can that be? After a bit of Googling I found Ward Cunningham’s site (he is the inventor of the wiki), and he hosts a wiki about wikis (this text will have a weird number of w-words). Well, that site contained a very long list of wikis! I guess a wiki sits in the sweet spot of being relatively easy to make while having a big visual impact.
How should one choose from so many wikis? Silly enough, the implementation language seemed the most logical selection criterion. As you might have guessed, Java is one of my favorite languages and was therefore an obvious choice. Eagerly checking the list for OSGi-based implementations turned out, however, to be a disappointment. Many wikis point out how extensible they are, but none had chosen the OSGi Service Platform for their plugins; they all developed plugins in a proprietary way. There is a lot of evangelizing left to do.
Anyway, after looking at some Java implementations I found a wiki that looked very promising: FitNesse. This is not a simple wiki; it also contains an extensive testing framework. However, the wiki looked really good (not a common feature among wikis) and the code looked interesting. When I inspected the code, my hands itched … if I had been a wise man, I would have slapped them and moved on. The code looked so clean and so easy to bundlefy that I could not resist. Turning it into a bundle was indeed trivial; the makers of FitNesse provided a very clean configuration and start/stop interface. I only had a slight problem with resource loading: they used the system class loader for resources instead of their own class loader. I can’t see why, but if that was all, I could fix it. I was still optimistic at that moment in time.
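For what it is worth, here is a tiny illustration of the difference, with made-up class and resource names. Inside an OSGi framework the system class loader does not see the bundle class path, so only the second lookup finds the resource.

    public class ResourceDemo {
        public static void main(String[] args) {
            // the system class loader only searches the system class path,
            // not the bundle class path, so under OSGi this is typically null:
            java.io.InputStream broken =
                ClassLoader.getSystemResourceAsStream("files/page.css");

            // the class's own (bundle) class loader does search the bundle
            // class path, so this finds the resource:
            java.io.InputStream works =
                ResourceDemo.class.getResourceAsStream("/files/page.css");

            System.out.println("system: " + broken + ", own: " + works);
        }
    }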
We have an OSGi framework running that hosts the Bundle Repository, so it was attractive to install this wiki bundle on the same machine; sharing the same VM between these applications is a serious memory saver. The FitNesse bundle ran its own web server (oh, why don’t they use OSGi technology?). This required me to map their web server through our standard web server so that we could handle authentication in one place. The standard OBR web server runs behind a firewall but is proxied through our normal web server. After trying this same trick for the wiki, it turned out that the FitNesse wiki code assumed it had the whole web server to itself; it could not handle a prefix path. I was getting this sinking feeling that I might have taken the wrong path. As a solution, I could of course run the web server directly out in the open. Unfortunately, this would require me to write the authentication code. The sinking feeling changed into hitting rock bottom. Maybe it was time to turn around and see if there were easier ways of getting this done. If only I had been a wise man …
I talked to BJ Hargrave and he recommended MediaWiki, the wiki that powers Wikipedia. This wiki is PHP-based and run by hundreds of sites. How hard can it be to install? Well, hard. It turned out we were running PHP 4 on our server and MediaWiki needs PHP 5. Sigh. Well, we have rpm installed on our Linux system, so we just download PHP 5 and we are on our way. That is the theory, anyway. The sinking feeling returned. PHP 5 required a long list of additional downloads. Carefully trying to install a few dependent packages quickly showed that the dependency tree closely resembled the big bang. BJ had told me there was also a PHP 4 version of MediaWiki, so I felt myself floating to the surface again, only to sink at record speed after installing this version. Yes, it requires version 4, which happened to be a different version 4 than our version 4. The dependency fan-out was not as enormous as with PHP 5, but still impressive enough for me to give up.
Simple and small became my main criteria now. I finally found PmWiki, a simple and very small PHP-based wiki that actually had the functionality I needed, and nothing more. It was a breeze to install and did not have any nasty dependencies.
So what did I learn? Well, besides not to try to be clever, I learned that we are an industry with a problem. The last couple of years we have spent a tremendous amount of energy on formalizing the dependencies of our software artifacts, assuming that this would solve the problem of unreliable software. However, it looks like we have created another problem: dependency fan-out. Virtually all known dependency mechanisms are based on versioned artifacts, and artifacts are usually a collection of functions that are not all needed. Often there is a core of highly coupled packages and a periphery of utilities and mappers to other artifacts. For example, many tool vendors today provide a mapper to run their tools under ant or maven, implicitly coupling their artifact to maven and ant.
We need to make the use of software less painful. Though we can extend our current way of working with big-bang releases (integrating many different products in a single build), in the end we need to find a way that minimizes dependencies between artifacts. The OSGi service registry, with its service-oriented architecture, is clearly a step in the right direction. However, we need more: tools that can visualize dependencies and help us reduce them. Yes, with today’s tools we can actually manage dependencies, but that does not imply that dependencies have become a positive thing. I think the OSGi is on the right track, but a lot of work is left to do.
Wednesday, July 26, 2006
Interesting times
After some slow (hot!) summer weeks there is suddenly a lot of activity. The best news so far is the patent pledge that the OSGi Alliance made this week. Five key members of the OSGi Alliance have promised not to sue anybody who implements the release 4 specification for patent infringement, as long as the patent is necessary for the implementation. Patents are a necessity, but software patents can be bizarre. With over 6 million approved patents, no programmer can claim to write code that does not infringe on some patent. If you are a really big company, the problem is not that serious: once Microsoft knocks on the door of Sun Microsystems, it usually ends with an exchange of patents and a deal not to sue each other. As long as you are small, the cost of suing you is higher than any potential royalties, so small companies are also relatively safe. However, when you grow, you can suddenly find lawyers knocking on your door, potentially eating away a large part of your well-deserved fortune. Look at Research in Motion (RIM), maker of the BlackBerry, which was forced to pay 600 million dollars over a patent that was suspect, to say the least.
The patent pledge made by the five OSGi members has largely removed this potential booby trap for those who implement OSGi specifications. And in my opinion, that is exactly how standards organizations should work. Participants in the OSGi ecosystem all gain from more adoption of OSGi technology because it grows the market, something we can all take advantage of. It was never the intention of the OSGi members to become rich on royalties; this pledge has made that crystal clear.
Less positive this week was the vote on JSR 298 by the J2ME Executive Committee. They decided to approve a JSR that is right in the OSGi Alliance’s backyard: telematics. I must not have been paying attention, because I thought this JSR was stillborn. In May it was voted down by the EC, but I missed this week’s reconsideration ballot.
Why is JSR 298 bad? Let’s take a look. First, it states that “OSGi could be too heavy”. Sigh. First, OSGi technology is extremely lean for what it does. Second, it was designed to build applications out of managed middleware. This model allows applications to be lean and mean because they rely on provided middleware. MIDP and other J2ME models have no concept of shared libraries and therefore force a choice between installing a library on the platform in an undefined way (growing the platform) or including it in your application (growing your application). Due to the sharing model, moderately complex OSGi applications are usually much smaller than their brethren on other J2ME application environments.
Now let us step back for a second. Flash memory costs 2 cents a megabyte today in large quantities. What are we talking about? An OSGi R4 framework can be implemented in 250K, so at that price its footprint costs roughly half a cent. CPU speed is not an issue either, because the OSGi framework lets components communicate with only a little setup overhead. I dare say that the price/performance ratio of an OSGi framework is very hard to beat.
And despite rumors to the contrary, OSGi technology does run on CLDC environments! Not out of the box, because the class loaders of CLDC are veiled, but most CLDC VMs can run an OSGi framework with a bit of work. All the OSGi APIs limit themselves to a subset of CLDC and therefore have no problem on CLDC. However, also in this area one should stay realistic. CLDC is a cramped environment that was maybe once necessary because CDC was too big. Since then, processors have become orders of magnitude more powerful, flash memory has become a gift with cereals in the form of a USB memory stick, and internal memory keeps dropping in price. This trend is bound to continue for the coming years.
There are clearly applications that require the Bill of Materials (BOM) to be squeezed to the last cent. However, one can ask whether these are the applications that require standardized Java APIs. The advantage of standardized Java APIs is that software from different parties can run together and collaborate to achieve the desired functionality. Creating a cramped environment makes this goal a lot harder to achieve. I have seen many examples where the penny-wise choice of a limited environment turned out to be pound-foolish.
Last but not least: security. The OSGi specifications have an extensive and very powerful security model that is almost absent in MIDP, and the security in MIDP is very difficult to extend to an area as complex as telematics. Software that erroneously sends out a few unwanted short messages is not good, but neither is it a disaster. Controlling real-world devices like cars is a different story. The JSR is completely silent about security. How come?
I hope this elucidation finally puts the “OSGi technology is heavy” argument out of the way! Then again, with that argument gone, the whole reason for JSR 298 goes with it. Much of the work that the JSR plans to do has already been done in the OSGi Vehicle Expert Group. The upcoming VEG release especially will handle full control of the vehicle using standard protocols like OMA DM, as well as application control. And we will likely also have an interesting navigation model!
The second ballot for JSR 298 succeeded. OK, I was not paying attention, but if they got a second chance to pass the JSR, could we get a second chance to vote it down?
Peter Kriens
Monday, July 17, 2006
To Include Or Not To Include
I am always curious whether other engineering disciplines argue as vehemently about fundamentals as we do. Do bridge engineers have heavy discussions about how much concrete is needed for a bridge pillar? Can they get into heated arguments over whether a skyscraper needs extra reinforcement or not? In the information industry we can differ about the most fundamental approaches; everybody is an expert. Where are the universities that test our approaches and tell us what works and what does not? How could we more easily test ideas and decide about their value?
Why do I muse about this? Last week I had an argument with some experts that we could not settle. I lost, because the owner of the problem decided to go another way while I strongly believed he was wrong. I’ll explain the problem, and then you can figure out how to decide which approach was better.
A couple of weeks ago I gave a tutorial at ApacheCon Europe 2006. Obviously, I had to adapt my tutorial to use Apache Felix instead of Eclipse Equinox so as not to insult my hosts (or suffer the wrath of Richard Hall). Alas, Apache Felix does not yet have an implementation of declarative services. I therefore looked at several open source implementations and picked one to port. Declarative services require a framework-specific implementation because the implementation needs the Bundle Context of other bundles, and there is no public function for that in the specification. Bad, but in R4 we could not figure out how to do this in a secure way without using security. Now we know that you cannot do secure things when security is not on, but that is another story.
So porting the declarative services was straightforward, except for one snag. To parse the Service-Component manifest header, the implementation used a utility function that turned out to be implemented in the framework JAR. Not good. Looking at the parser, I decided that the easy way out was to write a simple replacement parser; the syntax of the Service-Component header is quite simple. The replacement code added about six lines to the implementation, making the declarative services bundle run on Felix without requiring any other supporting bundles.
After the tutorial I decided to submit my change as a patch, trying to be a good citizen. Great was my surprise when I started getting pushback. First, they felt that the parser was too simplistic; it ignored empty paths and did not detect syntax errors in the attributes. Personally, that is fine by me: parsers should be lenient (though not create erroneous values) and generators should be strict. However, that is yet another story; many people feel more comfortable with rigid parsers for diagnostic reasons.
So I redid the parser, discovering that the Service-Component header did not even support attributes! I threw the right exceptions on empty paths and discovered that the Service-Component header allowed quotes! Interestingly, the original manifest parser was thus not suited to this header at all.
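To give an impression, a minimal sketch of such a stricter parser might look like the following. This is illustrative only, not the actual patch, and for simplicity it does not handle a comma inside a quoted path.

    import java.util.ArrayList;
    import java.util.List;

    public class ServiceComponentParser {
        // parses e.g. 'OSGI-INF/a.xml, "OSGI-INF/b.xml"' into a list of paths
        public static List parse(String header) {
            List paths = new ArrayList();
            if (header == null)
                return paths;
            String[] clauses = header.split(",");
            for (int i = 0; i < clauses.length; i++) {
                String path = clauses[i].trim();
                if (path.length() == 0)
                    throw new IllegalArgumentException(
                        "empty path in Service-Component header");
                // quoted paths are allowed ...
                if (path.startsWith("\"") && path.endsWith("\"") && path.length() > 1)
                    path = path.substring(1, path.length() - 1).trim();
                // ... but attributes are not
                if (path.indexOf(';') >= 0 || path.indexOf('=') >= 0)
                    throw new IllegalArgumentException(
                        "attributes not allowed in Service-Component header: " + path);
                paths.add(path);
            }
            return paths;
        }
    }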
After submitting my second patch, my expectation of being the hero was again not met. They still did not like it. Why? There was now a redundancy in the system: two manifest parsers (despite the original being wrong for this header). They were therefore not willing to accept my patch. Instead, they will update their central manifest parser to support the Service-Component quotes and reject attributes. Why? They did not like the redundancy.
This obviously did not fix my coupling problem whatsoever. I am a strong believer in the least possible amount of coupling. In the OSGi build system I have addressed exactly this problem by copying the class files of utilities from the class path into the JAR; by making those packages private, I prevent any version clashes. This gives me a single source without lots of utility bundles. The disadvantage is of course that if you find a bad bug, you must update all the bundles that contain that code. In my experience this is rarely much different from a shared bundle. State-of-the-art version handling is so brittle that it is likely that all dependent bundles require an update anyway to make the system resolve with the new utility bundle. And even if they require an update, the difference between updating one bundle or several is not that big. It can even be questioned whether you want to update the dependent bundles; often they do not really need the bug fix.
However, in the manifest parser case I would gladly have accepted the source code redundancy; the new “improved” parser was only 15 lines. Yes, it is redundant, but the chance that such code still contains an error after testing and review is pretty minute.
And this is the point I want to discuss after this long introduction. We must balance redundancy (bad) against coupling between bundles (bad). Redundancy is bad because it means we sometimes have to fix bugs or make improvements in multiple places. Coupling is bad because it makes the deployment situation more complex. The fact that today we can handle dependencies does not mean dependencies have become benign; they can still bite you unexpectedly and hard. So how do we balance two bads? How do we decide which is the least bad?
Peter Kriens
Tuesday, July 11, 2006
Eclipse PDE versus JDE/ant
Last week there was a discussion about the Eclipse Plugin Development Environment (PDE) on the Equinox developers list. As you likely know, not only is Eclipse based on the OSGi Service Platform, it also provides support for developing bundles. They call them plugins, but they are really supposed to be bundles. I use the Eclipse Java Development Environment (JDE) for all my work, including the OSGi build with specification, reference implementations, and test suites. The build produces a total of more than 300 JAR files, most of them bundles, from more than 130 projects. There are bigger builds, but this is a significant project by my metrics.
The PDE discussion revolved around the flexibility of the environment. It was started by a request from the maven people to make the location of the manifest flexible. This touched a nerve with me, because it is right at the heart of why I am not using the PDE. I unfortunately still use the normal JDE because the PDE does not cut it for me. That is, my bundles are still bundles and have not moved up the evolution chain to become plugins. I have tried the PDE many times, but I always seem to revert to my home-brewed ant/btool solution. Honestly, I tried.
The reason why I am not using the PDE architecture is that it imposes a number of harsh constraints. My key problems are:
- A project = a bundle. It is impossible to deliver multiple artifacts from the same code base with the provided tools. For the OSGi build it would mean 300++ projects.
- The project directory layout must look like the root of the bundle, well, eh, sort of. The PDE performs some magic under the covers before it runs your code or packages the bundle, but the idea is that there is a 1:1 relation.
- The manifest must be up to date at all times, forcing the developer to enter the dependency information in literal form. There is no possibility to generate information for the manifest at build time.
The reason for these (serious) architectural constraints is that Eclipse must be able to run your code instantly, at any time. “No build” is the mantra. If you click Debug, Eclipse starts the OSGi framework and runs your bundle from the project directory, without wrapping the content in a JAR. Well, almost: the bin directory is mapped on the class path despite the fact that it is not really a member of the bundle class path, but hey, it works! Then again, I think this quick edit-debug cycle could also be achieved with less draconian constraints.
Despite my complaints, rants, and other kinds of obnoxious behavior with the Eclipse guys, I did not get one iota further. I have been through Eclipse 3.0, 3.1, and 3.2 without making even a dent in this model. Interestingly, I thought my dislike was widely shared, but in the Equinox mailing list discussion there was a response that intrigued me:
“I’m generally fine with 1 project = 1 bundle, and I like being forced to maintain manifests as I develop my bundles. Your model of pooling all the source together and then sorting out the bundles later sounds messy to me. With no constraints on importing packages, developers just hit ‘Organise Imports’ to pull in dependencies from all over the place, resulting in spaghetti code.”
We seem to share the same dislike of spaghetti, but we completely differ in how we avoid it. The common thread in my professional career has been decoupling, decoupling, and decoupling. Last week I stumbled on a course I had given in the early nineties, and I was surprised how little I had moved forward in this area. I still strongly feel that the coupling of deliverables (bundles or other JARs) is an architectural decision, not an ad-hoc design choice or afterthought. Today we have lots of tools to manage dependencies (maven, OSGi, etc.), but no dependency is still much better than a managed dependency. This implies that adding a dependency on a library is a decision to be taken by the architect of the project, not by any developer. I fail to see a problem with “Organize Imports” as long as the scope is defined by the project. Incidentally, this is the way the JDE works: you need to specifically add a JAR to the build path. Once it is on the build path, developers can (and should) use it as much as possible. Just like a pregnancy, there is no such thing as a little bit of coupling; it is a yes/no decision. If you use a library once, you might as well use it often if it saves you time. During development, developers should not have to be concerned about which parts of the available libraries they use.
However, next comes the packaging. The PDE can only produce one JAR per project, and its content is defined by the project directory layout (sort of). I do not know how other people build their bundles, but most bundles I am responsible for require some packaging tricks.
For example, sometimes I use a package providing some util function. I really do not want to create a dependency on an evil util bundle, but I also do not want to copy the source code. So in quite a few projects I copy the byte codes into my bundle and make the package bundle-private. This way I keep a single source without adding additional dependencies.
Another example is the OSGi test case. An OSGi test case consists of a set of inner bundles that are loaded in the target, where they perform their testing. This requires me to wrap bundles inside bundles. A similar structure is the deliverable for all the reference implementations: a single bundle that installs a set of other bundles, which it contains. I also often need to deliver the same code in different forms, for example as a midlet and as a bundle.
I also find that I use the same information items in lots of different places. It is surprising how often you need the bundle symbolic name, or a name derived from it. I use properties to ensure that I have a single definition. I can easily preprocess the manifest, the declarative services XML file, or a readme.txt file. All these tricks are impossible with the PDE; they require some kind of processing step before you run the bundle, which is unfortunately not available.
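As an illustration of the kind of preprocessing I mean, here is a small sketch of ${key} substitution over a manifest template. The property names are invented and this is not my actual build code; it only shows how a single definition can feed several artifacts.

    import java.util.Properties;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class Expand {
        static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

        // replaces ${key} references in a template with values from props;
        // unknown keys are left in place untouched
        public static String expand(String template, Properties props) {
            Matcher m = VAR.matcher(template);
            StringBuffer sb = new StringBuffer();
            while (m.find()) {
                String value = props.getProperty(m.group(1), m.group(0));
                m.appendReplacement(sb, Matcher.quoteReplacement(value));
            }
            m.appendTail(sb);
            return sb.toString();
        }

        public static void main(String[] args) {
            Properties p = new Properties();
            p.setProperty("bsn", "com.example.wiki");
            System.out.println(expand(
                "Bundle-SymbolicName: ${bsn}\nService-Component: OSGI-INF/${bsn}.xml",
                p));
        }
    }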
And most important, in this packaging phase you can finally see what your real dependencies are. In the OSGi build I calculate the imported packages and regularly inspect the outcome. Quite often I am surprised by what I see and then find out why a specific dependency was introduced. They are usually easy to solve.
So, in contrast to the mailing list response, I think I have thought carefully about the dependency issue. Even after reflecting on his remarks, I still think the model I use is better than the Eclipse PDE model. Now I only have to find a way to make them listen!
Peter Kriens