After some slow (hot!) summer weeks there is suddenly a lot of activity. The best news so far is the patent pledge that the OSGi Alliance made this week: five key members of the OSGi Alliance have promised not to sue anyone who implements the release 4 specifications for patent infringement, as long as the patent is necessary for the implementation. Patents are a necessity, but software patents can be bizarre. With over 6 million approved patents, no programmer can claim to be writing code that does not infringe on some patent. If you are a really big company the problem is not that serious: when Microsoft knocks on the door of Sun Microsystems, it usually ends with an exchange of patents and a deal not to sue each other. As long as you are small, the cost of suing you exceeds any potential royalties, so small companies are also relatively safe. However, when you grow you can suddenly find lawyers knocking on your door, potentially eating away a large part of your well-deserved fortune. Look at Research in Motion (RIM), maker of the BlackBerry, which was forced to pay 600 million dollars for a patent that was suspect, to say the least.
The patent pledge made by the five OSGi members has largely removed this potential booby trap when you implement OSGi specifications. And in my opinion, that is exactly the way standards organizations should work. Participants in the OSGi ecosystem should all gain from wider adoption of OSGi technology because it grows the market; something we can all take advantage of. It never was the intention of the OSGi members to become rich on royalties; this pledge has made that crystal clear.
Less positive this week was the vote on JSR 298 by the J2ME Executive Committee. They decided to approve a JSR that is right in the OSGi Alliance's backyard: telematics. I must not have been paying attention, because I thought this JSR was stillborn. In May it was voted down by the EC; I missed this week's reconsideration ballot.
Why is JSR 298 bad? Let's take a look. It states that "OSGi could be too heavy". Sigh. First, OSGi technology is extremely lean for what it does. Second, it was designed to build applications out of managed middleware. This model allows applications to be lean and mean because they rely on provided middleware. MIDP and other J2ME models have no concept of shared libraries and therefore force a choice between installing a library on the platform in some undefined way (growing the platform) or including it in your application (growing your application). Due to the sharing model, moderately complex OSGi applications are usually much smaller than their brethren on other J2ME application environments.
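To make the sharing model concrete, here is a minimal sketch of the manifest headers involved; the bundle and package names are hypothetical. The first two headers belong to a library bundle that exports its package, the last two to an application bundle that merely imports it, so no copy of the library ends up inside the application:

    Bundle-SymbolicName: com.acme.util
    Export-Package: com.acme.util.log;version="1.0"

    Bundle-SymbolicName: com.acme.navigation
    Import-Package: com.acme.util.log;version="1.0"

The framework wires the two bundles together at resolve time, so the library is installed once, regardless of how many applications use it.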
Now let us step back for a second. Flash memory costs 2 cents a megabyte today in large quantities. What are we talking about? An OSGi R4 framework can be implemented in 250K; at 2 cents per megabyte, that is roughly half a cent of flash. CPU speed is not an issue either, because the OSGi framework lets components communicate with only a little setup overhead. I dare say that the price/performance ratio of an OSGi framework is very hard to beat.
And despite rumors to the contrary, OSGi technology does run on CLDC environments! Not out of the box, because the class loaders of CLDC are veiled, but most CLDC VMs can run an OSGi framework with a bit of work. All the OSGi APIs limit themselves to a subset of CLDC and therefore have no problem on CLDC. However, in this area too one should stay realistic. CLDC is a cramped environment that maybe once was necessary because CDC was too big. Since then, processors have become orders of magnitude more powerful, flash memory has become a cereal-box giveaway in the form of USB memory sticks, and internal memory keeps dropping in price. This trend is bound to continue for the coming years.
There are clearly applications that require the Bill Of Materials (BOM) to be squeezed to the last cent. However, one can ask whether these are the applications that require standardized Java APIs. The advantage of standardized Java APIs is that software from different parties can run together and collaborate to achieve the desired functionality. Creating a cramped environment makes this goal a lot harder to achieve. I have seen many examples where the penny-wise choice of a limited environment turned out to be pound-foolish.
Last but not least: security. The OSGi specifications have an extensive and very powerful security model that is almost absent in MIDP; the MIDP security model is very difficult to extend to an area as complex as telematics. Software that erroneously sends out a few unwanted short messages is not good, but neither is it a disaster. Controlling real-world devices like cars is a different story. The JSR is completely silent about security. How come?
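For comparison, here is a minimal sketch of the kind of control the OSGi security model offers through the standard Permission Admin service; the bundle location and the vehicle package names are hypothetical:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;
    import org.osgi.service.permissionadmin.PermissionAdmin;
    import org.osgi.service.permissionadmin.PermissionInfo;

    // Management agent that locks down a hypothetical telematics bundle.
    public class LockDownActivator implements BundleActivator {
        public void start(BundleContext context) throws Exception {
            ServiceReference ref =
                context.getServiceReference(PermissionAdmin.class.getName());
            PermissionAdmin pa = (PermissionAdmin) context.getService(ref);

            // The bundle at this (hypothetical) location may only *get* the
            // vehicle services; it may not register services or import
            // arbitrary packages.
            pa.setPermissions("http://acme.com/bundles/telematics.jar",
                new PermissionInfo[] {
                    new PermissionInfo("org.osgi.framework.ServicePermission",
                        "com.acme.vehicle.*", "get"),
                    new PermissionInfo("org.osgi.framework.PackagePermission",
                        "com.acme.vehicle", "import")
                });
        }

        public void stop(BundleContext context) {}
    }

With this in place, the framework refuses any action by that bundle that has not been explicitly granted.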
I hope this elucidation has finally put the "OSGi technology is heavy" argument out of the way! Then again, if that argument is out of the way, the whole reason for JSR 298 goes away. Much of the work that the JSR plans to do has already been done in the OSGi Vehicle Expert Group. The upcoming VEG release in particular will handle full control of the vehicle using standard protocols like OMA DM, as well as application control. And we will likely also have an interesting navigation model!
The second ballot for JSR 298 succeeded. OK, I was not paying attention, but if they got a second chance to pass the JSR, could we get a second chance to vote it down?
Peter Kriens
Wednesday, July 26, 2006
Monday, July 17, 2006
To Include Or Not To Include
I am always curious whether other engineering disciplines argue as vehemently about fundamentals as we do. Do bridge engineers have heavy discussions about how much concrete is needed for a bridge pillar? Can they get into heated arguments over whether a skyscraper needs extra reinforcement or not? In the information industry we can differ about the most fundamental approaches; everybody is an expert. Where are the universities testing our approaches and telling us what works and what does not? How could we more easily test ideas and decide about their value?
Why do I muse about this? Last week I had an argument with some experts that we could not settle. I lost, because the owner of the problem decided to go another way while I strongly believed he was wrong. I'll explain the problem, and then you can figure out how to decide which approach would have been better.
A couple of weeks ago I gave a tutorial at ApacheCon Europe 2006. Obviously, I had to adapt my tutorial to use Apache Felix instead of Eclipse Equinox so I would not insult my hosts (or undergo the wrath of Richard Hall). Alas, Apache Felix does not yet have an implementation of declarative services. I therefore looked at several open source implementations and picked one to port. Declarative services require a framework-specific implementation because the implementation needs the BundleContext of other bundles, and the specification offers no public function to obtain it. Bad, but in R4 we could not figure out how to do this securely without using security. Now we know that you cannot do secure things when security is not on, but that is another story.
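For context: the Service-Component manifest header lists XML documents like the following minimal component description (the component and class names are hypothetical). The trouble described below was in parsing the header itself, not this XML:

    <?xml version="1.0" encoding="UTF-8"?>
    <component name="com.acme.log">
      <implementation class="com.acme.log.LogComponent"/>
      <service>
        <provide interface="org.osgi.service.log.LogService"/>
      </service>
    </component>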
So porting the declarative services was straightforward, except for one snag. To parse the Service-Component manifest header, the implementation used a utility function that turned out to be implemented in the framework JAR. Not good. Looking at the parser, I decided that the easy way out was to write a simple replacement parser; the syntax of the Service-Component header is quite simple. The replacement code added about 6 lines to the implementation, making the declarative services bundle run on Felix without requiring any other supporting bundles.
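My replacement was roughly of the following shape (a sketch from memory, not the literal patch): a lenient split of the header into component description paths.

    // Lenient parser for e.g. "Service-Component: OSGI-INF/a.xml, OSGI-INF/b.xml".
    // Empty entries are silently skipped; no syntax errors are reported.
    static String[] parseServiceComponent(String header) {
        java.util.List paths = new java.util.ArrayList();
        java.util.StringTokenizer st = new java.util.StringTokenizer(header, ",");
        while (st.hasMoreTokens()) {
            String path = st.nextToken().trim();
            if (path.length() > 0)
                paths.add(path);
        }
        return (String[]) paths.toArray(new String[paths.size()]);
    }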
After the tutorial I decided to submit my change as a patch, trying to be a good citizen. Great was my surprise when I started getting pushback. First, they felt that the parser was too simplistic: it ignored empty paths and did not detect syntax errors in the attributes. Personally, that is fine by me; parsers should be lenient (though without creating erroneous values) and generators should be strict. However, that is yet another story; many people feel more comfortable with rigid parsers for diagnostic reasons.
So I redid the parser, discovering that the Service-Component header did not even support attributes! I threw the right exceptions on empty paths and discovered that the Service-Component header allowed quotes! Interestingly, the original manifest parser was thus not suited for this header at all.
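The stricter second version behaved roughly like this (again a sketch, not the literal patch): quoted paths are accepted, attributes are rejected, and empty paths raise an error.

    // Strict parser: accepts optionally quoted paths, rejects attributes
    // (the header defines none), and flags empty paths. Quoted paths that
    // themselves contain commas are not handled; real paths rarely do.
    static String[] parseServiceComponentStrict(String header) {
        java.util.List paths = new java.util.ArrayList();
        java.util.StringTokenizer st = new java.util.StringTokenizer(header, ",");
        while (st.hasMoreTokens()) {
            String path = st.nextToken().trim();
            if (path.length() >= 2 && path.startsWith("\"") && path.endsWith("\""))
                path = path.substring(1, path.length() - 1).trim();
            if (path.length() == 0)
                throw new IllegalArgumentException("Empty path in Service-Component");
            if (path.indexOf('=') >= 0)
                throw new IllegalArgumentException(
                    "Service-Component does not support attributes: " + path);
            paths.add(path);
        }
        return (String[]) paths.toArray(new String[paths.size()]);
    }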
After submitting my second patch, my expectation of being the hero was again not met. They still did not like it. Why? There was now redundancy in the system: two manifest parsers (even though the original was wrong for this header). They were therefore not willing to accept my patch. Instead, they will update their central manifest parser to support the Service-Component quotes and reject attributes. They simply did not like the redundancy.
This obviously did not fix my coupling problem at all. I am a strong believer in the least possible amount of coupling. In the OSGi build system I have addressed exactly this problem by copying the class files of utilities from the class path into the JAR; by making those packages private I prevent any version clashes. This gives me a single source without lots of utility bundles. The disadvantage is of course that if you find a bad bug, you must update all the bundles that contain that code. In my experience this is rarely much different from a shared bundle: state-of-the-art version handling is so brittle that all dependent bundles are likely to require an update anyway before the system resolves with the new utility bundle. And even if they require an update, the difference between updating one bundle or several is not that big. One can even question whether you want to update the dependent bundles at all; often they do not really need the bug fix.
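In Ant terms (my build is ant/btool based) the trick looks roughly like this; the file and package names are hypothetical. The utility classes are copied straight from the utility JAR into the bundle, and because the package is not exported it stays bundle-private:

    <!-- Sketch: embed the utility classes instead of depending on a util bundle.
         com.acme.util is not listed in Export-Package, so it remains private. -->
    <jar destfile="${dist}/com.acme.ds.jar" manifest="META-INF/MANIFEST.MF">
      <fileset dir="bin"/>
      <zipfileset src="lib/acme-util.jar" includes="com/acme/util/**"/>
    </jar>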
However, in the manifest parser case I would personally gladly have accepted the source code redundancy; the new "improved" parser was only 15 lines. Yes, it is redundant, but after testing and review the chance of an error remaining in such code is pretty minute.
And this is the point I want to discuss after this long introduction. We must balance redundancy (bad) against coupling between bundles (also bad). Redundancy is bad because it means we sometimes have to fix bugs or make improvements in multiple places. Coupling is bad because it makes the deployment situation more complex. The fact that today we are starting to handle dependencies does not mean dependencies have become benign; they can still bite you unexpectedly, and hard. So how do we balance two bads? How do we decide which is the lesser evil?
Peter Kriens
Tuesday, July 11, 2006
Eclipse PDE versus JDE/ant
Last week there was a discussion about the Eclipse Plugin Development Environment (PDE) on the Equinox developers list. As you likely know, not only is Eclipse based on the OSGi Service Platform, it also provides support for developing bundles. They call them plugins, but they are really supposed to be bundles. I use the Eclipse Java Development Environment (JDE) for all my work, including the OSGi build with specification, reference implementations, and test suites. The build produces a total of more than 300 JAR files, most of them bundles, from more than 130 projects. There are bigger builds, but this is a significant project by my metrics.
The PDE discussion revolved around the flexibility of the environment. It was started by a request from the Maven people to make the location of the manifest flexible. This touched a nerve with me because it is at the heart of why I am not using the PDE. I unfortunately still use the normal JDE because the PDE does not cut it for me. That is, my bundles are still bundles and have not moved up the evolution chain to become plugins. I have tried the PDE many times, but I always seem to revert to my home-brewed ant/btool solution. Honestly, I tried.
The reason why I am not using the PDE architecture is that it imposes a number of harsh constraints. My key problems are:
- A project = a bundle. It is impossible to deliver multiple artifacts from the same code base with the provided tools. For the OSGi build this would mean more than 300 projects.
- The project directory layout must look like the root of the bundle, well, sort of. The PDE performs some magic under the covers before it runs your code or packages the bundle, but the idea is that there is a 1:1 relation.
- The manifest must be up to date at all times, forcing the developer to enter the dependency information in literal form. There is no way to generate manifest information at build time.
The reason for these (serious) architectural constraints is that Eclipse must be able to run your code at a moment's notice. "No build" is the mantra. If you click Debug, Eclipse starts the OSGi framework and runs your bundle straight from the project directory, without wrapping the content in a JAR. Well, almost: the bin directory is mapped on the class path even though it is not really a member of the bundle class path, but hey, it works! Then again, I think this quick edit-debug cycle could also be achieved with less draconian constraints.
Despite my complaints, rants, and other kinds of obnoxious behavior towards the Eclipse guys, I did not get one iota further. I have been through Eclipse 3.0, 3.1, and 3.2 without making even a dent in this model. Interestingly, I thought my dislike was widely shared, but in the Equinox mailing list discussion there was a response that intrigued me:
I'm generally fine with 1 project = 1 bundle, and I like being forced to maintain manifests as I develop my bundles. Your model of pooling all the source together and then sorting out the bundles later sounds messy to me. With no constraints on importing packages, developers just hit "Organise Imports" to pull in dependencies from all over the place, resulting in spaghetti code.
We seem to share a dislike of spaghetti, but we completely differ in how to avoid it. The common thread in my professional career has been decoupling, decoupling, and decoupling. Last week I stumbled on a course I had given in the early nineties and was surprised how little I had moved forward in this area. I still strongly feel that the coupling of deliverables (bundles or other JARs) is an architectural decision, not an ad hoc design choice or afterthought. Today we have lots of tools to manage dependencies (Maven, OSGi, etc.), but no dependency is still much better than a managed dependency. This implies that adding a dependency, like a library, is a decision to be taken by the architect of the project, not by any developer. I fail to see a problem with "Organize Imports" as long as the scope is defined by the project. Incidentally, this is the way the JDE works: you need to specifically add a JAR to the build path. Once it is on the build path, developers can (and should) use it as much as possible. Just like a pregnancy, there is no such thing as a little bit of coupling; it is a yes/no decision. If you use it once, you might as well use it often if it saves you time. During development, developers should not have to be concerned about what they use from the libraries that are made available to them.
However, next comes the packaging. The PDE can only produce one JAR file, and its content is defined by the project directory layout (sort of). I do not know how other people build bundles, but most bundles I am responsible for require some packaging tricks.
For example, sometimes I use a package providing some util function. I really do not want to create a dependency on an evil util bundle, but I also do not want to copy the source code. So in quite a few projects I copy the byte codes into my bundle and make the package bundle-private. This way I keep a single source without adding additional dependencies.
Another example is the OSGi test case. An OSGi test case consists of a set of inner bundles that are loaded into the target, where they perform their tests. This requires me to wrap bundles inside bundles. A similar structure is the deliverable for all the reference implementations: a single bundle that installs a set of other bundles that it contains. I also often need to deliver the same code in different forms, for example as a midlet and as a bundle.
I also find that I use the same information items in lots of different places. It is surprising how often you need the bundle symbolic name, or a name derived from it. I use properties to ensure that I have a single definition, so I can easily preprocess the manifest, the declarative services XML file, or a readme.txt file. All these tricks are impossible with the PDE; they require some kind of processing step before you run the bundle, which is unfortunately not available.
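A sketch of what I mean, in Ant (the property and file names are hypothetical): the bundle symbolic name is defined once in a properties file, and templates are filtered at build time, so the manifest and the readme can never disagree:

    <!-- build.properties defines, e.g.:  bsn=com.acme.ds
         templates/MANIFEST.MF contains:  Bundle-SymbolicName: @BSN@ -->
    <property file="build.properties"/>
    <copy todir="tmp" overwrite="true">
      <fileset dir="templates"/>
      <filterset>
        <filter token="BSN" value="${bsn}"/>
      </filterset>
    </copy>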
And most important, in this phase you can finally see what your real dependencies are. In the OSGi build I calculate the imported packages and regularly inspect the outcome. Quite often I am surprised by what I see, and I then find out why a specific dependency was introduced. Such dependencies are usually easy to remove.
So in contrast to the mailing list response, I think I have carefully thought about the dependency issue. Despite reflecting on his remarks, I still think the model I use is better than the Eclipse PDE model. Now I only have to find a way to make them listen to me!
Peter Kriens