Time flies like an arrow (and fruit flies like a banana) ...
One big disadvantage of organizing conferences is that it makes time seem to go even faster. It is already time again to prepare for OSGi DevCon 2008 (a.k.a. EclipseCon). For OSGi DevCon we are looking for people who are using OSGi technology in unusual ways and who are willing to present it to their peers. You can present it in a lightning talk, a short talk, a long talk, or even a tutorial.
Finally, the OSGi technology is on the upswing after many years of preparation. It is wonderful to see how people apply OSGi technology in the most interesting ways. It is also very good to see how more and more tools and frameworks are being produced to simplify the use of OSGi technology.
There are three more weeks to submit your proposals. I really hope many people will make the effort to submit one. You have until the 19th of November.
Submit now!
Peter Kriens
Monday, October 29, 2007
Wednesday, October 17, 2007
iJAM, Formalized Class Loading
(This blog was adapted after a comment from Victor; I had not seen that iJAM has an exception for java.*. My apologies.)
There is a paper floating on the net that proposes an alternative to the class loading strategy of JSR 277. Richard S. Hall pointed me to this paper and told me to write a blog about it. So I did.
Though the paper provides a formalization of the class loading strategy, the modification it proposes is actually quite simple. The standard Java class loading rule is parent first, then self. In a modern Java system there can be many ancestors, so this rule is applied recursively until one reaches the system class loader. This simple rule gives Sun the guarantee that its classes cannot be overridden at a lower level. The model also implies that a module can never override anything available in its parents. For example, if a module wants to use a newer version of the XML parser than the one in the platform, it would be nice if these classes could be overridden at the application level to ensure the proper version. For this reason, iJAM proposes to change the parent-first rule to local first, except for java.* and javax.* classes. This allows a module to override any class in an ancestor class loader.
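To make the difference concrete, here is a minimal sketch (my own illustration, not code from the paper) of a local-first class loader in plain Java. A real module loader would of course override findClass to load the bytes from its module; the standard parent-first behavior is simply what java.lang.ClassLoader gives you by default.

    // Sketch of the iJAM-style rule: try the module first, except for
    // java.* and javax.*, which always come from the parent.
    public class LocalFirstLoader extends ClassLoader {

        public LocalFirstLoader(ClassLoader parent) {
            super(parent);
        }

        protected synchronized Class loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            Class c = findLoadedClass(name);
            if (c == null) {
                if (name.startsWith("java.") || name.startsWith("javax.")) {
                    c = getParent().loadClass(name); // always delegate up
                } else {
                    try {
                        c = findClass(name); // local first: look in the module
                    } catch (ClassNotFoundException e) {
                        c = getParent().loadClass(name); // fall back to the parent
                    }
                }
            }
            if (resolve)
                resolveClass(c);
            return c;
        }
    }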
I agree with the problem, but I disagree rather strongly with the solution: it is too simple. Let me explain why I came to this conclusion.
First, loading only javax.* and java.* from the parent ignores classes that come from the boot class path but do not start with java.* or javax.*. An example is org.xml.sax. If this package is loaded from a module, then the system classes will use their own instance of the package while modules see another. This will cause ClassCastExceptions if you try to give your SAX handler to the parser, because the two sides use different class loaders for the same package.
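The underlying issue is that a class is identified by its name and its defining class loader. A minimal sketch of the effect, assuming a jar at /tmp/demo.jar that contains a public class demo.Greeter with a no-argument constructor (both made up for the demo):

    import java.net.URL;
    import java.net.URLClassLoader;

    // Two unrelated class loaders that both define demo.Greeter produce two
    // distinct Class objects; an instance of one is not an instance of the other.
    public class SplitClassSpace {
        public static void main(String[] args) throws Exception {
            URL jar = new URL("file:/tmp/demo.jar"); // assumed location

            // null parent: neither loader delegates to the application class path
            ClassLoader a = new URLClassLoader(new URL[] { jar }, null);
            ClassLoader b = new URLClassLoader(new URL[] { jar }, null);

            Class ca = a.loadClass("demo.Greeter");
            Class cb = b.loadClass("demo.Greeter");

            System.out.println(ca == cb);                        // false
            System.out.println(ca.isInstance(cb.newInstance())); // false
            // Casting an instance from one loader to the type from the other
            // throws a ClassCastException, exactly what happens when a module
            // and the boot class path each load their own copy of org.xml.sax.
        }
    }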
Another problem is that many javax.* packages are prime candidates to be downloaded as bundles. Though there are logical reasons to treat java.* as special because overriding java.lang.Object is quite disastrous, there are no reasons to treat javax.* in the same way.
A module must be able to define its loading priorities; there are clear use cases where overriding a platform-provided class is crucial. Can this be done as an all-or-nothing decision at the module level? I don't think so. Only in simple cases are these decisions module-wide, and when was the last time you did something simple and got paid for it? Sometimes you want to provide a default implementation in case the platform, or any of the other modules, does not provide one. Other times you want to be sure to get your own version. A simple rule like local first cannot distinguish between these cases, nor can a rule like parent first satisfy you all the time.
Another problem with the JSR 277 and iJAM rules is that they treat classes as standalone entities, not as parts of a cohesive package. If your module overrides one class of a larger package you have something we call a split package. Split packages are nasty. First, classes in the same package have package-private visibility (the default). However, this rule only works when those classes come from the same class loader. You can therefore get some very hard-to-understand errors when two classes in the same package get access errors for a field that is package private. Really, split packages are evil, and it is quite surprising that JSR 277 allows them, just as it is surprising that iJAM proposes the same behavior. Obviously, there are many other errors that can occur when half of your classes are loaded from your module and the other half from some other module; each half can have quite interesting assumptions about the other half. Unless you enjoy debugging very strange errors, split packages are just not a recommended strategy. A package is not just part of the name of a class, it is a grouping concept that should be respected.
So how does OSGi address this issue? Well, our class loading rules are extensive because this is a highly complex area that we were unfortunate enough to have to learn over the last 9 years.
Our first rule is to load any java.* class from the parent class loader. As I explained, very few java.* classes can be overridden without wreaking havoc in some way (java.sql.* is the only example that comes to mind).
The framework then looks at the imported packages of a bundle. In OSGi, the manifest explicitly tells the framework which packages are expected to come from another bundle. These packages are listed in an Import-Package header with explicit constraints on the exporter.
If the package is not imported, the framework will look in bundles linked with Require-Bundle. If that fails, the framework looks in the bundle's own JAR and attached fragments. The developer can control in detail how the JAR is searched with the Bundle-ClassPath header.
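As a small illustration (the bundle and package names below are just examples), the relevant manifest headers look like this:

    Import-Package: org.osgi.service.log;version="[1.3,2.0)",
     org.xml.sax
    Require-Bundle: com.acme.util;bundle-version="[1.0,2.0)"
    Bundle-ClassPath: .,lib/helper.jar

With these headers the framework wires org.osgi.service.log to an exporter that satisfies the version range, falls back to the required com.acme.util bundle for packages that are not imported, and finally searches the bundle's own JAR and the embedded lib/helper.jar.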
The class loading strategy is obviously an example of what Einstein must have had in mind when he said: "Things should be as simple as possible, but not simpler". We learned the hard way that in the end, the developer must be able to control what he gets from where, but not at the price of making the system unworkable. Though I like French laissez-faire, when it comes to class loading I prefer the more deterministic Anglo-Saxon approach.
To conclude, I really like the formal work that the iJAM paper shows; it is one of my secret desires to one day have a similar Z specification of the OSGi module layer. If the authors of the iJAM paper want to work on this, please let me help. However, I think that the class loading strategy in this paper, just like the class loading strategy of JSR 277, is unfortunately too simplistic for the complexity of the real world.
Peter Kriens
Thursday, October 4, 2007
Universal OSGi
There is an idea that has been simmering in the OSGi Alliance for quite some time: Universal OSGi. My first recollection of this idea is from 1999, when I was working in Linkoping at Ericsson Wireless Technology, a place in the middle of nowhere. I worked there a couple of days a week helping them to use the OSGi specifications on the Ericsson ebox. This was not an easy task because there were virtually no Java programmers and lots of native Linux device driver developers. These guys have it in their genes to see anything between them and the Linux API as a personal insult. The fact that the ebox was severely underpowered for Java (a 25 MHz 486 with 32 MB RAM and 8 MB flash) obviously did not do anything for my popularity either. However, these guys are intelligent, and once they understood the modularity model and the remote management model it enabled, they decided that they wanted to have it too. Well, not the Java, but the modularity. They even had visions of replacing the OSGi specifications with something more native.
I objected to that model, and I still do. Though I am all too aware of the many flaws that the language was born with (I am a Smalltalker at heart; how could they miss closures!), I am profoundly impressed with what has been done on security and the abstraction of the operating system. The OSGi technology provides some very crucial missing parts in Java, but a very large number of the features we provide are enabled by Java. I really do not believe that what is done in the modularity layer, the life cycle management layer, the service layer, and the security model can be done in any other existing environment. Not even .NET, though it comes closest.
However, the need for integrating with native code does not go away just because our model is better. The battlefield is littered with corpses that had a better model. It is a plain fact that in many of the markets that use OSGi technology, native code integration plays a major role. Java is good, but there is a lot of legacy code out there that is just not Java.
The only things we have on offer for native code integration are the Java Native Interface (JNI) and remote procedure calls, as with web services, CORBA, etc. Anybody who has tried to program with JNI knows how painful and intrusive it can be; I do not think I would have survived Linkoping proposing JNI. Remote procedure calls are better, and well known in the industry. However, remote procedure calls provide interoperation but no modularity, life cycle, or security. Interoperation through remote procedure calls works well, as has been proven many times, but it lacks the tight integration of all the important computing aspects that the OSGi technology provides.
Meet Universal OSGi. This is not a fully worked-out concept but a work in progress. Universal OSGi is an attempt to use the normal, run-of-the-mill Java-based OSGi service platform but provide a model where native code can be integrated. This means that you should be able to download bundles with a native executable, like a DLL or shared library. Let's call these native bundles.
You should be able to express in the manifest the dependencies of these native bundles on other native bundles, so that a management system can calculate the transitive dependencies, just as for a Java bundle. Then you should be able to start and stop these bundles, which should result in the native code being loaded in another process, provided with its dependencies, and started. The native code should be able to get and register services, which it uses to communicate with the rest of the system.
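Purely as a thought experiment, the manifest of such a native bundle might reuse existing headers; Bundle-NativeCode and Require-Bundle exist today, but nothing in the current specifications says how a native handler should interpret them for a bundle that contains no Java code at all:

    Bundle-SymbolicName: com.acme.driver
    Bundle-Version: 1.0.0
    Bundle-NativeCode: libdriver.so; osname=Linux; processor=x86
    Require-Bundle: com.acme.baselib;bundle-version="[1.0,2.0)"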
Handling the native code dependencies and the life cycle issues is not trivial but definitely doable. A native handler bundle can use the extender model to detect native bundles of a type it recognizes. This bundle would somehow have to interact with the resolver, because the Java resolver needs to know when a native bundle can be resolved (or maybe this could be expressed with Require-Bundle). When the native bundle is started, the native handler loads the native code in another process. When the native bundle is stopped, the native handler cleans up by killing the process. Not trivial, but doable.
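A minimal sketch of what such a native handler could look like, written against the standard OSGi bundle listener API; the Native-Executable header and the way the process is launched are assumptions of this sketch, not part of any specification:

    import java.util.HashMap;
    import java.util.Map;
    import org.osgi.framework.*;

    // Extender-style native handler: it watches bundle life cycle events and
    // runs or kills a helper process for bundles that carry a native executable.
    public class NativeHandler implements BundleActivator, BundleListener {
        private final Map processes = new HashMap(); // bundle id -> Process

        public void start(BundleContext context) {
            context.addBundleListener(this);
        }

        public void stop(BundleContext context) {
            context.removeBundleListener(this);
            for (Object p : processes.values())
                ((Process) p).destroy();
            processes.clear();
        }

        public void bundleChanged(BundleEvent event) {
            Bundle bundle = event.getBundle();
            // Hypothetical header marking a native bundle, e.g. "Native-Executable: libdriver.so"
            String executable = (String) bundle.getHeaders().get("Native-Executable");
            if (executable == null)
                return; // not a native bundle, ignore

            Long id = Long.valueOf(bundle.getBundleId());
            try {
                if (event.getType() == BundleEvent.STARTED) {
                    // Launch the native code in its own process (details omitted:
                    // extracting the executable, wiring it up to the registry)
                    processes.put(id, new ProcessBuilder(executable).start());
                } else if (event.getType() == BundleEvent.STOPPED) {
                    Process p = (Process) processes.remove(id);
                    if (p != null)
                        p.destroy();
                }
            } catch (Exception e) {
                e.printStackTrace(); // a real handler would log this properly
            }
        }
    }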
The hard part is the communication aspect. Obviously, the service layer is the crucial middle man, requiring that the native code can communicate with the OSGi service registry, hosted in the Java VM process. This requires a native API that maps the primitives of the OSGi service registry to C, C++, .NET, PHP, Haskell, etc.: primitives like registering a service, getting a service, and listening for services. And of course this must be coupled to the life cycle layer: if a bundle is stopped, all its services must be unregistered. This registry is still doable, albeit a bit less trivial. The hardest part is how the services are mapped in the remote procedure calls. This is a problem that many have tried to solve and few have really succeeded at, because it somehow always remains messy. CORBA has the Interface Definition Language (IDL), which was supposed to be the mother of all languages but largely failed in the Java world because its C++ orientation made mapping it to Java painful. I remember a long-ago project where we had two classes for every parameter because that was the way output parameters could be modeled, a concept well known to C++ but unknown to Java.
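For reference, these are the Java-side primitives that any native binding would have to mirror; DictionaryService is a made-up example interface, the framework calls are the standard OSGi ones:

    import java.util.Hashtable;
    import org.osgi.framework.*;

    public class RegistryPrimitives {
        public static void demo(BundleContext context, DictionaryService myImpl)
                throws InvalidSyntaxException {
            // 1. Register a service (with properties)
            Hashtable props = new Hashtable();
            props.put("lang", "en");
            ServiceRegistration reg = context.registerService(
                DictionaryService.class.getName(), myImpl, props);

            // 2. Get a service
            ServiceReference ref =
                context.getServiceReference(DictionaryService.class.getName());
            if (ref != null) {
                DictionaryService dict = (DictionaryService) context.getService(ref);
                // ... use dict ...
                context.ungetService(ref);
            }

            // 3. Listen for services coming and going
            context.addServiceListener(new ServiceListener() {
                public void serviceChanged(ServiceEvent event) {
                    // react to REGISTERED / UNREGISTERING events
                }
            }, "(objectClass=" + DictionaryService.class.getName() + ")");

            // 4. Unregister explicitly; the framework also does this automatically
            // when the registering bundle is stopped.
            reg.unregister();
        }
    }

    interface DictionaryService {
        boolean check(String word);
    }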
For Universal OSGi, it is likely that the best solution is the Java interface as an "IDL". Not only do we already have a lot of experience with Java interfaces, they are also conceptually very clean and not associated with an implementation. In Java it is already trivial to proxy interfaces. It will therefore be necessary to map Java interfaces in a mechanical way to concepts known in the native environment. For example, in C++ a Java interface can be mapped to an abstract base class that can be used as a mixin. Most OSGi service specifications are very suitable for this mapping.
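To show how trivial the proxying really is: java.lang.reflect.Proxy routes every call on an interface through a single invoke method, which is exactly the hook such a mapping needs. A minimal, runnable illustration:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    public class InterfaceProxy {
        // Wraps any object in a proxy for the given interface and logs each call.
        public static Object loggingProxy(final Class iface, final Object target) {
            return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class[] { iface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        System.out.println("calling " + method.getName());
                        return method.invoke(target, args); // delegate to the real object
                    }
                });
        }

        public static void main(String[] args) {
            Runnable real = new Runnable() {
                public void run() { System.out.println("hello"); }
            };
            Runnable proxied = (Runnable) loggingProxy(Runnable.class, real);
            proxied.run(); // prints "calling run" then "hello"
        }
    }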
A key problem in designing such a communication system is how the remote procedure calls are handled. A remote procedure call crosses the process boundary, and pointers to memory locations are therefore no longer valid; each process has its own memory. There are two solutions to this problem: one can pass the value to which the pointer is pointing, or one can pass a symbolic reference to the object. Passing a value can be done with immutable objects like int, String, etc., but it cannot be done for complex objects like java.lang.Class. If a mutable object is passed by value, changes on the remote side are not reflected on the caller's side, changing the behavior between remote and local calling. However, one can proxy any complex object by passing a symbolic reference and doing the same for any objects that are exchanged in method calls. The other side must recognize this reference and do a remote procedure call back into the caller's process for all methods. This model is called proxying. It is usually too expensive for real-life communications due to latency and bandwidth constraints, but for Universal OSGi it might be viable because all the participants run on the same device. That allows all communication to be done with very fast techniques like shared memory.
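A sketch of how the symbolic reference idea could look on the Java side; Transport is a hypothetical stand-in for whatever IPC mechanism (shared memory, pipes) actually carries the call, so none of this is a worked-out design:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // The proxy never sends the object itself across the process boundary,
    // only an id plus the method name and arguments.
    public class RemoteRef implements InvocationHandler {
        public interface Transport {
            Object call(long objectId, String method, Object[] args);
        }

        private final long objectId;      // symbolic reference, valid on the other side
        private final Transport transport;

        public RemoteRef(long objectId, Transport transport) {
            this.objectId = objectId;
            this.transport = transport;
        }

        public Object invoke(Object proxy, Method method, Object[] args) {
            // Immutable values (String, Integer, ...) can go by value; a complete
            // implementation would replace any other object in args by its own id.
            return transport.call(objectId, method.getName(), args);
        }

        // Creates a local proxy for a remote object known only by its id.
        public static Object proxyFor(Class iface, long objectId, Transport transport) {
            return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class[] { iface }, new RemoteRef(objectId, transport));
        }
    }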
These are intriguing and exciting ideas that could truly make OSGi technology more universally applicable. However, there are a lot of technical details to iron out and even when that has been done, there is a lot of spec work for different native languages. We need members that are willing to make this work. Interested?
Peter Kriens