Wednesday, December 26, 2007
Two Thousand and Seven
It is year's end again. A few more days and the year 2008 will start. What did we achieve this year?
Wednesday, December 12, 2007
JSR 294 SuperPackages Revisited
Some of my questions about JSR 294 have been answered by Alex Buckley from Sun. I have updated the existing blog post, without hiding that I had some misunderstandings. If you're interested in 294, it is worth taking another look at these changes; I definitely started to understand 294 better, though I am afraid to say it did not improve my feeling about it. So please read again and watch the [] markers for changes.
JSR 294 SuperPackages
Peter Kriens
Monday, December 3, 2007
JCP, or What?
In my vanity I subscribe to a Google Alert on my name. This morning it pointed me to the blog of Stephen Colebourne. Stephen argues against the JCP for many of the same reasons I have (I guess that is why he quoted me, which is how he popped up in my vanity alert). Stephen makes some very sensible proposals. However, some comments posted on the blog are a bit worrying. Yes, I agree the current JCP is bad because its owner (Sun) is more interested in its own interest than in the interest of the Java community. OK, it is also flawed because the process has no requirements phase and is often led by the commercial interests of one party. However, at the core of some of the comments is a discussion of what is more relevant: an implementation or a specification?
This discussion has been raised inside the OSGi Alliance as well. Why not pick one framework implementation, put a stamp on it, and name it the standard? Obviously a single implementation is simpler; what do we get from a time-consuming specification process that forces people to compromise?
Well, just as obviously, people's requirements differ. Equinox is optimized for very large systems and uses lots of caching, but is therefore much bigger than Knopflerfish or Apache Felix. The best part about OSGi technology is that it still strives to let you write applications once and run them anywhere, and it is really getting closer. An independent specification allows different implementations that each optimize for a certain area.
The second advantage is neutrality. Comments on Stephen's blog make it sound as if today most Java innovation takes place on the OpenJDK mailing list. I am not sure this will create a coherent API in the long run, though I am pretty sure it will create bloat, because one man's holy grail is another man's unnecessary complexity. The specification process allows the balancing of the interests of several parties before anything is cast in stone. Note that the implementation freedom inherent in specifications is a good way to keep the specifications aligned: just implement it differently.
The third advantage is time. Though the slow pace of standardization is often cited as a negative, I think it is actually a positive (within limits). Last year we started the requirements process for Enterprise OSGi, and one of its main aspects is Distributed OSGi. Over the past six months you could see how the different parties started to understand each other's use cases and requirements, and how they are now finding solutions that are much broader than if any of them had hacked together a solution on their own. I really think that most of the OSGi specifications reflect this focus on industry-wide solutions rather than solving a small acute problem under great time pressure.
The fourth advantage is the whole struggle with intellectual property rights (IPR). Life might be beautiful if everything were free, but it isn't. We live in a world where companies have legitimate rights to the fruits of their labor. Interestingly, the most free-for-all movement (GNU) creates the biggest problem of all because of its viral nature, affecting anything it touches. It turns out to be a lot easier to handle the IPR issues in a specification than in an implementation, because there is much less IPR involved (and therefore much less touching).
In a way, the world of creating implementations and specifications is similar to that of classes and interfaces. I am pretty sure no Java programmer wants to give up interfaces; the reuse of Java code would be significantly harder without them. So could Java be maintained by an open source community, as some think? I do not think so. The language would quickly drift in whatever direction the hardest pushers push it; it would bloat and become unusable for many applications.
What we need is a strong modularity layer so that different solutions can compete on equal footing. The market can then decide which solutions work best. However, to evolve the core platform Java will need a community process, just one that is not dominated by the interests of a single commercial party.
Peter Kriens
P.S. One of the comments states that OSGi is more complex than JSR 277. Sigh. The OSGi framework API is significantly smaller when counted in methods and classes/interfaces; we just have more good documentation. And obviously we cover the cases that JSR 277 will discover in the next 5 years.
Tuesday, November 27, 2007
JSR 294 SuperPackages
This blog post has been updated after a pleasant conversation over email with Alex Buckley, in which he explained some of the finer points. The updated sections are marked with []. JSR 294 has produced its public draft of superpackages! (Somehow the exclamation mark seems necessary with this name ...) Anyway, superpackages are a new construct for Java 7 to improve the modularity of the Java language. Originally JSR 277 was going to take care of modularity, but Gilad Bracha (formerly Sun) thought that deployers could not be trusted with language elements and spun off JSR 294 after he published a spin about superpackages in his blog. Gilad left, but the JSR 294 expert group churned on and produced a public draft; this blog post is a review of that draft.
Superpackages address the ever-present need to modularize. Types encapsulate types, fields, and methods; packages encapsulate types; and superpackages encapsulate packages and nested superpackages. When Java was designed, packages were intended to be a set of closely related types, a.k.a. a module. However, systems have become larger and larger, and packages turned out not to have the right granularity for many. It could have worked if packages could have been nested, but that would of course have limited the flexibility of the programmer, and Sun used to favor configuration over convention.
For the OSGi specifications we came up with the JAR file as a module of packages. Packages can be exported and imported, allowing the developer to keep classes private or expose them to be used by others. Modularity in OSGi is therefore a deployment concept; the same packages can be members of different modules/bundles. OSGi service platforms enforce these rules with the aid of class loaders.
However, purists want to have the modularity in the language itself; therefore superpackages were born in Sunville. Superpackages group a set of named packages (and superpackages) and export types from these packages.
First let me make clear that I have not seen anything in the public review draft that would make it hard for OSGi to support superpackages. The current incarnation of the OSGi framework will be more or less oblivious to superpackages. The accessibility rules are enforced by the VM, and the system works with the normal class/resource loading rules. A superpackage and its member types must come from the same class loader/bundle, but that does not look like an issue. So from an OSGi point of view there is no reason for concern with JSR 294, as long as a superpackage and its member packages come from the same bundle.*
So the remainder is just a review of the technical merits of 294; the OSGi specifications are not affected by it.
The superpackages specification is surprisingly complex, and there is no overview: the spec is a set of changes to the existing Java Language Specification. I'll try to show you my understanding.
The current model is depicted in the following picture.
When you begin a class you give it a package name with the package keyword:
package com.foo;
class A { ... }
That is, the class declares its membership in a package. This is different for a superpackage, which, looking at the complexity it introduces, definitely honors its name. The following picture shows a similar diagram once superpackages are introduced.
This is clearly supercomplex in comparison with the current model. The major drawback is the redundancy this model introduces. When I went to school ages ago I was taught that redundancy is evil. Over time I learned that redundancy can improve a system, but the cost always remains. In this case the redundancy actually seems to create a very brittle structure that makes deployment unnecessarily rigid. Let's explore.
[] But first: as you can see, the Package is dashed now. The reason is that Sun is a bit ambiguous about packages in this specification. Here a package is treated as a name with structure (com.foo.bar is a sub-superpackage of com.foo), which is not uncommon. However, in Java a package is (well, should be) a first class citizen, but it often is not. Packages do provide access rules and there is the java.lang.Package class (not in java.lang.reflect). However, this class is not reflective; it is impossible to find the classes belonging to a package. Would you accept that a class could not enumerate its fields and methods? So why should a package not be able to enumerate its members? In contrast, a superpackage does live in java.lang.reflect and can enumerate its contained superpackages and, strangely enough, its member types, but not its packages. Interestingly, to be able to enumerate the types the VM must be able to enumerate the packages, because the superpackage class file only enumerates the package names, not the individual members. I would advise the JSR 294 expert group to take packages seriously and allow full reflection.
First, superpackage links are always bidirectional and can therefore easily be wrong. A class file points to its superpackage file, and the superpackage file points to the packages of the class files. That is, if you move a package to another superpackage, both the superpackage definition file and the sources of the types must be modified. Fortunately, wildcards can be used in the superpackage to identify the exported classes, though this means that most (if not all) changes in a type require recompilation of the superpackage definition.
The same is true for superpackage parenthood (enclosure) and membership (nesting). This link is also bidirectional: the parent must list its children and the children must list their parent. A superpackage is restricted to one parent, so it is a true hierarchy. By the way, the top of the hierarchy is the unnamed superpackage. Oh yes, all superpackages with a simple name (no dot in them) are also automatically visible (in scope) to any superpackage, if I read 7.4.5 correctly (it took me some time to figure that out).
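To make the bookkeeping concrete, here is a small sketch of such a declaration, following the syntax of the spec's own example in 7.4.5 (quoted further down); the org.example names are invented for illustration.

superpackage org.example.app {
    // every member package must be listed here in full, and the compiled
    // classes in those packages point back to this superpackage
    member package org.example.app.core;
    member package org.example.app.util;

    // a nested superpackage must be listed here, and it must in turn
    // name org.example.app as its enclosing superpackage
    member superpackage org.example.app.internal;

    // exported types are listed here as well
    export org.example.app.core.Facade;
}

Moving org.example.app.util into another superpackage would therefore touch this file, the other superpackage's declaration, and the sources of the types in that package.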
[] The unnamed superpackage is really special. Any top level superpackage is automatically a member of this supersuperpackage, and any class can see any exported type from a top level superpackage. I missed this the first time; the specification could make it clearer, because the rules for the unnamed superpackage are so different. For example, membership is automatic, all exported members are visible to anyone, and the unnamed superpackage is not open for reflection; it is represented as null. I wonder if this unnamed superpackage should not just be called the global space. That is, there is no supersuperpackage, so do not imply one by calling it the superpackage that shall not be named.
The data structure for superpackages is not elegant. Though we have good refactoring tools today in Eclipse, IDEA, and Netbeans, I do not think they are a good excuse to design data structures that are this error prone.
However, it is the restriction that worries me most, because it seems to create a system that I have a hard time seeing work. The restriction is intentional; from section 7.4.5:
If a superpackage did not have to declare which superpackage it is nested in, then the following problem could occur. Consider these superpackages, where Outer.Inner does not declare that it is nested in Outer.
superpackage Outer {
member superpackage Outer.Inner;
}
superpackage Outer.Inner {
member package foo;
export foo.C;
}
If a type outside the Outer.Inner superpackage tries to access foo.C, then the access would succeed because foo.C is exported from Outer.Inner and neither C.class nor the superpackage file for Outer.Inner mentions the fact that Outer.Inner is a non-exported nested superpackage of Outer. The intent of the Outer superpackage - to restrict access to members of Outer.Inner - is subverted.
Clearly, they have chosen restriction over convenience. However, the consequences of this are quite far reaching. Let us take a look at the access rules. I always need pictures for these things, so we need a legend:
The first access rule in 5.4.4 of the specification reads that type C is accessible to type D if any of the following conditions is true:
- Type C is public and is not a member of a named superpackage.
- Type C is public and both C and D are members of the same superpackage S.
- Type C is public, C is an exported member of named superpackage S, and D is a member of the enclosing superpackage O of S.
- Types C and D reside in the same package p.
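To illustrate how these rules would play out, consider a hypothetical client type outside both superpackages of the Outer/Outer.Inner example quoted above; the bar package and the Client class are invented for this sketch.

package bar;  // assume bar is not a member of any superpackage

class Client {
    void useC() {
        // Rejected under the third rule: foo.C is exported by Outer.Inner,
        // so it is accessible only to members of the enclosing superpackage
        // Outer, and Client is a member of neither superpackage.
        foo.C c = new foo.C();
    }
}

// A type in a package that is a member of Outer could access foo.C, and a
// public type that belongs to no named superpackage at all stays accessible
// to everyone (first rule).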
At first I could not understand how one of the most common cases, a library provider, could work with superpackages. The rules state that a type can only see what is available to its superpackage, which seems to exclude visibility between peer superpackages. For example, if OSGi were to put all its specification packages in the org.osgi superpackage, a member type of the com.acme package could not see the OSGi exported types. However, after a lot of puzzling I found that 7.4.2 states: "A superpackage name can be simple or qualified (§6.2). A superpackage with a simple name is trivially in scope in all superpackage declarations."
I guess this means that every superpackage has all superpackages with simple names as superpackage members? If this interpretation were true, then any "top level" superpackage would be visible to anybody else, and the following example should work:
[] The previous is not correct; the magic is the unnamed superpackage. I missed the rule (even after looking, after being told) that any top level superpackage makes its exports available to any type in the system, regardless of whether its name is simple or qualified. That is, exported types of top level superpackages are global. The use of the unnamed superpackage confused me because its rules are so different from those of normal superpackages. Silly me.
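Under this corrected reading the library provider case does work. A sketch, reusing the org.osgi and com.acme names from the example above (the exact packages and types are only for illustration):

// in the superpackage declaration:
superpackage org.osgi {
    member package org.osgi.framework;
    // org.osgi is a top level superpackage (a member of the unnamed
    // superpackage), so its exported types are visible globally
    export org.osgi.framework.*;
}

// in a separate source file, in an unrelated module:
package com.acme;

class Activator {
    // allowed even though com.acme is not a member of org.osgi:
    // exports of a top level superpackage are global
    org.osgi.framework.BundleContext context;
}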
It seems Sun is slowly moving to convention over configuration! Influence of the Ruby guys they hired? A name with no dots meaning general membership is clearly convention. However, it raises a number of issues.
- If the superpackage must have a simple name, how do we handle uniqueness? Package names are normally scoped with a reverse domain name, like org.osgi... However, org.osgi is not a simple name? [] This is thus not an issue: a superpackage can have a dotted name; the trick is that it must be a top level superpackage, i.e. not enclosed.
- It seems that top level superpackages are special. Then why is a superpackage not simply defined in a single file that allows nested superpackages without naming them? That would significantly simplify the model, in which the VM must find resources from all over the system that have obligatory relations to each other. A lot of potential errors could be removed this way.
- [] Despite my misunderstanding, the previous point is still relevant. It is not clear why superpackage members are spread out over the file system while they are closely dependent on each other through bidirectional links.
Versioning
One would expect that in AD 2007 any modularity system for Java would have an evolution model. Alas, I have not been able to find any hint, not even a manifest header, of versions and of how a superpackage should evolve. Obviously, this needs to be addressed. Superpackages are aimed at large systems, and large systems do not pop into existence nor do they suddenly disappear. They live a long time and need to change over time.
Defining the Content of a Superpackage
The spec says that a superpackage has only member types and nested superpackages. However, the superpackage file contains a list of packages as well as the list of nested superpackages. The exports, however, list the exported type names and the exported superpackages.
These data structures are specified in the superpackage declaration, a file that the average developer will love to hate. This file must list all the packages of a superpackage; wildcarding or using the hierarchy is not allowed. Each package must be entered in full detail, and the same goes for member superpackages and exported superpackages. Exported types can, however, use a shortcut: the on-demand wildcard. That is, you can export com.foo.* to indicate that you export all types in the com.foo package (or all nested types in the com.foo type!). This sounds cool until you look at normal practice. A very common case is that implementation classes in a package have names ending in Impl. However, the on-demand wildcard is all or nothing, so in practice all exported types will likely have to be enumerated by hand. Painful!
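A sketch of the problem, with invented com.foo.library and FooImpl names: the on-demand form drags the implementation types along, and the only alternative is to enumerate every exported type.

superpackage com.foo.library {
    member package com.foo;

    // all or nothing: this also exports com.foo.FooImpl
    export com.foo.*;

    // the alternative is to list every exported type by hand:
    // export com.foo.Foo;
    // export com.foo.FooFactory;
}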
Deployment Versus Language
The key driver to remove JSR 294 from JSR 277 was that deployment artifacts should have no influence on the language. I am not so sure of that. One of the key insights I have gained over the last few years is that there are many ways to slice and dice a JAR. With the myriad of profiles and configurations in Java, it is likely that you must deploy the same code in different ways to different platforms. For example, one of the really useful tricks of the bnd tool is the possibility to copy code from other bundles. Though many people get upset about this, it provides me with a way to minimize dependencies without introducing redundancy, because there is still only one source code base.
The current superpackage solution is highly rigid and static with its doubly linked structure; a class can only be a member of one superpackage. I really prefer the more flexible solution of a description that defines how a set of classes is modularized. It is a pity that the JCP does not work from a requirements phase, so that these differences could be discussed without a worked-out proposal already on the table.
Restrictions and Security
In section 7.4.5 an example with an Outer and an Outer.Inner superpackage is given that elucidates why nested superpackages must name their enclosing superpackage. However, without a security manager anybody can easily access any package to their liking. Access restrictions are conveniences, not security.
It would have been a better solution to add a SuperpackagePermission that specifies which packages can, or cannot, be named as members. This would be similar to the OSGi PackagePermission and would be a safe way to control access. The current model pays a very high price (only a single parent, double pointers) but does not provide security, just a slight barrier.
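A minimal sketch of what such a permission could look like, modeled on the existing org.osgi.framework.PackagePermission; the class, its action names, and the policy line are hypothetical and exist neither in the draft nor in OSGi.

import java.security.BasicPermission;

// Hypothetical illustration only, not part of JSR 294 or OSGi.
public class SuperpackagePermission extends BasicPermission {
    public static final String MEMBER = "member"; // may be named as a member
    public static final String EXPORT = "export"; // may be exported

    public SuperpackagePermission(String superpackageName, String actions) {
        // a real implementation would also validate and store the actions
        super(superpackageName, actions);
    }
}

// A security policy could then grant, for example:
//   permission SuperpackagePermission "com.acme.*", "member";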
OSGi and Superpackages
Assuming that superpackages can come from different class loaders (see the footnote), it is likely that the OSGi specifications would need an additional header that reflects the superpackage dependencies. Such a dependency would be a nice intermediate between Import-Package and Require-Bundle, albeit at the cost of a lot of added complexity and maintenance.
Interestingly, in the work for the next release we are being pushed from multiple sides to provide an entity that stands for an application. In J2EE and MIDP the context of an application is clear because applications are strictly contained in a silo. On OSGi platforms the situation is more fluid because applications collaborate. Superpackages could be a handle in this direction, but in their current incarnation they will likely not work for it.
[] It looks like a top level superpackage and its children must come from a single class loader. This is bad, because it means that in JSR 277 and OSGi it is impossible to deploy member superpackages in separate modules. A common use case is an enterprise application consisting of multiple bundles; superpackages could have been used to minimize the exposure of internal API to other residents. However, bundles in OSGi and modules in JSR 277 require their own class loader, implying that a top level superpackage must be deployed with all its enclosed superpackages and code in one module/bundle. I guess OSGi Fragments can be abused to still allow partial delivery and update of an application, but this is not very elegant.
Conclusion
I wish I could be positive about JSR 294, but I can't. The lack of a requirements document makes it hard to judge the JSR on its own merits, so this conclusion is my personal opinion.
I think that the current solution is unnecessarily complex: there is too much redundancy and there is too much to specify, information that is usually quite volatile during development. The current model needlessly allows too many potential errors. Versioning must also be addressed. And if I understand the model correctly, with simple names being available to all superpackages, then a solution must be envisioned to keep superpackage names unique.
However, the key aspect on which I differ is whether we need a language construct for modularity at all. Maybe I am blinded by almost ten years of OSGi modularity, but JAR based modularity seems to provide more than superpackages do, at great additional expense. So if superpackages must be added to the language, please simplify them and provide a more convenient way to specify their contents. Better yet, consider how much JAR based modularity could add to the language.
Peter Kriens
* = There is a slight concern with section 5.3.5 of the Classfile and VM changes in the 294classfilevm.html file. In the last paragraph it states that a superpackage and its member types must be loaded from the same class loader. I interpret this as its direct members; however, one could interpret it as any member type of the enclosed superpackages. If this unlikely interpretation is true, all superpackages would have to come from the same class loader, which seems silly. However, 7.1.1 defines type membership transitively, giving credence to the silly interpretation. Needs work.
JSR 294 SuperPackages
JSR 294 has produced their public draft of superpackages! Somehow the exclamation mark seems necessary with this name ... Anyway, superpackages are a new construct for Java 7 to improve modularity. Originally JSR 277 was going to take care of modularity but Gilead Bracha (formerly Sun) thought that deployers could not be trusted with language elements and spun off JSR 294 after he published a spin about superpackages in his blog.
Superpackages address the ever present need to encapsulate. Types encapsulate fields and methods, packages encapsulate types, and superpackages encapsulate, packages and superpackages. When Java was designed packages were intended to be a set of closely related types, a.k.a. a module. However, systems have become larger and larger and packages turned out to not have the right granularity. It could have worked if packages had been nested, but this would of course have limited the flexibility of the programmer and Sun favors configuration over convention.
For the OSGi specifications we came up with the JAR file as a module of packages. Packages can be exported and imported, allowing the developer to keep classes private or expose them to be used by others. Modularity in OSGi is therefore a deployment concept, the same packages can be members of different modules/bundles. OSGi service platforms enforce these rules with the aid of class loaders.
However, purists want to have the modularity in the language itself, therefore superpackages were born in Sunville. Superpackages group a set of named (super-)packages and export types from these packages. First let me make clear that I have not seen anything in the public review draft that would make it hard for OSGi to support superpackages. The current incarnation of the OSGi framework will be more or less oblivious of superpackages. The accessibility rules are enforced by the VM and the system works with normal class/resource loading rules. Superpackages and its member types (not member superpackages) must come from the same class-loader/bundle, but that does not look like an issue. So from an OSGi point of view there is no reason for concern with JSR 294 as long as a superpackage and its member packages come from the same bundle.*
So the remainder is just a technical review of the technical merits of 294, the OSGi specifications are not affected by it.
The superpackages specification is surprisingly complex. The current model is depicted in the following picture.
When you begin a class you give it a package name with the package keyword:
package com.foo;
class A { ... }
That is, the class defines its membership to a package. This is different from a superpackage that honors its name looking at the complexity it introduces. The following picture shows a similar diagram when superpackages are introduced.
This clearly is supercomplex in comparison with the current model. The major drawback is the redundancy that is introduced by this model. When I went to school ages ago I was thought that redundancy is bad. Over time I learned that redundancy can improve a system but the cost always remains. In the case the redundancy actually seems to create a very brittle structure that makes deployment unnecessary rigid.
First, superpackage links are always bidirectional and can therefore easily be wrong. A class file points to the superpackage file and the superpackage file points to the packages of these class files. That is, if you move a package to another superpackage the superpackage definition file and the sources of the types must be modified. Fortunately wildcards can be used in the superpackage to identify the exported classes, though this means that any change in a type requires recompilation of the superpackage definition.
The same is true for superpackage parenthood and membership. This link is also bidirectional, the parent must list its children and the children must list their parent. A superpackage is restricted to one parent, it is a true hierarchy. By the way, the top of the hierarchy is the unnamed superpackage.
A data structure that is so error prone is not elegant. Though we have good refactoring tools today in Eclipse, IDEA, and Netbeans, I do not think it is a good excuse to design data structures that are so error prone ... but it happens alarmingly often nowadays.
However, the restriction is something that worries me most because it seems to create a system that I have hard time to see how it should work. The restriction is intentional, from section 7.4.5:
The first access rule in 5.4.4. in the specification reads that type C is accessible to type D if any of the following conditions is true:
* = There is a slight concern with section 5.3.5 of the Classfile and VM changes of the 294classfilevm.html file. In the last paragraph it states that a superpackage and its member types must be loaded from the same class loader. I interpret this as its direct members, however, one could interpret this as an member type of the enclosed super packages. If this unlikely interpretation is true, all superpackages would have to come from the same class loader, which seems silly. However, 7.1.1 defines type membership transitively giving credence to the silly interpretation. Needs work.
Superpackages address the ever present need to encapsulate. Types encapsulate fields and methods, packages encapsulate types, and superpackages encapsulate, packages and superpackages. When Java was designed packages were intended to be a set of closely related types, a.k.a. a module. However, systems have become larger and larger and packages turned out to not have the right granularity. It could have worked if packages had been nested, but this would of course have limited the flexibility of the programmer and Sun favors configuration over convention.
For the OSGi specifications we came up with the JAR file as a module of packages. Packages can be exported and imported, allowing the developer to keep classes private or expose them to be used by others. Modularity in OSGi is therefore a deployment concept, the same packages can be members of different modules/bundles. OSGi service platforms enforce these rules with the aid of class loaders.
However, purists want to have the modularity in the language itself, therefore superpackages were born in Sunville. Superpackages group a set of named (super-)packages and export types from these packages. First let me make clear that I have not seen anything in the public review draft that would make it hard for OSGi to support superpackages. The current incarnation of the OSGi framework will be more or less oblivious of superpackages. The accessibility rules are enforced by the VM and the system works with normal class/resource loading rules. Superpackages and its member types (not member superpackages) must come from the same class-loader/bundle, but that does not look like an issue. So from an OSGi point of view there is no reason for concern with JSR 294 as long as a superpackage and its member packages come from the same bundle.*
So the remainder is just a technical review of the technical merits of 294, the OSGi specifications are not affected by it.
The superpackages specification is surprisingly complex. The current model is depicted in the following picture.
When you begin a class you give it a package name with the package keyword:
package com.foo;
class A { ... }
That is, the class defines its membership to a package. This is different from a superpackage that honors its name looking at the complexity it introduces. The following picture shows a similar diagram when superpackages are introduced.
This clearly is supercomplex in comparison with the current model. The major drawback is the redundancy that is introduced by this model. When I went to school ages ago I was thought that redundancy is bad. Over time I learned that redundancy can improve a system but the cost always remains. In the case the redundancy actually seems to create a very brittle structure that makes deployment unnecessary rigid.
First, superpackage links are always bidirectional and can therefore easily be wrong. A class file points to the superpackage file and the superpackage file points to the packages of these class files. That is, if you move a package to another superpackage the superpackage definition file and the sources of the types must be modified. Fortunately wildcards can be used in the superpackage to identify the exported classes, though this means that any change in a type requires recompilation of the superpackage definition.
The same is true for superpackage parenthood and membership. This link is also bidirectional, the parent must list its children and the children must list their parent. A superpackage is restricted to one parent, it is a true hierarchy. By the way, the top of the hierarchy is the unnamed superpackage.
A data structure that is so error prone is not elegant. Though we have good refactoring tools today in Eclipse, IDEA, and Netbeans, I do not think it is a good excuse to design data structures that are so error prone ... but it happens alarmingly often nowadays.
However, the restriction is something that worries me most because it seems to create a system that I have hard time to see how it should work. The restriction is intentional, from section 7.4.5:
If a superpackage did not have to declare which superpackage it is nested in, then the following problem could occur. Consider these superpackages, where Outer.Inner does not declare that it is nested in Outer.
superpackage Outer {
member superpackage Outer.Inner;
}
superpackage Outer.Inner {
member package foo;
export foo.C;
}
If a type outside the Outer.Inner superpackage tries to access foo.C, then the access
would succeed because foo.C is exported from Outer.Inner and neither C.class nor the
superpackage file for Outer.Inner mentions the fact that Outer.Inner is a non-exported nested superpackage of Outer. The intent of the Outer superpackage - to restrict access to members of Outer.Inner - is subverted.
Clearly, they have chosen restriction over convenience. However, the consequences of this are quite far reaching. Let us take a look at the access rules. I always need pictures for these things so we need a legend:superpackage Outer {
member superpackage Outer.Inner;
}
superpackage Outer.Inner {
member package foo;
export foo.C;
}
If a type outside the Outer.Inner superpackage tries to access foo.C, then the access
would succeed because foo.C is exported from Outer.Inner and neither C.class nor the
superpackage file for Outer.Inner mentions the fact that Outer.Inner is a non-exported nested superpackage of Outer. The intent of the Outer superpackage - to restrict access to members of Outer.Inner - is subverted.
The first access rule in 5.4.4. in the specification reads that type C is accessible to type D if any of the following conditions is true:
- Type C is public and is not a member of a named superpackage
- Type C is public and both type C and D are a member of the same superpackage S.
- Type C is public and C is an exported member of named superpackage S and D is a member of the enclosing superpackage O of superpackage S
- Type C and D reside in the same package
* = There is a slight concern with section 5.3.5 of the Classfile and VM changes of the 294classfilevm.html file. In the last paragraph it states that a superpackage and its member types must be loaded from the same class loader. I interpret this as its direct members, however, one could interpret this as an member type of the enclosed super packages. If this unlikely interpretation is true, all superpackages would have to come from the same class loader, which seems silly. However, 7.1.1 defines type membership transitively giving credence to the silly interpretation. Needs work.
JSR 294 SuperPackages
JSR 294 has produced their public draft of superpackages! Somehow the exclamation mark seems necessary with this name ... Anyway, superpackages are a new construct for Java 7 to improve modularity. Originally JSR 277 was going to take care of modularity but Gilead Bracha (formerly Sun) thought that deployers could not be trusted with language elements and spun off JSR 294 after he published a spin about superpackages in his blog.
Superpackages address the ever present need to encapsulate. Types encapsulate fields and methods, packages encapsulate types, and superpackages encapsulate, packages and superpackages. When Java was designed packages were intended to be a set of closely related types, a.k.a. a module. However, systems have become larger and larger and packages turned out to not have the right granularity. It could have worked if packages had been nested, but this would of course have limited the flexibility of the programmer and Sun favors configuration over convention.
For the OSGi specifications we came up with the JAR file as a module of packages. Packages can be exported and imported, allowing the developer to keep classes private or expose them to be used by others. Modularity in OSGi is therefore a deployment concept, the same packages can be members of different modules/bundles. OSGi service platforms enforce these rules with the aid of class loaders.
However, purists want to have the modularity in the language itself, therefore superpackages were born in Sunville. Superpackages group a set of named (super-)packages and export types from these packages. First let me make clear that I have not seen anything in the public review draft that would make it hard for OSGi to support superpackages. The current incarnation of the OSGi framework will be more or less oblivious of superpackages. The accessibility rules are enforced by the VM and the system works with normal class/resource loading rules. Superpackages and its member types (not member superpackages) must come from the same class-loader/bundle, but that does not look like an issue. So from an OSGi point of view there is no reason for concern with JSR 294 as long as a superpackage and its member packages come from the same bundle.*
So the remainder is just a technical review of the technical merits of 294, the OSGi specifications are not affected by it.
The superpackages specification is surprisingly complex. The current model is depicted in the following picture.
When you begin a class you give it a package name with the package keyword:
package com.foo;
class A { ... }
That is, the class defines its membership to a package. This is different from a superpackage that honors its name looking at the complexity it introduces. The following picture shows a similar diagram when superpackages are introduced.
This clearly is supercomplex in comparison with the current model. The major drawback is the redundancy that is introduced by this model. When I went to school ages ago I was thought that redundancy is bad. Over time I learned that redundancy can improve a system but the cost always remains. In the case the redundancy actually seems to create a very brittle structure that makes deployment unnecessary rigid.
First, superpackage links are always bidirectional and can therefore easily be wrong. A class file points to the superpackage file and the superpackage file points to the packages of these class files. That is, if you move a package to another superpackage the superpackage definition file and the sources of the types must be modified. Fortunately wildcards can be used in the superpackage to identify the exported classes, though this means that any change in a type requires recompilation of the superpackage definition.
The same is true for superpackage parenthood and membership. This link is also bidirectional: the parent must list its children and the children must list their parent. A superpackage is restricted to one parent, so it is a true hierarchy. By the way, the top of the hierarchy is the unnamed superpackage.
A data structure that is so error prone is not elegant. Though we have good refactoring tools today in Eclipse, IDEA, and NetBeans, I do not think they are a good excuse to design data structures that are this error prone ... but it happens alarmingly often nowadays.
However, it is this restriction that worries me most, because it seems to create a system that I have a hard time seeing how it should work. The restriction is intentional; from section 7.4.5:
If a superpackage did not have to declare which superpackage it is nested in, then the following problem could occur. Consider these superpackages, where Outer.Inner does not declare that it is nested in Outer.
superpackage Outer {
member superpackage Outer.Inner;
}
superpackage Outer.Inner {
member package foo;
export foo.C;
}
If a type outside the Outer.Inner superpackage tries to access foo.C, then the access would succeed because foo.C is exported from Outer.Inner and neither C.class nor the superpackage file for Outer.Inner mentions the fact that Outer.Inner is a non-exported nested superpackage of Outer. The intent of the Outer superpackage - to restrict access to members of Outer.Inner - is subverted.
Clearly, they have chosen restriction over convenience. However, the consequences of this are quite far reaching. Let us take a look at the access rules. I always need pictures for these things, so we need a legend:
* = There is a slight concern with section 5.3.5 of the Classfile and VM changes in the 294classfilevm.html file. In the last paragraph it states that a superpackage and its member types must be loaded from the same class loader. I interpret this as its direct members; however, one could interpret it as any member type of the enclosed superpackages. If this unlikely interpretation is true, all superpackages would have to come from the same class loader, which seems silly. However, 7.1.1 defines type membership transitively, giving credence to the silly interpretation. Needs work.
Monday, November 26, 2007
Android and OSGi
An android literally means a man lookalike. In Google's android, the man must be a metaphor for Java; android is clearly a Java lookalike. Google has done what open sourcerers try to prevent as much as possible: fork. Though forks are usually bad, sometimes they are necessary to break a stranglehold. Let us take a look at what they did and see what the implications are.
Google created a new VM called Dalvik that uses another format for the class files but otherwise looks very much like Java CDC. They also provide a utility that can convert Java class files to so-called DEX files: the native Dalvik format. So for programmers it walks like Java and it talks like Java, but it is not really Java.
What are the consequences?
- Dalvik is not a Java VM so it is not bound by any Sun licensing.
- Java code generally runs on Dalvik without changes to the source code ... so far
What Google is hoping for is that the industry will not care that much about the Java logo, which stands for the compliance program, and will embrace the new features that Google can now provide to their environment without any restrictions.
Of course the current scenery, already pretty messed up by Sun and JCP leads, now looks like a terrible train wreck. It is the tragedy of Java that a platform that set out to let programmers write a program once and run it unmodified anywhere has ended up giving them a bewildering choice of target environments that are subtly incompatible.
Google has not used this fork to address many of the shortcomings in the Java architecture but chose to implement a model that solves some of the symptoms without addressing the underlying problems. On the contrary, the SDK shows a concerning lack of insight into the problems of a Java execution platform. Their motivation seems to be built more around licensing issues than around providing a better architecture for Java-like applications; an architecture that should take into account that much middle-ware today crosses the realms from embedded, to mobile, to desktop, to server. Instead, the key innovations seem to be XML for the manifest, yet another RMI, a brand new graphics layer enriching the already bewildering number of GUIs, and a cumbersome collaboration model. It is bewildering that much of the new API shows a lack of understanding of versioning issues, a primary concern for a mobile execution platform.
Google could have used this fork to move Java back to the write once, run anywhere model; and it would not have been very difficult because the OSGi architecture could have been used as a template.
Then again, if we could run an OSGi framework as an android application, we might provide a model where programmers can write middle-ware and applications that can easily cross the fractious borders of the Java landscape. A bit like a plane flying high over a battlefield ... Can this be done?
Well, yes. Karl Pauls and Marcel Offermans of Luminis spent their weekend getting Apache Felix to work on the android emulator. They have written a really interesting blog about their experiences. It turns out that the emulator is basically a Linux kernel running inside a Windows or Linux application. It is possible to start a shell inside this kernel and start a Dalvik session with Apache Felix. Obviously, Felix and any bundles to be used must be converted to dex files: the android class file format.
Unfortunately, Dalvik does not support the normal class loader model that is so powerful in standard Java. Instead, the android.dalvik.DexFile class can be used to do class loading, but it is not clear if this is a standard class bound to be available in the future or if this is an implementation class (android clearly lacks modularization metadata). Even so, the design is awkward; a DexFile models a JAR file with dex files. There is no way to load just bytes as a standard class loader does. The bytes must be in a JAR file, a perfect example of an unnecessarily restrictive design. Useful techniques like the Bundle-Classpath, where the bundle can contain embedded JARs, must be emulated by extracting them to the file system, which is of course an unnecessary waste of persistent memory. The bundles must be converted to dex files anyway, so a tool like bnd could convert the bytecode and flatten the classpath when the bundle is created.
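For comparison, a standard Java class loader can define a class directly from an array of bytes; a minimal sketch (the class name and the origin of the bytes are illustrative):
class ByteArrayClassLoader extends ClassLoader {
    // The bytes could come from anywhere: an embedded JAR, the network, plain memory.
    Class<?> define(String name, byte[] classBytes) {
        return defineClass(name, classBytes, 0, classBytes.length);
    }
}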
Google states very clearly that in Android, all applications are equal. This means that an application that acts as a framework for other applications should be feasible. It will be interesting to research how open the environment is. Can the OSGi framework add application icons to the desktop for each bundle that has a GUI? This would make it easy to support the OSGi Application model developed in JSR 232 and now also supported in Eclipse.
The advantage of having an OSGi framework on the android platform is obvious. There is a lot of existing code out there that can be used unmodified on the android platform. This will allow android applications to be written as small bundles that only provide the GUI code, all the other work is done by bundles that can run just as easily on any other Java platform that supports an OSGi framework. It will also allow android phones to be managed by existing management systems because the OSGi specification has a management API that is widely supported.
So what is the conclusion? Overall, Google has achieved the amazing feat of making an already troubled field even more opaque. The good news is that it might wake up Sun and make them realize that their current ways are not working and that cooperation is the operative word. In a perfect world the industry would sit together and create a thoroughly specified platform that is easy for developers to use so they can make the applications users are willing to pay for. Economic law tells us loudly that this model can make us all richer and our users happier. In the real world we seem to be unable to avoid the tragedy of the commons because of our greed.
In this real world, OSGi can actually become more and more important because it is the only Java technology that tries to provide a platform that crosses J2ME, JEE, JSE, and Dalvik. A platform that allows developers to ignore the differences when they are not relevant and exploit them when needed. It is clear that there will be parties making an OSGi service platform available on android; fortunately, the platform is open enough. But wouldn't it be much better if both Sun and Google provided OSGi technology out of the box instead of inventing their own home-brewed solutions over and over?
Peter Kriens
Thursday, November 22, 2007
How Many JSRs does it Take to Reinvent the OSGi Framework?
How many JSRs does it take to implement the OSGi Framework? Well, we are currently at 6 and counting with JSR 320. I guess we are in a spot everybody wants to make his own, and the JCP does not seem to have any built-in controls to keep the process straight.
In an ideal world, the JCP would have a single architecture board that would look at the interest of Java and its users from an overall perspective. In the real world, the structure of JCP is geared to the creation of unrelated, not always well specified, standards. What we currently have is a hodgepodge driven by private interests that is eroding the value of Java.
The argument for increasing the mess with JSR 320 is to make it "light weight". This is an easy argument to make, and in the JCP you easily get away with it because there is no requirements process. For a JSR you fill in a rather simple questionnaire, and that is about it.
What does light weight really mean? Concierge is an 80Kb implementation of the OSGi R3 Service Platform; is that light weight? I do not know, it depends on where it is going to be used and, more importantly, what functions need to be added in the future to handle the real world use cases. Over the last 30 years I too often fell into the trap of developing something that already existed because I thought I could do it simpler. That works perfectly until it gets used by others, and then you usually quickly find out some of the reasons for the original complexity. For this reason, the OSGi Alliance divided the specification process into 3 steps, of which the first step is a requirements document: the Request For Proposal or RFP.
The RFP template consists of a section that describes the application domain, a concise description of the problem, a use case section, and a requirements section. Interestingly, it is always surprisingly hard to keep people honest in this document. Most authors already have a solution in their mind and find it terribly hard to write down these sections without talking about their solution. And I can assure you, it is hard. It is so much easier to just require a technical solution than to explain the underlying forces.
However, it turns out that it is a lot easier to discuss requirements with the other stakeholders than to discuss ad-hoc technical solutions. During these discussions, all parties learn a lot and most interestingly, tend to converge on the key requirements quite quickly. Interestingly, often the initiator learns a lot about his own solution as well.
Once the requirements are better understood, the best technical solutions can be much more easily compared, which prevents many political discussions: our solution is better than yours. It is hard to overestimate what this does for the mood and efficiency of the expert groups.
The result of this more careful standardization process is a more cohesive and complete architecture than one finds in the JCP.
If the JCP worked more from requirements, so many problems could be prevented. The JSR 277 expert group would likely have found out that the OSGi R4 Service Platform satisfied most of their requirements before they invested in developing their own solution. JSR 320 is a typical case. Where is the requirements document that I could look at and write a proposal for based on existing OSGi technology? Such a document does not exist. In a year's time, when the public review comes, the solution will be so ingrained that fundamental changes are not possible.
JCP is a sad variation on the tragedy of the commons. Java is a common area that we share in a large community. The better we take care of the commons, the more people will join this community and the more we can prosper. However, the land grab process of the JCP is slowly destroying the value of the commons because it creates a hodgepodge that is harder and harder to navigate for its users, diminishing the value for all of us. How can we change this process before it is too late?
Peter Kriens
Monday, October 29, 2007
EclipseCon 2008
Time flies like an arrow (and fruit flies like a banana) ... One big disadvantage of organizing conferences is that it looks like time goes even faster. It is already time again to prepare for OSGi DevCon 2008 (a.k.a. EclipseCon). For OSGi DevCon we are looking for people that are using OSGi technology in unusual ways and that are willing to present it to their peers. You can present it in a lightning talk, a short talk, a long talk, or even a tutorial.
Finally, the OSGi technology is on the upswing after many years of preparation. It is wonderful to see how people apply OSGi technology in the most interesting ways. It is also very good to see how more and more tools and frameworks are being produced to simplify the use of OSGi technology.
There are three more weeks to submit your proposals. I really hope many people will make the effort to submit a proposal. You have until the 19th of November.
Submit now!
Peter Kriens
Wednesday, October 17, 2007
iJAM, Formalized Class Loading
(This blog was adapted after a comment from Victor; I had not seen that iJAM has an exception for java.*, my apologies.)
There is a paper floating on the net that proposes an alternative to the class loading strategy of JSR 277. Richard S. Hall pointed me to this paper and told me to write a blog about it. So I did.
Though the paper provides a formalization of the class loading strategy, the modification it proposes is actually quite simple. The standard Java class loading rule is parent first, then self. In a modern Java system there can be many ancestors, so this rule is applied recursively until one reaches the system class loader. This is a simple rule that provides Sun with the guarantee that its classes cannot be overridden at a lower level. This model implies that a module can never override anything available in its parents. For example, if a module wants to use a newer version of the XML parser than the one in the platform, it would be nice if these classes could be overridden at the application level to ensure the proper version. For this reason, iJAM proposes to change the parent first rule to local first, except for java.* and javax.* classes. This allows a module to override any class in an ancestor class loader.
I do agree with the problem, but I disagree rather strongly with the solution; it is too simple. Let me explain why I came to this conclusion.
First, loading only javax.* and java.* from the parent ignores classes that come from the boot class path but do not start with java.* or javax.*. An example is org.xml.sax. If this package is loaded from a module, then the system classes will load their own instance of this package and modules will see another. This will cause class cast exceptions if you try to give your SAX handler to the parser, because they will use different class loaders for the same package.
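A small sketch of how this goes wrong; it assumes a hypothetical sax.jar on disk that contains its own copy of org.xml.sax and uses a simplistic local-first loader in the spirit of the iJAM rule:
import java.net.URL;
import java.net.URLClassLoader;

public class SplitLoaderDemo {
    // A naive local-first loader: try our own JARs before delegating to the parent.
    static class LocalFirstLoader extends URLClassLoader {
        LocalFirstLoader(URL[] urls) { super(urls, null); }
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try { c = findClass(name); }                       // local first
                catch (ClassNotFoundException e) { c = super.loadClass(name, resolve); }
            }
            if (resolve) resolveClass(c);
            return c;
        }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader module = new LocalFirstLoader(new URL[] { new URL("file:sax.jar") });
        Class<?> fromModule = module.loadClass("org.xml.sax.ContentHandler");
        Class<?> fromBoot = Class.forName("org.xml.sax.ContentHandler");
        // Two different Class objects for the same name: instances are not assignable
        // across them, so handing the module's handler to the platform's parser fails.
        System.out.println(fromModule == fromBoot);                // prints false
    }
}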
Another problem is that many javax.* packages are prime candidates to be downloaded as bundles. Though there are logical reasons to treat java.* as special because overriding java.lang.Object is quite disastrous, there are no reasons to treat javax.* in the same way.
A module must be able to define its priorities in loading; there are clear use cases where overriding a platform-provided class is crucial. Can this be done as an all or nothing decision on module level? I don't think so; only in simple cases are these decisions module wide, and when was the last time you did something simple and got paid for it? Sometimes you want to provide a default implementation in case the platform, or any of the other modules, does not provide an implementation. Other times you want to be sure to get your own version. A simple rule like local first cannot distinguish between these cases, nor can a rule like parent first satisfy you all the time.
Another problem with the JSR 277 and iJAM rules is that they treat classes as standalone entities, not as part of a cohesive package. If your module overrides one class of a larger package you have something we call a split package. Split packages are nasty. First, classes in the same package have package private visibility (the default). However, this rule only works when those classes come from the same class loader. Obviously, you can get some very hard to understand errors when two classes in the same package get access errors for a field that is package private. Really, split packages are evil, and it is quite surprising that JSR 277 allows them, just as it is surprising that iJAM proposes the same behavior. Obviously, there are many other errors that can occur when half of your classes are loaded from your module and the other half from some other module. Each half can have quite interesting assumptions about the other half. Unless you enjoy debugging very strange errors, split packages are just not a recommended strategy. A package is not just part of the name of a class, it is a grouping concept that should be respected.
So how does OSGi address this issue? Well, our class loading rules are extensive because this is a highly complex area that we were unfortunate enough to have to learn over the last 9 years.
Our first rule is to load any java.* class from the parent class loader. As I explained, very few java.* classes can be overridden without wreaking havoc in some way (java.sql.* is the only example that comes to mind).
It then looks at the imported packages of a bundle. In OSGi, the manifest explicitly tells the framework which packages are expected to come from another bundle. These packages are listed in an Import-Package header with explicit constraints on the exporter.
If the package is not imported, the framework will look in bundles linked with Require-Bundle. If that fails, the framework then looks in the bundle's own JAR and attached fragments. The developer can control searching the JAR in detail with the Bundle-Classpath header.
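A short manifest sketch of the headers involved in this search order; the bundle and package names are invented:
Import-Package: org.osgi.service.log;version="[1.3,2.0)"
Require-Bundle: com.acme.util;bundle-version="1.0.0"
Bundle-Classpath: .,lib/embedded.jar
The framework consults these in the order described above: imported packages first, then required bundles, then the bundle's own class path and fragments.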
The class loading strategy is obviously an example that Einstein must have had in mind when he said: "Things should be as simple as possible, but not simpler". We learned the hard way that in the end, the developer must be able to control what he gets from where, but not at the expense of a potentially unworkable system. Though I like French laissez-faire, when it comes to class loading I prefer the more deterministic Anglo-Saxon approach.
To conclude, I really like the formal work that the iJAM paper shows; it is one of my secret desires to one day have a similar Z specification of the OSGi module layer. If the authors of the iJAM paper want to work on this, please let me help. However, I think that the class loading strategy in this paper is, just like the class loading strategy of JSR 277, unfortunately too simplistic for the complexity of the real world.
Peter Kriens
Thursday, October 4, 2007
Universal OSGi
There is an idea that has been simmering in the OSGi Alliance for quite some time: Universal OSGi. My first recollection of this idea is from 1999, when I was working in Linkoping at Ericsson Wireless Technology, a place in the middle of nowhere. I worked there a couple of days a week, helping them to use the OSGi specifications on the Ericsson ebox. This was not an easy task because there were virtually no Java programmers and lots of native Linux device driver developers. These guys have it in their genes to see anything between them and the Linux API as a personal insult. The fact that the ebox was severely underpowered for Java (25 MHz 486, 32 MB RAM, 8 MB flash) obviously did not do anything for my popularity either. However, these guys are intelligent, and once they understood the modularity model and the remote management model it enabled, they decided that they wanted to have it too. Well, not the Java, but the modularity. They even had visions of replacing the OSGi specifications with something more native.
I objected to that model, and I still do. Though I am all too aware of the many flaws that the language was born with (I am a Smalltalker at heart, how could they miss closures!), I am profoundly impressed with what has been done on security and the abstraction of the operating system. The OSGi technology provides some very crucial missing parts in Java, but a very large number of the features we provide are enabled by Java. I really do not believe that what is done in the modularity layer, the life cycle management layer, the service layer, and the security model can be done in any other existing environment. Not even .NET, though they get closest.
However, the need for integrating with native code does not go away because our model is better. The battlefield is littered with corpses that had a better model. It is a plain fact that in many of the markets that use OSGi technology, native code integration plays a major role. Java is good, but there is a lot of legacy code out there that is just not Java.
The only thing we have on offer for native code integration is the Java Native Interface (JNI) and remote procedure calls, as with web services, CORBA, etc. Anybody who has tried to program with JNI knows how painful and intrusive it can be. I do not think I would have survived Linkoping proposing JNI. Remote procedure calls are better, and well known in the industry. However, remote procedure calls provide interoperation but no modularity, life cycle, or security. The interoperation through remote procedure calls works well, as has been proven many times, but it lacks the tight integration of all the important computing aspects that the OSGi technology provides.
Meet Universal OSGi. This is not a worked out concept but work in progress. Universal OSGi is an attempt to use the normal, run of the mill, Java based OSGi service platform but provide a model where native code can be integrated. This means that you should be able to download bundles with a native executable, like a DLL or shared library. Let's call these native bundles.
You should be able to express in the manifest the dependencies of these native bundles on other native bundles, so that a management system could calculate the transitive dependencies, just like for a Java bundle. Then you should be able to start and stop these bundles, which should result in loading the native code in another process, providing it with its dependencies, and starting it. The native code should be able to get and register services, which are used by the native code to communicate with the rest of the system.
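Purely as a thought experiment, the manifest of such a native bundle might look something like the sketch below. Nothing here is specified anywhere yet; the Bundle-NativeCode header exists today only to ship JNI libraries with Java bundles, and the names and attributes are invented for illustration:
Bundle-SymbolicName: com.acme.sensor.driver
Bundle-NativeCode: libsensor.so; osname=Linux; processor=x86
Require-Bundle: com.acme.logging.native;bundle-version="1.0.0"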
Handling the native code dependencies and the life cycle issues is not trivial but definitely doable. A native handler bundle can use the extender model to detect native bundles of a type it recognizes. This bundle would somehow have to interact with the resolver because the Java resolver needs to know when a native bundle can be resolved (or maybe this could be expressed with Require-Bundle). If the native bundle is started, the native handler will load the native code in another process. When the native bundle is stopped, the native handler cleans up by killing the process. Not trivial, but doable.
The hard part is the communication aspect. Obviously, the service layer is the crucial middle man, requiring that the native code can communicate with the OSGi service registry, hosted in the Java VM process. This requires a native API that maps the primitives of the OSGi service registry to C, C++, .NET, PHP, Haskell, etc. Primitives like registering a service, getting a service, and listening for services. And of course coupled to the life cycle layer: if a bundle is stopped, all its services must be unregistered. This registry is still doable, albeit a bit less trivial. The hardest part is how the services are mapped in the remote procedure calls. This is the problem that many have tried and few have really succeeded at, because it somehow always remains messy. CORBA has the Interface Definition Language (IDL), which was supposed to be the mother of all languages but largely failed in the Java world because its C++ orientation made mapping it to Java painful. I remember a long ago project where we had two classes for every parameter because that was the way output parameters could be modeled, a concept well known to C++ but unknown to Java.
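For reference, the Java-side primitives that such a native API would have to mirror are the standard OSGi ones. A minimal activator sketch; the use of Runnable as the service type is arbitrary:
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class ExampleActivator implements BundleActivator {
    public void start(BundleContext context) {
        // Register a service under an interface name.
        context.registerService(Runnable.class.getName(), new Runnable() {
            public void run() { System.out.println("hello from a service"); }
        }, null);

        // Get a service and use it.
        ServiceReference ref = context.getServiceReference(Runnable.class.getName());
        if (ref != null) {
            ((Runnable) context.getService(ref)).run();
            context.ungetService(ref);
        }
    }

    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered when it stops.
    }
}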
For Universal OSGi, it is likely that the best solution is the Java interface as an "IDL". Not only do we already have a lot of experience with Java interfaces, they are also conceptually very clean, not associated with an implementation. In Java it is already trivial to proxy interfaces. It will therefore be necessary to map Java interfaces in a mechanical way to concepts known in the native environment. For example, in C++ a Java interface can be mapped to an abstract base class that can be used as a mixin. Most OSGi service specifications are very suitable for this mapping.
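To illustrate how little is needed on the Java side, here is a sketch using java.lang.reflect.Proxy. The Transport interface and its forward method are invented placeholders for whatever inter-process mechanism actually carries the call:
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ServiceProxies {

    // Placeholder for the real inter-process transport (shared memory, pipe, ...).
    public interface Transport {
        Object forward(String method, Object[] args) throws Exception;
    }

    @SuppressWarnings("unchecked")
    public static <T> T create(Class<T> serviceInterface, final Transport transport) {
        return (T) Proxy.newProxyInstance(
            serviceInterface.getClassLoader(),
            new Class<?>[] { serviceInterface },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
                    // Forward the call by symbolic reference instead of passing pointers.
                    return transport.forward(m.getName(), args);
                }
            });
    }
}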
A key problem in designing such a communication system is how the remote procedure calls are handled. A remote procedure call crosses the process boundary, so pointers to memory locations are no longer valid; each process has its own memory. There are two solutions to this problem. One can pass the value to which the pointer is pointing, or one can pass a symbolic reference to the object. Passing a value can be done with immutable objects like int, String, etc. but it cannot be done for complex objects like java.lang.Class. If a mutable object is passed by value, changes on the remote side are not reflected on the caller's side, changing the behavior between remote and local calling. However, one can proxy any complex object by passing a symbolic reference and doing the same for any objects that are exchanged in method calls. The other side must recognize this reference and do a remote procedure call back into the caller's process for all methods. This model is called proxying. It is usually too expensive for distributed communications due to latency and bandwidth constraints. For Universal OSGi it might be viable because all the participants run on the same device. That allows all communications to be done with very fast techniques like shared memory.
These are intriguing and exciting ideas that could truly make OSGi technology more universally applicable. However, there are a lot of technical details to iron out and even when that has been done, there is a lot of spec work for different native languages. We need members that are willing to make this work. Interested?
Peter Kriens
I objected to that model, and I still do. Though I am all too aware of the many flaws that the language was born with (I am a Smalltalker by heart, how could they miss closures!), I am profoundly impressed with what has been done on security and the abstraction of the the operating system. The OSGi technology provides some very crucial missing parts in Java, but a very large number of the features we provide are enabled by Java. I really do not believe that what is done in the modularity layer, the life cycle management layer, the service layer, and the security model can be done in any other existing environment. Not even .NET, though they get closest.
However, the need for integrating with native code does not go away because our model is better. The battlefield is littered with corpses that had a better model. It is a plain fact that in many of the markets that use OSGi technology, native code integration plays a major role. Java is good, but there is a lot of legacy code out there that is just not Java.
The only thing we have on offer for native code integration is through the Java Native Interface (JNI) and remote procedure calls like with web services, CORBA, etc. Anybody that tried to program with JNI knows how painful and intrusive it can be. I do not think I would have survived Linkoping proposing JNI. Remote procedure calls are better, and well known in the industry. However, remote procedure calls provide interoperation but no modularity, life cycle, or security. The interoperation through remote procedure calls works well as is proven many times, it lacks the tight integration of all important computing aspects that the OSGi technology provides.
Meet Universal OSGi. This is not a worked out concept but work in progress. Universal OSGi is an attempt to use the normal run of the mill Java based OSGi service platform but provide a model where native code can be integrated. This means that you should be able to download bundles with a native executable, like a DLL or shared library. Lets call these native bundles.
You should be able to express the dependencies of these native bundles in the manifest on other native bundles so that a management system could calculate the transitive dependencies, just like a Java bundle. Then you should be able to start and stop these bundles, which should result in loading the native code in another process and providing it with its dependencies, and starting it. The native code should be able to get and register services which are used by the native code to communicate with the rest of the system.
Handling the native code dependencies and the life cycle issues is not trivial but definitely doable. A native handler bundle can use the extender model to detect native bundles of a type it recognizes. This bundle would somehow have to interact with the resolver because the Java resolver needs to know when a native bundle can be resolved (or maybe this could be expressed with Require-Bundle). If the native bundle is started, the native handler will load the native code in another process. When the native bundle is stopped, the native handler cleans up by killing the process. Not trivial, but doable.
The hard part is the communication aspect. Obviously, the service layer is the crucial middleman, requiring that the native code can communicate with the OSGi service registry, hosted in the Java VM process. This requires a native API that maps the primitives of the OSGi service registry to C, C++, .NET, PHP, Haskell, etc.: primitives like registering a service, getting a service, and listening for services. And of course it must be coupled to the life cycle layer: if a bundle is stopped, all its services must be unregistered. This registry is still doable, albeit a bit less trivial. The hardest part is how the services are mapped in the remote procedure calls. This is a problem that many have tried to solve and few have really succeeded at, because it somehow always remains messy. CORBA has the Interface Definition Language (IDL), which was supposed to be the mother of all languages but largely failed in the Java world because its C++ orientation made mapping it to Java painful. I remember a long-ago project where we had two classes for every parameter because that was the way output parameters could be modeled, a concept well known to C++ but unknown to Java.
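For a feel of how small that native API could be, here is the Java side of those primitives: registering, finding and getting, and listening for services is really all there is, and each call would need a counterpart in the native language. The Runnable service and the demo method are only placeholders for this sketch.

import org.osgi.framework.*;

public class RegistryPrimitives {
    static void demo(BundleContext context) throws InvalidSyntaxException {
        // register a service under an interface name
        ServiceRegistration registration = context.registerService(
            Runnable.class.getName(),
            new Runnable() {
                public void run() { System.out.println("hello from a service"); }
            }, null);

        // find and get a service by interface name
        ServiceReference reference = context.getServiceReference(Runnable.class.getName());
        Runnable service = (Runnable) context.getService(reference);
        service.run();
        context.ungetService(reference);

        // listen for services of this type coming and going
        context.addServiceListener(new ServiceListener() {
            public void serviceChanged(ServiceEvent event) {
                System.out.println("service event " + event.getType());
            }
        }, "(objectClass=java.lang.Runnable)");

        // unregister explicitly; the framework also does this when the bundle stops
        registration.unregister();
    }
}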
For Universal OSGi, it is likely that the best solution is the Java interface as an "IDL". Not only do we already have a lot of experience with Java interfaces, they are also conceptually very clean and not associated with an implementation. In Java it is already trivial to proxy interfaces. It will therefore be necessary to map Java interfaces in a mechanical way to concepts known in the native environment. For example, in C++ a Java interface can be mapped to an abstract base class that can be used as a mixin. Most OSGi service specifications are very suitable for this mapping.
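Proxying an interface is indeed almost free in Java thanks to java.lang.reflect.Proxy. A minimal sketch of the idea follows; the transport that would carry the call into the native process (shared memory, a pipe, ...) is hypothetical and reduced to a print statement here.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class RemoteProxies {
    // Returns an object implementing the given interface; every method call
    // is intercepted and could be marshalled to the native side.
    public static Object proxy(final Class type, final String symbolicReference) {
        return Proxy.newProxyInstance(
            type.getClassLoader(),
            new Class[] { type },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args) {
                    // a real implementation would marshal the symbolic reference,
                    // the method name, and the arguments, and unmarshal the reply
                    System.out.println("would call " + symbolicReference + "." + method.getName());
                    return null;
                }
            });
    }
}

A native handler could register such proxies as ordinary OSGi services, so that Java bundles see a local service while the actual work happens in the native process.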
A key problem in designing such a communication system is how the remote procedure calls are handled. A remote procedure call crosses the process boundary, and pointers to memory locations are therefore no longer valid; each process has its own memory. There are two solutions to this problem: one can pass the value to which the pointer is pointing, or one can pass a symbolic reference to the object. Passing a value can be done with immutable objects like int, String, etc., but it cannot be done for complex objects like java.lang.Class. If a mutable object is passed by value, changes on the remote side are not reflected on the caller's side, changing the behavior between remote and local calling. However, one can proxy any complex object by passing a symbolic reference, and do the same for any objects that are exchanged in method calls. The other side must recognize this reference and do a remote procedure call back into the caller's process for all methods. This model is called proxying. It is normally too expensive for real-life communications due to latency and bandwidth constraints. For Universal OSGi it might be viable because all the participants run on the same device, which allows all communication to be done with very fast techniques like shared memory.
These are intriguing and exciting ideas that could truly make OSGi technology more universally applicable. However, there are a lot of technical details to iron out and even when that has been done, there is a lot of spec work for different native languages. We need members that are willing to make this work. Interested?
Peter Kriens
Friday, September 14, 2007
Simplicity, Size, and Seniority
It must have been 1996 when I explained to Ralph Johnson that I liked Java over Smalltalk because it was so small and simple. He looked at me warily and said: "wait a couple of years, it will grow". Ralph is a wise man and after a few years when Java grew from a handful of packages to hundreds of packages I understood what he had meant.
I was reminded of this conversation when I read a blog that compared HK2 with OSGi. HK2 was said to be simpler because it has only 50 classes (!) and only a few pages of documentation. To be honest, I had not yet heard of HK2, so I investigated. It turns out to be the Glassfish module system developed by Sun. From the blog you get the false impression that OSGi is the Goliath spec of thousands of classes while HK2 is a new little David of more human size.
Interestingly, despite HK2's immaturity, it is more the other way around. Their module/life cycle layer comprises 30 classes, their service/component layer 20 classes; security is absent. The only mandatory package in OSGi is org.osgi.framework, which comprises only 25 classes and contains the security, module, and service layers. Realistically one should add Package Admin, which adds another 3 classes. That is still significantly smaller than the 50 classes of HK2, and they have only just started while we have been around for 9 years and had to address all the hard problems that appear when you build real-life applications for constrained environments. All the other classes in the OSGi specifications are optional: when you have a specific problem, we might have a specification that addresses it; when you do not care, just stick to the framework.
HK2 stands for Hundred Kilobyte Kernel. I guess this is similar to the one-dollar stores that over time become extremely creative or outright deceitful because such an (artificial) limit is not maintainable. I am not sure where they are today, but I am pretty sure that HK2 will require more than 100K within 2 years if they want to do it right; the real world tends to be quite messy.
Anyway, for a specification, the code size is a property of the implementations. OSGi R3 can be implemented in less than 100K; look at Concierge. However, R4 likely requires a larger implementation because it addresses so many more use cases. Apache Felix and Knopflerfish are around 250K and Eclipse Equinox is around 900K. Why is Equinox so big in comparison? Well, their non-functional requirements state that they must be able to handle 5000 bundles. To make such a gigantic system work, they need to resort to many optimizations and caching techniques, which add complexity. This difference is one of the best illustrations of why the OSGi specifications are important. Depending on your needs you can choose Felix, Knopflerfish, Equinox, or a commercial implementation. Each implementation has made trade-offs that you can match against your requirements. However, they all offer the same basic OSGi environment and most bundles run on all of them.
Comparing the documentation of HK2 and OSGi, it is the other way around: OSGi is bigger. The core OSGi framework specification with the optional system services is 288 pages. I agree it is a lot easier to read the 3 web pages and the Javadoc of HK2, but does it help you when there is a problem? It is interesting how the size of the documentation can be used against you. Yesterday, on the Python list, someone asked if there could be an OSGi for Python and the reply was a joke about how "impressed" he was with the 1000 pages of the specification. To set the record straight, all the OSGi specifications combined comprise about 1000 pages, but the core is less than 300 pages including the Javadoc. This is still a lot of text, but we really tried very hard to make it as readable and usable as possible. So what do you prefer: a handful of web pages or a detailed specification? I can only defer to another wise man: "Things should be as simple as possible, but not simpler".
Peter Kriens
Thursday, September 6, 2007
OSGi Summer School
Sometimes being an evangelist has its perks. Last week I was a guest at a very beautiful spot in Brittany for the École d'été OSGi. The only thing I have to do is talk about OSGi! Ok, the travel was a tad long for the distance: getting to the hotel where we are staying took a car, a plane, a bus, a train, and another bus to reach a path that floods twice a day. When the path is flooded, one has to use a simple ferry. However, the location definitely has charm.
To my surprise, there were 55 students registered; to my even bigger surprise, they had to turn away many people who registered late because space was limited. It really looks like the OSGi technology is starting to make inroads, especially now that the business people outnumber the academics. For a long time, the academics (or at least research people) outnumbered the business people. I love academics, but of course we need people from industry to move OSGi into the next phase of universal middleware.
This morning I gave the opening speech. Instead of trying to explain the technology, I gave a more emotional story about the way the work on OSGi got started, how we struggled, what failed, and what finally worked. I also spent a good amount of time lingering on why service oriented programming is so important. As I wrote last Monday, the key is that service oriented programming allows us to reason about systems at a much coarser granularity than objects. I tried hard to reach the French audience, but I bet that their English was better than my lousy French. Hope they got the message!
This afternoon everybody worked hard on a prepared course. It was really cool to see all those people struggling to sell and buy services on an Apache Felix framework. Well, it is time for dinner, so the only thing left to say is: wish you were here!
Peter Kriens
Tuesday, September 4, 2007
SOA & OSGi
Nine years ago, when we started working on the OSGi specifications, we used the word service to describe the object that acted as a conduit between bundles. Today, Service Oriented Architectures (SOA) are hot and every software vendor seems intent on confusing a muddy picture even further by bringing their products under this wide umbrella. The result is that many people file OSGi under webservices, as the most popular exponent of SOA, and then conveniently ignore any further documentation because they already know what web services are. As an example, a friend of mine consistently gets his papers rejected because many of the reviewers make this very false assumption.
A bit of history. In 1985, I was developing systems for the newspaper market and we had just started our magnum opus: a newspaper page makeup terminal (this was the time when a 25 MHz 286 was state of the art). This product was of strategic importance for our company and I did due diligence on a lot of languages to select the best one for this project. After discovering Smalltalk/V 286, I fell head over heels in love. I thought object oriented (OO) software was the best thing since sliced bread! I had never been comfortable with the common design methodology of that time, called Structured Design. Discovering OO felt like coming home after a long journey. After grasping the technology I gave my second worst advice ever (the worst was when I told a programmer friend in 1994 that building an e-commerce shopping cart application was a silly idea, sorry Matts). I told my manager that if we started to build our systems using objects we would be able to get fantastic reuse after a year or so, obviously under the condition that he allowed me to go to all these cool conferences like OOPSLA. I assured him that new applications could be built in days instead of years, just using objects ...
Well, as most of us know, it did not turn out that way. Yes, we got a lot of advantages from using OO technology, but reuse turned out to be a lot harder than we thought. Sadly, I think the key cause of the lack of reuse was the violation of the two sacred rules of Structured Design: low coupling, high cohesion. It was very much like a couple of girls on the beach trying desperately not to expose themselves to the boys in front of them while changing into their swimming gear, not realizing that there were also some boys sitting behind them. We were so busy hiding our data in instance variables and writing get and set methods that we never realized we were exposing the object structure to anybody using our classes. That is, if you do new Foo() from class Bar, you have just coupled the classes Foo and Bar for eternity. Trying to reuse Bar immediately drags in Foo. Now, if Foo and Bar had been cohesive (meaning they really are parts of the same function) this would not be such a problem. However, because we were so busy writing get/set methods, we mostly ended up with systems whose transitive dependencies basically included all the classes in your company's repository. If you want to see an example, write a "Hello world" program with an empty local Maven repository. Not only did this coupling make reuse very hard, it also made changes very complicated because many changes were far reaching.
At the end of the previous millennium our industry learned many hacks to reduce this structural coupling; the most (in)famous is the Factory pattern. This pattern allows you to manage the coupling outside the original program by moving it to a runtime decision. However, it is a hack: OO languages have no built-in answer to this problem, just look at the large number of attempts in Java to solve this coupling problem.
Also, if you look at one of the great insights of the last decade, Inversion of Control (IoC), you see that it is really about decoupling. IoC (or the Hollywood principle of "do not call us, we will call you") takes an object and provides it with its dependencies instead of letting the object create its dependencies itself. This simple idea allows for POJOs, objects that are minimally coupled.
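As a tiny illustration of the difference, compare a class that constructs its own dependency with an IoC-style POJO that is handed an implementation from the outside. The names follow the Foo/Bar example above; FooImpl is made up.

interface Foo { void work(); }

class FooImpl implements Foo {
    public void work() { /* ... */ }
}

// Tightly coupled: Bar decides, forever, that it needs exactly FooImpl.
class Bar {
    private final Foo foo = new FooImpl();
}

// Inversion of Control: BetterBar is given its dependency; any Foo will do,
// chosen by whoever wires the system together.
class BetterBar {
    private final Foo foo;

    BetterBar(Foo foo) { // constructor injection
        this.foo = foo;
    }
}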
Services are trying to solve the same problem: minimize complexity by decoupling. The key insight that led to the service model is that OO has an important role in modularization in the small, but we need something of a bigger granularity in the large: a modularity that confines coupling to well defined conduits between coarse components. This was exactly the problem we tried to solve from that 1998 meeting in Research Triangle Park in Raleigh where the technical OSGi work got started. Our key assignment was to let software from many different service providers collaborate in a single residential gateway. Trying to standardize large class libraries with highly coupled frameworks would have taken forever and it would have done a terrible job. By focusing on coarse components (the bundles) that collaborate through objects named by their interface and found in a registry (the services), we created a model with minimal, well defined, and documented coupling between bundles.
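In OSGi terms, the only thing the two sides need to share is the interface, typically exported in its own package; the provider registers an implementation and the consumer finds it in the registry, for instance with a ServiceTracker. A minimal sketch, with a made-up Greeter interface:

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// the shared contract, exported in its own package
interface Greeter { String greet(String who); }

// provider bundle: registers an implementation, never sees its consumers
class ProviderActivator implements BundleActivator {
    public void start(BundleContext context) {
        context.registerService(Greeter.class.getName(), new Greeter() {
            public String greet(String who) { return "Hello " + who; }
        }, null);
    }
    public void stop(BundleContext context) { } // the framework unregisters for us
}

// consumer bundle: only knows the interface, not the implementation
class ConsumerActivator implements BundleActivator {
    private ServiceTracker tracker;

    public void start(BundleContext context) {
        tracker = new ServiceTracker(context, Greeter.class.getName(), null);
        tracker.open();
        Greeter greeter = (Greeter) tracker.getService();
        if (greeter != null)
            System.out.println(greeter.greet("OSGi"));
    }

    public void stop(BundleContext context) {
        tracker.close();
    }
}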
The key advantage of SOA is therefore the reduced coupling between the blocks of a system. The model became widely known through Webservices because Microsoft and IBM decided to put a lot of marketing dollars in that direction, making SOA almost synonymous with XML, SOAP, HTTP, some other acronyms, and the associated distributed computing problems. It is not. A Service Oriented Architecture reduces the complexity caused by unmanaged structural coupling by providing well defined conduits between components. Webservices achieve this with a myriad of acronyms; OSGi technology achieves it with an extremely low overhead in-VM service publish/find/bind model.
So the key question is whether the confusion between Webservices and OSGi is doing us harm. I think it does; many people fail to look further because they assume they already understand. For example, I am flabbergasted that leading software magazines like Communications of the ACM and IEEE Software have never had an in-depth article about OSGi technology, because I strongly believe that the ramifications of this technology for our industry are profound.
Peter Kriens
Thursday, August 23, 2007
Diversity is Good
A couple of weeks ago there was a discussion on the OSGi developer list about a support library that is intended to make the life of an OSGi programmer easier. This library was developed on, well, let's say, framework E. A developer in Asia tried the library on framework K and found that it did not work on that framework. After a couple of messages back and forth, the library developer basically said that it was probably a bug in framework K and that the best solution was to let the framework K developers figure out where the bug was.
I made a lot of noise on the list because I think we should develop against the OSGi specifications, not against a particular framework implementation. The fact that it runs on framework E does not mean by definition that it is good, even though I know that the developers of framework E are really good.
Now I have been in this industry for 30 years and I still have not learned to sit still and wait until everybody has forgotten the issue. This stupidity obviously resulted in the problem ending up in my lap. Today I decided to run the faulty code on frameworks K, E, and F. This is a lot easier to write down than to actually do. Each framework has its own peculiar way of starting, and it took me quite some time to get the whole setup running. To be able to switch quickly, I used the aQute File Install bundle. This is a simple bundle that watches a directory and installs/uninstalls bundles depending on their presence in that folder. As a bonus, it can also manage configuration data in a similar vein. I then created Eclipse launchers for each of the frameworks.
After this hard labor, finding the bug was not too hard. It turned out there was a misinterpretation of the spec that should be easy to fix. However, it was almost a shame to see how much preparation I had to do to be able to run bundles on multiple frameworks. I thought the setup I had created was quite fancy; it made it easy to test all kinds of bundles this way.
I therefore decided to rely on the altruism of Google and use some of their abundant disk space for a Google project. Wonder if all those Google investors know how their money is spent supporting open source? Anyway, I created the multi-osgi project. Within Eclipse you can check it out with SVN (you need Subclipse, Subversive, or some other SVN plugin). The URL to the project is http://code.google.com/p/multi-osgi/source.
The project works quite simply. In Eclipse, you will find an Equinox, Knopflerfish, and Felix launch in the Run/Debug dialog. From a shell, you can just switch to the Felix, Knopflerfish, or Eclipse directory and type run (in DOS) or ./run.sh (in Unix).
Once you have one or more frameworks running, you can easily install bundles by just dropping them in the load directory. A good way to find bundles is to go to the OBR site. The input box allows you to type search criteria. For example, say you want to install a shell: just search for "Shell". This gives two bundles from the Apache Felix project. Click on the download icon and save the file in the multi-osgi/load directory. You should then be able to use this shell from the console. Equinox already starts up with a console, so you will get some interaction. Try it!
I will keep maintaining this project for the foreseeable future. However, I would welcome anybody who is willing to add other frameworks to the mix, or write scripts for special environments. Obviously, the committers of the different frameworks and commercial vendors are more than welcome.
Peter Kriens
Monday, August 20, 2007
Help Wanted
Last week I was in Boston at IONA's Waltham offices for an OSGi Enterprise Expert Group meeting. This was a very interesting meeting and well attended by OSGi members. It was pretty intensive because we had so many different subjects to cover. SCA, how to handle WARs and EARs, distribution, database access, making security usable, a new OSGi Component model, hyper-packages (super packages that cross bundles), JNDI, and much more. Looking at the actors around the table and the technical progress it seems clear that this work will have major impact on the enterprise software industry. All the key players were there: Oracle, BEA, IBM, IONA, Siemens, RedHat, and others.
During this meeting it suddenly hit me that we only had vendors around the table. I understand that vendors often take the lead in these efforts because it will enable them to sell differentiated products. However, lesson one in software development is that you'd like to have the customers included so that you can focus on the real needs and not the perceived needs. Fortune 500 corporations spend hundreds of millions each year on software development; where are they in these standardization efforts? In the end, they are the actors that benefit most when the standards are solid. Look at how easily J2EE turns into a vendor lock-in model because the standard leaves crucial aspects undefined or fuzzy. Good, solid specifications keep the vendor switching cost low, forcing vendors to compete on quality and performance instead of maximum lock-in.
So how do we get more people from the trenches involved?
Peter Kriens
Tuesday, July 31, 2007
OSGi Bundle Repository Indexer Open Sourced
The OSGi Bundle Repository (OBR) is one of the most popular pages on the OSGi developer web site (www2.osgi.org); it receives almost as many hits as the main page. Surprising, because OBR is hardly advertised.
We developed OBR last year to provide a single place where OSGi developers can find bundles for their projects. One of the key goals of the OSGi Alliance is to provide a universal middleware platform, so reuse is crucial, and to reuse, one has to be able to find appropriate components. OBR was developed to address this need.
How does it work technically? Well, the current OBR was derived from Richard Hall's Oscar Bundle Repository, and it is based on the same idea of a federated repository. A repository is a web server that allows access to the meta data and location of bundles. This information can be used to install bundles on a framework directly. For example, Richard developed a shell extension for Oscar/Felix that downloads a bundle from OBR, including its dependencies. Last year, I developed a web interface that allows for browsing the repository interactively.
At the bottom of OBR you find the repository.xml file. This file contains the meta data of the bundles and can link to other repositories as well. OBR is therefore a federated repository. This was an important aspect because we needed to make it as easy as possible for people to provide their own bundles to the repository. In the OBR model, any company or person can maintain its own repository and link it to others. This also enables repositories to be partly private on an Intranet while it still provides access to external repositories. In this model, the point where OBR is accessed can provide a trust model.
The information you find in repository.xml is equivalent, but not identical, to the information in the bundle. The reason it is not identical is that it has become clearer to us over the last few years that the OSGi technology needs a more generic dependency model. Over time, we have added more and more dependency types like Import-Package, Require-Bundle, Bundle-RequiredExecutionEnvironment, Bundle-NativeCode, etc. Each of these dependencies has its own header and its own intricacies. Though this has some advantages, I am afraid that over time it will become too complex. For this reason, we used a generic model for OBR consisting of Capabilities and Requirements. This model resembles JSR 124, but we made it more powerful by making requirements OSGi filters instead of simple properties. Filter based requirements make it possible to create complex expressions and check magnitudes. For example, a requirement can be that there is a screen with a size of more than 100x100 pixels; JSR 124 can only handle exact matches.
An OSGi environment, at a certain moment in time, has a set of capabilities. Each bundle (or piece of hardware) requires a set of capabilities but can also provide a set of new capabilities once it is installed (and its requirements are met). This is a wonderfully elegant model. For example, if you have a mobile phone and insert a headset, the phone is extended with a headset capability. This enables the installation of a bundle that requires a headset, which in turn provides new capabilities. This model worked so well that Richard Hall changed his framework resolver in Apache Felix (the program that wires bundles before they can be started) to work in this generic mode. Though we found a few nasty cases related to constraints that are currently not modeled, the idea performed very well.
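The requirements are expressed in the standard OSGi filter syntax, which handles magnitudes as well as exact matches. Here is a small sketch of the screen-size example using the Filter class from the framework API; the width and height property names are invented for the example.

import java.util.Hashtable;

import org.osgi.framework.Filter;
import org.osgi.framework.FrameworkUtil;
import org.osgi.framework.InvalidSyntaxException;

public class RequirementExample {
    public static void main(String[] args) throws InvalidSyntaxException {
        // the requirement: a screen of at least 100x100 pixels
        Filter requirement = FrameworkUtil.createFilter("(&(width>=100)(height>=100))");

        // the capability on offer, expressed as a set of properties
        Hashtable capability = new Hashtable();
        capability.put("width", new Integer(240));
        capability.put("height", new Integer(320));

        System.out.println(requirement.match(capability)); // prints true
    }
}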
This model is described in a document available from our website. Please note that this document does not mean that the OSGi Alliance will develop this concept further, we are testing the waters only!
Maintaining repository.xml by hand is not an option; Eclipse alone provides over 1100 bundles. Fortunately, the OSGi specifications define lots of manifest headers that provide meta data. We therefore developed a program that reads a set of bundles and creates the repository.xml. Last month we got permission from the OSGi Alliance Board to open source this program, which makes it possible for organizations to run the bundle indexer themselves. The sources are hosted on a Subversion server at http://www2.osgi.org/svn/public/trunk/org.osgi.impl.bundle.bindex/.
So how can we make OBR a success? Well, only if you help us do we have a chance. In my work I see so many companies that develop lots of bundles that are not strategic for their company. If we want to make the idea of universal middleware work, we must share these non-strategic bundles with each other so that we do not end up with an umpteenth version of a serial port bundle.
If you have bundles that you want to share, then set up your own repository. It is quite easy. Absolutely crucial is the meta data in the bundles. It is sad, but there are way too many bundles (and JARs) out there that have no name, no description, no license information, no nothing. It takes 5 minutes to add this information to a manifest and it will make bundles more reusable, so you have no excuse to skip this! Once you have documented bundles, you just create a directory on your company's web server and save the bundles there. Then, once a day, you run bindex on this directory; bindex creates the repository.xml. The only thing left to do is send the URL to me, and I will link it into the main OSGi Bundle Repository. So, do not ask what bundles you can get from OBR, ask what you can do for OBR!
Looking forward to seeing all your repositories,
Peter Kriens