Being in the spotlight for OSGi has many good sides, but it always hurts when you get heavily criticized for a presentation you gave about OSGi & Java Modularity. However, you usually learn more from criticism than from praise. Let's see if we can digest the criticism in William's blog and learn something from it.
From the first part I can see the criticism that OSGi is old technology and that we keep (re)inventing component models. I guess there is a fine line between old and mature, but it seems a tad unfair to hold age, without any further qualification, against us. On the contrary, for stability, robustness, and usability, a proven technology seems hard to beat. Then again, a promise is always more alluring than hard reality, I guess.
With the criticism that OSGi has many component models I must disagree. OSGi has had a single component model since day one, and nothing has changed since then. True, on top of the OSGi component model we have different programming models, because the needs in that area are quite diverse. However, all these programming models work seamlessly together through the one and only OSGi bundle and service model.
So what's left? In the end, William Louth makes a valid and interesting point: I said in my presentation that going modular is not without pain, and that class loaders are the major culprits in Java. However, I do not think this is a problem with OSGi; it is a fundamental problem when you modularize applications. Modules cannot see every class in the application; this global view is the antithesis of modularity. Yet almost all the class loader tricks I see causing problems require exactly this global view of the class path. Without addressing this problem, I do not think we can reap the fruits of going modular. How will Jigsaw be able to solve these problems without destroying the benefits of modularity, merely moving us from class path hell into module path hell?
The problem we're facing is that class loaders have become a very successful way to extend applications, something they were never designed for. In OSGi, we started with an extension model that makes the coupling between the modules very explicit: services. The class loading architecture was designed to support these services. The OSGi service model allows modules (bundles) to be very self-contained and, ideally, to export only those packages that are used to define the service.
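To make that coupling concrete, here is a toy registry in plain Java that mimics the shape of the pattern. This is emphatically not the OSGi API (the real one lives in org.osgi.framework, e.g. BundleContext.registerService); the Greeter interface and all names are made up purely for illustration. The point is only the shape: providers register an implementation under a shared interface, consumers look it up by that interface, and neither ever touches the other's implementation class.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a service registry; illustration only, not the OSGi API.
public class ToyServiceRegistry {
    private final Map<Class<?>, Object> services = new HashMap<>();

    // Provider side: publish an implementation under a shared interface.
    public <T> void register(Class<T> api, T impl) {
        services.put(api, impl);
    }

    // Consumer side: look up by interface; the implementation class
    // never needs to be visible to the consumer.
    public <T> T get(Class<T> api) {
        return api.cast(services.get(api));
    }

    // Hypothetical service interface; in OSGi this would live in an
    // exported package that both bundles import.
    interface Greeter {
        String greet(String name);
    }

    public static void main(String[] args) {
        ToyServiceRegistry registry = new ToyServiceRegistry();
        registry.register(Greeter.class, name -> "Hello, " + name);
        System.out.println(registry.get(Greeter.class).greet("OSGi")); // Hello, OSGi
    }
}
```

In real OSGi the registry additionally tracks the life cycle of the providing bundle, so a consumer can react when the service comes and goes.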
So the big problem OSGi faces is how to handle legacy code that heavily depends on class loaders for what OSGi does with services.
I spent considerable time thinking about this. Together with others, we created an RFP investigating all the problems we could lay our hands on. My personal conclusion is that an application that requires a global, unrestricted view of the class path cannot be modular. Yes, there are hacks that allow you to find the class your class loader needs, but these hacks revert to class searching, which implies that valuable properties like version constraints and class space constraints are no longer taken into account. My fear is that providing easy "solutions" to these practices will destroy the modular value that OSGi provides. I have found that these visibility problems are very fundamental. Time will tell if Sun is able to solve this problem with their "... much improved replacement."
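As an illustration of the class searching meant here, consider this sketch (the looked-up class name is hypothetical). Legacy code that discovers an implementation by name assumes one flat, global class path; in a modular runtime the class may well exist in another module yet be invisible here, and any version or class space constraints are bypassed entirely.

```java
public class GlobalViewHack {
    // The classic legacy pattern: look a class up by name at runtime,
    // assuming every class in the application is visible from here.
    public static boolean visible(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            // In a modular world this failure says nothing about whether
            // the class exists, only that this module cannot see it.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(visible("java.util.ArrayList"));    // true
        System.out.println(visible("com.example.FastParser")); // false: hypothetical class
    }
}
```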
Peter Kriens
Monday, July 6, 2009
Tuesday, June 23, 2009
Hi, We're OSGi. We mean no harm
Dear James,
I read your interesting interview with eWeek. There were many parts where we agreed, but I was also slightly puzzled by your observations about OSGi. "OSGi is this thing that kind of came from a different universe that's being used for modularity." I do agree that many people see the quality of the OSGi specs as out of this world, but that seems a bit exaggerated. I do agree we did not start out in the enterprise application space; we started in the embedded world, where space and performance constraints are pervasive, but after all I thought we both lived in the Java universe. However, this seems at odds with your remarks like "So we needed something that was a lot lighter weight." and "... OSGi's just too much fat."
Hmm, the OSGi core API is 27 classes. That is all: Security, Module layer, Life cycle layer, and Service layer. Exceptions, permissions, and interfaces. And one of them is even deprecated! Just the module layer in Jigsaw seems to have more classes, and they have only just started ... Or did you mean the implementations? With Concierge at around 80k for an R3 implementation and Felix at 350k, it seems a stretch to call us fat. OSGi is even deployed in smart cards.
It's true, our documentation is a bit fat. The core is described in 300 pages. Though we have lots of pictures! And we have virtually no errata, despite the fact that OSGi has been used in tens of thousands of applications over the last decade.
I'd like to tell you a little anecdote. In 1997 I tried to convince Ralph Johnson about Java. My key argument was that Java was so nicely small and therefore easy to understand. Only 11 packages! Ralph, a famous Smalltalker, looked at me wearily and said: "Just wait." Oh boy, was he right. That lesson still drives me every day to keep OSGi lean and mean, annoying many people along the way, but I guess that is the price one needs to pay.
If you think project Jigsaw will be leaner than OSGi, well, modularity is a problem where size does matter. You cannot demonstrate modularity with a Hello World because modularity solves the problem of large evolving code bases. [deleted]
So, James, I think there are lots of details where we did not get it perfect, but OSGi's weight is not one of them. The misconceptions about OSGi at Sun stand to cost our industry a lot of money and pain in the coming years. Project Jigsaw's simplicity is a fallacy, hidden by the fact that it does not address the hard issues that OSGi has now been working on for over a decade. All major application servers today are based on OSGi, not because it was fun or hype, but because they had no choice. These are applications in themselves, at a scale where modularity is not an option but a necessity. The success of open source will move many Enterprise applications into this same realm.
If you believe that a simplistic solution can address the needed scale, well, then indeed we do live in another universe. However, please remember that we're here as friends, to help, and mean no harm.
Yours sincerely,
Peter Kriens
Friday, June 12, 2009
Classpath hell just froze over?
Other blogs seem to drive this blog nowadays. I just read Classpath hell just froze over. This raises the question of where OSGi stands in relation to JSR 294. I am not speaking from an official OSGi point of view, but I can of course give my personal opinion. I therefore posted a comment on the original blog, but it turned out to become its own blog ...
JSR 294 contains 2 parts:
- the module keyword
- module-info.java, with dependencies
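For illustration only, a module-info.java in the spirit of the drafts circulating at the time might look roughly like this; the exact syntax was still in flux, and the module names and versions here are made up:

```java
// module-info.java (hypothetical draft syntax, subject to change)
module com.example.app @ 1.0 {
    requires com.example.lib @ 2.1;
}
```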
The module-info.java file? I have severe doubts, and I have expressed them in the expert group. I do not think we fully understand the interactions with IDEs like Eclipse, IntelliJ, NetBeans, JDeveloper, etc. Currently, the command line with javac is completely driving the design, while imho the IDE will be the common case for anyone needing modularity. Nobody needs modularity for a Hello World program, and letting this use case drive the design seems plain wrong.
One of the problems I see with this approach is that a lot of dependencies are already specified in the sources (import package, anyone?) and that a smart IDE can help find the proper modules from which to import those packages. However, selecting packages for import needs a wide range of modules, because when you're developing you want completion support in the IDE. Once you have compiled your code, though, the compiler knows exactly which module was selected out of this wide scope it was compiled against.
And let's not forget the build tools: they will have to start interpreting the module-info file and linking to the appropriate module system to find their class path. Today, a build tool tells the compiler its class path; in the future it would first have to compile or interpret the Java file. This alone will probably break 90% of the Ant scripts, because the class path is used in other places than compiling. Maven, too, will have to start interacting with this.
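Today's arrangement is straightforward: the build script hands the compiler an explicit class path, as in this Ant fragment (the paths are made up), and reuses the same path for tests, packaging, and so on; under the module-info approach, the tool would first have to interpret the module declaration just to discover that path:

```xml
<javac srcdir="src" destdir="build/classes">
  <classpath>
    <pathelement location="lib/org.osgi.core.jar"/>
    <pathelement location="lib/some-library.jar"/>
  </classpath>
</javac>
```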
This is not all. The current approach for module-info.java creates a meta-module system: Jigsaw, OSGi, and any other module system can put their metadata in the file. This is the famous design-by-committee problem: let's each have it our own way and make the other optional. Good for vendors, bad for users. This is something I am always fighting inside the OSGi. Having a meta-module system will cause severe fragmentation on the module layer. Some people hope for runtime interaction between module systems. Well, this will be very hard, if not impossible. Java module systems are just too complex to map one onto another without causing some severe pain.
Then about Jigsaw. It is very focused on breaking up the JDK into modules. I like the native installers, but I dislike the fact that they put this native packaging stuff in my face. Java should abstract the platform so I can write code for any platform and distribute my code in a single form. It's Java's original promise to deploy and manage this code on all the variety of VMs/OSs/packagers out there. There are hundreds of VMs and even more VM-package manager combinations. There is no way anybody can support all of these combinations. Write once, deploy everywhere, and then run anywhere? This all goes against the original promise and architecture of Java.
Jigsaw is too simple for the kind of applications in the enterprise space. Most of the class path problems are still in the module path: split packages, no real hiding of classes (they will be protected by the module keyword only, not, as in OSGi, invisible to other bundles), no multiple versions of the same JAR to solve hard dependency problems in large applications. Looking at Mark Reinhold's slides, I think he agrees: Enterprise applications should be built on OSGi. However, small applications do not have to bother with modularity, so why allow Jigsaw to be used for applications at all? Unfortunately, if Jigsaw becomes part of the JDK delivery, people will start using it, causing immediate, irreversible fragmentation.
Ok, to summarize. I, and the other OSGi people I know, strongly support the module keyword. It would allow us to put bundles into the accessibility check of the Java VM. We're not there yet, but it looks good.
From a module system point of view, I think we're moving in the direction of a meta-module system, where one of the two users of this meta-module system has an awful lot of homework left to do. After ten years of working on OSGi I would not yet dare to say that I completely understand Java module systems; I therefore shudder at the thought of a meta-module system ...
Hope this helps to enlighten some issues.
Peter Kriens
Wednesday, June 10, 2009
OSGi Case Studies == Pain?
During JavaOne, Atlassian presented a case study of OSGi and offered Atlassian Plugins. The presentation was very dualistic. On one side it was very positive towards OSGi, but it was also (overly) critical, mostly because they ran into many problems with legacy code. This caused reactions, like the blog OSGi Case Studies == Pain.
As the OSGi evangelist I would probably put it a bit more sweetly, but in principle the blog is right: don't use OSGi unless the benefit offsets the cost. That should of course be true for everything you do. If you have a legacy application of a couple of thousand lines of code, don't bother with OSGi; in its current incarnation it will probably just be in your way.
The reason for the pain is not some unnecessary complexity in OSGi; on the contrary, with fewer than 30 well documented classes, OSGi is actually quite simple in contrast to almost any other API I know. However, enforced modularity is painful because it confronts you with all the entanglements in your code and, even worse, the hacks and shortcuts in the libraries you use. Strong modularity puts your code in a straitjacket, and often that is no fun when you have legacy code that enjoys the anarchy of the Java class path.
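OSGi forces those entanglements into the open in the bundle manifest: every package a bundle uses from outside must be declared, with version constraints, and only the exported packages are visible to others. A sketch, with made-up bundle and package names:

```
Bundle-SymbolicName: com.example.parser
Bundle-Version: 1.2.0
Export-Package: com.example.parser.api;version="1.2.0"
Import-Package: org.osgi.framework;version="[1.4,2)",
 com.example.logging;version="[3.0,4)"
```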
Is that pain worth it? Well, both industry and science seem to have a clear consensus that modularity provides lots of benefits for large applications. The major Java app servers are not based on OSGi because they think it is fancy (notice that most do not provide the API to application developers); the sheer size of these systems simply requires strong modularity to survive their evolution.
Eli Whitney was one of the key drivers of the industrial revolution because he had the idea of interchangeable parts for guns. However, it took the effort of many others and long years before this simple idea could finally be put into practice. During that transition, the skilled gun-making craftsmen of those days poked fun at the whole idea and told each other jokes about how interchangeable parts weren't. I think OSGi is that simple idea of interchangeable parts for software. It is a very, very hard problem, but I am convinced it is worth pursuing. Once you have experienced how you can mix and match bundles to construct a large part of an application, it is impossible to ever go back.
OSGi is by far the furthest along this road, but I am the first to admit we're not there yet. It will require more time and effort from many, including you, to modularize our codebases and develop the tools to simplify that process. So yes, it can be a pain to move legacy code towards strong modularity, but I am sure that when your applications are big enough, the gain will more than compensate for the pain. As many can already testify, including Atlassian.
Peter Kriens
Monday, May 18, 2009
Processes?
The last few weeks have been quite hectic, closing the Core specifications and working hard on the Compendium. The Compendium contains two major new specifications: Remote Services and Blueprint. The process of creating these specifications seems to have surprised many, and angered some. There is even a blog that makes us look like fools, just making up a process as we go along. Maybe that is understandable; we try to run the specification process on a trust and consensus basis, and that goes better in an informal atmosphere. People that trust each other simply work more effectively and are more productive.
However, we do have a formal process, documented in RFC 75, and we try to follow it meticulously. This process document has two purposes. First, it defines for newcomers the model of how the work will be executed. Second, it provides clear rules for the worst case, when consensus cannot be reached. Unfortunately, it is of course one of the hundreds of things we have to do, and most people assumed that the OSGi process was like that of most other standards organizations. They have never read the process document, really understood the early presentations about the process, nor read one of Eric Newcomer's blogs.
The (seemingly) unique aspect of the OSGi process is that it recognizes the role of a Specification Document Editor (SDE). The SDE is paid by the OSGi Alliance; it is not an employee of a member company. So far, I have had the honor to play this role for releases 2, 3, and 4.
This makes the RFCs exactly what their abbreviation says they are: Requests For Comments; they are not the final specification. RFCs are input to the specification writing process, a document to reach consensus about a technical design between disparate parties. They can be compared to a design document. And don't we all know very well how the final product changes during development? The vote for an RFC indicates a technical agreement among the EG members, not a vote for the final specification. And frankly, I am a bit disappointed that there is confusion, because the OSGi specification documents look, in my eyes, very different from the RFCs ...
So let me quote the applicable chapter that describes the current phase (TSC is the technical steering committee that consists of the EG chairs and the Technical Director (me)):
5.6.1 Input
The input to this process shall be the RFC(s) and RFP(s) and any supporting documentation from the EG. This shall be provided to the SDE by the TSC. It is possible that a single Specification may incorporate more than a single RFC and RFP. This integration shall be at the direction of the TSC.
5.6.2 Actions
The SDE shall create the appropriate documents based on the content of one or more RFCs. Under ideal circumstances the formulation of the Specification would be a mechanical process but it is expected that the SDE will uncover inconsistencies or other issues in the RFC(s) which require clarification by the appropriate EG. In this case the SDE shall liaise with the appropriate EGs directly to resolve the issue. The SDE shall have at least one and preferably several review cycles with the appropriate EGs to ensure accuracy prior to completion of the Specification.
5.6.3 Output
When a Specification has been completed it shall be electronically signed using the OSGi Alliance certificate and then voted upon by the EG as stated in section 5.5.7.2. If the document is rejected then it is returned to the SDE together with a written explanation as to the problems with it. The SDE, in conjunction with the EG, shall then modify the document as needed to address the EGs concerns before the document re-enters the formal process.
I think this phase of the process is in large part the reason that our specs have so few errata. The process of taking documents from an EG and explaining their contents in a consistent way tends to uncover a lot of issues: inconsistencies with other parts of the OSGi specs, hidden compromises that do not make sense when looking at things as a whole, hidden assumptions and knowledge, overlaps with other parts of the spec, etc. However good the RFC editors are, it is hard to create a good technical design and at the same time understand, and take into consideration, the overall context in which it will be placed. RFC editing is a secondary responsibility, while the SDE does this as a primary responsibility. Then again, the SDE has no power whatsoever; any change, as well as the final specification, must be approved by the EG.
I do not think any RFC has come through this Specification Writing phase unscathed. However, nobody has ever denied that the resulting specification was better than the input RFC(s) thanks to this phase; well, so far at least.
The second, related frustration of last week was a bug report in Eclipse complaining about my reading schedule, because there are API changes in an RFC that they based their product on. For about two years now, we have addressed public concerns that we were too closed by publishing interim drafts of the RFCs as well as the specifications. Obviously, we made it crystal clear that there are no guarantees about the final specification. Worse, we have had virtually no feedback on the RFCs so far, and now we are being banged on the head for fixing issues that come up late during specification writing. It ain't over 'til the fat lady sings ...
That said, I am not denying we have a problem. The core went out on the planned date, but the compendium is 4 weeks delayed, and it is not completely clear that Remote Services and Blueprint will be finished in that time frame. Part of the problem is that the OSGi Alliance has only one SDE, and that person (me) is only paid part-time to work on it. The EGs have been very active lately, and this has significantly increased the workload. We need to fix this somehow. However, I do think we should keep the SDE role for the sake of the final quality.
However, in my opinion, a specification is not cost free for a community; a bad specification can actually be quite expensive. I will not use any names here. I pride myself on working for an organization that actually wants to publish high quality specifications and is willing to pay the (sometimes) steep price. Tim Diekmann's (co-chair of the Enterprise EG) mail signature is:
Peter Kriens
P.S. Processes rate slightly above licenses on my list of favorite subjects ... Back to RFC 119 and RFC 124.
However, we do have a formal process documented in RFC 75 and we try to follow it meticulously. This process document has two purposes. First it defines for newcomers the model of how the work will be executed. Second, it is provides clear rules for the worst case when consensus cannot be reached. Unfortunately, it is of course one of the hundreds of things we have to do and most people assumed that the OSGi process was like most other standard organizations. They never have read the process document, really understood the early presentations about the process, nor read one of Eric Newcomer's blogs.
The (seemingly) unique aspect of the OSGi is that it recognizes the role of a Specification Document Editor (SDE). The SDE is paid by the OSGi; it is not an employee of a member company. So far, I had the honor to play this role for the releases 2, 3, and 4.
This makes the RFCs are what their abbreviation say they are: Request For Comments, they are not the final specification. RFCs are input of the specification writing process, a document to reach consensus about a technical design between disparate parties. They can be compared to a design document. And don't we all know very well how the final product changes during development? The vote for the RFC indicates a technical agreement among the EG members, not a vote for the final specification. And frankly, I am a bit disappointed that there is a confusion because the OSGi specification documents look in my eyes very different from the RFCs ...
So let me quote the applicable chapter that describes the current phase (TSC is the technical steering committee that consists of the EG chairs and the Technical Director (me)):
5.6.1 Input
The input to this process shall be the RFC(s) and RFP(s) and any supporting documentation from the EG. This shall be provided to the SDE by the TSC. It is possible that a single Specification may incorporate more than a single RFC and RFP. This integration shall be at the direction of the TSC.
5.6.2 Actions
The SDE shall create the appropriate documents based on the content of one or more RFCs. Under ideal circumstances the formulation of the Specification would be a mechanical process but it is expected that the SDE will uncover inconsistencies or other issues in the RFC(s) which require clarification by the appropriate EG. In this case the SDE shall liaise with the appropriate EGs directly to resolve the issue. The SDE shall have at least one and preferably several review cycles with the appropriate EGs to ensure accuracy prior to completion of the Specification.
5.6.3 Output
When a Specification has been completed it shall be electronically signed using the OSGi Alliance certificate and then voted upon by the EG as stated in section 5.5.7.2. If the document is rejected then it is returned to the SDE together with a written explanation as to the problems with it. The SDE, in conjunction with the EG, shall then modify the document as needed to address the EGs concerns before the document re-enters the formal process.
I think this phase of the process is, for a large part, the reason that our specs have so few errata. The process of taking documents from an EG and explaining their contents in a consistent manner tends to uncover a lot of issues: inconsistencies with other parts of the OSGi specs, hidden compromises that do not make sense when looking at things as a whole, hidden assumptions and knowledge, overlaps with other parts of the spec, etc. However good the RFC editors are, it is hard to create a good technical design and, at the same time, understand and take into consideration the overall context in which it will be placed. RFC editing is a secondary responsibility, while the SDE does this work as a primary responsibility. Then again, the SDE has no power whatsoever: any change, as well as the final specification, must be approved by the EG.
I do not think any RFC has come through this Specification Writing phase unscathed. However, nobody has ever denied that the resulting specification was better than the input RFC(s) thanks to this phase; well, so far at least.
The second, related frustration of last week was a bug report in Eclipse complaining about my release schedule because there are API changes in an RFC that they based their product on. About two years ago, we addressed public concerns that we were too closed by publishing interim drafts of the RFCs as well as the specifications. Obviously, we made it crystal clear that there were no guarantees about the final specification. Worse, we had virtually no feedback on the RFCs so far, and yet we are now being banged on the head for fixing issues that come up late during specification writing. It ain't over 'til the fat lady sings ...
That said, I am not denying that we have a problem. The core went out on the planned date, but the compendium is 4 weeks delayed, and it is not completely clear whether Remote Services and Blueprint will be finished in that time frame. Part of the problem is that the OSGi Alliance has only one SDE, and that person (me) is only paid part-time to work on it. The EGs have been very active lately, and this has significantly increased the workload. We need to fix this somehow. However, I do think we should keep the SDE role for the sake of the final quality.
However, in my opinion, a specification is not cost free for a community; a bad specification can actually be quite expensive. I will not name any names here. I pride myself on working for an organization that actually wants to publish high-quality specifications and is willing to pay the (sometimes) steep price. Tim Diekmann's (co-chair of the Enterprise EG) mail signature is:
"There is never enough time to do it right, but there is always enough time to do it over" -- Murphy's LawWell, so far, we actually always tried to do it right because specs cannot be done over. Even if it sometimes really hurts.
Peter Kriens
P.S. Processes rate slightly above licenses on my list of favorite subjects ... Back to RFC 119 and RFC 124.