Tuesday, May 4, 2010

Duct Tape


I discovered software when a high school friend got a programmable TI-57 calculator. This calculator made me fall in love with developing software; a love affair that has not ended yet. Falling in love made me want to read everything about her. In the pre-Google era books were used for that purpose and I read all the books I could get my hands on. At that time software books were mostly about structured programming. The basic idea was that you take a function, decompose it into smaller functions, and so on. The mantra for this decomposition was:
  • low coupling
  • high cohesion
The idea of coupling was not hard to understand but I had a bit of a problem with cohesion; it was more fuzzy. And I never liked the design magic that structured programming required. Therefore, when I discovered my mistress' objects around '83 I fell in love all over again. In the period between the calculator and the discovery of object oriented programming I had studied, started a company, and developed a number of editorial systems for newspapers. Until that time mostly in assembly, crafting a lot of code for networking, databases, the operating system, and even a GUI (eat your heart out!). As any new generation we worked hard to ignore the lessons from our predecessors and objects seemed to be the perfect vehicle to put the old geezers in their place. Move out of the way Tom DeMarco, Edward Yourdon, and Michael Jackson, we're coming! So the cohesion and coupling mantras went out of the window and we showed how data encapsulation with inheritance brought you all these goodies.

Most of you know the results. Though objects work very well on the medium scale, once systems grow and evolve they tend to resemble spaghetti and become hard to maintain. Something got lost along the way. Interestingly, spaghetti code was exactly the problem that drove structured programming.

That problems we solve on one level reappear almost identically on higher levels is the hallmark of fractal problems. The size of systems has increased exponentially since I started and spaghetti problems that we thought we had solved reappear in a different incarnation. So how do we address the spaghetti problem at our current scale? Could the old structured programming mantra help us when we apply it to objects? Let's take a look.

Modules were defined for functions, not for classes, but when we talk about a module today we talk about a set of classes with restricted accessibility. Now, classes have a tendency to act as duct tape: very useful and darned flexible. As the joke goes: if you can't fix it with duct tape, you probably just haven't used enough. However, classes can also be just as sticky and they can get surprisingly entangled when you're not paying attention. So even if OO modules restrict accessibility, they have lots of classes sticking out, eager to get entangled with other classes, preferably from other modules.

Clearly interfaces solve the sticky type problem by type-separating the client and the provider of an object. However, they do not help in getting an instance without getting entangled with the implementation class. We have to solve that within the OO paradigm: Listeners, Factories, and Dependency Injection. However, these patterns never completely got rid of the stickiness of the implementation. Listeners require the client to know the library implementation, Factories require the factory implementation to know the implementations in some way, and with DI frameworks there is some all-encompassing God-XML that is sticky as hell. The big problem is that in ALL those cases the implementation class somehow leaks out of the module, ready to stick. Often as a string instead of a class, but the only advantage of that is cosmetic: your dependency graphs look better because the analyzers usually miss these couplings. However, the dependency is still as concrete as a class coupling and just as bad; it just looks better.
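To make that stickiness concrete, here is a small sketch of a factory that takes the implementation class name as a string. All names (Log, ConsoleLog, LogFactory) are invented for illustration; the point is that the string couples just as hard as a new expression would.

```java
// The interface separates the types of client and provider ...
interface Log {
    void log(String msg);
}

class ConsoleLog implements Log {
    public void log(String msg) { System.out.println(msg); }
}

class LogFactory {
    // ... but the factory still needs the implementation class,
    // here smuggled in as a string. The coupling to ConsoleLog is
    // just as concrete as `new ConsoleLog()`; it merely no longer
    // shows up in an automated dependency graph.
    static Log create(String implClassName) throws Exception {
        return (Log) Class.forName(implClassName)
                .getDeclaredConstructor()
                .newInstance();
    }
}
```

Rename or remove ConsoleLog and the configuration string breaks at runtime, exactly like a compile-time dependency would, except that no tool warned you.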

So our current toolbox of patterns may hide the coupling to the implementation classes but it does not really get rid of it completely. These patterns also have some surprisingly bad characteristics that we somehow got used to. They are extremely biased towards the client and forget that the module that provides the instance might have something interesting to say as well. They all force the provider to provide an object at the time the client or framework deems right, without letting the provider decide if it is ready and if it really wants to provide an object. The provider has no way to signal when it is ready, or to offer multiple instances. Again, Factories and DI are surprisingly one sided.

Let us start with context. When you call a factory or use DI then the caller/receiver can provide parameters but the callee has nothing to say. An instance is created out of the blue from one of its classes. If there is any context, it must be stored in static fields, which is an evil software practice. A module that provides an implementation cannot easily provide different instances based on its inside knowledge. For example, maybe a provider tracks instantiated machines in the cloud and wants to offer an object per machine. Sorry, can't do that. The provider module can only provide some manager, where the manager then provides listeners, etc. etc. This lack of context makes APIs seriously more complex.

Similarly, a providing module has nothing to say about timing. The caller of the Factory or the DI engine makes the decision when to instantiate the class from the module. The providing module has nothing to say about when it is actually ready to provide such an object. This makes startup ordering and dependencies really hard to manage.

We've all been working with these patterns for so long that it is very hard to see how limited they are. However, in a truly modular system the client module and providing module are peers that collaborate. Each module can play the client role and the provider role at different times and neither the client role nor the provider role is passive. None of the existing patterns can handle this peer-to-peer model; all fall short in handling type coupling to the implementation class, the dynamic context, and the timing.

In a perfect world, each module should be able to offer an object to other modules. And each module should be able to react to the availability of those objects. This is the Broker pattern. If I need a Log object, I ask for it (or get it injected when it is available). With the Broker pattern, the providing module can decide when to offer, solving the timing problem. The providing module can instantiate as many objects as it likes and offer them, solving the context problem. And last but not least, it is the providing module that creates the object, thereby never exposing the implementation class.
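A toy, in-process sketch of such a broker could look like the following. All names are invented for illustration and this is not an OSGi API; the point is that the provider decides when, and how many, instances to offer, and clients react to availability regardless of who came first.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative broker sketch; not an OSGi API.
class Broker {
    private final Map<Class<?>, List<Object>> offered = new HashMap<>();
    private final Map<Class<?>, List<Consumer<Object>>> interested = new HashMap<>();

    // Provider side: the provider decides when, and how many,
    // objects to offer. The implementation class never leaves it.
    public synchronized <T> void offer(Class<T> contract, T instance) {
        offered.computeIfAbsent(contract, c -> new ArrayList<>()).add(instance);
        for (Consumer<Object> client : interested.getOrDefault(contract, List.of()))
            client.accept(instance);
    }

    // Client side: react to availability, whether the instance was
    // offered before or after we expressed interest.
    public synchronized <T> void whenAvailable(Class<T> contract, Consumer<T> callback) {
        interested.computeIfAbsent(contract, c -> new ArrayList<>())
                  .add(o -> callback.accept(contract.cast(o)));
        for (Object o : offered.getOrDefault(contract, List.of()))
            callback.accept(contract.cast(o));
    }
}
```

Note that the contract here is reified as a Class object shared by both sides; neither side ever sees the other's implementation type.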

That said, there is one thing that is still fuzzy in this model: who defines the collaboration contract? Is this the provider? Well, that makes it hard to use other providers for the same collaboration. After a lot of hard thinking it seems obvious that modules are not enough; we need to reify this contract at runtime. If we reify the contract then modules can provide this contract and use this contract without being bound to a specific implementation. Both providers and clients would be peers.

So if we talk about software modularity in an Object Oriented world then we need to augment modules with something that reifies the contract between the modules. So far we've struggled with half-baked solutions like Factories and the current DI model to address the problems caused by the lack of a reified contract. These contracts are much more important for a design than modules.

I strongly feel that today we're in an almost identical situation as in the eighties when objects addressed the shortcomings of structured programming. This was a hard sell because many people looked at the implementation details and not at the design primitives that OO provided: encapsulation of data, inheritance, and polymorphism. Making people see the primitive instead of the vtables and descriptors was hard work.

I believe that today we need a design primitive that allows us to reason about the collaboration of modules. A design primitive that allows us to design large scale systems and still understand the fundamental architecture of what we've created. A primitive that allows us to see a picture of a complex system and understand quickly how it can be extended. OSGi µServices are, I believe, very close to that primitive.

I must admit that several times in my life I was unfaithful to my mistress. When we first got married she was all Smalltalk and I loved her for it. But over time she developed a lot of C++ behavior and I was not so happy about that. Fortunately, she decided to pick herself up and become so much more dynamic with Java. But if I am really honest I was eyeing other software the last few years, still missing some of her old dynamism. However, since I saw the potential of her µServices I am head over heels in love again.

Peter Kriens

Thursday, April 15, 2010

The Catwalk

I guess there is something in the air at the moment that makes people worried that OSGi is not successful quickly enough because there are not 7 million Java programmers using the OSGi API on a day to day basis. Kirk Knoernschild gave us the choice between Feast or Famine and SD Times told us OSGi is too complex for the enterprise developer. Well, feasts tend to end in hangovers and I do agree the OSGi API is not very useful for most Java web developers. Is OSGi actually a technology that is used by (web) application programmers? Will web developers have to start using Service References and Bundle objects? I don't think so.

If you develop, for example, a social networking app then you should not be concerned about low level infrastructure details and OSGi should completely stay out of your face; this is the concept of high cohesion. OSGi should never be visible on the (web) application level. However, if you write a middleware component and need to interact with the applications then you need domain objects that represent these applications. Bundles and Service References are then the perfect domain objects that allow you to interact with that app on a system level. For example, the Spring DM extender bundle leverages the OSGi API to allow a developer to write POJOs in his bundle. Many middleware APIs can be simplified because the OSGi API provides detailed information about the callers, making the APIs significantly more cohesive.

OSGi itself does not simplify application development, it only enables third parties to provide frameworks that can then simplify the life of the application developers, or empower them. The function provided by OSGi is the collaborative model that makes different frameworks not islands on their own but actually allows them to work together. OSGi defines the collaboration mechanisms, not the nice-to-have convenience functions for web development. What OSGi allows is breaking a huge application problem into smaller collaborating parts. The cumulative complexity of a collection of small parts is less than the complexity of those parts combined into one monolith. However, to enable this collaborative model we must enforce a number of modularity rules. Rules that are limiting on the component level to create more flexibility on the deployment level.

Unfortunately, those rules break a lot of existing code. We often talk about modularity but in reality we tend to create highly coupled components. When these "components" are run on OSGi they crash against the module boundaries because OSGi enforces these boundaries. Many people forget that a class encoded in a string in an XML file creates as much coupling as that class used in your code. The only advantage is that these strings do not show up in your automated dependency graph ... OSGi is just the unfortunate messenger of evil hidden coupling.

Application servers adopted OSGi because their problem domain is highly complicated and large. So large that strong modularity was their only option to keep things manageable and OSGi was perfect for this because it already contained some of their domain objects. Most Java application developers develop web apps. Web apps are a highly restricted domain that has been extensively simplified by tools and libraries. Improving on this has a very high threshold. This is the same problem as with the combustion engine and the helicopter; there are better technologies in principle but the incumbents have a huge head start in optimization. Therefore we've adopted the WAR format. WAR files will make it easier to start running web applications on OSGi without having to introduce new tools for the developers: their WARs will run on OSGi unchanged. Over time they can then decompose their WARs into smaller bundles.

There is one innovation in OSGi that I think is highly undervalued: the µServices. µServices are a paradigm shift as important as the move from structured programming to object oriented programming. µServices are the pinnacle of modularity. If they're so good, why does it take so much time before everybody uses them? Well, SD Times provided some insight; they said that a new technology X is irrelevant because developers have been building fantastic systems for a long time without X. It is hard to better illustrate why paradigm shifts are so hard and can take multiple decades.

As with OO, there is a chicken-and-egg problem. To take advantage of µServices you need components that provide and consume these µServices. Our problem so far has been that the major adopters (Eclipse/App Servers/Spring) picked OSGi for its class loaders on steroids and treated the µServices as an extra. But things are changing. Last EclipseCon it was clear that µServices are moving to the front. People that could not care less about services now publicly declared their support for them. Eclipse now provides good tooling for µServices, which will make services more attractive for many Eclipse developers. I am sure this will create the needed network effect.

Kirk notes how our industry is more fashion driven than the fashion industry and both authors complain that OSGi is not visible on the catwalk. And that is correct because OSGi is the catwalk, present in every fashion show picture and sustaining virtually any application that runs on a Java EE application server based on OSGi, and that is actually most of them.

Monday, April 12, 2010

Calling your cake and sending it too

During the last EEG meeting in Mountain View at LinkedIn in March we discussed the next phase in Distributed OSGi: asynchronous messaging. With the Remote Service Admin specification we have an elegant model for handling the distributed topology of a cluster of systems but this model is based on synchronous calls to a service, like:

Baz n = service.foo(bar);

Synchronous function calls are very simple to use because the answer is returned inline on the same thread. This model of computing allows you to store the state on the stack, which is efficient and handy. However, in a distributed environment your thread will block for billions of instructions until the return comes in from the remote system. Threads are relatively expensive resources and it is a pity they go to waste idling. Anyway, if you have to program in a concurrent environment a lot of the advantages of synchronous calling seem to disappear. For example, you must be very careful not to hold locks when you call a remote service, for it is easy to create deadlocks.

The alternative is messaging. With messaging you create a message (some object) and call a send method on some distribution provider. For example, in JMS there is a send() method that takes a Message, where the message object can contain arbitrary data. The receiver of the message can then send zero or more responses back. The sender can receive these through a proprietary callback mechanism or a message queue.
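The shape of this model can be shown without any real middleware. The sketch below stands in for a distribution provider with plain in-process queues (in JMS the send would go through a MessageProducer and replies would arrive on a reply queue); the Channel name and the ack format are invented for illustration.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy stand-in for a distribution provider. In JMS this would be a
// MessageProducer.send(), with replies arriving on a reply queue.
class Channel {
    private final BlockingQueue<String> requests = new ArrayBlockingQueue<>(16);
    private final BlockingQueue<String> replies = new ArrayBlockingQueue<>(16);

    // Sender side: send() returns immediately, the thread never
    // blocks waiting for the remote party.
    void send(String message) { requests.add(message); }

    // Sender side: pick up a response later, when convenient.
    String nextReply() throws InterruptedException { return replies.take(); }

    // Receiver side: handle one message and send zero or more
    // responses back; here exactly one acknowledgement.
    void receive() throws InterruptedException {
        String message = requests.take();
        replies.add("ack:" + message);
    }
}
```

Because sender and receiver only share the queue, a persistent queue (or a remote one) can be slotted in without either side changing.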

Programs that are based on asynchronous messaging are highly scalable, are easier to make deadlock free, and are more extensible. For example, persistent queues are transparent for the sender and receiver but can provide some very interesting reliability characteristics to the system. In the early OSGi days I wrote an OSGi test framework that used synchronous calls from the GUI to the framework. After struggling with this model for some time I gave up and went to asynchronous messages and I remember it felt like a dry warm towel after a heavy water-boarding session.

A big advantage for the OSGi Alliance is that services are a very convenient way to write specifications using Javadoc. Message based APIs are not nearly as easy to document. Also, in many cases the synchronous way of calling methods is by far the most efficient when the method is in the same process. With distributed OSGi, we are often not aware in our code that a service is remote. For the best of both worlds we'd like to be able to call both synchronously and asynchronously. There are actually different solutions that mix the idea of a synchronous call with asynchronous processing of the result value.

The simplest solution is to adapt the API to handle the asynchronous return value. Google Web Toolkit uses this model extensively for its remote calls from the web browser to the backend. The basic API is defined with an interface but in reality the caller passes a callback object in every call. The following example shows two interfaces, first the normal and second the adapted version. The caller of the second declare method passes an object that is called back when the result comes in.


interface Tax {
    Return declare(Declaration decl);
}

interface TaxAsync {
    void declare(Declaration decl, AsyncCallback<Return> result);
}

An alternative is the use of the Java 5 Future interface. A Future is an object that is immediately returned as the result of a synchronous call and can be used to get the result later asynchronously. Futures also require the adaptation of the API to reflect the asynchronous nature:

interface TaxFuture {
    Future<Return> declare(Declaration decl);
}
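The Future idiom itself is plain Java 5 and worth a small illustration: the caller gets the Future back immediately and only blocks when it actually needs the value. The thread pool here merely stands in for the hypothetical Tax service.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // the call returns immediately with a typed Future
        Future<Integer> result = pool.submit(() -> 6 * 7);

        // ... do other processing here, the thread is not blocked ...

        // block only at the moment the value is actually needed
        System.out.println(result.get()); // prints 42
        pool.shutdown();
    }
}
```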


Though these solutions are simple and provide type safety, they are kind of ugly because they violate one of the important rules of modularity: cohesion. The interface that was very much about the domain now mixes in concerns of distribution. What does declaring taxes have to do with the callback? These two aspects are very unrelated and it is usually not a good idea to mix them in API design.

An alternative solution is provided by ECF, the Eclipse Communication Framework. They defined an IRemoteService that takes an IRemoteCall and an IRemoteListener parameter. The IRemoteCall object specifies the remote procedure call with parameters: a Method object, the actual parameters, and a timeout. The remote listener is called when the result comes in. This is an intriguing solution because it allows a call to any service, even a local one. The callee is oblivious of the whole asynchronicity; it is always called in a synchronous way. This solution is quite transparent for the callee but very intrusive for the caller: it is too awkward to use from normal code as long as Java does not provide pointers to methods. It is only useful for middleware that is already manipulating Method objects and parameter lists.

Could we use a synchronous type safe calling model to specify a service but use it in an asynchronous way if the callee (or an intermediate party like a distribution provider) could play along? After some toying with this idea I do think we can actually eat our cake and have it too.

It is always good practice to start with the caller because asynchronous handling of a method call is most intrusive for the caller. Assume Async is the service that can turn the synchronous world asynchronous and player is a music player that is async aware. That is, the player will finish the song when called synchronously but it will return immediately when called asynchronously. With these assumptions, the code could then look like:

Player asyncPlayer = async.create(player);
URL url = new URL("http://www.sounds.com?id=123212");
Future r = async.call( asyncPlayer.play( url ) );
// do other processing
System.out.println( r.get() );

This looks quite simple and it was actually not that hard to implement (I did a prototype during the meeting). It provides type safety (notice the generic type of the Future). So how could this work?

The heart of the system is formed by the Async service. The Async service can create a proxy to an actual service; this happens in the create method. For each call going through the proxy, the proxy creates a RendezVous object and places it in a thread local variable before it calls the actual service.

If the callee is aware of the Async service, for example a middleware distribution provider, then it gets the current RendezVous object. This RendezVous object can then be used to fill in the result when it arrives somewhere in the future.

After the asynchronous proxy has called the actual service, the RendezVous was either accepted or it was not touched. If it was not touched, the proxy has the result and fills it in. If the RendezVous object was accepted by the callee, the RendezVous is left alone; the callee is then responsible for filling in the result.

After the call, the client calls the call method. This method takes the (null) result of the invoked method. The reason for this parameter is that it allows the returned Future to be typed correctly. Though the call method verifies that it gets a null (to make sure it is used correctly), it actually uses the RendezVous object that must be present in the thread local variable. The previous sequence is depicted in the following diagram:
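For the curious, here is a rough sketch of how such a proxy can be put together with a dynamic proxy and a thread local. The names are illustrative, not the actual prototype code; error handling and the callee-aware path are omitted, and the Player interface is simplified to return a String.

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Async-aware music player from the example (simplified).
interface Player {
    String play(String url);
}

// The RendezVous carries the result; FutureTask does the waiting.
class RendezVous<T> extends FutureTask<T> {
    RendezVous() { super(() -> null); }     // never run, only set
    void resolve(T value) { set(value); }   // fill in the result
}

class Async {
    // the RendezVous for the call currently in flight on this thread
    static final ThreadLocal<RendezVous<Object>> CURRENT = new ThreadLocal<>();

    @SuppressWarnings("unchecked")
    <T> T create(T service, Class<T> type) {
        return (T) Proxy.newProxyInstance(type.getClassLoader(),
            new Class<?>[] { type },
            (proxy, method, args) -> {
                RendezVous<Object> rv = new RendezVous<>();
                CURRENT.set(rv);
                Object result = method.invoke(service, args);
                // the callee did not accept the RendezVous, so the
                // proxy fills in the synchronous result itself
                if (!rv.isDone())
                    rv.resolve(result);
                return null; // the caller picks up the result via the Future
            });
    }

    // takes the (null) result so the returned Future is typed correctly
    @SuppressWarnings("unchecked")
    <T> Future<T> call(T nullResult) {
        return (Future<T>) (Future<?>) CURRENT.get();
    }
}
```

An async-aware callee would instead grab the current RendezVous, return immediately, and resolve it later from another thread.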

The proposed solution provides a type-safe and convenient way to asynchronously call service methods. It works both for services (or intermediate layers) that are aware of the asynchronicity and for those that are oblivious of it. For me this is a very nice example of how OSGi allows you to create adaptive code. The scenario works correctly in all cases but provides a highly optimized solution when the peers can collaborate. And best of all, it is actually quite easy to use.

Peter Kriens

P.S. These are just my ramblings, the post is just one of the many ideas discussed in the OSGi EGs, and it has no official status.

Tuesday, March 9, 2010

µServices

Whenever I submit something for a conference it gets easily accepted when it is about class loading modularity. Whenever the topic is services, I meet a complete lack of enthusiasm. This is in contrast with my own feeling after working with OSGi for 10 years. Though the modularity is advanced in comparison with all other class loading solutions, it pales in comparison to the innovation that we've called services.

Over time, I've become convinced that part of the problem is the name: services. The web-services guys have stolen our name; talking about services today lights up your conversation partner's neurons for heavy, slow, XML, and complicated, and other neurons you'd prefer to stay calm. Though web-services and OSGi services have the same underlying architecture for decoupling, their execution, purpose, and overhead differ like day and night. A web-service communicates with a party outside your process; an OSGi service always communicates within the same process, without any overhead. Calling a method on a service is as fast as calling a method on an object; services are almost as lightweight as objects. There is some overhead in signalling the life cycle of the service to the providers and the consumers but at runtime all that overhead is gone.

Though web-services have given the term service the connotation of heavy-weight, we're also to blame by not being ambitious enough. It is not until very recently that I've come to see how much we missed the part that has later been filled in by Service Binder, Declarative Services, iPOJO, Spring DM, and Blueprint. The original model of handling your service dependencies manually is and was broken from the start. Sadly, I actually recall discussing moving this responsibility to the framework but it was deemed too hard and we did not have enough time. Due to the lack of built-in support for automatic service dependency handling we created the image of services being awkward constructs. Messing around with service references and service trackers did not make it look lightweight at all! However, those days are gone and services are now not only lightweight with respect to performance and other overhead, today they are trivial to use, almost as easy as normal objects, albeit with a lot of built-in power that normal objects lack. With annotations, declaring a service has become trivial. For example:
@Component
public class ExecutorManager implements Executor {
    ExecutorService executor;
    LogService log;

    public void execute(final Runnable r) {
        executor.execute(new Runnable() {
            public void run() {
                try {
                    r.run();
                } catch (Throwable t) {
                    log.log(LogService.LOG_ERROR, "execution failed", t);
                }
            }
        });
    }

    @Deactivate
    void deactivate() {
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    @Reference
    void setThreadPool(ThreadPool pool) {
        // the ThreadPool service is assumed to extend ThreadFactory
        executor = Executors.newCachedThreadPool(pool);
    }

    @Reference
    void setLog(LogService log) {
        this.log = log;
    }
}
Using the bnd annotations there is almost no cruft in the code. The following bnd file is the only extra file required:

Private-Package: com.example.executor
Service-Component: *
Really! And this is not limited to bnd; iPOJO and the Apache SCR runtime annotations provide similar simplicity.

This example is very little code but surprisingly robust in many dimensions. It is perfectly thread safe and all timing issues are managed. The Executor service is not registered before the Log Service and the Thread Pool service are available. And when one of these services goes away, everything is correctly handled. The example is also very open to extensions that are completely invisible from the outside. As a service I can always find out what bundle is using me and have bundle specific options; for example, certain bundles could be limited in the amount of CPU time they can take through the executor.

In the eighties I discovered object oriented programming and quickly fell in love with it. OO caused a paradigm shift; we started thinking differently about how to solve problems. Today it is incredibly hard to imagine thinking without objects because objects have become an intrinsic part of our vocabulary. However, in the eighties we explained objects with C structs that had pointers to an array of methods, and when you sent a message to an object it would be dispatched to the correct class method. I recall countless discussions with people that basically didn't see the innovation because they could only see the mechanical description and not the paradigm shift: how objects really simplified problems when you treated them as primitives. I do believe services are similar in this aspect; when you have to worry about Service References and cleaning up, the chores overwhelm the gains. However, when there are no more chores to worry about, services are an incredibly elegant design primitive that maps very well to domain specific problems.

Now I do realize that "paradigm shift" is a loaded term. In the nineties the paradigm word was heavily abused; for a short time it became the marketing term of choice for many software products. Soon after the abuse the word was ridiculed whenever used, paradigm shifts do not come that often. I am therefore fully aware that I use big words here, but I do believe that services are a similar layer on top of objects as objects were on top of structured programming.

If you look at the recent history of software after OO became mainstream then there are a number of patterns that stand out:
  • Listeners
  • Factories
  • Dependency Injection
All these patterns try to manage coupling. For this, they are based on the interface concept. Interfaces simplify separating implementation from specification, a concept that provides by far the best bang for the buck. If we look at type coupling in the previous three patterns we see exactly this idea: all parties are coupled to the interface and the interface is not coupled to anything. This is exactly the reason why interfaces work so well; both the provider and the client can evolve independently because they have no knowledge of each other. In software, it really does not get much better for achieving simplicity than not knowing something ...

So the three aforementioned patterns use exactly the same trick; the difference between the three patterns is the dynamicity. With a listener, the control flows from the library to the client. With a factory, the control is reversed: the client takes the initiative and the library is passive. Dependency injection is interesting because in this model both the client and the provider are passive; the DI framework has the initiative. Client and provider activity must be encoded in normal code, of which the DI framework is oblivious. This is exactly the reason the service model took some time to integrate with Spring DM: this was the first time the providers and clients became active.

There are, however, four combinations of client and provider being active or passive. What is the fourth case? It is the case where both the client and the provider are active. This is exactly the case that the OSGi service registry covers: active clients and active providers. The service registry fits perfectly in the list of patterns because it also uses the interface to separate clients and providers. One could call the OSGi service registry the missing link ...

The OSGi service model does not provide an additional model, it only unifies the factory and listener patterns, allowing both of them to exist simultaneously. It now becomes clear why it was such an oversight that we did not add a dependency injection facility until release 4; if we'd had that from the beginning we would have covered all cases. However, with Declarative Services, OSGi does cover all four cases.

An OSGi service therefore unifies the Factory, Listener, and Dependency Injection patterns into a single primitive idea. Because of this unification it also supports situations where both the client and the provider are active. In today's infrastructure this is no longer a luxury or nice feature, it has become a necessity. Clusters, cloud computing, and the interaction with other systems require that software does not fail when dependencies are not met all the time. All those semantics are contained in OSGi services for a very low price in performance, in runtime costs, and conceptually.

However, the most exciting part of services is that they seem to map so well to many software domains. Maybe this excitement is partly caused by my background, which is largely outside enterprise software. Most software I worked on was connected to the real world, and the real world just happens to be awfully dynamic. Most of those problems can be solved more easily when services are used as a design primitive.

Trying to convince people to use services as design primitives seems to fly against the idea of abstracting oneself from the OSGi API. In my eternal quest against coupling I fully agree with this sentiment; it is exactly what I always do. However, OSGi services transcend OSGi. I am not promoting the OSGi APIs for using services; that is just the first place where this paradigm has matured and a good place to get experience. What I am promoting is the idea of µServices, the concept of an OSGi service as a design primitive. Maybe I should start a JSR to introduce µServices to the Java Standard Edition ...

Peter Kriens

Friday, February 26, 2010

Three weeks to OSGi DevCon

Just a reminder: it is only three weeks to OSGi DevCon! We just had OSGi DevCon London and that was a great success. The OSGi DevCon London was organized by JAX and as always it was superbly organized in an excellent hotel. I always like it when the hotel and the conference are together; it increases the chance to get to talk to people. Tuesday night I did not get to my room until 1.30 AM. These on- and offline talks are crucially important to better understand where the OSGi ecosystem is moving. Obviously it is interesting to hear what the usual suspects are thinking but these conferences also allow me to hear from the trenches. Sometimes you can correct invalid understandings but often you can learn a lot about the problems people face in software. This year's OSGi DevCon in Santa Clara will be held in the Hyatt hotel; I am already looking forward to having lots of discussions with people visiting the conference.

That said, one should not underestimate the program; we've got a very nice OSGi program this year. On Monday night we'll be introducing OSGi Enterprise R4.2. Tim Diekmann, the Enterprise Expert Group co-chair, will be introducing this seminal piece of work. We'll also raffle off the first copy of the Enterprise release book, signed by the co-chairs; it might one day be a collector's item.

On Thursday we will also have a very interesting workshop on Cloud Computing that is already heavily oversubscribed; it looks like a hot topic, and I am expecting a very interesting day.

As the Michelin Guide would say: "Worth the trip!" So if you're not registered yet, do it now. Note the discount for OSGi members, just use your OSGi member mailing address.

Hoping to see many of you in Santa Clara!

Peter Kriens

Monday, February 8, 2010

OSGi & Cloud Computing

The Eclipse Foundation and the OSGi Alliance are holding a Cloud workshop during the OSGi DevCon/EclipseCon developer conference in Santa Clara, Thursday March 25.

The key question we want to answer in this workshop: what role can OSGi play in the cloud? Offerings like the ones from Amazon (aws.amazon.com) are agnostic of any application model, and OSGi can play in their EC2 offering like anybody else because it is based on generic x86 machines. However, a model like the Google App Engine has so severely kneecapped Java that it is doubtful OSGi can run on it. Many cloud computing providers have free plans to get you started, or at least make the cost trivial. However, the costly part is your own investment in the software you develop for the cloud. On the desktop and on the server we've had a lot of advantage from standards that abstracted us from the vendors. This portability allows us to move our code to different app servers (well, mostly). Though most of the lessons we learned in the past still apply to the cloud, the current cloud computing vendors have very specific offerings that easily create portability problems. How to access storage? How to discover and handle multiple instances of the application in the cloud? How to share domain-specific services? Standardizing interfaces for these aspects of cloud computing could provide a lot of portability. And portability is not only in the interest of the clients; vendors also gain by having a much larger market.

Perusing the different offerings for cloud computing, I can clearly see that the OSGi bundle model would work very well in this area. Applications can easily be managed remotely because remote management is in OSGi's genes. This has always made OSGi easy to use in clusters, and much of that benefit applies equally to cloud computing. However, the advantages of the OSGi service model seem even clearer. A cloud computing environment is by definition a dynamic environment. Adding instances, removing instances, and instances that fail will likely influence the other instances. This means that the application will need to handle the dynamicity of the services that these computing instances provide. There will also be dependencies that must be managed. OSGi services shine in these areas, making it relatively simple to correctly model these dynamic dependencies.
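A small sketch may help show the dynamics I mean. This is plain Java, not the actual OSGi service API: cloud instances are modeled as services that appear and disappear, and the consumer asks a tracker for a currently available instance on every call instead of holding on to a reference. All names here (`InstanceTracker`, `dispatch`, the `vm-*` endpoints) are made up for illustration.

```java
import java.util.LinkedHashSet;
import java.util.Optional;
import java.util.Set;

// Hypothetical sketch of the dynamics described above: compute instances
// come and go; nobody holds a stale reference to a dead instance.
class InstanceTracker {
    private final Set<String> instances = new LinkedHashSet<>();

    void added(String endpoint)   { instances.add(endpoint); }    // instance started
    void removed(String endpoint) { instances.remove(endpoint); } // instance failed/stopped

    // Pick any currently live instance; empty when none are available.
    Optional<String> any() { return instances.stream().findFirst(); }
}

public class CloudDynamicsSketch {
    static String dispatch(InstanceTracker tracker, String work) {
        return tracker.any()
                .map(e -> e + " handles " + work)
                .orElse("queued: " + work);   // degrade gracefully, don't crash
    }

    public static void main(String[] args) {
        InstanceTracker tracker = new InstanceTracker();
        System.out.println(dispatch(tracker, "job-1")); // queued: job-1
        tracker.added("vm-a");
        tracker.added("vm-b");
        System.out.println(dispatch(tracker, "job-2")); // vm-a handles job-2
        tracker.removed("vm-a");                        // vm-a fails
        System.out.println(dispatch(tracker, "job-3")); // vm-b handles job-3
    }
}
```

In a real OSGi application the framework's service registry plays the role of `InstanceTracker`, and the failure of an instance simply looks like a service going away, which is exactly the event model bundles are written against.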

So overall the combination of cloud computing and OSGi is clearly an interesting one. With the workshop we want to bring together cloud people and OSGi people and see in which areas OSGi standards could help. This first workshop is by invitation only because this first time we want to learn: we need people with experience in the area of cloud computing who see OSGi as a potential standards player in this area, creating a discussion between cloud experts and OSGi experts. So if you're heavily into cloud computing and you want to attend, send me or Ian Skerret from the Eclipse Foundation a mail. Amazon? Google? Microsoft? You?

Peter Kriens

Tuesday, February 2, 2010

OSGi DevCon 2010!

Time flies: it is more than three years ago that Bjorn Freeman-Benson, BJ Hargrave, and I sat down after the 2006 conference to discuss the possibility of organizing an OSGi DevCon in conjunction with EclipseCon. Today I am proud to announce the 4th OSGi DevCon in Santa Clara, March 22-25. The program is, as usual, staggering. It always impresses me how many people are willing to contribute to EclipseCon/OSGi DevCon. Overall there were more than 350 submissions, and about 60 of those were for OSGi DevCon. Picking the most interesting program was even harder than in previous years because there is less space, and we therefore have less time for OSGi DevCon. However, the resulting program is probably of even higher quality.

First I would like to draw your attention to the fact that we will officially publish the OSGi Enterprise Specification during EclipseCon. The OSGi Alliance will host a BOF on Monday night. One of the co-chairs of the OSGi Enterprise Expert Group, Tim Diekmann, will give a presentation during this BOF on what is in this specification and why it is groundbreaking.

We have three tutorials. The first tutorial is from the people who wrote the OSGi and Equinox: Creating Highly Modular Java Systems book. You will get a feel for Toast telematics! See Working with OSGi: The stuff you need to know.

The next tutorial is from Kirk Knoernschild and Neil Bartlett, both very experienced developers and excellent writers and presenters. This tutorial was actually chosen as one of the EclipseCon Program Committee's top 5. The subject is a very hot topic at the moment: modularity. We all learned the lessons about coupling and cohesion. However, applying those lessons in large developments is still hard. This tutorial will give you theoretical as well as practical insight into modularity and using OSGi to achieve it. See Modular Architecture from Top to Bottom.

The last tutorial is from Karl Pauls and Marcel Offermans. They are the lead developers of the Apache ACE project and have been developing with OSGi forever. Their subject is absolutely core to OSGi, although not always that visible. OSGi is not a "Hello World" technology; such examples only work well when the scope is small, while OSGi is aimed at large-scale systems. Size does matter for OSGi. A consequence of that scale is that systems have a large number of bundles, so large that handling them requires automation because it is just too much to do by hand. Karl and Marcel will teach you how to manage installations that reach this scale. See Become a Certified Bundle Manager today.

The first long talk is a must for anyone using OSGi. One of the most exciting pieces of work inside the OSGi Alliance is the nested framework RFC. Nested frameworks bring back the initial philosophy of OSGi: the bundles are your application. Enterprise servers based on OSGi are starting to deploy many applications inside a single framework. In such a constellation, your peer bundles and peer services might no longer be yours. Nested frameworks return to the original model: an application is installed in a child framework, also called a composite bundle. The lead developers of Eclipse Equinox as well as Apache Felix will present the proposed architecture and discuss merits, pitfalls, and problems that still need to be solved. So do not miss Composite Bundles - Isolating Applications in a Collaborative OSGi World.

OSGi is like a sharp knife: used well, it is extremely useful; used wrongly, it hurts. Chris Aniszczyk, Jeff McAffer, Martin Lippert, and Paul Vanderlei have been working with OSGi for the better part of the noughties and therefore have lots of experience, and the bruises and cuts to prove it. Between them they cover almost any computing aspect that can be used in conjunction with OSGi. Jeff was the driver behind Eclipse's adoption of OSGi, Chris is the lead developer of PDE, Martin has worked on Aspect Oriented Programming in Eclipse, including the weaving issues, and is an aficionado of OSGi as well, and Paul brings experience from the embedded world. A must for anybody who wants to adopt OSGi. See OSGi Best and Worst Practices.

OSGi is at the foundation of RCP, obviously. However, you can use RCP and not see much of OSGi. David Orme has been contracting for J.P. Morgan, where they created an internal platform based on RCP. In the last few years they re-architected this platform to take more advantage of OSGi. This is a very good experience report for anybody who has to develop software to be used inside large organizations. See OneBench Reloaded - Pushing the (OSGI) Modularity Story in an Enterprise-wide Rich Client Stack.

Looking at the size of this blog, I do not think I should lose more readers by going through each of the 25-minute talks, even though I think they're more than worth it. I therefore list them here as bullets:

  • Apache Aries: Enterprise OSGi in Action - A report from a new open source project that will bring us lots of enterprise components for OSGi. Graham Charters from IBM will present.
  • My Unmanned System is Eclipse Powered - Next time you see an unmanned vehicle, OSGi might be behind the wheel. Talk about cool OSGi apps! Tankut Koray will show you the role OSGi plays in their architecture.
  • Next Generation OSGi Shells - Traditionally shells run inside the OSGi framework; this shell, however, works as a launching tool, interacting with Paremus' Nimble to find the necessary bundles. Robert Dunne will tell you about these shells and show you how easy it is to deploy applications consisting of many bundles.
  • OSamI Tools for OSGi Application Developers - OSamI is a very large cross-European project to develop common technology for ambient intelligence, all based on OSGi. Naci Dai and Murat Yener from eteration A.S. will tell you more.
  • Managing OBR Repositories with Nexus - Maven is moving to OSGi, and there is more and more collaboration. Sonatype has adopted OBR in their Nexus repository, allowing it to play with the advanced resolvers that are appearing in the market. Jason van Zyl, the man behind Maven, will tell you about their strategy.
  • Using JPA in OSGi - Mike Keith and Timothy Ward are the lead authors of the OSGi JPA adaptation, part of the OSGi Enterprise Specification. See how you can simplify using persistence in OSGi bundles.
  • OSGi Enterprise for Java EE Developers - How do you go from Java EE to OSGi? Many patterns that are necessary in Java EE do not work well in a very modular environment. Timothy DeBoer will show you how to use Eclipse tools to ease the transition.
  • OSGi & Java EE in GlassFish - When GlassFish adopted OSGi a few years ago, I was very excited to see how Java EE and OSGi can co-exist, each providing their strengths. Since then, the GlassFish team has increasingly adopted OSGi; they even hired Richard Hall, the lead Apache Felix developer. Sahoo and Jerome Donchez are the lead architects and will report to you about the new cool features.
  • Realistic Remote Management of OSGi-based Residential Boxes - OSGi was made to be managed remotely. However, managing thousands of devices running OSGi somewhere out there remains a complex area. Dimiar Valtchev from ProSyst has long experience with this problem and will elucidate the issues and solutions.
  • Overcoming sticker shock: addressing the unexpected costs of moving to OSGi in the enterprise - Eric Johnson from TIBCO will explain what you can expect when you move from a Java EE environment to OSGi; the rules and patterns that work are quite different. This will be an experience report, but it will also focus on how the community can work to ease this migration.
  • Making Dependency Injection work for you - Joep Rottinghuis and Parag Raval from eBay tell you how to use Spring DM to use Dependency Injection in bundles.
  • Logging in OSGi Enterprise Application - As a non-enterprise programmer I am always in awe when I see the avalanche of logging information coming out of enterprise programs. However, it seems important, and OSGi puts some unique challenges in the way of traditional loggers because they often require global visibility; and then there is of course the OSGi Log Service. Ekkehard Gentz provides an overview and a demo of OSGi logging.
  • ScalaModules: OSGi the Easy Way with a Scala DSL - Over the last months I've tried to use Scala because it has features I know from my Smalltalk days and daily miss when using Java. Though any new programming language is painful to learn (what takes you seconds in Java initially takes you minutes in Scala because you have to figure out how), Scala really looks very interesting. Roman Roelofsen and Neil Bartlett will report to you about ScalaModules, a way to bring modularity to the Scala language.
On Valentine's Day the early registration price will end and you'll have to pay the full amount, so be sure to register as soon as possible to take advantage of the discount. If you're an OSGi member, you can get an additional discount if you register here with the email address you use on the OSGi members web site.

I am looking forward to seeing you again at this 4th OSGi DevCon; let's hope it will be the best ever!

Peter Kriens