Wednesday, June 30, 2010

Reified Types: Conversions Using Generic Types

One of the most interesting aspects of the Blueprint specification was the ReifiedType. I ran into this nifty class again today because we want to create a Conversion service that can be used by the upcoming OSGi shell service. We decided to reuse the Blueprint conversion model, but now as a separate service, something we probably should have done for Blueprint in the first place.

The problem that ReifiedType solved for us is how to convert an object to a specific target type while taking any generic information into account. For example, if you convert an int[] to a Collection&lt;Long&gt; you want to be sure that the collection contains Long objects and not Integer objects. That is, a naive implementation would convert this to a Collection&lt;Integer&gt;, which would fail miserably later on when someone uses the collection.
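To make the problem concrete, here is a minimal sketch. It is not the Blueprint implementation; the convert method and its signature are made up for illustration. It shows why the converter must know the target's type parameter, not just that the target is a Collection:

```java
import java.util.ArrayList;
import java.util.Collection;

public class ConversionSketch {
    // Convert an int[] into a collection whose elements have the requested
    // target element type. Without knowing the target type parameter, a
    // naive converter would always produce Integer elements.
    static Collection<Number> convert(int[] source, Class<? extends Number> elementType) {
        Collection<Number> result = new ArrayList<Number>();
        for (int i : source) {
            if (elementType == Long.class)
                result.add(Long.valueOf(i));
            else
                result.add(Integer.valueOf(i)); // the naive default
        }
        return result;
    }

    public static void main(String[] args) {
        Collection<Number> c = convert(new int[] { 1, 2, 3 }, Long.class);
        for (Number n : c)
            System.out.println(n.getClass().getSimpleName()); // prints Long three times
    }
}
```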

I can hear many readers thinking: "But generic information is erased, isn't it?" Yup, generics are erased, but not completely! When you have an object there is no way to know its generic parameters (the parameters between the angle brackets like T, K, V, etc.). I am not sure why they decided not to provide this information with the object, because it seems very doable, but it clearly is not present. However, the VM maintains extensive generic information everywhere but in the object. If you have an instance field then it can describe exactly what its generic type constraints are. For example, if you have a field numbers, declared like:

Map<String,Class<T>> numbers;

Then you can get the generic type constraints with the getGenericType method, which returns a Type. Virtually all reflective methods have been duplicated to provide this Type object where they used to return a Class object.
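The following self-contained snippet shows the generic information the VM keeps for a field declaration (it uses a wildcard instead of T, since a type variable would need a declaring context):

```java
import java.lang.reflect.Field;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Map;

public class GenericFieldDemo {
    // The VM records the declared generic type of this field, even
    // though Map instances themselves carry no generic information.
    Map<String, Class<?>> numbers;

    public static void main(String[] args) throws Exception {
        Field f = GenericFieldDemo.class.getDeclaredField("numbers");
        Type t = f.getGenericType();
        // Prints something like: java.util.Map<java.lang.String, java.lang.Class<?>>
        System.out.println(t);

        ParameterizedType p = (ParameterizedType) t;
        System.out.println(p.getRawType());                // interface java.util.Map
        System.out.println(p.getActualTypeArguments()[0]); // class java.lang.String
    }
}
```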

OSGi APIs are written in a restrictive subset of Java so that they can be used in a wide variety of VMs. Though we found a solution allowing us to use generics in Java 1.4 (JSR 14), we must assume a target of Java 1.4 with the minimum execution environment for our daily work. This means there is no Type class we can rely on in our APIs.

Obviously, in enterprise scenarios the VM is Java 5 or later, and the Type class is readily available. Is there a way we can eat our cake (use the generics info in our Conversion service) and have it too (no dependency on Java 5)?

The first solution that comes to mind is to duplicate all the sub-types of the Type interface in our own namespace. There are actually quite a few sub-types of the Type interface. These sub-types model all the flexibility (and more!) of the generics constraints system: type variables, arrays, wildcards (super, extends), and parameterized types.

Hmm, not only is duplicating no fun, it also turned out that the hierarchy is not that easy to work with. For our Conversion service we only need to know the raw type of a type parameter. That is, if our target type is Collection&lt;Long&gt; then we only need to know that the type parameter is Long.

After struggling with this problem we found an interesting solution: the ReifiedType class. A Reified Type collapses all the intermediate levels of wildcards, variables, and arrays and provides direct access to the most specific types that can be used. It turns out that by traversing the network of Type objects it is always possible to end up at a raw class, even with variables, wildcards, and arrays.
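The traversal can be sketched with plain Java 5 reflection. The rawClass helper below illustrates the idea and is not the Blueprint code: parameterized types yield their raw type, arrays yield an array class, and variables and wildcards yield their first upper bound.

```java
import java.lang.reflect.Array;
import java.lang.reflect.GenericArrayType;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.lang.reflect.TypeVariable;
import java.lang.reflect.WildcardType;
import java.util.List;

public class RawClassDemo {
    // Collapse any Type to its raw class by recursing through the
    // Type hierarchy until only a Class remains.
    static Class<?> rawClass(Type type) {
        if (type instanceof Class<?>)
            return (Class<?>) type;
        if (type instanceof ParameterizedType)
            return rawClass(((ParameterizedType) type).getRawType());
        if (type instanceof GenericArrayType) {
            Class<?> component = rawClass(((GenericArrayType) type).getGenericComponentType());
            return Array.newInstance(component, 0).getClass();
        }
        if (type instanceof WildcardType)
            return rawClass(((WildcardType) type).getUpperBounds()[0]);
        if (type instanceof TypeVariable<?>)
            return rawClass(((TypeVariable<?>) type).getBounds()[0]);
        throw new IllegalArgumentException("Unknown type " + type);
    }

    // A deliberately convoluted signature: variable, wildcard, and array.
    static <T extends Number> void example(List<? extends T>[] arg) {
    }

    public static void main(String[] args) throws Exception {
        Type t = RawClassDemo.class.getDeclaredMethod("example", List[].class)
                .getGenericParameterTypes()[0];
        System.out.println(rawClass(t)); // class [Ljava.util.List;
    }
}
```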

A Reified Type is always associated with a single raw class. The raw class is the class without any generics information. It provides access to the type parameters by their position. For example, a Map&lt;String, ? extends Number&gt; has a raw class of Map and two type parameters: String and Number. For this example, the Reified Type has a size() of 2, and provides a ReifiedType&lt;String&gt; and a ReifiedType&lt;Number&gt; for the type parameters.

In the Blueprint specification we provide a concrete implementation for Java 1.4 VMs. In Java 1.4 there are no generics, so this implementation hard-codes the number of type parameters to 0 by definition. We will do the same in the Conversion service. However, users of this service will be able to pass subclasses that provide the generic information to get better conversions.
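A simplified stand-in can show the design. The method names follow the Blueprint ReifiedType API (getRawClass(), size(), getActualTypeArgument(int)), but the class itself is invented here for illustration: the base class hard-codes zero type parameters, which is all a Java 1.4 VM can know, while a generics-aware caller subclasses it to supply the type parameters.

```java
public class ReifiedTypeSketch {
    static class ReifiedType {
        final Class<?> rawClass;

        ReifiedType(Class<?> rawClass) {
            this.rawClass = rawClass;
        }

        Class<?> getRawClass() {
            return rawClass;
        }

        // By default there is no generics information at all.
        int size() {
            return 0;
        }

        ReifiedType getActualTypeArgument(int i) {
            return new ReifiedType(Object.class);
        }
    }

    public static void main(String[] args) {
        // A generics-aware caller describes Map<String, ? extends Number>:
        ReifiedType mapType = new ReifiedType(java.util.Map.class) {
            int size() {
                return 2;
            }

            ReifiedType getActualTypeArgument(int i) {
                return new ReifiedType(i == 0 ? String.class : Number.class);
            }
        };
        System.out.println(mapType.getRawClass());                          // interface java.util.Map
        System.out.println(mapType.getActualTypeArgument(1).getRawClass()); // class java.lang.Number
    }
}
```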

So it looks like this was a case where we could have our cake and eat it too! You can already look at Reified Type in the Blueprint specification and in the next release I hope we can have a new Conversion service based on this model.

Peter Kriens

Tuesday, June 29, 2010

Mea Culpa

Yesterday the OSGi Alliance introduced the new look and feel, and I chose the rather despicable way of spamming many of this blog's loyal readers with posts that were never meant to be published, at least not yet. While republishing the blog to slide it into its wonderful new skin, I made the mistake of selecting all posts and forgot to deselect some of the drafts that were in the pipeline (or forgotten).

My apologies; at least I hope you enjoy the new look &amp; feel as well as the updated content.

Peter Kriens

Friday, June 25, 2010

To Coordinate in OSGi

Since day one of the OSGi Framework (over 12 years ago) I have been trying to get a lightweight transaction model into the specifications. After several attempts that were skillfully aborted by others or way too heavy for what I had in mind, I had actually given up. That is, until the last face-to-face meeting in Mountain View. David Bosschaert (Red Hat and EEG co-chair) was looking for a better Configuration Admin solution than the Managed Service (Factory) services provided. One of his key requirements was composite updates. In Configuration Admin, updates are done per PID. A PID is a persistent identity for a Dictionary that contains the configuration properties. David argued that in larger enterprise applications there is a need to compose the configuration properties out of a number of smaller dictionaries that each represent a configuration aspect. For example, there could be a com.example.system.config PID for system configuration and a com.example.http PID for configuring an HTTP server.

A very good idea, I think, but he was confronted with a serious problem. Though Configuration Admin nowadays allows the use of multiple PIDs per service, it gives no timing guarantee other than that each update will be done on another thread than the setter's. The problem with multiple configuration updates is thus that you could get parts of your configuration milliseconds apart. That is normally not good, because changes in configuration potentially cause expensive operations. After several attempts to find a good solution we realized that the transaction model could solve this problem rather nicely. If the Configuration Admin were transaction aware, it could delay updating the target services and sending out events until the commit phase of the transaction.

Now there is something funny about transactions: they have a weird effect on system developers. The moment you start talking about transactions, they seem to gain 10 pounds and age 10 years. Transactions are seen as very heavyweight because of the recovery requirements and setup, and acting as a resource manager is a non-trivial task with the XA API. Having lost the battle to use real transactions several times (Framework, Dmt Admin, Deployment Admin), I was not prepared to start such a battle again. That is, until it dawned on me that what I was really looking for was a coordination API; transactions provide much more than I needed.

My side of the software fence has been relatively free of persistence problems, so the whole ACID part of transactions was never my favorite part. I liked transactions because they allowed coordinated collaboration between different parties. In an OSGi system you never know how control flows through different services when you call another party. In this model there are many cases where you could do things more efficiently, or atomically, if you only knew when the task being worked on was finished. The coordination part of transactions was always what I liked so much, because I knew there was going to be a callback at the end of the transaction.

So could the answer be a light-weight coordination API? If the Configuration Admin were updated to use this API, it could hold back notifications and updates of managed targets until all the changes were made, that is, at the end of the coordination. So what could this look like (notice: the API is work in progress):
Coordinator coordinator = ... ;
ConfigurationAdmin admin = ... ;

void updateConfigurations(Map<String, Dictionary> configs) throws IOException {
    Coordination coordination = coordinator.begin("update configurations");
    try {
        for (Map.Entry<String, Dictionary> e : configs.entrySet()) {
            Configuration c = admin.getConfiguration(e.getKey());
            c.update(e.getValue());
        }
        if (coordination.end() == OK)
            return;
        // log the failed coordination!
    } finally {
        coordination.terminate(); // ensure proper termination
    }
}
So what does this look like for the participants? For example, how would Configuration Admin schedule its updates when it uses coordinations? Well, a participant must indicate that it wants to participate in a coordination. The method to start a participation is on the Coordinator service. The following code shows a schedule method that delays scheduling its Runnables until the coordination has ended:
final List<Runnable> queue = new ArrayList<Runnable>();

void schedule(Runnable r) {
    if (coordinator.participate(this)) {
        synchronized (queue) {
            queue.add(r);
        }
    } else
        executor.execute(r);
}
The Coordinator will callback the participant on either the failed() method or the ended() method. The failed() method can be called concurrently with the initiating thread. The ended() method is always called on the initiating thread.
// the coordination failed, discard the queued work
public void failed() {
    synchronized (queue) {
        queue.clear();
    }
}

// the coordination ended ok, schedule the queued work
public void ended() {
    for (Runnable r : queue)
        executor.execute(r);
    queue.clear();
}
The coordination API therefore allows two completely different implementations to synchronize their work on a common task. This API seems incredibly useful to optimize many of our existing admin APIs: from the framework itself to Remote Service Admin. I wish I had realized much earlier not to call it a transaction API ...

Peter Kriens

P.S. This Coordination API is work in progress, there is no promise this work will ever end up in an official OSGi spec.

Thursday, June 10, 2010

How to use Config Admin?

Config Admin is one of the most powerful services the OSGi Alliance has standardized, but it is often badly understood. The only reason it seems hard is that it was designed for highly dynamic, long-living environments. Most applications start, read their configuration, do something, and then die because someone types control-C or the application exits the VM. In a dynamic environment your configuration can change at any time and you are expected to react to these changes. Users tend to like this model.

What we did is merge the initial "get my configuration" phase that so many people are used to with the notification that there is a configuration change. With the intent to simplify the coding, we only have one mechanism: update. The initial phase comes with a guarantee that the Configuration Admin service will update you, even if there is no data provided yet. Many programmers hate this model because they do not feel in charge; they want to grab their configuration when they need it and not wait until some lousy Config Admin calls them. Though that is feasible, it is just a lot more work. Relax, leave the initiative to Config Admin, and it all falls into place.

The key concept of Config Admin is that the receiver of the configuration registers a Managed Service with a property called service.pid. The value of this property is some unique identification. The Config Admin will then call the service's updated(Dictionary) method. If it has a configuration for that service.pid, the argument will be a set of properties in a Dictionary. If no configuration has been set, it will call with null. The key thing is to let the Config Admin control you instead of trying to be in charge. Another type of Inversion of Control ...

Let's do a simple example of an echo service on an internet port:

public class EchoServer implements ManagedService, BundleActivator {
    EchoServerImpl server = new EchoServerImpl(-1); // dummy server

    public void start(BundleContext context) {
        Hashtable props = new Hashtable();
        props.put(Constants.SERVICE_PID, "com.example.echo"); // example PID
        context.registerService(ManagedService.class.getName(), this, props);
    }

    public void stop(BundleContext context) {
        server.quit();
    }

    public void updated(Dictionary props) throws ConfigurationException {
        int port = -1;

        if (props != null) {
            Object o = props.get("port");
            if (o != null)
                port = ((Integer) o).intValue();
        }
        if (server.getPort() != port) {
            server.quit();
            server = new EchoServerImpl(port);
        }
    }
}
To set Config Admin data, take a look at the Felix Webconsole, Felix FileInstall, or the Knopflerfish environment. They all support a range of (G)UIs to create configuration records.

Peter Kriens

Friday, June 4, 2010

Bundlefest in Girona!

Next week the OSGi Alliance organizes yet another bundlefest, in Girona. This time we convene with the Residential Expert Group (REG). The week will start off with a REG meeting, and the remainder will be spent hacking on bundles for the REG release's compliance test suites and reference implementations. If you were a member, you could have participated in the fun! I am really looking forward to this week because it is a nice group of people to hang out with, the weather looks good in Spain next week, the hotel seems very nicely located, and the residential area is becoming hot again.

There is a surprising amount of activity in the residential area. All the signs are green for this area to finally happen. Many operators are now starting to develop residential gateways. Big projects are happening all over the world, which is creating more and more interest from vendors. Where we had a hard time selling OSGi in this world a decade ago, today it is more or less a given. Ten years ago the technology was deemed unproven; today that is hard to argue. There are also no real alternatives to the comprehensive execution model that the OSGi provides.

To see how OSGi is solving real problems in gigantic applications running on enterprise servers, while also holding its own in the embedded world, is incredible. As I have argued many times before, I do believe we've developed a software model (µservices!) that is surprisingly fundamental. It is very exciting to be part of this development.

Peter Kriens

P.S. The Enterprise Specification is now available in book form, you can order it now!
