Friday, September 27, 2019

OSGi Connect Revisited

The Java 9 release introduced the Java Platform Module System (JPMS) - finally modularizing the class libraries provided by the JVM. In addition, JPMS can be used to modularize applications, enabling developers to split their applications into modules. The OSGi specification has provided a module system for Java applications for over 20 years, likewise allowing developers to split their applications into modules (a.k.a. bundles). Consequently, a question arises: what happens when developers want to use Java modules to compose applications that run in a container built on the OSGi module system?

In search of an answer to this question, two old ideas spring to mind. The first idea is an OSGi framework implementation without the module layer. The second idea is to provide virtual modules representing resources from outside the OSGi framework. Over the years, the first idea kept coming up mainly as a way to simplify migration to the service layer for existing applications. The second idea was mostly relevant in the context of bundles whose content is not managed by the framework. With the advent of JPMS it might be worthwhile to revisit the two together - not only as a transition model to the service layer or a resource-specific optimization, respectively, but as a way to bridge modules from the outside world into the OSGi framework. This insight was the impetus for the new RFP 196, tentatively called OSGi Connect.

What is OSGi Connect?

At the core, the idea is to have framework SPI hooks that allow a management agent to provide bundles whose content is not managed by the framework. More specifically, the idea is to introduce a new kind of bundle for which the framework does not expect to have access to byte entries in an archive, and instead allows another entity to manage entry access and, more importantly, bundle class loading. The ultimate goal is twofold: to reduce the need to modify the framework to support JPMS modules and other class path entries on the outside, and to represent these modules inside the framework as bundles that are still subject to the lifecycle and the service layer.

How does it work?

At present, the idea is to use the location URLs of bundles to give the SPI a chance to take over managing the entry access and class loading for that bundle. For each bundle install, the framework will ask the SPI whether it handles the given location. If so, the framework will delegate resource and class load requests to the SPI while still representing the result as a bundle inside the framework and managing its lifecycle as normal. This set-up makes it possible to represent JPMS modules and/or class path entries as bundles inside a framework, and because the provided class loader is used, services from the inside can be consumed on the outside as well.
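Since the actual SPI interfaces are not yet public, the sketch below is purely hypothetical: the names `ConnectFactory`, `ConnectModule`, and the `connect:` location scheme are all assumptions made for illustration, not the RFP 196 API. It only shows the shape of the delegation idea - a factory is consulted per install location and, when it claims a location, supplies the class loader and entry access for that bundle.

```java
import java.util.Optional;

// Hypothetical sketch only: ConnectFactory and ConnectModule are assumed
// names, not the actual (unpublished) RFP 196 SPI interfaces.
interface ConnectModule {
    // The framework would delegate class loading for the connected bundle here.
    ClassLoader getClassLoader();
    // Entry access (e.g. for getEntry/getResource) would be delegated as well.
    Optional<byte[]> getEntry(String path);
}

interface ConnectFactory {
    // Called per bundle install; a non-empty result means this factory
    // takes over content management for that location.
    Optional<ConnectModule> connect(String location);
}

// A trivial factory that claims locations with a custom scheme and reuses
// the system class loader, so outside classes resolve through one loader.
class ClassPathConnectFactory implements ConnectFactory {
    public Optional<ConnectModule> connect(String location) {
        if (!location.startsWith("connect:")) {
            return Optional.empty(); // let the framework handle it normally
        }
        return Optional.of(new ConnectModule() {
            public ClassLoader getClassLoader() {
                return ClassLoader.getSystemClassLoader();
            }
            public Optional<byte[]> getEntry(String path) {
                return Optional.empty(); // content lives outside the framework
            }
        });
    }
}
```

The point of the sketch is the division of labor: the framework keeps the bundle representation and lifecycle, while the factory owns the bytes and the loader.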

What does it look like?

In the end, the result is an OSGi framework where one can install a bundle from a location string recognized and understood by the SPI implementation, which makes the actual content and classes available. Since the bundle is otherwise still represented as normal, its bundle activator will be called and Declarative Services will be handled. In addition, it will be possible to have it wired up correctly, i.e., other bundles will be able to wire to it and can introspect the wiring like normal. The resulting hybrid solution hopefully will ease the tension between using OSGi or JPMS for applications. Furthermore, as this approach makes it possible to have all bundle classes loaded by the same class loader (assuming that side-by-side packages are not required), it might be useful in the context of AOT compilation, where it is typically not desirable to have user-defined class loaders in the mix.

What are the limitations?

As so often in life, there is no free lunch, and trying to make something look like a bundle that isn't one is no exception. The OSGi framework will not be able to magically make things dynamically updatable, nor can it help in making sure something obeys the OSGi modularity rules. That said, the actual limitations aside from dynamism and module layer verification are surprisingly few, with maybe the biggest being that the class loader used by the SPI does not necessarily implement the BundleReference interface, plus some nuances around handling resource URLs.

When will it be available?

RFP 196 has been approved recently and is in the RFC phase. The expert group has started to define the SPI interfaces, and prototype work has shown that they can be used successfully to bridge JPMS modules and class path entries into Eclipse Equinox and Apache Felix. While the actual interfaces are not yet public, we expect some early experiments to be available soon.

Tuesday, September 17, 2019

Type Safe Eventing - Teaching an Old Spec New Tricks

The rapid growth in the number of connected devices in homes, factories, cities, and everywhere else embracing the IoT revolution has driven a huge increase in the volume and rate of data that systems are expected to handle. Properly managed, the use of "events" can provide a flexible, scalable way to deal with these large data volumes.

Doesn't OSGi Already Support Eventing?

Yes! Event-based applications have long been supported by OSGi using the Event Admin Specification. This provides a broker-based mechanism for distributing event data to whiteboard recipients, either synchronously or asynchronously.

So if we already have an Eventing Specification, why do we need another?

While the current Event Admin does offer a solution for Event Based applications, there are some ways in which it is a little limited:
  • The Event Admin API is based around Maps with opaque String keys and Object values
  • There is no way to monitor the flow of events through Event Admin
  • There is no way to determine whether an event was actually received by anyone
These limitations can make it challenging to use Event Admin, and also lead to a lot of boilerplate code. For example, if you've ever used Event Admin you'll know that you need to be very defensive when trying to consume an event - you may expect the value for the "price" key to be a Double, but it's easy for someone to accidentally supply a Float or a String!
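The defensive boilerplate looks roughly like this. In the sketch below a plain `Map` stands in for the Event Admin event properties so the code compiles without an OSGi framework, and the `extractPrice` helper is an illustration, not part of any API:

```java
import java.util.Map;

// Sketch of the defensive boilerplate Event Admin forces on consumers.
// A plain Map stands in here for org.osgi.service.event.Event properties.
class PriceEventHandler {
    // Returns the price if present and well-typed, or a fallback otherwise.
    static double extractPrice(Map<String, Object> properties, double fallback) {
        Object value = properties.get("price"); // hope the key is spelled right
        if (value instanceof Number) { // Double, but maybe a Float or Integer...
            return ((Number) value).doubleValue();
        }
        if (value instanceof String) { // ...or even a String
            try {
                return Double.parseDouble((String) value);
            } catch (NumberFormatException e) {
                return fallback;
            }
        }
        return fallback; // missing key, or some type we never anticipated
    }
}
```

Every consumer of every topic ends up repeating some variant of these checks, because nothing in the API guarantees the shape of the map.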

What's being proposed?

The primary enhancement being proposed is to allow the event data to be sent in a type-safe way - but what does this actually mean? Well, at the moment all the data that goes into an event is stored as a set of String keys which map to Object values, and the only way for an event consumer to know what keys and associated value types to expect is the event topic. This means that my event consumer relies on an implied contract where the topic defines the schema of the event data.

The really yucky part of this isn't that the schema is implied, it's that it can't easily be validated and enforced. If I write an event consumer, it has to rummage around inside the map (hopefully using the correct keys) and assume that the associated values will match the types I expect.

So far Type Safe Eventing is only an RFP, so there isn't any implementation to discuss; however, you can imagine how much nicer it would be to use a type-safe data object like an OSGi DTO. Rather than receiving an opaque map, an Event Handler can receive a data object which matches the implied contract of the topic and formalizes the key names and value types. Instead of checking your code for typos in the keys, the Java compiler will guarantee that the key exists and that the value is of the right type!
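Since there is no implementation yet, the following is only a sketch of how such a typed handler might look; the `TypedEventHandler` interface and the `PriceEvent` DTO are invented for illustration and are not the RFP API:

```java
// Hypothetical sketch of type-safe eventing: TypedEventHandler and the
// PriceEvent DTO are illustrations, not the actual (in-progress) RFP API.
class PriceEvent {
    public String symbol;
    public double price; // the compiler now guarantees the type
}

interface TypedEventHandler<T> {
    void notify(String topic, T event);
}

// No map rummaging, no instanceof checks, no typo-prone String keys:
// the topic's implied contract is now an explicit Java type.
class PriceLogger implements TypedEventHandler<PriceEvent> {
    String last = "";

    public void notify(String topic, PriceEvent event) {
        last = topic + ": " + event.symbol + "=" + event.price;
    }
}
```

A typo in a field name, or a `Float` where a `double` belongs, now fails at compile time instead of surfacing as a `ClassCastException` in production.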

Further proposed enhancements include:
  • Monitoring the flow of events sent via the broker - at the moment the only way to determine what events are flowing is to add your own Event Handler; however, it would be much simpler (and less invasive) if the broker could provide a "monitor" view of the data flowing.
  • Handlers of last resort - the current Event Admin gives no feedback if an event is not handled by anyone. This can lead to problems, especially when debugging, as the data flow seems to "stop" without any obvious reason. Adding the equivalent of a "Dead Letter Queue" would help in this situation, and it could enable other more advanced use cases.

What's Next?

This RFP initially came about as a result of requirements gathered in the development of the BRAIN-IoT Horizon 2020 project, an event-based IoT Platform. The RFP is nearly complete, but it's not too late to supply more requirements if you have any. Soon this specification will turn into an RFC and the implementation work can begin. You can take a look at it at Github:

Tuesday, September 10, 2019

Messaging Comes Into OSGi R8

Distributed communication plays an important role in today’s business applications. Whether we are dealing with IoT, cloud, or microservice infrastructures, components and services need to talk to each other.

The variety of available products that can handle asynchronous communication is large. We all know messaging systems like Kafka, RabbitMQ, MQTT, JMS, Websockets, just to name a few. Some of them are broker based, some of them are not.

In OSGi we all know the Event Admin specification, which can handle synchronous and asynchronous messaging within a framework instance. The new Messaging specification aims to handle situations in which you want to talk to messaging systems outside the OSGi world.

The idea is to have a uniform API that enables messaging independently of the implementation behind it. If you need to talk to a third-party application using JMS and, at the same time, receive MQTT messages from IoT devices, you can use the same messaging facade without deep knowledge of the underlying protocol-specific client API.

Why a common interface?

If we take a look at the messaging systems named above, each of them has its own specialties. Each has, for example, its own naming convention for channels, which are sometimes called topics and/or queues. A common API would therefore provide common ground when we talk about messaging.

There are also common features in those systems that are usually available in most of the implementations, like the message transport concepts “Send-and-Forget” (a.k.a. “Fire-and-Forget”) or “Send-and-Forward”. These patterns concern guaranteeing successful delivery and receipt of messages.

Typical “point-to-point”, “publish-subscribe”, and “reply-to” communication styles are also common behaviors of messaging systems.

When sending messages, most of the systems use byte data under the hood. Besides that, some of the products have their own message objects to configure the transport of a message, while others use maps and properties for that. So we end up with different implementations, for different protocols, of essentially the same thing. A common API can ease the use of different messaging protocols without requiring deep knowledge of the implementations.

An intention of the Messaging specification is also to allow all the features described above to be used in the typical OSGi manner. Having several implementations hidden behind a common API makes it easier to substitute implementations and allows our application to talk over different protocols.
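To make the facade idea concrete, here is a minimal sketch of what such a protocol-neutral API might look like. The names `MessagingService` and `Message` are assumptions for illustration only (the specification is still an RFP), and the in-memory implementation merely stands in for a JMS- or MQTT-backed one:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch: MessagingService and Message are assumed names,
// not the actual in-progress OSGi Messaging API.
interface Message {
    ByteBuffer payload(); // most systems move byte data under the hood
    String channel();     // "channel" abstracts over topics and queues
}

interface MessagingService {
    void publish(String channel, ByteBuffer payload);
    void subscribe(String channel, Consumer<Message> handler);
}

// In-memory stand-in for a protocol-specific (JMS, MQTT, ...) implementation.
class InMemoryMessaging implements MessagingService {
    private final Map<String, List<Consumer<Message>>> subscribers = new HashMap<>();

    public void publish(String channel, ByteBuffer payload) {
        Message m = new Message() {
            public ByteBuffer payload() { return payload; }
            public String channel() { return channel; }
        };
        subscribers.getOrDefault(channel, List.of()).forEach(h -> h.accept(m));
    }

    public void subscribe(String channel, Consumer<Message> handler) {
        subscribers.computeIfAbsent(channel, c -> new ArrayList<>()).add(handler);
    }
}
```

Application code written against the facade would stay the same whether the service registered in the framework is backed by an in-memory bus, a JMS broker, or an MQTT client.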

Asynchronous Programming Model

In messaging we do not really know when the next message will arrive. Also, the frequency of messages is unknown. We can have broker-based topologies or peer-to-peer based communication, where participants can shut down. Everything can happen at any time. In a messaging system, we have to handle those situations.

Luckily, there are already specifications available to deal with an asynchronous programming model. PushStreams and Promises are useful tools in that context. PushStreams in particular provide the flexibility to scale workers independently of a protocol-client-specific implementation. Promises can also be useful for a “reply-to” call, which is like an asynchronous call to a remote recipient that returns the answer, also asynchronously.

This last use case would make it possible to create a Remote Service Admin implementation based upon this specification and let remote services communicate in a distributed way. Using the new OSGi Messaging together with PushStreams enables you to easily bridge between the Event Admin and an external messaging system - you would get remote events.
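The reply-to pattern can be sketched as an asynchronous request/response pair. In the sketch below, `CompletableFuture` from the JDK stands in for the OSGi Promise so the example runs without the OSGi utility bundles, and the `replyTo`/`onRequest` method names are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Hypothetical reply-to sketch: CompletableFuture stands in for the OSGi
// Promise, and the method names are assumptions, not the actual API.
class ReplyToMessaging {
    private final Map<String, Function<String, String>> responders = new HashMap<>();

    // A (notionally remote) participant registers a responder for a channel.
    void onRequest(String channel, Function<String, String> responder) {
        responders.put(channel, responder);
    }

    // The caller sends a request and receives the answer asynchronously,
    // without blocking while the responder does its work.
    CompletableFuture<String> replyTo(String channel, String request) {
        Function<String, String> responder = responders.get(channel);
        if (responder == null) {
            return CompletableFuture.failedFuture(
                new IllegalStateException("no responder on " + channel));
        }
        return CompletableFuture.supplyAsync(() -> responder.apply(request));
    }
}
```

With a Promise in place of the `CompletableFuture`, this request/response shape is exactly what a Remote Service Admin implementation would need to carry a service call and its result over a messaging system.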

One Size Fits All

This specification is not intended to provide a full-featured “one-size-fits-all” solution. As the requirements define, the idea is to provide the basic capabilities of commonly used messaging concepts and patterns.

Each product on the market has its own ideas and its own features that make it unique. It is not possible to bring all their capabilities into one API. But the extensibility of OSGi will, as we all know, always allow vendor-specific features if necessary.

The RFP for this specification will now turn into an RFC and the implementation work can begin. You can take a look at it at Github: