Monday, March 23, 2020

OSGi Core R8 Specification Draft Available - Get Connected

The OSGi Core Expert Group has released a draft of the upcoming OSGi Core Release 8 specification. The draft includes two new specifications for the Core Framework as well as a number of smaller improvements.
  • OSGi Connect - Provides a mechanism to connect bundles in the Framework with content managed outside of the Framework itself
  • OSGi Condition Service - Provides a mechanism to signal that a condition is satisfied at runtime
For this post, I will focus on the OSGi Connect specification included in chapter 60 of the draft specification. Karl Pauls blogged about OSGi Connect last year, and now I would like to describe the progress made on the draft specification since that post.

A Bit of History

Over the last 20 years, the OSGi Framework has provided a solid foundation to develop modular software.  This foundation is built upon the various layers provided by the Core OSGi Framework itself.

At the bottom of the foundation is the module layer.  This layer provides the rules for how modules (or bundles) can share or hide the packages they include in their own bundle JAR.  The Framework provides the class loader implementation that enforces the rules of the module layer.  This includes a rich capabilities and requirements model and a resolver that wires up the requirements to the available capabilities in the Framework.  Once a bundle is installed and resolved, it is then able to participate in the layers that are built on top of the module layer.
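
For illustration, the module layer rules are expressed through bundle manifest headers like the following (the bundle and package names are hypothetical):

Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.greeter
Bundle-Version: 1.0.0
Export-Package: com.example.greeter.api;version="1.0"
Import-Package: org.osgi.framework;version="[1.8,2.0)"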

The lifecycle layer provides control over the activation of the bundles resolved in the Framework.  Activation provides an entry point for each bundle, allowing it to interact with the Framework and other bundles installed in the Framework.  A bundle may include a bundle activator, which allows it to execute code when it is activated.  The activation concept also allows more powerful runtimes to be built on top that can introspect the bundle capabilities and provide a mechanism for enabling the capabilities of the bundle at runtime.  This concept is called the extender pattern.  Two good examples of the extender pattern are the OSGi Declarative Services specification and the CDI Integration specification.  These two examples provide powerful dependency injection runtimes for developers.
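
For example, a minimal bundle activator looks like this (the class and package names are hypothetical):

package com.example.greeter;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    public void start(BundleContext context) {
        // Called when the bundle is activated; the context gives access
        // to the Framework, other bundles, and the service registry.
        System.out.println("Greeter bundle started");
    }

    public void stop(BundleContext context) {
        // Called when the bundle is stopped; release resources here.
        System.out.println("Greeter bundle stopped");
    }
}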

The service layer provides a dynamic, concise and consistent programming model for developers, simplifying the development and deployment of services by de-coupling the service's specification from its implementations. This model allows developers to bind to services using only their interface specifications. The selection of a specific implementation, optimized for a specific need or from a specific vendor, can thus be deferred to runtime. The service layer provides a common layer which allows things like declarative service components and CDI components to have dependencies on services available in the service registry.  This is powerful because it allows the components from declarative services and CDI to interact with each other through a shared layer without needing to know the details of each other's runtimes.
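
As a sketch, a provider and a consumer might look like this with Declarative Services, where the Greeter interface and both classes are hypothetical:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// In GreeterImpl.java: the provider, registered by DS under the
// hypothetical Greeter interface it implements.
@Component
public class GreeterImpl implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// In GreeterClient.java: the consumer, bound only to the interface;
// the Framework selects which implementation is injected.
@Component
public class GreeterClient {
    @Reference
    Greeter greeter;
}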

The combination of the activation model with the service model in OSGi provides for an unparalleled platform for developing loosely coupled components that are highly configurable at runtime. In order to use this powerful tool, developers must be able to live within the confines of the OSGi Framework module layer. In some scenarios, it is challenging to live within the OSGi module layer as it is defined today. The following are a few examples:
  • Modules built into a jlink image. The jlink tool was introduced in Java 9 along with the Java Platform Module System (JPMS) and provides a way to “right size” the JVM along with a set of application modules to provide an image that only includes what is required by an application’s modules. At runtime Java modules are typically loaded by class loaders provided by the JVM.  In the case of a jlink image all the modules are loaded by a single class loader.  This limits the ability to load bundle content out of the modules included in a jlink image.  Similar limitations exist with modules provided on the module path.
  • Applications natively compiled using something like Graal Substrate.  With native-image compilation not only is it impossible to have a custom class loader, the classes are not really loaded at runtime at all.  Instead, they are compiled directly into the native-image executable. This limits the ability to load bundles out of content compiled into the native-image.
  • Bundles included on the class path or, more generally, loaded by some form of URLClassLoader. The concept of the Java class path has been around since the early days of Java and many of today's popular frameworks take advantage of the class path environment.  For example, the Spring Boot loader mimics the class path when a Spring Boot application is packaged as an uber JAR (a JAR with many embedded JARs).  It has been challenging to integrate OSGi technologies into such environments.
  • Other environments that compile Java into alternative deployment artifacts, such as Android, limit the ability of the framework to control the actual installation and deployment of new bundles.

The Idea to Connect

It would be helpful to allow content that lives outside of the module layer to participate in the lifecycle and service layers of OSGi. The new OSGi Connect specification introduces a mechanism that allows content managed outside of the Framework's control to be represented as (or connected to) a bundle installed in the Framework.  Because the content is managed outside of the Framework itself, it may not follow all the rules of the OSGi module layer. For example, multiple bundles connected to outside content may be loaded by the same class loader, and the class loader loading the content may not provide the same isolation as the Framework-managed class loaders. With Connect, that content now has the ability to participate in the activation model as well as the service layer of the Framework.

The Connect specification introduces the concept of a module connector. The module connector is called by the framework when the framework needs to access the content of a bundle. The content of a bundle includes the following:
  • A list of entries contained in the bundle and access to read from them.  For example, a declarative service component XML file, the bundle manifest, or any other resources included in the bundle.
  • An optional map of bundle manifest headers. Typically bundle manifest headers are specified in the bundle’s META-INF/MANIFEST.MF entry, but for connect content the headers may come from an alternate source.
  • An optional class loader for the bundle. If a class loader is provided then the framework must use it for the bundle and must not create a framework managed class loader.
In order to install connect bundles, the Framework must be created with a module connector instance (defined by the interface org.osgi.framework.connect.ModuleConnector). The module connector hooks into the initialization and activation of the Framework, allowing it to interact with the Framework lifecycle and service layers. More importantly, the module connector hooks into the installation of bundles in the Framework.  When a bundle is installed into the Framework, a location string and an optional input stream to the bundle content are provided by the installer.  When no input stream is provided, the Framework must determine what the content of the bundle is.  Typically the Framework implementation will attempt to convert the location string into a URL and load the bundle content from there.

When a module connector is used, the Framework must first ask the module connector if it can provide content for the bundle location.  If the module connector can provide content, it supplies a connect module to the Framework (defined by the interface org.osgi.framework.connect.ConnectModule). A connect module is then associated with the connect bundle installed in the Framework.  For each revision of the connect bundle, the connect module provides connect content to the Framework (defined by the interface org.osgi.framework.connect.ConnectContent). A connect content provides access to read entries out of the content, may provide an optional class loader for the content, and may provide the bundle manifest headers for the content.
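
As a rough sketch, a module connector that claims a single location might look like the following. The method signatures follow the draft interfaces described in this post and may still change before the final release:

import java.io.File;
import java.util.Map;
import java.util.Optional;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.connect.ConnectContent;
import org.osgi.framework.connect.ConnectModule;
import org.osgi.framework.connect.ModuleConnector;

public class SingleLocationConnector implements ModuleConnector {
    private final String location;
    private final ConnectContent content;

    public SingleLocationConnector(String location, ConnectContent content) {
        this.location = location;
        this.content = content;
    }

    public void initialize(File storage, Map<String, String> configuration) {
        // Called when the Framework initializes; no state needed here.
    }

    public Optional<ConnectModule> connect(String bundleLocation) {
        // Claim only our location; all other installs proceed as normal.
        if (location.equals(bundleLocation)) {
            ConnectModule module = () -> content;
            return Optional.of(module);
        }
        return Optional.empty();
    }

    public Optional<BundleActivator> newBundleActivator() {
        // No extra activation logic for the module connector itself.
        return Optional.empty();
    }
}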

To create a new Framework that uses a module connector, a new framework factory has been introduced with the interface org.osgi.framework.connect.ConnectFrameworkFactory.  A Framework implementation that supports the Connect specification provides a ConnectFrameworkFactory just like the Framework provides the org.osgi.framework.launch.FrameworkFactory.  A launcher that is looking for a factory to create a Framework instance can use the Java ServiceLoader to load a ConnectFrameworkFactory implementation.  This factory can then be used to create a new Framework instance that uses a module connector instance.
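
A launcher using that pattern could then look roughly like this, assuming a module connector instance like the sketch above:

import java.util.HashMap;
import java.util.ServiceLoader;

import org.osgi.framework.connect.ConnectFrameworkFactory;
import org.osgi.framework.connect.ModuleConnector;
import org.osgi.framework.launch.Framework;

public class ConnectLauncher {
    public static void main(String[] args) throws Exception {
        ModuleConnector connector = createConnector();

        // Discover a connect-capable Framework implementation
        ConnectFrameworkFactory factory = ServiceLoader
                .load(ConnectFrameworkFactory.class).iterator().next();

        Framework framework = factory.newFramework(new HashMap<>(), connector);
        framework.init();
        framework.start();
    }

    static ModuleConnector createConnector() {
        // Hypothetical helper; returns whatever connector the launcher uses,
        // for example the SingleLocationConnector sketched earlier.
        throw new UnsupportedOperationException("left as an exercise");
    }
}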

Seeing it in Action

While developing the Connect specification, Karl Pauls (Apache Felix project lead) and I (Tom Watson - Eclipse Equinox project lead) have been busy implementing it in our respective OSGi Framework implementations. I have also been working on a project called Atomos that I used as a proof of concept to test the ability of the Connect specification to connect bundles to content from different environments, such as a jlink image, Graal Substrate, a Spring Boot uber JAR and an Android dexified JAR (still a work in progress).

With the Connect specification entering the draft phase, Karl and I thought it would be a good idea to contribute my Atomos project to the Apache Felix project.  This has been done and it is now available in the GitHub repository https://github.com/apache/felix-atomos

Among other things, the Atomos project provides a runtime with a module connector and a launcher that can be used to easily launch a framework and a set of bundles contained on the class path or module path.  For example, to launch a framework with the Apache Felix Gogo console you could have the following directory containing the necessary JARs:

bundles/
bundles/atomos.osgi.framework-0.0.1-SNAPSHOT.jar
bundles/jline-3.14.0.jar
bundles/org.apache.felix.atomos.runtime-0.0.1-SNAPSHOT.jar
bundles/org.apache.felix.gogo.command-1.1.0.jar
bundles/org.apache.felix.gogo.jline-1.1.0.jar
bundles/org.apache.felix.gogo.runtime-1.1.2.jar
bundles/org.eclipse.osgi-3.16.0.tjwatson_osgiConnect11.jar

The org.apache.felix.atomos.runtime JAR is the Atomos runtime snapshot built from the felix-atomos GitHub repo.  It contains the Atomos module connector implementation and the AtomosLauncher class, which discovers the framework implementation and launches it with the Atomos module connector. The org.eclipse.osgi JAR is a snapshot of the Equinox framework that implements the Connect specification. The atomos.osgi.framework JAR is a Java module that acts as a facade for the Framework implementation so that the Atomos runtime module can require it instead of directly requiring the org.eclipse.osgi module.  This allows Atomos to work with other Framework implementations, such as Felix.  The atomos.osgi.framework JAR is only required if you are launching from the module path.

To launch Atomos with the Gogo console using the class path, the following Java command can be used:

java -cp "bundles/*" org.apache.felix.atomos.launch.AtomosLauncher

To launch using the module path instead the following can be used:

java -p bundles -m org.apache.felix.atomos.runtime

Both will give you a Gogo console with the bundles, the Framework implementation, and Atomos all loaded from the same class loader.  The Atomos runtime supports Java 8 when using the class path, and class path mode can also be used with Java 11. If you are using Java 11 or higher, you will notice that a number of other bundles are installed from the modules included in the JVM.  Atomos discovers all the modules that were loaded and maps each of them to an OSGi connect bundle.  This includes not only the modules from the specified module path, but also the modules from the boot layer of the JVM, such as the java.base module.  For example, if you run the Gogo command "lb" you will see something like the following:

g! lb | grep java.base
   38|Active     |    1|java.base (11.0.6)|11.0.6

Atomos reads the module descriptors and generates an equivalent OSGi bundle manifest for each module so that it can be represented as a connected bundle in the Framework.
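
As an illustration, the generated headers for the java.base module might look something like the following (the exact headers Atomos generates may differ):

Bundle-SymbolicName: java.base
Bundle-Version: 11.0.6
Export-Package: java.lang;version="11.0.6", java.util;version="11.0.6"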

At this point, you have a fully functional OSGi Framework instance.  You can even install other bundles dynamically.  Any additional bundles that you install will not be connect bundles because they were not included on the original class path or module path at launch.  That means they will have the typical Framework-managed class loader and all the expected behaviors of living in the OSGi module layer.

The Atomos project also has a number of examples that can be found at https://github.com/apache/felix-atomos/tree/master/atomos.examples. These include a jlink example, a Spring loader example and a few Graal Substrate examples.  The Spring loader and Substrate examples all include a version of the Apache Felix WebConsole, and the Substrate examples also include the Felix SCR (Declarative Services implementation) and a set of test bundles that have declarative service components.  On my laptop the Substrate examples launch in an impressive 40 milliseconds.

We are also working on a Maven plugin in Atomos to make the configuration of a Graal Substrate compilation easier and automatic.  Also in the works is the ability to produce something that can easily be used on Android.

I am excited about the upcoming OSGi Core R8 specification release with the Connect specification.  I think this will help enable the use of OSGi technologies in environments where that was not previously easy, and in some cases not possible.  Now go download the draft specification and read more details about the new Connect specification.

Tuesday, March 17, 2020

OSGi Alliance IoT Vision

The OSGi Alliance started more than 20 years ago with the mission to develop specifications for connected homes and buildings – something we now consider part of the Internet of Things. Over the last couple of months, we have worked on an update of our IoT vision that describes challenges and requirements, and how OSGi addresses these in the chaotic IoT standards landscape. The challenges and requirements are not necessarily all new, but the ever-growing IoT landscape now requires standards more than ever to ensure that IoT solutions are economically and technically sustainable, maintainable, agile and evolvable over many decades. The OSGi Alliance's specifications for standardized software modularity and life-cycle management, connectivity of IoT devices, device management and software provisioning have been industry proven for many years, with millions of deployments. The IoT Expert Group will continue to work on new specifications such as interoperability with oneM2M and support for real-time environments.

Please review the OSGi Alliance IoT vision and give us your feedback. Should you feel that we are on the right track, please join us to help make sustainable IoT solutions a reality.

Wednesday, January 22, 2020

To Embed or Not To Embed Your API Packages

For many years the OSGi Alliance, like other Java Standards organisations, has been creating APIs as part of its specifications. Multiple communities have then been involved in providing implementations of those APIs and specifications (see https://en.wikipedia.org/wiki/OSGi_Specification_Implementations).

In Java, not just OSGi, to use a specification you need both the API and the code which provides the API implementation. As a convenience many OSGi bundles implementing an API also provide the API package(s) from the bundle via ‘Substitutable Exports’ (see https://www.osgi.org/developer/white-papers/semantic-versioning/exporter-policy/). This means that the API package is both exported as well as imported by the implementation bundle. Because the OSGi framework controls the classloading on the package level, when multiple bundles export the API package(s), the OSGi framework can decide which exporting bundle ultimately provides the API package(s) to each bundle importing the API, that is, to each API consumer. This is not possible in “vanilla” Java, where the first API package(s) found on the classpath will be used, potentially causing problems if multiple versions of the API are present.
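
In manifest terms, a substitutable export pairs an Export-Package with a matching Import-Package for the same package (the package name and versions below are illustrative):

Export-Package: org.example.api;version="1.2"
Import-Package: org.example.api;version="[1.2,2.0)"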

The use of substitutable exports had historical advantages because it made the job of a management agent easier when trying to find a set of functional bundles to install into an OSGi framework. By providing the API packages that they implement, implementations make themselves easier to deploy. The management agent (or a human) does not also have to find a bundle providing the API to make an implementation resolve. As the OSGi specifications continued to evolve, the process of searching for dependencies was replaced by a solution that uses the standard Resolver and Repository services. Due to the limited metadata available, the use of substitutable exports also had advantages here. If a client bundle imports an API package and that API package is provided by another bundle which also provides an implementation of the API, then a resolver will automatically pull in the implementation bundle along with the API when resolving the set of bundles to install.  If the API is provided by a bundle separate from the implementation, then the resolver needed some proprietary way to discover the requirement for the implementation, and then to discover the bundles that provide the implementation.  In modern OSGi the "resolution problem" can now be completely solved by using the generic capability and requirements functionality provided by the OSGi Core specification.  Not only can bundles declare requirements on the API using the Import-Package header, but they can also declare requirements on an implementation using the Require-Capability header.  Similarly, bundles can declare that they provide an implementation using the Provide-Capability header.
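
For example, using the standard osgi.implementation namespace, an implementation bundle can advertise what it implements (the values shown are illustrative):

Provide-Capability: osgi.implementation;
 osgi.implementation="osgi.http";version:Version="1.1"

while a client bundle that needs an implementation, not just the API, can declare:

Require-Capability: osgi.implementation;
 filter:="(&(osgi.implementation=osgi.http)(version>=1.1)(!(version>=2.0)))"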

In the past there was another benefit to substitutably exporting your API: getting the API package(s) for individual OSGi specifications used to be tricky. The main Jar file that had them available was typically the osgi-cmpn.jar, which contained all the API packages for all the Compendium specs, and this Jar was certainly not designed or intended to be used at runtime.  Fortunately, this last problem has been resolved since OSGi Release 6, as individual API jars are now available in Maven Central for each specification and these jars can be used at runtime.

So while the approach of embedding API packages has served the OSGi community well for many years, it requires a runtime supporting type visibility encapsulation, such as an OSGi Framework, that is in control of the class loading to select the appropriate package provider at runtime. Recently, work has started at the OSGi Alliance to support operating in environments where the class loader may not be provided by an OSGi Framework, or where there is no class loader at all. See RFC 243 OSGi Connect and RFC 245 Resource Encoding For Java Modules. This could be when running with the Java Platform Module System (JPMS), running as part of another framework such as Spring Boot, or even in an AOT-compiled environment such as SubstrateVM. Many existing OSGi bundles can work in these environments, providing the dynamic OSGi service model and allowing users to continue using popular OSGi technologies such as Config Admin, Declarative Services, the Converter and many others. However, the embedding of API packages defined in specifications becomes a problem if there are multiple bundles that contain and provide the packages, since in these contexts there is no OSGi class loader to select the proper packages and hide, through type visibility encapsulation, the unused copies of the packages.

Going Forward

Going forward, to support all environments for your bundles, you should consider not embedding API packages that are defined by OSGi specifications. The better approach is to declare them as dependencies using Import-Package manifest headers. Most build tools will do this automatically for you. At runtime, simply use the API runtime bundles provided by OSGi. If you do embed the API packages in an implementation bundle, be sure to embed only the API packages you directly implement.  In almost all cases you should not embed APIs that you use but do not implement.  For example, if you use the OSGi Log Service you should avoid embedding the org.osgi.service.log package unless your bundle is actually providing the implementation of the Log Service.
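
In manifest terms, a bundle that uses, but does not implement, the Log Service would simply import the API (the version range is illustrative):

Import-Package: org.osgi.service.log;version="[1.4,2.0)"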

Friday, September 27, 2019

OSGi Connect Revisited

The Java 9 release introduced the Java Platform Module System (JPMS) - finally modularizing the class libraries provided by the JVM. In addition, JPMS can be used to modularize applications, enabling developers to split their applications into modules. The OSGi specification has been providing a module system for Java applications for over 20 years, which also allows developers to modularize their applications into modules (a.k.a. bundles). Consequently, the question arises: what happens when developers want to use Java modules to compose applications which run in a container that is built using the OSGi module system?

In search of an answer to this question, two old ideas spring to mind.  The first idea is an OSGi framework implementation without the module layer. The second idea is to provide virtual modules representing resources from outside the OSGi framework. Over the years, the first idea kept coming up mainly as a way to simplify migration to the service layer for existing applications. The second idea was mostly relevant in the context of bundles whose content is not managed by the framework. With the advent of JPMS it might be worthwhile to revisit the two together, not only as a good transition model to the service layer or a resource-specific optimization, respectively, but as a way to bridge modules from the outside world into the OSGi framework. This insight was the impetus for the new RFP 196, tentatively called OSGi Connect.

What is OSGi Connect?

At the core, the idea is to have framework SPI hooks that allow a management agent to provide bundles whose content is not managed by the framework. More specifically, the idea is to introduce a new kind of bundle where the framework does not expect to have access to byte entries in an archive and instead allows another entity to manage entry access and, more importantly, bundle class loading. The ultimate goal is, on the one hand, to reduce the need to modify the framework to support JPMS modules and other class path entries on the outside and, on the other hand, to represent these modules inside the framework as bundles that are still subject to the lifecycle and the service layer.

How does it work?

At present, the idea is to use the location URLs of bundles to give the SPI a chance to take over managing the entry access and class loading for that bundle. For each bundle install, the framework will ask the SPI if it handles the given location. If so, it will delegate resource and class load requests to the SPI while still representing the bundle inside the framework and managing its lifecycle as normal. This set-up makes it possible to represent JPMS modules and/or class path entries as bundles inside a framework and, because it will use the provided class loader, services from the inside can be used on the outside as well.

What does it look like?

In the end, the result is an OSGi framework where one can install a bundle from a location string recognized and understood by the SPI implementation that will make the actual content and classes available. Since the bundle is otherwise still represented as normal, its bundle activator will be called and its declarative services will be handled. In addition, it will be possible to have it wired up correctly, i.e., other bundles will be able to wire to it and can introspect the wiring like normal. The resulting hybrid solution hopefully will ease the tension between using OSGi or JPMS for applications. Furthermore, as this approach makes it possible to have all bundle classes be loaded by the same class loader (assuming that side-by-side packages are not required), it might be useful in the context of AOT compilation, where it is typically not desirable to have user-defined class loaders in the mix.

What are the limitations?

As so often in life, there is no free lunch, and trying to make something look like a bundle that isn't is no exception. The OSGi framework will not be able to magically make things dynamically updatable, nor can it help in making sure something obeys the OSGi modularity rules. That said, aside from dynamism and module layer verification, the actual limitations are surprisingly few, with maybe the biggest being that the class loader used by the SPI does not necessarily implement the BundleReference interface, plus some nuances around handling resource URLs.

When will it be available?

RFP 196 has been approved recently and is in the RFC phase. The expert group has started to define the SPI interfaces and prototype work has started that shows that they can be used successfully to bridge JPMS modules and class path entries into Eclipse Equinox and Apache Felix. While the actual interfaces are not yet public, we expect some early experiments to be available soon.

Tuesday, September 17, 2019

Type Safe Eventing - Teaching an Old Spec New Tricks

The rapid growth in the number of connected devices in homes, factories, cities, and everywhere else embracing the IoT revolution has driven a huge increase in the volume and rate of data that systems are expected to handle. Properly managed, the use of "events" can provide a flexible, scalable way to deal with these large data volumes.

Doesn't OSGi Already Support Eventing?

Yes! Event-based applications have long been supported by OSGi using the Event Admin Specification. This provides a broker-based mechanism for distributing event data to whiteboard recipients, either synchronously or asynchronously.

So if we already have an Eventing Specification, why do we need another?

While the current Event Admin does offer a solution for Event Based applications, there are some ways in which it is a little limited:
  • The Event Admin API is based around Maps with opaque String keys and Object values
  • There is no way to monitor the flow of events through Event Admin
  • There is no way to determine whether an event was actually received by anyone
These limitations can make it challenging to use Event Admin, and also lead to a lot of boilerplate code. For example, if you've ever used Event Admin you'll know that you need to be very defensive when trying to consume an event - you may expect the value for the "price" key to be a Double, but it's easy for someone to accidentally supply a Float or a String!
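
A typical defensive Event Admin handler ends up looking something like this (the property name and handling policy are illustrative):

import org.osgi.service.event.Event;
import org.osgi.service.event.EventHandler;

public class PriceHandler implements EventHandler {
    public void handleEvent(Event event) {
        // Nothing guarantees the runtime type of the value, so every
        // consumer has to re-check it defensively.
        Object raw = event.getProperty("price");
        if (raw instanceof Double) {
            process((Double) raw);
        } else if (raw instanceof Number) {
            process(((Number) raw).doubleValue());
        }
        // else: drop or log the malformed event
    }

    private void process(double price) {
        // application logic goes here
    }
}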

What's being proposed?

The primary enhancement being proposed is to allow the event data to be sent in a type-safe way, but what does this actually mean? Well, at the moment all the data that goes into an event is stored as a set of String keys which map to Object values, and the only way for an event consumer to know what keys and associated value types to expect is based on the event topic. This means that my event consumer is relying on an implied contract where the topic defines the schema of the event data.

The really yucky part of this isn't that the schema is implied, it's that it isn't able to be validated and enforced very easily. If I write an event consumer then it has to rummage around inside the map (hopefully using the correct keys) and assume that the associated values will match the types I expect.

So far Type Safe Eventing is only an RFP, so there isn't any implementation to discuss; however, you can imagine how much nicer it would be to use a type-safe data object like an OSGi DTO. Rather than receiving an opaque map, an Event Handler can receive a data object which matches the implied contract of the topic and formalizes the key names and value types. Instead of checking your code for typos in the keys, the Java compiler will guarantee that the key exists and that the value is of the right type!
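
There is no API to show yet, so the following is purely a sketch of the direction; every name in it is hypothetical:

import org.osgi.dto.DTO;

// A hypothetical type-safe event payload in the style of an OSGi DTO;
// the public fields formalize the schema the topic used to imply.
public class PriceEvent extends DTO {
    public String symbol;
    public double price;
}

// A hypothetical typed handler; the compiler now enforces the schema.
interface TypedPriceHandler {
    void notify(String topic, PriceEvent event);
}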

Further proposed enhancements include:
  • Monitoring the flow of events sent via the broker - at the moment the only way to determine what events are flowing is to add your own Event Handler, however, it would be much simpler (and less invasive) if the broker could provide a "monitor" view of the data flowing.
  • Handlers of last resort - the current Event Admin gives no feedback if an event is not handled by anyone. This can lead to problems, especially when debugging, as the data flow seems to "stop" without any obvious reason. Adding the equivalent of a "Dead Letter Queue" would help in this situation, and it could enable other more advanced use cases.

What's Next?

This RFP initially came about as a result of requirements gathered in the development of the BRAIN-IoT Horizon 2020 project, an event-based IoT platform. The RFP is nearly complete, but it's not too late to supply more requirements if you have any. Soon this specification will turn into an RFC and the implementation work can begin. You can take a look at it on GitHub.

Tuesday, September 10, 2019

Messaging Comes Into OSGi R8

Distributed communication plays an important role in today's business applications. No matter whether we are dealing with IoT, cloud or microservice infrastructures, components and services need to talk to each other.

The variety of available products that can handle asynchronous communication is large. We all know messaging systems like Kafka, RabbitMQ, MQTT, JMS and WebSockets, just to name a few. Some of them are broker-based, some of them are not.

In OSGi we all know the Event Admin specification, which can handle synchronous and asynchronous messaging within a framework instance. The new Messaging specification aims to handle situations in which you want to talk with messaging systems outside the OSGi world.

The idea is to have a uniform API that enables messaging independently of the implementation behind it. If you need to talk to a third-party application using JMS and, at the same time, you need to receive MQTT messages from IoT devices, you can use the same messaging facade without needing deep knowledge of the underlying protocol-specific client API.
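
Since there is no published API yet, the following is only a sketch of the general shape such a facade might take; every name in it is hypothetical:

import org.osgi.util.promise.Promise;
import org.osgi.util.pushstream.PushStream;

// Hypothetical uniform messaging facade; the actual RFC may differ.
public interface SimpleMessaging {

    // Fire-and-forget style publish; the Promise resolves on delivery.
    Promise<Void> publish(String channel, byte[] payload);

    // Subscribe to a channel; messages arrive as an asynchronous stream.
    PushStream<byte[]> subscribe(String channel);
}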

Why a common interface?

If we take a look at the messaging systems named above, each of them has its own specialities. There is, for example, a different naming convention for channels, which are sometimes called topics or/and queues. So a common API would provide common ground when we talk about messaging.

There are also common features in those systems that are usually available in most of the implementations, like the concepts of transporting messages using "Send-and-Forget" (aka "Fire-and-Forget") or "Send-and-Forward". These patterns have to do with guaranteeing successful delivery and receipt of messages.

Common behaviors of messaging systems also include typical "point-to-point", "publish-subscribe" and "reply-to" communication.

When sending messages, most of the systems use byte data under the hood. Besides that, some of the products have their own message objects to configure the transport for a message, while others use maps and properties for that. So we end up with different implementations, for different protocols, of what is essentially the same thing. A common API can ease the use of different messaging protocols, without requiring deep knowledge of the implementations.

An intention of the Messaging specification is also to allow all the features described above to be used in the typical OSGi manner. Having several implementations hidden behind a common API makes it easier to substitute implementations and allows our application to talk over different protocols.

Asynchronous Programming Model

In messaging we do not really know when the next message will arrive. Also, the frequency of messages is unknown. We can have broker-based topologies or peer-to-peer based communication, where participants can shut down. Everything can happen at any time. In a messaging system, we have to handle those situations.

Luckily, there are already some specifications available to deal with an asynchronous programming model. The PushStream and Promise specifications are useful tools in that context. PushStreams in particular provide the flexibility to scale workers independently of a possible protocol-client specific implementation. Promises can also be useful when making a "reply-to" call. This is like an asynchronous call to a remote recipient that returns the answer, also in an asynchronous way.
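
Building on the hypothetical facade sketched above, a reply-to call might look roughly like this (the replyTo method is an assumption, not a defined API):

// Hypothetical reply-to usage built on Promises: send a request and
// handle the asynchronous answer when it arrives.
Promise<byte[]> answer = messaging.replyTo("weather/requests", requestBytes);
answer.onSuccess(reply -> System.out.println(new String(reply)));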

This last use case would make it possible to create a Remote Service Admin implementation based upon this specification and let remote services communicate in a distributed way. Using the new OSGi Messaging together with PushStreams enables you to easily bridge between the Event Admin and an external messaging system. You would get remote events.

One Size Fits All

This specification is not intended to provide a full-featured "one-size-fits-all" solution. As the requirements define, the idea is to provide the basic capabilities of commonly used messaging concepts and patterns.

Each product on the market has its own ideas and its own features that make it unique. It is not possible to bring all of their capabilities into one API. But the extensibility of OSGi, as we all know, will always allow vendor-specific features, if necessary.

The RFP for this specification will now turn into an RFC and the implementation work can begin. You can take a look at it on GitHub.

Monday, August 26, 2019

New in OSGi R8: Condition Service

OSGi R7 made developing applications with OSGi very convenient. It helps complex applications become easier to manage and allows for a structure every developer can keep track of. It is, furthermore, a great framework for avoiding unnecessary complexity, even though it cannot totally prevent it in large applications.

Using services, whether with DS and/or CDI, naturally leads to dependent services, which both models can handle conveniently and transparently. A developer can define mandatory services that need to be injected and/or filter for the kind of service they desire. This makes sure that services are available and activated only if their preconditions are met. The system is not without limits, though, and the Condition Service is a supplement for such situations.

What are the Use Cases?

Indirect Preconditions
As an example: the whiteboard is one of the most powerful concepts OSGi allows, but it can be a double-edged sword. At development time, it is great to write a whiteboard that works whether no participants or numerous participants are available. At startup, however, you will usually find yourself in a situation where you want a whiteboard to be available to you only once it is already populated with certain services.

Service Not Available
OSGi allows you to tell a component to become active only if certain required services are available. It is, however, not possible to have a service be active only as long as one of its required services is unavailable. Take the example of a web application that mainly depends on an external component. As long as this component is not available, you might want a registered servlet that responds to every request with "Service temporarily not available" but goes away the moment the component becomes available.

Configure a Component to activate only if service A and B are Available
Imagine a configurable service that provides access to billing providers like Credit, Debit and PayPal, where every individual billing provider is a service by itself. At the moment it is not possible to create such a service via ConfigAdmin that only becomes active if, e.g., the Credit and Debit providers are available, without writing a lot of OSGi-specific custom code in the component.

Condition Service to the Rescue!

What is a Condition?
In OSGi, a Condition is simply a component that registers the marker interface org.osgi.service.condition.Condition. DS and CDI will soon have an implicit mandatory reference on every component that can be directed against any condition you like. Thus a component will only become active if such a condition is available. The framework itself will conveniently provide a default TRUE Condition service registration, which DS and CDI use as the default. Thus no action is needed if you do not wish to utilize conditions.

How can I modify this Condition?

Via Annotation
OSGi will provide a convenient component property annotation to set your own target filter for the default condition.
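
As a sketch, such an annotation could be used like this (the annotation and property names here follow what eventually shipped with Declarative Services and may differ from the draft described in this post):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.SatisfyingConditionTarget;

// Only activates while a Condition service matching the filter is registered.
@Component
@SatisfyingConditionTarget("(osgi.condition.id=my.condition)")
public class GuardedComponent {
}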

Via ConfigAdmin
You can use the ConfigAdmin to modify the default condition after the fact by addressing it via the osgi.condition.target property and supplying a valid filter.

How Can I Create a Condition?
There are two main possibilities. A developer can simply register any service exposing the org.osgi.service.condition.Condition interface. Here you have everything in your own hands.
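
A minimal sketch of that first possibility, using the constant and instance names from the final specification (the draft names may have differed):

import java.util.Hashtable;

import org.osgi.framework.BundleContext;
import org.osgi.service.condition.Condition;

public class MyConditionRegistrar {
    static void register(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        // The identifier other components use to target this condition
        props.put(Condition.CONDITION_ID, "my.condition");
        // Condition.INSTANCE is the spec-provided no-op implementation
        context.registerService(Condition.class, Condition.INSTANCE, props);
    }
}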

The other possibility is the use of the ConditionFactory. This will be a configurable component that can be addressed via ConfigAdmin to register a condition if certain filters apply.

An example could look like the following Configurator JSON:
{
  ":configurator:resource-version": 1,
  "osgi.condition.factory~test": {
    "osgi.condition.identifier": "resulting.condition.id",
    "osgi.condition.properties.custom.condition.prop": "my.property",
    "osgi.condition.match.all": [
      "(&(objectClass=org.foo.Bar)(my.prop=foo))",
      "(my.prop=bar)"
    ],
    "osgi.condition.match.none": [
      "(&(objectClass=org.foo.Fizz)(my.prop=buzz))"
    ]
  }
}

This configuration defines three preconditions: both of the "osgi.condition.match.all" filters must match a service, and no service may be available that matches the "osgi.condition.match.none" filter. If this is the case, the ConditionFactory registers a condition service with the properties:

"osgi.condition.id" : "resulting.condition.id",
"custom.condition.prop" : "my.property"

The moment services come and go and violate the configured preconditions, the condition will be unregistered and all the dependent components will be deactivated.

Conclusion
Usually, OSGi bundles and services have no preset starting order, because the system can figure this out by itself. For quite some time now, this has been a contested and much-discussed issue in the community, because there are a couple of valid cases where the system needs hints and help to figure out the right starting order. The condition service will be a great step forward here, by providing the missing information to the system.