Tuesday, September 17, 2019

Type Safe Eventing - Teaching an Old Spec New Tricks

The rapid growth in the number of connected devices in homes, factories, cities, and everywhere else embracing the IoT revolution has driven a huge increase in the volume and rate of data that systems are expected to handle. Properly managed, the use of "events" can provide a flexible, scalable way to deal with these large data volumes.


Doesn't OSGi Already Support Eventing?

Yes! Event-based applications have long been supported by OSGi using the Event Admin Specification. This provides a broker-based mechanism for distributing event data to whiteboard recipients, either synchronously or asynchronously.

So if we already have an Eventing Specification, why do we need another?

While the current Event Admin does offer a solution for event-based applications, there are some ways in which it is a little limited:
  • The Event Admin API is based around Maps with opaque String keys and Object values
  • There is no way to monitor the flow of events through Event Admin
  • There is no way to determine whether an event was actually received by anyone
These limitations can make it challenging to use Event Admin, and also lead to a lot of boilerplate code. For example, if you've ever used Event Admin you'll know that you need to be very defensive when trying to consume an event - you may expect the value for the "price" key to be a Double, but it's easy for someone to accidentally supply a Float or a String!
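
To make that concrete, here is a minimal sketch of a defensive Event Admin handler (the topic name and the "price" key are just illustrative):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventConstants;
import org.osgi.service.event.EventHandler;

@Component(property = EventConstants.EVENT_TOPIC + "=com/acme/prices")
public class PriceEventHandler implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        // The value is just an Object - we have to check the type ourselves
        Object value = event.getProperty("price");
        if (value instanceof Double) {
            updatePrice((Double) value);
        } else if (value instanceof Number) {
            // Someone sent a Float, an Integer, ...
            updatePrice(((Number) value).doubleValue());
        } else if (value instanceof String) {
            try {
                updatePrice(Double.parseDouble((String) value));
            } catch (NumberFormatException nfe) {
                // Malformed event - silently dropped, and nobody will ever know
            }
        }
    }

    private void updatePrice(double price) {
        // business logic
    }
}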

What's being proposed?

The primary enhancement being proposed is to allow the event data to be sent in a type-safe way, but what does this actually mean? Well, at the moment all the data that goes into an event is stored as a set of String keys which map to Object values, and the only way for an event consumer to know what keys and associated value types to expect is based on the Event Topic. This means that my event consumer is relying on an implied contract where the topic defines the schema of the event data.

The really yucky part of this isn't that the schema is implied, it's that it can't easily be validated and enforced. If I write an event consumer then it has to rummage around inside the map (hopefully using the correct keys) and assume that the associated values will match the types I expect.

So far Type Safe Eventing is only an RFP, so there isn't any implementation to discuss; however, you can imagine how much nicer it would be to use a type-safe data object like an OSGi DTO. Rather than receiving an opaque map, an Event Handler can receive a data object which matches the implied contract of the topic and formalizes the key names and value types. Instead of checking your code for typos in the keys, the Java compiler will guarantee that the key exists and that the value is of the right type!
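
Since this is still an RFP there is no API to show, but a purely hypothetical sketch (all of the names below are invented for illustration) might look something like this:

// A DTO formalizing the implied contract of the "com/acme/prices" topic
public class PriceDTO extends org.osgi.dto.DTO {
    public String symbol;
    public double price;
}

// Hypothetical type-safe handler interface - the real API is still being defined
public interface TypedEventHandler<T> {
    void notify(String topic, T event);
}

// A handler receives the data object instead of an opaque map
@org.osgi.service.component.annotations.Component
public class TypeSafePriceHandler implements TypedEventHandler<PriceDTO> {

    @Override
    public void notify(String topic, PriceDTO event) {
        // No map lookups and no instanceof checks - the compiler has our back
        updatePrice(event.symbol, event.price);
    }

    private void updatePrice(String symbol, double price) {
        // business logic
    }
}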

Further proposed enhancements include:
  • Monitoring the flow of events sent via the broker - at the moment the only way to determine what events are flowing is to add your own Event Handler; however, it would be much simpler (and less invasive) if the broker could provide a "monitor" view of the data flowing (see the sketch after this list).
  • Handlers of last resort - the current Event Admin gives no feedback if an event is not handled by anyone. This can lead to problems, especially when debugging, as the data flow seems to "stop" without any obvious reason. Adding the equivalent of a "Dead Letter Queue" would help in this situation, and it could enable other more advanced use cases.
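
As a purely illustrative sketch of the first idea (the monitor API below is invented, since the RFP does not define one yet), such a broker could hand out the event flow as a PushStream:

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.util.pushstream.PushStream;

// Hypothetical monitoring service - invented for illustration only
interface EventMonitor {
    /** A live, read-only stream of every event passing through the broker. */
    PushStream<MonitoredEvent> monitorEvents();
}

// Hypothetical snapshot of a delivered event
class MonitoredEvent {
    public String topic;
    public java.util.Map<String, Object> data;
}

@Component
public class EventFlowLogger {

    @Reference
    private EventMonitor monitor;

    @Activate
    void activate() {
        // Observe the event flow without registering a competing Event Handler
        monitor.monitorEvents()
               .forEach(e -> System.out.println("Event on topic " + e.topic));
    }
}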

What's Next?

This RFP initially came about as a result of requirements gathered in the development of the BRAIN-IoT Horizon 2020 project, an event-based IoT platform. The RFP is nearly complete, but it's not too late to supply more requirements if you have any. Soon this specification will turn into an RFC and the implementation work can begin. You can take a look at it on GitHub:

Tuesday, September 10, 2019

Messaging Comes Into OSGi R8

Distributed communication plays an important role in today’s business applications. Whether we are dealing with IoT, cloud, or microservice infrastructures: components and services need to talk with each other.

The variety of available products that can handle asynchronous communication is large. We all know messaging systems and protocols like Kafka, RabbitMQ, MQTT, JMS, and WebSockets, just to name a few. Some of them are broker based, some of them are not.

In OSGi we all know the Event Admin specification, which can handle synchronous and asynchronous messaging within a framework instance. The new Messaging specification is aimed at situations in which you want to talk to messaging systems outside the OSGi world.

The idea is to have a uniform API that enables messaging independently of the underlying implementation. If you need to talk to a third-party application using JMS and, at the same time, receive MQTT messages from IoT devices, you can use the same messaging facade without deep knowledge of the protocol-specific client APIs underneath.
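
As an illustration of the idea only (this is not the proposed API, just a sketch), such a facade could look roughly like this:

import java.nio.ByteBuffer;
import org.osgi.util.promise.Promise;
import org.osgi.util.pushstream.PushStream;

// Purely illustrative facade - the real spec API may look quite different
public interface MessagingService {

    /** A message as it travels over the wire - most systems use raw bytes. */
    interface Message {
        ByteBuffer payload();
    }

    /** "Fire-and-Forget": publish a payload to a channel (topic or queue). */
    void publish(String channel, ByteBuffer payload);

    /** Subscribe to a channel and consume its messages as a PushStream. */
    PushStream<Message> subscribe(String channel);

    /** "Reply-to": send a request and receive the answer asynchronously. */
    Promise<Message> request(String channel, ByteBuffer payload);
}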

Why a common interface?

If we take a look at the messaging systems named above, each of them has its own specialties. Each has, for example, its own naming convention for channels, which are sometimes called topics and/or queues. So a common API would provide common ground when we talk about messaging.

There are also common features that are usually available in most of the implementations, like the concepts of transporting messages using "Send-and-Forget" (aka "Fire-and-Forget") or "Send-and-Forward". These patterns are about guaranteeing the successful delivery and receipt of messages.

Typical "point-to-point", "publish-subscribe", and "reply-to" communication are also common behaviors of messaging systems.

When sending messages, most of the systems use byte data under the hood. Besides that, some of the products have their own message objects to configure the transport for a message, while others use maps and properties for that. So we end up with different implementations for different protocols that all do essentially the same thing. A common API can ease the use of different messaging protocols without requiring deep knowledge of the implementations.

An intention of the Messaging specification is also to allow all the features described above to be used in the typical OSGi manner. Having several implementations hidden behind a common API makes it easier to substitute implementations and allows our applications to talk over different protocols.

Asynchronous Programming Model

In messaging we do not really know when the next message will arrive, and the frequency of messages is equally unknown. We can have broker-based topologies or peer-to-peer communication, where participants can shut down. Everything can happen at any time, and a messaging system has to handle those situations.

Luckily there are already some OSGi specifications available to deal with an asynchronous programming model: PushStreams and Promises are useful tools in that context. PushStreams in particular provide the flexibility to scale workers independently of any protocol-client-specific implementation. Promises are useful for "reply-to" calls, which are like asynchronous calls to a remote recipient that return the answer, also asynchronously.
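
Using the illustrative facade sketched earlier, consuming messages and making a "reply-to" call could look like this (again, a sketch rather than the spec API, with invented channel names):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component
public class SensorBridge {

    @Reference
    private MessagingService messaging; // the illustrative facade from above

    @Activate
    void activate() {
        // Messages arrive whenever they arrive; the PushStream pipeline can
        // buffer them and scale workers independently of the client library
        messaging.subscribe("sensors/temperature")
                 .map(m -> StandardCharsets.UTF_8.decode(m.payload()).toString())
                 .forEach(reading -> System.out.println("Reading: " + reading));

        // Reply-to: an asynchronous request whose answer arrives as a Promise
        messaging.request("devices/42/status",
                          ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)))
                 .onSuccess(reply -> System.out.println("Device answered"))
                 .onFailure(t -> System.out.println("No answer: " + t));
    }
}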

This last use case would make it possible to create a Remote Service Admin implementation based upon this specification and let remote services communicate in a distributed way. Using the new OSGi Messaging together with PushStreams also enables you to easily bridge between the Event Admin and an external messaging system, effectively giving you remote events.
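
A bridge from an external messaging system into the Event Admin could then be as small as the following sketch (still using the invented facade and channel names from above):

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.event.Event;
import org.osgi.service.event.EventAdmin;

@Component
public class RemoteEventBridge {

    @Reference
    private MessagingService messaging; // illustrative facade from above

    @Reference
    private EventAdmin eventAdmin;

    @Activate
    void activate() {
        // Re-post every external message as a local, asynchronous OSGi event
        messaging.subscribe("remote/events")
                 .forEach(m -> {
                     Map<String, Object> props = new HashMap<>();
                     props.put("payload",
                               StandardCharsets.UTF_8.decode(m.payload()).toString());
                     eventAdmin.postEvent(new Event("remote/events", props));
                 });
    }
}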

One Size Fits All

This specification is not intended to provide a full-featured “one-size-fits-all” solution. As the requirements define, the idea is to provide the basic capabilities of commonly used messaging concepts and patterns.

Each product on the market has its own ideas and features that make it unique. It is not possible to bring all their capabilities into one API. But, as we all know, the extensibility of OSGi will always allow vendor-specific features if necessary.

The RFP for this specification will now turn into an RFC and the implementation work can begin. You can take a look at it on GitHub:

Monday, August 26, 2019

New in OSGi R8: Condition Service

OSGi R7 made developing applications with OSGi very convenient. It helps complex applications become easier to manage and allows for a structure every developer can keep track of. It is, furthermore, a great framework for avoiding unnecessary complexity, even though it cannot totally prevent it in large applications.

Using services, either with DS and/or CDI, naturally leads to dependent services, which both models can handle conveniently and transparently. A developer can define mandatory services that need to be injected and/or filter for the kind of service they desire. This makes sure that services are available and activated only if their preconditions are met. The system is not without limits though, and the Condition Service is a supplement for such situations.

What are the Use Cases?

Indirect Preconditions
As an example: the whiteboard is one of the most powerful concepts OSGi allows, but it can be a double-edged sword. At development time, it is great to write a whiteboard that works whether no participants or numerous participants are available. At startup, however, you will often find yourself in a situation where you want a whiteboard to be available to you only once it has been populated with certain services.

Service Not Available
OSGi allows you to tell a component to become active only if certain required services are available. It is, however, not possible to have a service that is active only as long as another service is not available. Take the example of a web application that mainly depends on an external component. As long as this component is not available, you might want to have a registered servlet that responds to every request with “Service temporarily not available”, but goes away the moment the component becomes available.

Configure a Component to activate only if service A and B are Available
Imagine a configurable service that provides access to billing providers like Credit, Debit, and PayPal, where every individual billing provider is a service by itself. At the moment it is not possible to create such a service via ConfigAdmin that only becomes active if, e.g., the Credit and Debit providers are available, without writing a lot of OSGi-specific custom code in the component.

Condition Service to the Rescue!

What is a Condition?
In OSGi, a Condition is simply a component that registers the marker interface org.osgi.service.condition.Condition. DS and CDI will soon have an implicit mandatory reference on every component that can be directed at any condition you like. Thus a component will only become active if such a condition is available. The framework itself will conveniently provide a default TRUE condition service registration, which DS and CDI use by default. Thus no action is needed if you do not wish to utilize conditions.

How can I modify this Condition?

Via Annotation
OSGi will provide a convenient component property annotation that lets you set your own target filter in place of the default condition.

Via ConfigAdmin
You can use ConfigAdmin to change the targeted condition after the fact by setting the osgi.condition.target property to a valid filter.
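
For illustration (the final annotation and property names may well differ from what this post describes), retargeting the implicit condition reference of a DS component could end up looking roughly like this (the "database.ready" condition itself is registered in the sketch in the next section):

import org.osgi.service.component.annotations.Component;

// Hypothetical sketch: the component only becomes active while a Condition
// service matching the target filter below is registered.
@Component(property = "osgi.condition.target=(osgi.condition.id=database.ready)")
public class BillingService {
    // business logic that must not run before the "database.ready" condition holds
}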

How Can I Create a Condition?
There are two main possibilities. A developer can simply register any service exposing the org.osgi.service.condition.Condition interface. Here you have everything in your own hands.
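
A minimal sketch of that first possibility (the osgi.condition.id property name follows this post and may change): a DS component that exposes the Condition marker interface and is guarded by its own mandatory reference, so the condition only exists while its precondition is met:

import javax.sql.DataSource;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.condition.Condition;

// Registered as a Condition service only while a DataSource is available
@Component(property = "osgi.condition.id=database.ready")
public class DatabaseReadyCondition implements Condition {

    @Reference
    private DataSource dataSource; // mandatory reference acts as the precondition
}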

The other possibility is to use the ConditionFactory. This will be a configurable component that can be addressed via ConfigAdmin to register a condition when certain filters apply.

An example could look like the following Configurator JSON:
{
  ":configurator:resource-version": 1,
  "osgi.condition.factory~test": {
    "osgi.condition.identifier": "resulting.condition.id",
    "osgi.condition.properties.custom.condition.prop": "my.property",
    "osgi.condition.match.all": [
      "(&(objectClass=org.foo.Bar)(my.prop=foo))",
      "(my.prop=bar)"
    ],
    "osgi.condition.match.none": [
      "(&(objectClass=org.foo.Fizz)(my.prop=buzz))"
    ]
  }
}

This configuration defines three preconditions: both of the "osgi.condition.match.all" filters must match a service, and no service may be available that matches the "osgi.condition.match.none" filter. If this is the case, the ConditionFactory registers a condition service with the properties:

"osgi.condition.id" : "resulting.condition.id",
"custom.condition.prop" : "my.property"

The moment services come and go in a way that violates the configured preconditions, the condition will be unregistered and all dependent components will be deactivated.

Conclusion
Usually, OSGi bundles and services have no preset start order, because the system can figure this out by itself. For quite some time now, this has been a contested and much-discussed issue in the community, because there are a couple of valid cases where the system needs hints and help to figure out the right start order. The Condition Service will be a great step here, by providing that missing information to the system.

Monday, August 12, 2019

OSGi Community Event 2019 -- Keynote and Talks Announced


We are pleased to announce that the details of the selected talks for this year's Community Event are now available. You can find a list of talks, titles, and abstracts online.

Congratulations to everyone selected and thanks to everyone who made a submission for the OSGi Community Event 2019. We recognize that it takes time and effort and appreciate all of the submissions.
Keynote Speaker
The OSGi keynote speaker for the 2019 Community Event is Matt Rutkowski, IBM CTO for Serverless Technologies, and he will be speaking on Java in the Age of Serverless: The Path Forward. Read the full abstract here.
Registration and Hotel
If you are joining us in Ludwigsburg at the event, October 22-24, we encourage you to register early to secure the best price, and also to make your hotel reservations as soon as you can, as the conference hotel (Nestor Hotel) always sells out well before the event.

Monday, July 29, 2019

New OSGi Work - Features

As the OSGi R7 Core, Enterprise, and Compendium specs were recently completed, work has started on the next specifications for OSGi R8. Over the coming few months we'll focus on some of the new work that is underway at OSGi, here on this blog. In this post, we'll look at OSGi Features.

OSGi Frameworks are being used in many different scenarios, from embedded devices and set-top boxes to UI desktop applications and enterprise back-end server applications.  As your OSGi-based application grows, so does the number of bundles that define the application. In many cases, OSGi applications can grow to hundreds of bundles or even more. When an application reaches such a number of bundles, it can become hard to fully understand the role of each and every bundle in the application.

For this reason, many open-source communities and commercial products have started to develop solutions to group bundles together into larger, reusable components. The existing solutions generally combine a design mechanism for these larger components with an accompanying runtime. For example, there are Apache Karaf Features, Eclipse Features, Apache Sling Features, OSGi Subsystems, and others.

OSGi Features focus on architecting groups of bundles, configuration, and metadata, and define a mechanism to describe these groups. At runtime, Features can be mapped to existing implementations that support such functionality, in this way providing portability across them.

For example, you might create a Feature that defines a web service to record and provide gaming high scores. Your Feature might use other existing Features that provide HTTP, JAX-RS, and database access. As Feature definitions can be published to a repository, such as a Maven repository, you can use other Features simply by referring to them by their repository coordinates. Once your Feature is ready to be used, you can publish it to a repository so that others can use it as well.

OSGi Features are extensible so that you can store your own metadata along with the declaration of your binaries and configuration. This keeps things that belong together defined together. For example, let's say your application needs a specific database schema to run. You can then declare this inside your Feature, rather than in a side file which might get out of sync. Features are also a neat way to scope your entire application: if you want to create a Docker image with a slimmed-down version of the Java runtime, your Feature definition is a great way to know what you need from Modular Java and what parts you don't need, and could even be used to generate a minimal runtime image. As OSGi Features are defined using JSON, they can be handled by tools and other runtimes, which can be useful when validating them or processing custom extensions.

To summarize, OSGi Features are aimed at architecting reusable components for OSGi that are larger than bundles. OSGi Features are a design artifact; they don't mandate a specific runtime. Therefore OSGi Features can be mapped to technologies that already exist in this space, or be used as-is by tools and runtimes that handle them natively.

To learn more about OSGi Features, take a look at RFP 188 and RFC 241, where they are being discussed. Both can be found in the OSGi design GitHub repo: https://github.com/osgi/design

Tuesday, July 9, 2019

OSGi Community Event 2019 Early Bird Pick & Registration Now Open

Congratulations to Raymond Augé from Liferay, Inc. for being selected as the Early Bird pick!  The title of his talk submission is OSGi CDI Integration Specification.

Abstract
The OSGi Alliance has developed a specification describing integration between OSGi and CDI. The combination of these two powerful development technologies opens the door to new possibilities. This talk will walk through the most essential features of the specification and show some code and running examples.

About Raymond Augé 
Raymond Augé is a Sr. Software Architect at Liferay, Inc. As an Apache Software Foundation member and PMC member of the Apache Aries, Apache Felix, and Apache Geronimo projects; committer on the Bndtools.org project; lead of the Eclipse Project for Common Annotations specification; committer and company representative at the Eclipse Foundation and OSGi Alliance Board of Directors; and Enterprise Expert Group co-chair, Raymond demonstrates a vigorous passion for open source and open standards.

Event Registration
If you are already planning on joining us, you will be pleased to know that registration for the event is now open, and you can secure the best prices by booking early.

Wednesday, May 29, 2019

OSGi After 20 Years


Post written by Peter Kriens, OSGi Fellow and CEO of aQute SARL 


Looking at the current adoption of microservices, it is hard not to think, "We told you so 20 years ago!” The reason microservices work so well is that they provide a well-defined API entry point into a module. The caller has a dependency on an API but can ignore the messy details of how that API is implemented and, even more important, what kind of dependencies that module has. Since software complexity grows exponentially with the number of dependencies, the reduction in complexity can be humongous.

This is exactly the core idea of the OSGi service model we developed 20 years ago, long before REST was a well-known term. Does that mean that OSGi can be retired now that its architecture has become an overnight success? I don't think so, because OSGi has one humongous advantage over REST services: choice.

OSGi applications are structured as nano-services. Nano-services follow the architectural rules of micro-services but are much lighter weight. Calling a nano-service has no overhead, unlike calling a micro-service. The good news is that well-designed nano-services can easily be upgraded to micro-services with no impact on the implementations. This makes it possible to start simple in a simple container and then gradually promote nano-services to micro-services in other containers. No environment but OSGi makes it so easy to do this kind of migration.

I point out this clear advantage of OSGi because it is so recognizable for every OSGi developer. However, when you use OSGi in anger you also learn that it provides more advantages. OSGi provides the plumbing that puts you in control of your development process in a way that I've not seen anywhere else. Few people outside the OSGi community even remotely understand the requirements and capabilities model, and that is their loss. However, organizations that reach the maturity level to use it will never let it go.

Last week, we had the perfect example. One designer had made a change to the runtime setup and 10 minutes later the CI build failed because he had forgotten a crucial capability that was used in one of the hundreds of bundles - something that might not have been found until the code had been deployed to hundreds of thousands of gateways. OSGi provides the type safety between modules that Java provides between classes.

Being the first in the Java market to do modularity, we've taken a lot of bad rap for problems caused by the nature of modularity itself. JPMS has by now shown that modularity is not a secret sauce that can simply be sprinkled over a code base. And yes, there is a certain amount of vindication in that.

Reaching modularity maturity requires hard work because our industry has a surprising number of unmodular practices. However, as I can see with my customers, OSGi does deliver when you apply it as it was intended. The road to maturity is a drag, but once you're at cruising altitude, it is indeed very smooth flying.