Wednesday, July 18, 2018

OSGi Community Event 2018 Early Bird Pick & Registration Now Open

Congratulations to Lisa Nafeie from DLR, the German Aerospace Centre.  Lisa's talk, Visualization of OSGi based Software Architectures in Virtual Reality, has been selected as the Early Bird pick for the OSGi Community Event 2018.

We were lucky to have the opportunity to speak with Lisa to get some further background on her talk.  You can find our questions and her answers on the OSGi website.

We hope that this talk gives you a taste for all of the interesting OSGi content we will have at this year's Community Event in Ludwigsburg in October.  The OSGi Program Committee are busy reviewing all of the submissions to put a packed program together again this year.

If you are already planning on joining us, you will be pleased to know that registration for the event is now open, and you can secure the best prices by booking early.

Monday, July 2, 2018

OSGi Presentation Recordings with BGJUG and Software AG

Those who follow this blog will know that we had an Evening of OSGi in mid-April, hosted by BGJUG and Software AG. I am pleased to share with you the video recordings that Software AG made at the meetup.  These have been posted on the OSGi Alliance YouTube channel and are available for you to watch.


There were two talks presented by the OSGi Alliance:
  • An 'Introduction to OSGi' by BJ Hargrave, OSGi Alliance CTO
  • A look at 'How OSGi Alliance specifications have influenced the IoT market' by Pavlin Dobrev & Kai Hackbarth (Bosch)
In addition, Todor Boev from Software AG presented 'OSGi and JPMS in practice' which discussed Software AG's experiences with OSGi and Jigsaw/JPMS.

Thanks once again to BGJUG and Software AG for arranging the meetup and allowing the Expert Group members to meet with the local Java community.

Tuesday, June 19, 2018

OSGi R7 Highlights: Configuration Admin and Configurator

The OSGi Compendium Release 7 specification contains an update to the Configuration Admin Service specification which includes a number of new features and also introduces the new Configurator specification.

Configuration Admin


One of the most heavily used, yet barely noticed, services from the OSGi compendium specification is the Configuration Admin. This service allows you to create, update and delete configurations. It is up to the implementation where these configurations are stored. A configuration has a unique persistent identifier (PID) and a dictionary of properties.

Usually, configurations are tied to an implementation of an OSGi service, but configurations can be used for any purpose, such as database connections, the current temperature or the set of available nodes in a cloud setup. While the Configuration Admin service has an API to find and create configurations, it also supports a more inversion-of-control style of behavior through a callback mechanism. The callback (known as the ManagedService interface) is invoked for existing/created configurations of a certain PID and also when such a configuration is deleted.

While this callback exists, it is not that common to use it directly. The most common and easiest way to develop OSGi components and services is to use Declarative Services. Declarative Services (DS) provides built-in support for configurations: simply by implementing an activation method, the component can receive its configuration. The implementor of that component does not have to worry about whether such a configuration exists, gets deleted or is modified. DS takes care of all of this and invokes the right actions on the component.
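
For example, a minimal DS component (the component and its "greeting" property are made up for illustration) can receive its configuration like this:
@Component
public class GreeterComponent {

  // Called with the configuration properties for this component's PID
  @Activate
  void activate(Map<String, Object> config) {
    String greeting = (String) config.getOrDefault("greeting", "Hello OSGi");
    System.out.println(greeting);
  }

  // Called again when the configuration is updated
  @Modified
  void modified(Map<String, Object> config) {
    // react to the new properties without being restarted
  }
}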

In addition to single configurations, Configuration Admin provides support for factory configurations: multiple configurations of the same type, for example different logger configurations for different categories, or different database connection configurations. These factory configurations share a factory PID, which is the same for all configurations of that type, and each has its own configuration PID to distinguish it from the others.

This should explain the big picture. Let's start talking about the new things in Release 7.

Configurator Specification


The Configurator specification is a new specification in the R7 release. It basically consists of two parts. The first part describes a common textual representation of a set of configurations. Prior to this specification, each and every tool used its own format for provisioning configurations. For example, the well-known Apache Felix File Install uses a properties-like format. Other tools use slightly different formats. One problem is that you can't simply switch from one tool to another; the other, major problem is that some of the formats do not allow you to specify the real type of a property value. For example, the value for the service ranking property must be of type Integer. Or you might have a special implementation that is expecting (for whatever reason) a value to be of type Byte. However, some tools simply always use a Long to represent numbers or a String to represent anything else.

Therefore a common definition eliminates these problems and allows configurations to be interchanged between various tools. The format is JSON based and uses the PIDs of the configurations as keys. The value is the configuration object with its properties:

{
    "my.component.pid": {
        "port:Integer" : 300, 
        "array:int[]" : [2, 3, 4], 
        "collection:Collection<Integer>" : [2,3,4], 
        "complex": { 
            "a" : 1, 
            "b" : "two"
        }
    }
}

As you can see in the example, it is possible to specify the runtime type of a configuration property by separating the property name from the type using a colon. For example, the "port" property value is of type Integer, the "array" property value is an array of ints and the "collection" property value is of type Collection<Integer>. You can specify all allowed types for a configuration, and the Configurator implementation uses the conversion rules defined by the Converter specification - another one of the new specifications of Release 7.

In addition, a configuration property can hold structured JSON as a string value. In the example above, the "complex" property contains at runtime a string value holding the specified JSON.

Factory configurations can be specified by using the following syntax: FACTORY_PID~NAME. With the updated Configuration Admin it is possible to use a meaningful name to address factory components. The tilde separates the factory PID from the name:

{
    "my.factory.component~foo" : {
        ...
    },
    "my.factory.component~bar" : {
        ...
    }
}

Please note the errata for the published specification.

OSGi Bundle Configurations


The second part of the Configurator specification describes a new extender-based mechanism that picks up configurations from within a bundle and applies them. A bundle can contain one or more JSON files with configurations, and once the bundle is started the configurations will be put into Configuration Admin by the Configurator. The Configurator manages the state handling and ordering in a deterministic way. For example, if two bundles contain a configuration for the same PID, a ranking mechanism is used to decide which configuration is put into Configuration Admin, regardless of their installation or start order.

In addition to providing configurations through bundles, the Configurator supports providing initial configurations through system properties on startup of the OSGi framework. This is especially useful for customising an application without changing its distributable. By specifying the system property configurator.initial with either a JSON document as described above or a list of URLs pointing to such JSON documents, the Configurator will apply the contained configurations in the same manner as if they had been provided through a bundle.
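
As a minimal launcher sketch (the JSON URL is hypothetical), the property can be passed as a framework property:
Map<String, String> props = new HashMap<>();
props.put("configurator.initial",
    "file:/opt/app/initial-config.json");

Framework framework = ServiceLoader.load(FrameworkFactory.class)
    .iterator().next().newFramework(props);
framework.start();

Depending on the launcher, the same value can typically also be supplied on the command line as a system property.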

With this new feature, provisioning configurations through bundles and overriding them on startup becomes part of the OSGi specifications. You will find an example application using the Configurator on the OSGi enRoute website. The specification of the Configurator has driven the update of the Configuration Admin specification, so let's talk about the most important new features in Configuration Admin.

Improved Factory Configuration Handling


The handling of factory configurations has been greatly improved. In previous versions, when you create a factory configuration, the PID part is randomly generated, which makes identifying a particular factory configuration later on much harder. In addition, as the PID is auto-generated, it has no meaning. With the updated Configuration Admin, it is now possible to specify the PID of a factory configuration, eliminating those problems.

New methods on the Configuration Admin allow you to create and retrieve factory configurations based on the factory PID and a name. These methods behave the same as the already existing methods for plain configurations. The PID for these factory configurations is generated by appending the name to the factory PID, separated by a tilde. The Configurator uses this syntax to specify factory configurations, as shown above.
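
For example, assuming an injected ConfigurationAdmin service and a made-up factory PID, a named factory configuration can be created or retrieved like this:
// Creates (or returns) the factory configuration with the PID
// "org.example.logger~gui"; "?" requests a multi-location configuration
Configuration cfg = configurationAdmin
    .getFactoryConfiguration("org.example.logger", "gui", "?");

Dictionary<String, Object> props = new Hashtable<>();
props.put("level", "DEBUG");
cfg.update(props);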

Improved Configuration Plugin Handling


When a configuration is delivered to a managed service, the configuration is passed through registered configuration plugin services. Such a service can manipulate the configuration. One common use case is to handle placeholders in the configuration properties and replace them with real values when delivered. For example, a property of a database connection configuration could just contain the value "${database.url}" which is replaced with the actual URL when this configuration is passed to the component processing the configuration. Or if you have sensitive configuration data, you can store it encrypted in the configuration and just decrypt it in a configuration plugin before it is passed to the managed service.

While this mechanism sounds useful, it only works if you register a managed service yourself. When you use Declarative Services (or other component frameworks) for your components, the plugins are not called at all. This gap is now closed: the DS implementation uses new functionality of the Configuration Admin service and calls the plugins before passing the configuration to its components. This ensures plugins will be called regardless of how you get your configuration, making the use cases mentioned above possible.
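
A placeholder-resolving plugin might look like the following sketch (the actual resolution logic is left out):
@Component
public class PlaceholderPlugin implements ConfigurationPlugin {

  @Override
  public void modifyConfiguration(ServiceReference<?> reference,
                                  Dictionary<String, Object> properties) {
    for (Enumeration<String> keys = properties.keys();
         keys.hasMoreElements();) {
      String key = keys.nextElement();
      Object value = properties.get(key);
      if (value instanceof String && ((String) value).contains("${")) {
        // replace placeholders such as ${database.url} before the
        // configuration reaches the component
        properties.put(key, resolvePlaceholders((String) value));
      }
    }
  }

  private String resolvePlaceholders(String value) {
    return value; // hypothetical resolution logic
  }
}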

Conclusion


The standard format for OSGi configurations is a great step forward for tooling, and the Configurator implementation allows configurations to be deployed through bundles in a standardized and well-specified way. The update of the Configuration Admin resolves some long outstanding issues and allows for new use cases around configuration management. For all the new features of the Configuration Admin service, have a look at the specification and make sure to also read the new Configurator specification.


Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
  7. Transaction Control
  8. The Http Whiteboard Service
  9. Push Streams and Promises 1.1
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when the next post is available.

Tuesday, June 5, 2018

OSGi R7 Highlights: Push Streams and Promises 1.1

The OSGi Compendium Release 7 specification contains new and updated specifications from the OSGi Alliance. Today I have the good fortune to be writing about one of each! The OSGi Push Streams specification is a brand new reactive data processing API in R7, and is closely related to the newly updated OSGi Promises 1.1 specification.

OSGi Utility Specifications

Before we dive into the new features provided by both of these specifications, I will highlight an important, and often overlooked, fact about the OSGi specifications. You may have noticed that there are some large gaps in the OSGi specification numbering; in particular, there is a really big gap between chapters 151 and 702.

This numbering isn't an accident. Chapters numbered in the range 700-799 are "utility" packages. These differ from other OSGi compendium specifications in that they don't define OSGi service interfaces that you look up from the service registry; instead they define classes that you use directly. This means that the OSGi utility specification packages are self-implementing, i.e., when you get the specification jar you get the reference implementation at the same time!

Push Streams and Promises are both utility specifications because the types and behaviors they define don't fit a service model; instead, they provide abstractions for asynchronous behaviors. There is, however, another important feature of both specifications which makes them particularly special. Neither specification has any dependency on the OSGi Core specification! This means that not only does downloading the specification API give you an implementation, but you can also use that specification outside of an OSGi framework. Push Streams and Promises are therefore great technologies to use in any Java application that you write.

What are Push Streams?

The OSGi Push Streams specification is completely new for OSGi R7, and it defines a model for processing asynchronous streams of data using Reactive Programming techniques. Reactive Programming simply refers to a program which responds to the arrival of data or events, rather than attempting to pull data or events from the source. Reactive systems are also expected to run asynchronously and to process long running (or even infinite) streams of data.

Streaming data with Java Streams

You're probably familiar with the Java Stream API which was added in Java 8. The Stream API provides a rich functional mechanism to process streams of data (typically Java Collections).
List<Person> guests = getWeddingGuestList();

// How much wine will we need to buy?
long adults = guests.stream()
    .map(Person::getAge)
    .filter(age -> age > 21)
    .count();

Processing data using the Stream API allows you to write simple, effective code; however, it does not work well for data that cannot be generated on demand. If the list of wedding guests in the above example were based on email responses then the processing thread would spend a huge amount of time blocked polling the email server!

Streaming event-based data using Push Streams

Event-based data is data that occurs when a specific action happens. This may be a clock tick, a user clicking on a web-page, or it may indicate a train passing a signal. The important thing about event-based data is that it is generated based on an external stimulus, and not because a consumer asked for the data.

Push Streams differ from the Java Stream API because they expect data to be generated and processed asynchronously. The API, however, remains very similar:
PushStream<Person> guests = getWeddingGuestList();

// How much wine will we need to buy?
Promise<Long> adults = guests
    .map(Person::getAge)
    .filter(age -> age > 21)
    .count();

The most important difference is that the return value of a Push Stream's terminal operation is a Promise. This allows the Stream of asynchronously arriving data to be processed in a non-blocking way.

Creating your own Push Streams

Push Streams can be created easily using a Push Stream Provider. A Push Stream Provider can be configured to use specific thread pools, queueing policies, back-pressure, and circuit breaker behaviors.
PushEventSource<Email> emails = getWeddingEmailResponses();

PushStreamProvider psp = new PushStreamProvider();

// How much wine will we need to buy?
Promise<Long> adults = psp.createStream(emails)
    .map(Email::getFromAddress)
    .map(this::lookupPersonByEmail)
    .map(Person::getAge)
    .filter(age -> age > 21)
    .count();

Once you have a Push Stream Provider you can use (and re-use) it to create a Push Stream from any Push Event Source. While it is possible (and sometimes desirable) to create your own Push Event Source implementation, in most cases, you can create a Simple Push Event Source from the Push Stream Provider. Events can then be pushed into the Simple Push Event Source as they occur, and they will be passed into any connected streams.
PushStreamProvider psp = new PushStreamProvider();

SimplePushEventSource<Email> spes = 
        psp.createSimpleEventSource(Email.class);

// When an "email" event occurs
spes.publish(email);

// When an error occurs
spes.error(anException);

// If/When the data stream finishes
spes.endOfStream();

Buffering and Back Pressure

In the Java Stream API the rate of data processing is determined by how fast the consumer can pull data from the source. This places a natural "brake" on the system, where the consumer cannot be overwhelmed by incoming data. In an event-based system this is no longer the case! Data events may occur far more rapidly than they can be consumed.

You can always attempt to consume events using more threads, but sooner or later your system will run out of capacity, and events will have to be queued until they can be processed. A queue of events is usually called a Buffer, and buffering is natively supported by the Push Stream specification.

Buffers are useful for dealing with short-term spikes in the flow of events, but they cannot help if the long-term event arrival rate is higher than the long-term consumption rate. In this case the only options are to:
  • Discard some of the events
  • Fail the stream
  • Communicate to the event source that it should slow down
Deciding whether to discard events or fail the stream is the job of the buffer's Queue Policy. There are several built-in Queue Policy Options which provide basic behaviors, but you can implement your own behaviors if desired. Telling the producer to slow down, however, is the job of back pressure.

Back pressure in a Push Stream is a long indicating the number of milliseconds for which the event source should stop sending events in order to give the consumer time to catch up. Back pressure can be provided in a number of ways.
This back pressure is then sent back to the event producer, which may (or may not) slow down as a result.
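
As a sketch of configuring a buffer and a pushback policy (the buffer size and pushback values here are arbitrary):
PushStreamProvider psp = new PushStreamProvider();

PushStream<Email> stream = psp.buildStream(getWeddingEmailResponses())
    // buffer up to 64 events
    .withBuffer(new ArrayBlockingQueue<>(64))
    // fail the stream if the buffer overflows
    .withQueuePolicy(QueuePolicyOption.FAIL)
    // request linearly increasing back pressure as the buffer fills
    .withPushbackPolicy(PushbackPolicyOption.LINEAR, 10)
    .build();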

Using Push Streams with Real Data

The UK's rail network produces data events every time a train passes a monitoring point. This information is used to help manage signalling, report delays, and for all sorts of other reasons. This data is also part of the UK government's open data program, and has a feed streaming data to anyone who signs up for an account!

There is a public example on GitHub using Push Streams to consume this data. The data can be consumed live, however, to avoid users having to sign up for an account this example replays events recorded from the real stream.

The incoming events are JSON arrays containing batches of data events. The main stream pipeline can be seen here. It uses Jackson to read the JSON, filter the data for interesting event types, map the data into Java types, and then turn those data events into train reporting locations.

What's New in Promises 1.1?

The 1.1 update to Promises addresses a number of different areas.

Time-based behaviors

The first version of the OSGi Promise API gave you no way to deal with the passage of time. For example, what should happen if a Promise isn't resolved for a very long time, or if it is resolved much faster than we expect?

The timeout and delay methods have been added to the Promise API to allow more sophisticated time-based behaviors to be built using Promises.

  • The timeout method returns a Promise which fails with a TimeoutException if the original Promise doesn't resolve before the supplied timeout elapses.
  • The delay method returns a Promise which does not resolve until the supplied delay has elapsed after the original promise resolves.
Using a combination of timeouts and delays can throttle a busy system, or ensure a live feel in a system that is receiving little input.
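
For instance (the durations are arbitrary):
Promise<String> pString = getStringPromise();

// Fails with a TimeoutException if pString is not resolved
// within 5 seconds
Promise<String> guarded = pString.timeout(5000);

// Resolves no earlier than 200 ms after pString resolves
Promise<String> delayed = pString.delay(200);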

Thread management

Probably the largest change in the Promises specification is the introduction of the PromiseFactory. The Promise Factory is a powerful type which can be used as a factory for Deferred instances and resolved Promises. In addition, the Promise Factory can also be used to define the threads that should be used to execute callbacks, and to manage the time-based behaviors of the Promises. By changing the properties of the thread pool used by the Promise Factory you can ensure your callbacks are executed serially, or in parallel, or on the same thread that initiated the callback.
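
A small sketch of creating Promises with a dedicated callback pool (the pool size is arbitrary):
PromiseFactory factory =
    new PromiseFactory(Executors.newFixedThreadPool(2));

Deferred<String> deferred = factory.deferred();
Promise<String> promise = deferred.getPromise();

// This callback is executed on the factory's thread pool
promise.onSuccess(System.out::println);

deferred.resolve("done");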

Usability improvements

Some of the less obvious improvements to Promises are the small usability enhancements that have been made to the API.
  • The OSGi callback functions now declare throws Exception so that your promise usage is more lambda friendly!
Promise<String> pString = getStringPromise();

// No need to worry about the URISyntaxException
Promise<URI> pURI = pString.map(URI::new);
Promise<String> pString = getStringPromise();

// Doing work onResolve used to be a bit messy
pString.onResolve(() -> {
    try {
      Throwable t = pString.getFailure();
      if(t == null) {
        finished(pString.getValue());
      } else {
        logFailure(t);
      }
    } catch (Exception e) {}
  });

// It is much simpler now!
pString.onSuccess(this::finished)
       .onFailure(this::logFailure);
  • A new thenAccept method for when you don't need the full power of then 
Promise<String> pString = getStringPromise();

// This behavior
pString.then(p -> {
    storeValue(p.getValue());
    return null;
  });

// becomes
pString.thenAccept(this::storeValue);

In Summary

OSGi Push Streams and Promises are powerful tools for asynchronous programming, even if you're not an OSGi user! If you're interested in using Push Streams or Promises then the implementations are freely available for you to start using now.



Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
  7. Transaction Control
  8. The Http Whiteboard Service
  9. Configuration Admin and Configurator
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when the next post is available.

Tuesday, May 22, 2018

OSGi R7 Highlights: The Http Whiteboard Service

The OSGi Compendium Release 7 specification contains version 1.1 of the Http Whiteboard specification which includes a number of new features.

Before we dive into the new features, let's start with a summary of what the Http Whiteboard specification is about: it provides a light and convenient way of using servlets, servlet filters, listeners and web resources in an OSGi environment through the use of the Whiteboard Pattern. The specification supports registering the above-mentioned web entities and grouping them together in contexts. A three-part introduction to the Release 6 version of the Http Whiteboard can be found here, here and here.

Component Property Types


Registering web entities using the Http Whiteboard usually requires specifying several service registration properties. In Release 7, Declarative Services added the ability to use component property types to annotate components and set property values in a type-safe manner. A set of annotations has been added to the Http Whiteboard specification to make use of this new feature in Declarative Services, simplifying the development of such web entities.

For example, registering a servlet at the path /game looks now like this:
@Component(service = Servlet.class)
@HttpWhiteboardServletPattern("/game")
public class MyServlet extends HttpServlet {
  ...
}

Similarly, defining additional service properties can easily be done by adding more annotations to the class. In the following example, we declare the servlet to support asynchronous processing and mount the servlet in a specific Http Context (in contrast to using the default context as above):
@Component(service = Servlet.class)
@HttpWhiteboardServletPattern("/game")
@HttpWhiteboardContextSelect("(" 
  + HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_NAME
  + "=mycontext)")
@HttpWhiteboardServletAsyncSupported
public class MyServlet extends HttpServlet {
  ...
}

Further annotations have been added to simplify the development of servlet filters, resources, listeners, error pages, and servlet contexts. A full list of these can be found here.

Multipart File Upload


Support for multipart file upload handling, and for configuring this handling, has been added. The capabilities are the same as those supported by the servlet specification. Multipart handling can be enabled for a servlet by specifying additional service registration properties. Again, using a component property type simplifies the specification of the required properties. In the following example we enable multipart file upload for the servlet and restrict the size of uploaded files to 500,000 bytes:
@Component(service = Servlet.class)
@HttpWhiteboardServletPattern("/game")
@HttpWhiteboardServletMultipart(maxFileSize=500000)
public class UploadServlet extends HttpServlet {
  ...
}

The chapter about Multipart File Upload contains a complete description of the service properties for multipart file upload.

Pre-Filtering


A servlet filter is registered with an Http Context together with some rules that determine when the filter is applied, e.g., by specifying a path pattern or a servlet name. However, servlet filters run after a potential user authentication and therefore never run when this authentication fails. In addition, these filters are only run if the request targets an existing endpoint, either a servlet or a resource.

On the other hand, some use cases require running code with every request or before authentication. Logging all requests, regardless of whether authentication succeeds, is one of those use cases. Preparing the request by adding additional information from a third-party system might be another.

With the updated Http Whiteboard specification, a new web entity, the Preprocessor, has been added. All services registered with this interface are invoked before request dispatching or authentication is performed; therefore such a preprocessor will receive all requests. The Preprocessor interface is just an extension of the servlet filter interface and acts as a marker to distinguish a preprocessor from a normal servlet filter. The following example implements a simple preprocessor, logging all requests:
@Component(service = Preprocessor.class)
public class LoggingFilter implements Preprocessor {

  public void doFilter(ServletRequest request,
                       ServletResponse response,
                       FilterChain chain)
  throws IOException, ServletException {
    System.out.println("New request to "
      + ((HttpServletRequest)request).getRequestURI());
    chain.doFilter(request, response);
  }

  public void init(FilterConfig filterConfig)
  throws ServletException {
    // initialize the preprocessor
  }

  public void destroy() {
    // clean up
  }
}

As a preprocessor is invoked for every request, there are no special service properties for this type of service. In particular, this service is not associated with any Http Context, as the dispatching to a context happens after the preprocessors are invoked.

Updated Security Handling


Security handling or authentication can be implemented by registering your own implementation of the ServletContextHelper service and implementing the handleSecurity method. Web entities like servlets or filters can then be associated with this context ensuring that they are only invoked if the authentication is successful.

While the handleSecurity method provides a good mechanism to check for authentication and potentially add additional information to the current request, like a user context which can then be used by the web components, a method for cleaning up such state was missing. With the update of the Http Whiteboard, a new method, finishSecurity, has been added which closes this gap. This new method is the counterpart of handleSecurity and is invoked when the request processing is done. By implementing this method, any resources allocated through handleSecurity can be cleaned up.
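
As a rough sketch (the authentication logic is omitted, and the context name matches the earlier examples):
@Component(service = ServletContextHelper.class,
           property = {
             HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_NAME
               + "=mycontext",
             HttpWhiteboardConstants.HTTP_WHITEBOARD_CONTEXT_PATH
               + "=/secure"
           })
public class SecureContext extends ServletContextHelper {

  @Override
  public boolean handleSecurity(HttpServletRequest request,
                                HttpServletResponse response)
  throws IOException {
    // authenticate and, for example, attach a user context
    // to the request
    return true;
  }

  @Override
  public void finishSecurity(HttpServletRequest request,
                             HttpServletResponse response) {
    // clean up whatever handleSecurity allocated
  }
}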

More on the Http Whiteboard Update


This blog post mentions only those new features which I think are the most important ones of the new version 1.1. You can find the full list of changes at the end of the Http Whiteboard specification.


Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
  7. Transaction Control
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when the next post is available.

Thursday, May 17, 2018

OSGi Community Event 2018 - CFP Now Open

We are pleased to announce the OSGi Community Event 2018 and that the Call For Papers (CFP) for this year's conference is open.

The OSGi Community Event 2018 is returning to Ludwigsburg in Germany where we are pleased to be co-located with EclipseCon Europe again.  The event will take place October 23-25 inclusive and will feature an OSGi tutorial, talks, a BOF and other OSGi community-related activities.

Attendees will have access to the full program at the OSGi Community Event and EclipseCon Europe.


CALL FOR PAPERS IS OPEN

We are looking for one 3-hour tutorial and many more 35-minute standard talks. The Call for Papers (CFP) is open to anyone who has experience, expertise or a story to share about OSGi technology or the OSGi ecosystem. We are particularly interested in use cases and new initiatives around OSGi in enterprise, embedded, Cloud, Telco, and IoT.

The Call For Papers for this year's event is open now and closes 16 July.

If you fancy a chance to win a €50 Amazon voucher, then get your submission in by 2 July and you will be considered for the Early Bird pick, whose winner will receive the voucher.

You can find out further details and information on how to submit a talk or tutorial on our conference website.

There you can also find links to speaker and submission FAQs that EclipseCon has put together.

If you have any questions about submissions or the conference in general, then please contact us.

We are looking forward to reading your talk and tutorial submissions and also seeing the OSGi Community in Ludwigsburg in October.

Tuesday, May 8, 2018

OSGi R7 Highlights: Transaction Control

The OSGi R7 specification contains a wide variety of exciting new features covering many different use cases. The OSGi Transaction Control service is one of these new specifications, providing modularity for transactional resource management.

A New Take on an Old Problem


Transactions have been used in software for decades, and over time they have become simpler to use. The Java Transaction API provides a common API for transaction management in Java, but this is still considered too hard to use directly. Java EE and the Spring framework, therefore, created a variety of declarative models. Before we talk about the new OSGi Transaction Control service we should understand how to work (or not) with these technologies.

Working with Declarative Transactions


The recommended approach for using transactions in Java EE and Spring is to apply the @Transactional annotation to your transactional methods.
public class TransactionalBean {

  DataSource ds;

  @Transactional
  public void addUser(String user) throws SQLException {
    // Add the user
    try (Connection conn = ds.getConnection();
         PreparedStatement s = conn.prepareStatement(
                 "insert into users values(?)")) {
      s.setString(1, user);
      s.executeUpdate();
    }
  }
}

This solution is incredibly simple; unfortunately, it is also deceptively so. Rather than the explicit complexity of managing your own transactions, all of the complexity is hidden behind the @Transactional annotation. However, hiding complexity doesn't make it go away; it just leads to other questions.
  • How did the transaction actually start and stop?
  • How did the database connection know to participate in the transaction?
  • What will cause the transaction to roll back?

Answering the Questions 


It turns out that there are a lot of moving parts behind the curtain!

Firstly, the transaction is started and stopped by container code running immediately before and after your method runs. This is almost always achieved by creating a proxy for a managed instance and ensuring that all access to the instance is through the proxy. Unfortunately, it's pretty easy to violate that restriction:
public class TransactionalBean {

  ...

  @Transactional(SUPPORTS)
  public void addIfNecessary(String user) {
    if(!userExists(user)) {
      addUser(user);
    }
  }
}

In this case, we have another method on our object which can run without a transaction. When this method makes a call to the addUser method it does not touch the proxy. As a result, we can end up running the addUser method outside a transaction.

Secondly, the data source is enlisted because it is also proxied - when the getConnection call is made it locates the current transaction and enlists the connection with that transaction. Importantly, this only works if the same container is doing all the proxying and uses the correct transaction manager.

Thirdly, a Java EE (or Spring) transaction will roll back if the method completes with an unexpected exception. Checked exceptions are part of the method signature and therefore not considered to be unexpected. This means that the SQLException in our example does not trigger a rollback.

The biggest problem with proxying is that you need to have an all-knowing container ready, running and responsible for managing the objects and resources in the system. In Java EE this is the Application Server, in Spring it is the Application Context, but in OSGi? One thing that we learn over and over is that for a system to be modular you cannot have a single global container. The provider of the resources and the provider of the business objects must be free to use whatever frameworks they choose. We also learn that for a system to be robust we cannot rely on other modules to start before we do. Proxying to provide transactions is, therefore, a fundamentally flawed approach in OSGi.

Transaction Control - A Modular Approach


The OSGi Transaction Control service is a new specification which is designed to address the issues with the Java EE/Spring transaction management model.

One of the biggest differences when using Transaction Control is that transaction management is programmatic, not declarative, and uses a functional decorator pattern. This means that there is no need for a proxy to introduce transaction management instructions into your code, and the transaction is guaranteed to start however your method gets called.
@Component
public class TransactionalComponent {

  @Reference
  TransactionControl txControl;

  public void addUser(String user) throws SQLException {
    // Add the user
    txControl.required(() -> {
        // This scoped work runs in a transaction
        return 42;
    });
  }
}

The Transaction Control service offers convenient methods for:

  • Requiring a transaction
  • Requiring a new transaction
  • Suspending a transaction
  • Checking to see whether a transaction is active or not
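
As a minimal sketch of these operations (doWork() is a hypothetical method returning a value):
// Run the work in the current transaction, starting one if necessary
txControl.required(() -> doWork());

// Always run the work in a new transaction, suspending any active one
txControl.requiresNew(() -> doWork());

// Run the work outside a transaction, suspending any active one
txControl.notSupported(() -> doWork());

// Check whether a transaction is currently active
boolean active = txControl.activeTransaction();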

Completing the Transaction


The transaction and all other state associated with the scope is completed when the scoped work returns. Normally the work will return a value, and this value will be returned by the required method (the integer 42 in the example above). Returning a normal value will cause the transaction to be committed. If the commit fails then the Transaction Control service will throw a TransactionException from the required method.

Rolling Back 


One other important difference between Java EE/Spring transactions and the Transaction Control Service is that in Transaction Control every Exception triggers rollback by default. This is much more likely to give the correct behavior when things go wrong.

The easiest way to get a rollback is, therefore, to throw an exception from your scoped work! If your scoped work does complete with an exception then this will be wrapped in a ScopedWorkException and re-thrown by the required method. Sometimes, however, you don't want to commit your work, but it isn't an exceptional circumstance. In this case throwing an exception is the wrong thing to do; instead, you should simply mark the transaction for rollback.

@Component
public class TransactionalComponent {

  @Reference
  TransactionControl txControl;

  public void addUser(String user) throws SQLException {
    // Add the user
    txControl.required(() -> {
        // This transaction must roll back
        txControl.setRollbackOnly();
        return 42;
    });
  }
}


A full description of the scope lifecycle is available in the Transaction Control specification.

Scoped Resources


In order for a transaction to be useful, it must have one or more resources participating in it. In Java EE and Spring these resources must be managed by the container so that they can be enlisted. The Transaction Control specification changes this model to let the OSGi bundle take control of enlistment.

Scoped Resources are created from a ResourceProvider, an object which is either found in the service registry or created by a factory service. The ResourceProvider interface is generic and usually subclassed to provide a specific resource type. The Transaction Control specification includes standard interfaces for creating scoped JDBC Connection objects and JPA EntityManager instances.

The easiest way to get a resource provider instance is to configure the resource provider implementation. For example, the Reference Implementation from Apache Aries can be configured as follows (using the Configurator JSON format):
{
    // Global Settings
    ":configurator:resource-version" : 1,
    ":configurator:symbolic-name" : "org.osgi.blog.tx.config",
    ":configurator:version" : "0.0.1.SNAPSHOT",
    
    
    // Configure a JDBC resource provider
    "org.apache.aries.tx.control.jdbc.xa~resource": {
           "osgi.jdbc.driver.class": "org.h2.Driver",
           "url": "jdbc:h2:./data/database" }
}

The configured JDBCConnectionProvider can then be combined with a Transaction Control Service to create a scoped resource and used as follows:

@Component
public class TransactionalComponent {

  private final TransactionControl txControl;

  private final Connection txConnection;

  @Activate
  public TransactionalComponent(
      @Reference TransactionControl txControl,
      @Reference JDBCConnectionProvider provider) {
    this.txControl = txControl;
    this.txConnection = provider.getResource(txControl);
  }

  public void addUser(String user) throws SQLException {
    // Add the user
    txControl.required(() -> {
      PreparedStatement s = txConnection.prepareStatement(
             "insert into users values(?)");
      s.setString(1, user);
      return s.executeUpdate();
    });
  }
}

Resource Lifecycle


You may have noticed that the example using Transaction Control never closes its connection or the statement it creates. In fact, the same connection instance is used for every call to addUser! This isn't a mistake but is, in fact, the recommended way to make use of a scoped resource. When you call getResource on a resource provider you aren't given a physical resource, but a transaction-aware resource object.

The first time that you call the scoped resource object inside a piece of scoped work it does several things:

  • It obtains a real physical resource from an underlying pool
  • It enlists the physical resource in the transaction (if there is one)
  • It registers the physical resource with the ongoing transaction context so the same physical resource is used for the rest of the scoped work
  • It registers a completion callback so that the physical resource can be automatically returned to the pool
All of this means that you never need to worry about getting a different connection instance, or about closing it when you're done. All resource access is implicitly and automatically bounded by the scoped work; this includes the objects created by the scoped work, such as Statements and ResultSets.

In Summary


The Transaction Control Service provides a simple, reliable, modular solution for transaction lifecycle management and resource access. If you're interested in seeing more usage of Transaction Control then you should check out the data access services from the OSGi enRoute for R7 microservice example.



Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when the next post is available.