Tuesday, November 20, 2018

OSGi Community Event 2018 - Slides & Videos Now Available



Click here for OSGi Community Event 2018 Slides and Videos



So the OSGi Community Event 2018 has come and gone again for another year.  Thanks to everyone
who joined us and also to the speakers who provided the talks (and put in lots of pre-event hard work preparing them) that are essential to the success of the conference.

This year we learnt how OSGi is being used by ESA for tracking and controlling spacecraft, we discovered how virtual reality can be used to understand the dependencies within OSGi software, we got an insight into using Node-RED with OSGi, and we also learnt how OSGi is being used to build Open Liberty and the benefits that it brings to a large codebase like this.

BJ Hargrave, OSGi Alliance CTO, also provided an overview of what has been added to OSGi R7 and Bnd to help navigate Java 9+ and JPMS.  There were also several looks into the future of how OSGi can be used with Docker and Kubernetes and how OSGi fits with MicroProfile.

Thanks also to our hosts, EclipseCon Europe, who we were pleased to co-locate with again this year in Ludwigsburg.  The social events and networking opportunities were in abundance again and appeared to be thoroughly enjoyed by the OSGi and Eclipse communities.

The slides and videos from the talks at this year's event are now available, and you can find them by reviewing the 2018 schedule on our conference web page.

The final word of thanks goes to our 2018 OSGi Community Event Sponsors, whose support helps ensure the conference can take place.  This year Bosch and IBM were Dual Elite (OSGi and EclipseCon) sponsors and Liferay was an OSGi Conference sponsor.

Mike

Friday, October 5, 2018

OSGi Users' Forum Germany Meeting 22 Oct - Ludwigsburg

The OSGi Users' Forum Germany are holding their next meeting on the afternoon before the OSGi Community Event in Ludwigsburg - Oct 22 from 14.00hrs.

This meeting will be a workshop focusing on the recent OSGi Release 7.  There are three talks: a review of what's new in Release 7, the updated OSGi enRoute for R7, and an intro to gecko.io.  In addition there will be a Q&A and discussion session. The full agenda is available here.

So if you are joining us for the OSGi Community Event or attending EclipseCon Europe this provides a perfect opportunity to learn more about the latest OSGi release.

The meeting is open to all - OSGi Users' Forum Germany members and non-members alike - and is being hosted as part of the Eclipse Community Day, which takes place all day on Monday Oct 22.  It's €60 per person to attend, which provides you with access to any of the sessions all day, plus lunch and refreshments at breaks.

To secure your place please visit https://www.eclipsecon.org/europe2018/registration#special.

If you haven't got your pass for the OSGi Community Event and EclipseCon there is still time to get one of these too. We have a packed conference and it would be great to see you there.

Monday, October 1, 2018

Countdown to OSGi Community Event 2018

So it's reaching that time of year again. Not only is it, gulp, only 91 days to the end of the year and a mere, double gulp, 84 sleeps until Christmas, but October is also a really big month in the OSGi schedule.

Today, being October 1, means we are just 22 sleeps until OSGi Community Event 2018.

We have a packed schedule again this year in Ludwigsburg, Oct 23 to 25.

And if all that OSGi goodness isn't enough, we are co-located with EclipseCon Europe again this year, and attendees get full access to the EclipseCon schedule and exhibition, along with all the excellent social events.

Registration is still open, so if you haven't booked your ticket yet I encourage you to do so soon, and to secure your hotel room as rooms are selling out.

Big thanks go to the following companies who have sponsored the OSGi Community Event in 2018. It is their support that makes it possible for us to bring this event to you.


See you in Ludwigsburg soon.

Tuesday, September 18, 2018

Meet us at Liferay DEVCON

The OSGi Alliance is pleased to be attending Liferay DEVCON in Amsterdam in November.  The main conference is taking place November 7 and 8 and we will have an OSGi booth in the exhibition area.  There is also an unconference taking place the day before the main conference on November 6.


We are happy to answer any questions you have about modularity and OSGi technology, the OSGi Alliance, the new R7 release and anything else you can think of that is OSGi related.

We would also encourage anyone who might be interested in joining the OSGi Alliance and contributing to the ongoing evolution and development of the open standard OSGi specifications to come and chat with us at our booth.  Even though R7 is only just out the door, we are already drawing up ideas for Release 8 (R8) and so now is a great time to join us and get involved in the new activities.

Liferay is an extensive user of OSGi technology and we are really pleased to have the opportunity to meet their community of users and developers at this key event.  Ray Augé, a Senior Software Architect at Liferay and co-chair of the OSGi Enterprise Expert Group, will be presenting on the upcoming OSGi CDI integration specification. Ray's talk will demonstrate common usage patterns and its component model, which brings OSGi dynamics, like services and configuration, to CDI and provides for an ecosystem of CDI portable extensions.

You can find out more about this year's Liferay DEVCON including how to register on the conference website.

We realize that this is coming hot on the heels of the OSGi Community Event and EclipseCon Europe in October, and Liferay is offering a 25% discount on DEVCON ticket prices to anyone who is attending that event this year too.  To obtain your discount code please see the Liferay advert in the OSGi Community Event and EclipseCon Europe event brochure.

Wednesday, September 5, 2018

OSGi R7 Highlights Blog Series

The OSGi R7 Highlights blog series has come to a close and we certainly want to thank you for following the series and hope you found it insightful and useful!  The series featured posts from technical experts at the OSGi Alliance sharing some of the key highlights of the new R7 release.

It's worth noting that the OSGi Core specification has provided a stable platform for developing modular Java applications for almost two decades. The OSGi Core R7 specification continues this tradition by providing updates that are fully compatible with previous versions of the OSGi Core specification.  This track record gives developers and users of OSGi technology certainty that their investment in adopting OSGi is protected.

The blog posts covered the topics below so if you missed reading any of the posts, be sure to take a moment to check them out.

Feel free to send questions regarding any of the OSGi R7 blog posts to us by email or as comments on the post in question.  We welcome any and all feedback and want to hear from you if you have any suggestions on future topics related to OSGi R7 or otherwise.

OSGi R7 Highlights Blog Series

  • Java 9 Support – Multi-release JAR support and runtime discovery of the packages provided by the JPMS modules loaded by the platform.
  • Declarative Services – Constructor injection and component property types.
  • JAX-RS – A whiteboard model for building JAX-RS microservices.
  • Converter – A package for object type conversion.
  • Cluster Information – Support for using OSGi frameworks in clustered environments.
  • Transaction Control – An OSGi model for transaction life cycle management.
  • Http Whiteboard – Updates to the Http Whiteboard model.
  • Push Streams and Promises – The Promises package is updated with new methods and an improved implementation, and the new Push Streams package provides a stream programming model for asynchronously arriving events.
  • Configurator and Configuration Admin – Configuration Admin is updated to support the new Configurator specification for delivering configuration data in bundles.
  • LogService – A new logging API is added which supports logging levels and dynamic logging administration and a new Push Stream-based means of receiving log entries is also added.
  • Bundle Annotations – Annotations that allow the developer to inform tooling on how to build bundles.
  • CDI – Context and Dependency Injection support for OSGi developers.

Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed.

Friday, August 31, 2018

Join us at JUG Thüringen on September 19

Join us for an evening of OSGi on September 19 with the Java User Group Thüringen (@jugthde).
The OSGi Alliance Expert Groups are gathering in Jena that week for their next face-to-face technical meetings. The JUG Thüringen have kindly arranged a meetup on the evening of September 19 at 6pm while the OSGi technical experts are in town.

There will be three 30-minute 'talklets' and a one-hour open mic discussion session.  Registration is essential, so please visit their meetup page for the event to book your place.

Thanks to Intershop for hosting the meeting.
The agenda for the evening meetup is as follows:
18:00 - Open
18:15 - Welcome from Intershop by Johannes Metzner and JUG Thüringen by Benjamin Nothdurft
18:30 - OSGi enRoute Quickstart - A Beginners Guide to OSGi by Tim Ward
19:00 - Gecko.io - Kickstart your professional OSGi Development by Jürgen Albert
19:30 - Intelligent robots - Resolving the promise by Tim Verbelen
20:00 - Break with finger food
20:30 - Open Mic / Panel Discussion (WIP)
21:30 - End with pub visit (Wagnergasse)
Abstracts and further information can be found on the registration page. We hope you can join us.

Friday, August 17, 2018

OSGi R7 Highlights: CDI Integration

The OSGi Enterprise Release 7 specification targeted for release in the coming months contains a brand new specification: CDI Integration. This specification brings the exciting features and capabilities of the Contexts and Dependency Injection (CDI) specification to OSGi.

CDI itself is a vast specification and with that in mind several goals were established to guide the development of the integration:
  1. Do not reinvent the wheel, follow established approaches such as leveraging the CDI Service Provider Interface model
  2. Make code look and feel as natural to CDI developers as possible using CDI designs and best practices where applicable and generally adopt CDI form and function
  3. Provide uncompromising support for key OSGi features such as services, configuration and the dynamics these entail, while not over-complicating or over-engineering the design
  4. Enable modularity for CDI Portable Extensions such that an ecosystem of portable extensions may emerge

Beans

The most basic interaction a developer has with CDI comes from the "Contexts" portion of the spec and the creation of beans, which stems from defining in which context a bean's instances reside. Generally this is accomplished by applying a scope annotation to a POJO.
@ApplicationScoped
public class ShopImpl implements Shop {
  public List<Product> getProducts() { ... }
}
This POJO is a bean whose scope is defined as @ApplicationScoped; its context makes its instance visible to the entire application.

Injection

The next interaction with CDI comes from the "Dependency Injection" portion of the spec. This is accomplished by applying the @Inject annotation to a field, method or constructor of a POJO.
@ApplicationScoped
public class ShopImpl implements Shop {
  @Inject
  Logger logger;

  public List<Product> getProducts() { ... }
}
This POJO will now have a Logger instance injected into it when the instance is created.

OSGi Services

With a basic understanding of beans and dependency injection let's move on to the OSGi CDI Integration features. The most important feature provided by the CDI integration is the ability to register services with or obtain services from the OSGi service registry.

Registering a service can be as simple as applying the @Service annotation to a bean.
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  ...
}
This POJO is registered into the service registry with the service type Shop.

Adding service properties is accomplished using annotations that are meta-annotated with the @BeanPropertyType annotation. A couple of examples are the @ServiceDescription and @ServiceRanking annotations defined by this specification.
@ApplicationScoped
@Service
@ServiceDescription("This is the primary implementation of the Shop service.")
@ServiceRanking(1000)
public class ShopImpl implements Shop {
  ...
}

Obtaining services is accomplished by using the @Reference annotation in conjunction with @Inject.
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  @Inject
  @Reference
  ProductStore productStore;

  public List<Product> getProducts() {
    return productStore.all();
  }
}
This POJO is injected with a service of type ProductStore.

Filtering services is accomplished by specifying a target filter on the @Reference annotation and/or by adding one or more annotations meta-annotated with @BeanPropertyType to the injection point. The following example filters for services whose service.vendor service property equals Acme, Inc.
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  @Inject
  @Reference
  @ServiceVendor("Acme, Inc.")
  ProductStore productStore;

  public List<Product> getProducts() {
    return productStore.all();
  }
}

Optionality, Cardinality and Dynamics

In OSGi, services may need to be expressed in terms of their optionality (whether a service is required at all), cardinality (how many services are required) and their dynamics (whether the service(s) may change during the lifetime of the bean's context). These concerns are handled elegantly using the Java type system.

The previous example demonstrated a mandatory, unary cardinality reference.

Optional references are expressed using Java's Optional type.
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  @Inject
  @Reference
  Optional<ProductStore> productStore;

  public List<Product> getProducts() {
    return productStore.map(s -> s.all()).orElse(Collections.emptyList());
  }
}

Multi-cardinality references are expressed using the Collection or List types. The default minimum cardinality in this scenario is 0 (which is to say that the services are optional by default). A minimum cardinality can optionally be expressed in conjunction with multi-cardinality using the @MinimumCardinality annotation.
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  @Inject
  @Reference
  @MinimumCardinality(1)
  List<ProductStore> productStores;

  public List<Product> getProducts() {
    return productStores.stream().flatMap(
      s -> s.all().stream()
    ).collect(Collectors.toList());
  }
}

Dynamic references are expressed using the Provider type.
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  @Inject
  @Reference
  Provider<ProductStore> productStore;

  public List<Product> getProducts() {
    return productStore.get().all();
  }
}

Greediness

One of the key distinctions between Declarative Services and CDI Integration with respect to references to services is greediness. Where the greediness of references in Declarative Services is reluctant by default, the greediness of references in CDI Integration is greedy by default. This means that CDI Integration references will always reflect the best service(s) available. This also means that CDI Integration beans may have a more volatile life cycle depending on their references and how often matching services come and go.

Defining a reference to be reluctant is accomplished using the @Reluctant annotation. (This means that once an adequate service is bound it is unlikely to be replaced with a better service in the future.)
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  @Inject
  @Reference
  @Reluctant
  ProductStore productStore;

  public List<Product> getProducts() {
    return productStore.all();
  }
}

Beans vs. Components

In a traditional CDI application, all beans make up the application and form a single cohesive unit: the CDI container. Modeling external dependencies such as (non-dynamic) references to services and configurations without complexities like proxies or byte code instrumentation means the CDI container has to be treated as a single unit, with the result that whenever any dependency changes in a significant way the entire CDI container must be destroyed and recreated. This is a fundamental difference from the model defined in Declarative Services, which permits individual components to exist independently from each other. It is also rather limiting. In order to address this limitation the CDI Integration defines the concept of components. Components are cohesive collections of beans which have a consistent, related life cycle and which may operate independently from one another.

The CDI Integration defines 3 component types:
  1. Container Component - All traditional beans (@ApplicationScoped, @RequestScoped, @SessionScoped, @Dependent, custom scopes, etc.) are part of the container component (in fact, any bean which is not @ComponentScoped is part of the container component)
  2. Factory Components - are collections of @ComponentScoped beans rooted by a bean having the stereotype @FactoryComponent (such components are driven by factory configuration)
  3. Single Components - are collections of @ComponentScoped beans rooted by a bean with the stereotype @SingleComponent
These components are arranged in two levels where the container component exists as the first level and any number of factory and/or single component children exist in the second level.

Level 1: Container Component (1..1 per CDI container)
Level 2: Factory Components (0..n) and Single Components (0..n)

Factory and single components exist and react to change independently from each other, just like Declarative Services components, while also depending on the container component. If the container component needs to be recreated then all factory and single components must also be recreated.

With this model, it's possible to replicate the Declarative Services component model while also supporting the traditional monolithic CDI approach, with the additional capability to share beans between container, factory and single components (provided they are @ApplicationScoped or @Dependent) and to use other CDI features like Decorators, Interceptors and CDI Portable Extensions.

Let's see an example:
@ApplicationScoped
@Service
public class ShopImpl implements Shop {
  @Inject
  @Reference
  Provider<List<ProductStore>> productStores;

  public List<Product> getProducts() {
    return productStores.get().stream().flatMap(
      s -> s.all().stream()
    ).collect(Collectors.toList());
  }
}

@BeanPropertyType
public @interface StoreConfig {
  String vendor_name();
  String data_file();
}

@FactoryComponent("product.store")
@Service
public class ProductStoreImpl implements ProductStore {
  @Inject
  @ComponentProperties
  StoreConfig storeConfig;

  public List<Product> all() {
    return read(storeConfig);
  }
}
This application provides a Shop service that is dynamically tracking a number of ProductStore services. ProductStore instances are created by adding new factory configuration instances using the factory PID product.store. Each ProductStore instance is injected with its component properties which are coerced into a typesafe, user-defined StoreConfig for easy processing.
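For illustration, two such ProductStore instances could be created with factory configurations written in the Configurator JSON format described in the Configuration Admin and Configurator post. The names and property values below are hypothetical; note that the underscores in the StoreConfig elements map to full stops in the property names:

{
    "product.store~acme": {
        "vendor.name": "Acme, Inc.",
        "data.file": "acme-products.json"
    },
    "product.store~books": {
        "vendor.name": "Books and More",
        "data.file": "books-products.json"
    }
}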

Conclusion

The CDI Integration specification bridges the powerful features of CDI and OSGi in a clean and concise way which should empower developers. There are many other aspects of the specification that don't fit into a single blog post, such as modularity for CDI Portable Extensions, further discussion of BeanPropertyType, configuration dependencies, tracking service events, the relationship with Decorators and Interceptors, etc. So don't forget to read the latest draft of the spec. The CDI Integration specification is an important step forward in the OSGi dependency injection story that will hopefully open the OSGi door to a wider audience already familiar with CDI.


Want to find out more about OSGi R7?

Wednesday, August 1, 2018

OSGi Community Event 2018 Talks Announced

We are pleased to announce that the details of the selected talks for this year's Community Event are now available.  You can find the list of talks, titles, and abstracts online.

Congratulations to everyone selected.

Thanks to everybody who made a submission for the OSGi Community Event 2018. We recognize that it takes time and effort and appreciate all of the submissions, whether successful or not.

Forum am Schlosspark -- Venue for the OSGi Community Event 2018

If you are joining us in Ludwigsburg at the event October 23 to 25 then we encourage you to book your tickets early to secure the best price and also to make your hotel reservations as soon as you can as the conference hotel (Nestor Hotel) always sells out well before the event.


Tuesday, July 31, 2018

OSGi R7 Highlights: Bundle Annotations

The OSGi Core Release 7 specification introduces some new bundle annotations for use by programmers. These annotations focus on generating bundle metadata in the bundle's manifest. Like the Declarative Services and Metatype annotations, the bundle annotations are CLASS retention annotations and are intended for use by tools that generate bundles such as Bnd. Bnd includes support for Gradle, Maven and Eclipse via the Bndtools Eclipse plugins.

Package Export


One of the key features of OSGi is, of course, modularity through encapsulation of classes in bundles. But we still need to share classes between bundles to get things done! So importing and exporting packages is important. When building bundles, we must make sure that an Export-Package header is present in the bundle's manifest to identify which packages in the bundle are to be exported and thus available for other bundles to import. Normally, the list of packages to export is described to the bundle building tool. For example, when using Bnd, we can specify the -exportcontents or Export-Package instruction in the bnd.bnd file to tell Bnd what packages must be exported from the generated bundle. These instructions support wildcards and include and exclude information so that the bundle developer does not have to list all the desired export package names.

But this information is kept separate from the package being exported. So a new Export annotation is now available which can be applied to the package source in its package-info.java file.

@org.osgi.annotation.bundle.Export
@org.osgi.annotation.versioning.Version("1.0")
package my.pkg;

When a bundle containing this package is generated, the Export-Package header in the bundle's manifest will list the package with its version number. When using the Export annotation you must also use the Version annotation to specify the version of the package.

The Export annotation also includes some elements to provide control of the export package information of the package. If you need to specify specific attributes or directives for the package, the attribute element can be used to do so. Normally when a package is exported, you want it also imported to allow the framework the choice to substitute the import for the export when resolving the bundle at runtime. This is generally the best practice. Bnd will do this automatically when it detects that at least one other package in the bundle uses the exported package. (If no other package in the bundle uses the exported package, then there would be no value in substitutably importing the package.) The substitution element can be used to override the default behavior. Finally, the uses element can be used to replace the calculated uses information of the exported package. These latter two elements are highly specialized and the calculated values are almost always the best choice.
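As a hedged sketch, using these elements in package-info.java might look like the following; the attribute and uses values are purely illustrative:

@org.osgi.annotation.bundle.Export(
    attribute = "vendor=acme",  // adds an attribute to the Export-Package clause
    substitution = org.osgi.annotation.bundle.Export.Substitution.NOIMPORT,  // suppress the substitution import
    uses = {"my.other.api"})    // replace the calculated uses constraints
@org.osgi.annotation.versioning.Version("1.0")
package my.pkg;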

Capabilities and Requirements


The OSGi R4.3 spec introduced the concept of capabilities and requirements to the OSGi specifications which were further refined into the Resource API Specification and Framework Namespaces Specification in OSGi R5.

Capabilities and requirements let a bundle offer capabilities to other bundles and require capabilities from other bundles. This metadata is placed in the Provide-Capability and Require-Capability manifest headers which can be used by resolvers such as the framework resolver to match bundles and wire them together at runtime. Resolving during development, such as in Bnd, or provisioning can also use this information to construct a set of bundles which will work together. So as a developer writing bundles, you want to make sure the capabilities offered by your bundle and requirements needed by your bundle are properly expressed in your bundle's manifest. But writing manifest headers is no fun and is subject to mistakes when code is refactored or assembled into a different bundle.

So a new set of annotations is introduced to make managing the capabilities and requirements of a bundle more error-proof and straightforward. The new Capability annotation can be used on a type or package to declare that a capability is offered by that package when it is included in a bundle. The Capability annotation must declare the namespace of the capability and can optionally declare additional information about the capability such as a name, a version, the effective time of the capability, and uses constraints as well as any additional attributes and directives. For example, a Declarative Services SCR extender can use this annotation to declare it offers the extender capability for osgi.component version 1.4.

@Capability(namespace=ExtenderNamespace.EXTENDER_NAMESPACE,
  name="osgi.component", version="1.4.0")

The new Requirement annotation can be used on a type or package to declare that a capability is required by that package when it is included in a bundle. The Requirement annotation must declare the namespace of the requirement and can optionally declare additional information about the requirement such as a name, a version, the effective time of the requirement, and a filter which must match a capability, as well as any additional attributes and directives for the requirement. For example, a bundle using Declarative Services can use this annotation to declare it requires the extender capability for osgi.component version 1.4.

@Requirement(namespace=ExtenderNamespace.EXTENDER_NAMESPACE,
  name="osgi.component", version="1.4.0")

As useful as this is to help generate the proper requirement in the bundle's manifest for a Declarative Services SCR extender, you don't even have to put this in your source code. This is because the Capability and Requirement annotations are supported as meta-annotations! This means you don't have to use these annotations directly in your code; you just need to use an annotation which is itself annotated with these annotations to get their benefit. For example, the RequireServiceComponentRuntime annotation is defined by the Declarative Services specification and is annotated with the above Requirement annotation example.

@Requirement(namespace = ExtenderNamespace.EXTENDER_NAMESPACE,
  name = "osgi.component", version = "1.4.0")
public @interface RequireServiceComponentRuntime {}

So this captures all the details of the requirement in a single, easy-to-use annotation. Furthermore, the standard Component annotation, which is used by all Declarative Services components, is now annotated with the RequireServiceComponentRuntime annotation. So this means that just by writing a component and using the Component annotation, your bundle's manifest will automatically contain the requirement for the Declarative Services extender capability.

Other OSGi specifications now also take advantage of this meta-annotation support to ensure the proper requirements end up in your bundle's manifest when you use the specification. For example, the Http Whiteboard Specification defines the RequireHttpWhiteboard annotation which is itself annotated with a Requirement for the osgi.http implementation namespace. And most of the Http Whiteboard component property types are themselves annotated with the RequireHttpWhiteboard annotation. So by using one of the Http Whiteboard component property types in your bundle, your bundle's manifest will automatically contain the requirement for the Http Whiteboard implementation capability.

If you define your own capability namespaces for your applications, make sure to define your own requirement annotations annotated with Requirement to make it simple for your users to take advantage of the meta-annotation support and automatically get the desired requirements in their bundle's manifest. You can also use the new Attribute and Directive annotations on the elements of your requirement or capability annotation so that these elements are automatically mapped onto attributes or directives in the requirement or capability generated in the bundle's manifest, as sketched below.
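As a sketch of this advice, a custom namespace might define annotations like the following; the acme.plugin namespace and all the names and versions are hypothetical:

// Users of the API just apply this to get the requirement in their manifest
@Requirement(namespace = "acme.plugin", name = "core", version = "1.0.0")
public @interface RequireAcmePlugin {}

// Providers apply this; the annotated element is mapped onto a capability attribute
@Capability(namespace = "acme.plugin", name = "core", version = "1.0.0")
public @interface AcmePluginProvider {
  @Attribute("impl.name") // becomes the impl.name attribute of the generated capability
  String value();
}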

Simple Manifest Headers


And finally, there is the humble Header annotation. It can be used on a type or package if you just need to get a simple manifest header in the bundle's manifest. For example,

@Header(name=Constants.BUNDLE_CATEGORY, value="osgi")

will put Bundle-Category: osgi in the bundle's manifest.

Conclusion


The addition of the new bundle annotations to the OSGi specifications, and support for them in tooling like Bnd, makes building bundles easier and less error-prone. If you are designing an API which includes defining a capability namespace, make sure to design some requirement annotations for your namespace and use them to make users of your API much happier!

PS. While the OSGi Core R7 specification is done, tooling which supports the new bundle annotations may still be under development (at the time of this writing).


Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
  7. Transaction Control
  8. The Http Whiteboard Service
  9. Push Streams and Promises 1.1
  10. Configuration Admin and Configurator
  11. Log Service
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when it's available.

Thursday, July 26, 2018

OSGi R7 Highlights: Log Service

The OSGi Compendium Release 7 specification contains version 1.4 of the Log Service specification which includes a number of exciting new enhancements and features.

Version 1.4 is a major update to the OSGi Log Service specification. A new logging API is added which supports logging levels and dynamic logging administration. A new Push Stream-based means of receiving log entries is also added. There is also support in the Declarative Services 1.4 specification to make it easy to use the new logging API in components.
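For example, a Declarative Services 1.4 component can receive a Logger simply by declaring a reference whose service type is LoggerFactory. A minimal sketch; the component name is illustrative:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.Logger;
import org.osgi.service.log.LoggerFactory;

@Component
public class Speaker {
  // DS recognizes the Logger injection type and supplies a Logger named for this class
  @Reference(service = LoggerFactory.class)
  private Logger logger;

  public void say(String message) {
    logger.info("Saying {}", message);
  }
}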

Logger


Most developers are probably familiar with SLF4J, which is a popular logging API for Java. The new Logger interface is inspired by the ideas in SLF4J. The Logger interface allows the bundle developer to:
  • Specify a message, message parameters, and an exception to be logged.
  • Specify the Service associated with the message being logged.
  • Query if a log level is effective.
The logging methods of Logger support curly brace "{}" placeholders to format the message parameters into the message.

logger.error("Cannot access file {}", myFile);

Sometimes message parameters can be expensive to compute, so avoiding computation is important if the log level is not effective. This can be done using either an if block

if (logger.isInfoEnabled()) {
    logger.info("Max {}", Collections.max(processing));
}

or a LoggerConsumer which is convenient with a lambda expression.

logger.info(l -> l.info("Max {}", Collections.max(processing)));

Both of the prior examples avoid computation if the log level is not effective. The latter example only calls the lambda expression if the log level is effective.

LoggerFactory


Logger objects can be obtained from the LoggerFactory service using one of the getLogger methods. Loggers are named. Logger names should be in the form of a fully qualified Java class name with segments separated by full stops. For example:

com.foo.Bar

Logger names form a hierarchy. A logger name is said to be an ancestor of another logger name if the logger name followed by a full stop is a prefix of the descendant logger name. The root logger name is the top ancestor of the logger name hierarchy. For example:

com.foo.Bar
com.foo
com
ROOT

Normally the name of the class which is doing the logging is used as the logger name. The LoggerFactory service can be used to obtain two types of Logger objects: Logger and FormatterLogger. The Logger object uses SLF4J-style ("{}") placeholders for message formatting. The FormatterLogger object uses printf-style placeholders from java.util.Formatter for message formatting. The following FormatterLogger example uses printf-style placeholders:

FormatterLogger logger = loggerFactory.getLogger(Bar.class, FormatterLogger.class);
logger.error("Cannot access file %s", myFile);

The old LogService service extends the new LoggerFactory service and the original methods on LogService are now deprecated in favor of the new LoggerFactory methods.

Logger Configuration


A LoggerAdmin service is defined which allows for the configuration of Loggers. The LoggerAdmin service can be used to obtain the LoggerContext for a bundle. Each bundle may have its own named LoggerContext based upon its bundle symbolic name, bundle version, and bundle location. There is also a root LoggerContext from which all named LoggerContexts inherit so default logging configuration can be set in the root LoggerContext.
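For instance, a bundle's effective log levels can be adjusted at runtime through its LoggerContext. A small sketch, assuming loggerAdmin has already been obtained from the service registry; the bundle symbolic name and level are illustrative:

import java.util.Map;
import org.osgi.service.log.LogLevel;
import org.osgi.service.log.admin.LoggerContext;

LoggerContext ctx = loggerAdmin.getLoggerContext("com.acme.shop");  // named context for one bundle
ctx.setLogLevels(Map.of("com.acme.shop.internal", LogLevel.DEBUG)); // enable debug for one logger subtree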

If Configuration Admin is present, then logger configuration information can be stored in Configuration Admin. This allows external logger configuration such as via the Configurator Specification.


Receiving Log Entries

The Log Service specification has never dealt with persisting or displaying log entries. It provides API for logging and another API for receiving what has been logged. This latter API can then be used to store logged information in any place appropriate for the application. Since Release 1, the LogReaderService has provided access to logged information. This service predates the formulation of the Whiteboard pattern and is thus out-of-date with OSGi best practices. For the version 1.4 update of the Log Service specification, we added a newer, more modern way to receive logged information: the LogStreamProvider service. The Log Stream Provider uses the new Push Stream API added for R7. To receive a stream of LogEntry objects pushed as they are created, use the createStream method to receive a PushStream<LogEntry> object.

logStreamProvider.createStream()
  .forEach(this::writeToLogFile)
  .onResolve(this::closeLogFile);

The stream can be created with any past history if desired.

logStreamProvider.createStream(LogStreamProvider.Options.HISTORY)
  .forEach(this::writeToLogFile)
  .onResolve(this::closeLogFile);

The LogEntry interface has also been updated to return additional information about the log entry. It now contains the name of the Logger used to create the entry, location and thread information about the source of the log entry creation, and a sequence number so that log entry creation order can be inspected.

Conclusion

Version 1.4 of the Log Service is a pretty significant update to the specification. It includes a much more modern API for both logging information and receiving logged information as well as the ability to configure the log levels of loggers and bundles. Make sure to try it out in your next project. The Eclipse Equinox framework 3.13.0 implements version 1.4 of the Log Service specification and the companion bundle registers the LogStreamProvider service if you want to use it.



Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
  7. Transaction Control
  8. The Http Whiteboard Service
  9. Push Streams and Promises 1.1
  10. Configuration Admin and Configurator
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when it's available.

Wednesday, July 18, 2018

OSGi Community Event 2018 Early Bird Pick & Registration Now Open

Congratulations to Lisa Nafeie from DLR, the German Aerospace Centre.  Lisa's talk, Visualization of OSGi based Software Architectures in Virtual Reality, has been selected as the Early Bird pick for the OSGi Community Event 2018.

We were lucky to have the opportunity to speak with Lisa to get some further background on her talk.  You can find our questions and her answers on the OSGi website.

We hope that this talk gives you a taste for all of the interesting OSGi content we will have at this year's Community Event in Ludwigsburg in October.  The OSGi Program Committee are busy reviewing all of the submissions to put a packed program together again this year.

If you are already planning on joining us you will be pleased to know that Registration for the event is now open and you can secure the best prices by booking early.

Monday, July 2, 2018

OSGi Presentation Recordings with BGJUG and Software AG

For those who follow this blog, you will know that we had an Evening of OSGi in mid-April hosted by BGJUG and Software AG. I am pleased to be able to share with you the video recordings that Software AG made at the meetup.  These have been posted on the OSGi Alliance YouTube channel and are available for you to watch.


There were two talks presented by the OSGi Alliance:
  • An 'Introduction to OSGi' by BJ Hargrave, OSGi Alliance CTO
  • A look at 'How OSGi Alliance specifications have influenced the IoT market' by Pavlin Dobrev & Kai Hackbarth (Bosch)
In addition, Todor Boev from Software AG presented 'OSGi and JPMS in practice' which discussed Software AG's experiences with OSGi and Jigsaw/JPMS.

Thanks once again to BGJUG and Software AG for arranging the meetup and allowing the Expert Group members to meet with the local Java community.

Tuesday, June 19, 2018

OSGi R7 Highlights: Configuration Admin and Configurator

The OSGi Compendium Release 7 specification contains an update to the Configuration Admin Service specification which includes a number of new features and also introduces the new Configurator specification.

Configuration Admin


One of the most used, but on the other hand barely noticed, services from the OSGi compendium specification is the Configuration Admin. This service allows you to create, update and delete configurations. It is up to the implementation where these configurations are stored. A configuration has a unique persistent identifier (PID) and a dictionary of properties.

Usually, configurations are tied to an implementation of an OSGi service, but configurations can be used for any purpose, like database connections, the current temperature or the set of available nodes in a cloud setup. While the Configuration Admin service has an API to find configurations or create them, it also supports a more inversion-of-control-like behavior through a callback mechanism. The callback (known as the ManagedService interface) gets invoked for existing/created configurations of a certain PID and also when that configuration is deleted.
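As a minimal sketch, registering such a callback directly might look like this; the PID is illustrative and bundleContext is assumed to be the bundle's BundleContext:

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.Constants;
import org.osgi.service.cm.ManagedService;

Dictionary<String, Object> props = new Hashtable<>();
props.put(Constants.SERVICE_PID, "my.component.pid");
// The callback receives the configuration properties, or null once the configuration is deleted
ManagedService callback = properties -> { /* react to configuration changes */ };
bundleContext.registerService(ManagedService.class, callback, props);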

While this callback exists, it's not that common to use it directly. The most common and easiest way to develop OSGi components and services is to use Declarative Services. Declarative Services (DS) provides built-in support for configurations. Simply by implementing an activation method, a component can receive its configuration. The implementor of that component does not have to worry about whether such a configuration exists, gets deleted or is modified. DS takes care of all of this and invokes the right actions on the component.
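A minimal sketch of such a DS component, assuming a hypothetical PID my.component.pid:

import java.util.Map;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

@Component(configurationPid = "my.component.pid")
public class PortListener {
  @Activate
  void activate(Map<String, Object> config) {
    // DS hands the component its configuration properties on activation
    Object port = config.get("port");
  }
}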

In addition to single configurations, Configuration Admin provides support for factory configurations: multiple configurations of the same type, for example different logger configurations for different categories, or different database connection configurations. These factory configurations share a factory PID, which is the same for all configurations of that type, and each has its own configuration PID to distinguish between them.

This should explain the big picture. Let's start talking about the new things in Release 7.

Configurator Specification


The Configurator specification is a new specification in the R7 release. It basically consists of two parts. The first part describes a common textual representation of a set of configurations. Prior to this specification, each and every tool was using its own format for provisioning configurations. For example, the famous Apache Felix File Install uses a properties-like format. Other tools use slightly different formats. One problem is that you can't simply switch from one tool to another, and the other major problem is that some of the formats do not allow you to specify the real type of a property value. For example, the value for the service ranking property must be of type Integer. Or you might have a special implementation that is expecting (for whatever reason) a value to be of type Byte. However, some tools simply always use a Long to represent numbers or a String to represent anything else.

Therefore a common definition eliminates these problems and allows interchangeability of configurations between various tools. The format is JSON based and uses the PIDs of a configuration as the keys. The value is the configuration object with the properties:

{
    "my.component.pid": {
        "port:Integer" : 300, 
        "array:int[]" : [2, 3, 4], 
        "collection:Collection<Integer>" : [2,3,4], 
        "complex": { 
            "a" : 1, 
            "b" : "two"
        }
    }
}

As you can see in the example, it is possible to specify the runtime type of a configuration property by separating the property name from the type using a colon. For example, the "port" property value is of type Integer, the "array" property value is an array of ints, and the "collection" property value is of type Collection<Integer>. You can specify all allowed types for a configuration, and the Configurator implementation uses the conversion rules defined by the Converter specification - another one of the new specifications of Release 7.

In addition, a configuration property can hold structured JSON as a string value. In the example above "complex" contains at runtime a string value of the specified JSON.

Factory configurations can be specified by using the following syntax: FACTORY_PID~NAME. With the updated Configuration Admin it is possible to use a meaningful name to address factory components. The tilde separates the factory PID from the name:

{
    "my.factory.component~foo" : {
        ...
    },
    "my.factory.component~bar" : {
        ...
    }
}

Please note the errata for the published specification.

OSGi Bundle Configurations


The second part of the Configurator specification describes a new extender-based mechanism that picks up configurations from within a bundle and applies them. A bundle can contain one or more JSON files with configurations, and once the bundle is started the configurations will be put into Configuration Admin by the Configurator. The Configurator manages the state handling and ordering in a deterministic way. For example, if two bundles contain a configuration for the same PID, a ranking mechanism is used to decide which configuration is put into Configuration Admin, regardless of their installation or start order.

In addition to providing configurations through bundles, the Configurator supports providing initial configurations through system properties on startup of the OSGi framework. This is especially useful for customising an application without changing the distributable for the application. By specifying the system property configurator.initial with either a JSON document as described above or a list of URLs pointing to such JSON documents, the Configurator will apply the contained configurations in the same manner as if they had been provided through a bundle.

With this new feature, provisioning configurations through bundles and allowing them to be overridden on startup becomes part of the OSGi specifications. You will find an example application using the Configurator at the OSGi enRoute website. The specification of the Configurator has driven the update of the Configuration Admin specification, so let's talk about the most important new features in Configuration Admin.

Improved Factory Configuration Handling


The handling of factory configurations has been greatly improved. In previous versions, when you created a factory configuration, the PID part was randomly generated, which made identifying a particular factory configuration later on much harder. In addition, as the PID was auto-generated, it had no meaning. With the updated Configuration Admin, it is now possible to specify the PID of a factory configuration, eliminating those problems.

New methods on the Configuration Admin service allow you to create and retrieve factory configurations based on the factory PID and a name. These methods behave the same as the already existing methods for plain configurations. The PID for those factory configurations is generated by appending the name to the factory PID, separated by a tilde. The Configurator uses this syntax to specify factory configurations as shown above.
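A small sketch using one of the new methods; the factory PID, name and properties are illustrative, and configAdmin is assumed to be an obtained ConfigurationAdmin service:

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.service.cm.Configuration;

// Creates (or retrieves) the factory configuration with the PID "my.factory.component~foo"
Configuration cfg = configAdmin.getFactoryConfiguration("my.factory.component", "foo", "?");
Dictionary<String, Object> props = new Hashtable<>();
props.put("port", 300);
cfg.update(props);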

Improved Configuration Plugin Handling


When a configuration is delivered to a managed service, the configuration is passed through the registered configuration plugin services. Such a service can manipulate the configuration. One common use case is to handle placeholders in the configuration properties and replace them with real values on delivery. For example, a property of a database connection configuration could just contain the value "${database.url}", which is replaced with the actual URL when the configuration is passed to the component processing it. Or if you have sensitive configuration data, you can store it encrypted in the configuration and decrypt it in a configuration plugin just before it is passed to the managed service.

While this mechanism sounds useful, it is only useful if you register a managed service. However, when you are using Declarative Services (or other component frameworks) for your components, the plugins are not called at all. This gap is now closed: the DS implementation uses new functionality of the Configuration Admin service and calls the plugins before passing the configuration to its components. This ensures plugins will be called regardless of how you get your configuration, making the use cases mentioned above possible.
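A minimal sketch of such a plugin; the placeholder handling is illustrative, and a real implementation would support arbitrary ${...} keys:

import java.util.Dictionary;
import java.util.Enumeration;
import org.osgi.framework.ServiceReference;
import org.osgi.service.cm.ConfigurationPlugin;

public class PlaceholderPlugin implements ConfigurationPlugin {
  @Override
  public void modifyConfiguration(ServiceReference<?> reference,
      Dictionary<String, Object> properties) {
    Enumeration<String> keys = properties.keys();
    while (keys.hasMoreElements()) {
      String key = keys.nextElement();
      if ("${database.url}".equals(properties.get(key))) {
        properties.put(key, System.getProperty("database.url")); // substitute the real value
      }
    }
  }
}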

Conclusion


The standard format for OSGi configurations is a great step forward for tooling, and the Configurator implementation allows configurations to be deployed through bundles in a standardized and well-specified way. The update of the Configuration Admin resolves some long outstanding issues and allows for new use cases around configuration management. For all the new features of the Configuration Admin service, have a look at the specification, and make sure to also read the new Configurator specification.


Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
  7. Transaction Control
  8. The Http Whiteboard Service
  9. Push Streams and Promises 1.1
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when it's available.

Tuesday, June 5, 2018

OSGi R7 Highlights: Push Streams and Promises 1.1

The OSGi Compendium Release 7 specification contains new and updated specifications from the OSGi Alliance. Today I have the good fortune to be writing about one of each! The OSGi Push Streams specification is a brand new reactive data processing API in R7, and is closely related to the newly updated OSGi Promises 1.1 specification.

OSGi Utility Specifications

Before we dive into the new features provided by both of these specifications I will highlight an important, and often overlooked, fact about the OSGi specifications. You may have noticed that there are some large gaps in the OSGi specification numberings, in particular, that there is a really big gap between chapters 151 and 702.

This numbering isn't an accident. Chapters numbered in the range 700-799 are "utility" packages. These differ from other OSGi compendium specifications in that they don't define OSGi service interfaces that you look up from the service registry; instead they define classes that you use directly. This means that the OSGi utility specification packages are self-implementing, i.e., when you get the specification jar you get the reference implementation at the same time!

Push Streams and Promises are both utility specifications because the types and behaviors they define don't fit a service model; instead, they provide abstractions for asynchronous behaviors. There is, however, another important feature of both specifications which makes them particularly special. Neither specification has any dependency on the OSGi Core specification! This means that not only does downloading the specification API give you an implementation, but you can also use that specification outside of an OSGi framework. Push Streams and Promises are therefore great technologies to use in any Java application that you write.

What are Push Streams?

The OSGi Push Streams specification is completely new for OSGi R7, and it defines a model for processing asynchronous streams of data using Reactive Programming techniques. Reactive Programming simply refers to a program which responds to the arrival of data or events, rather than attempting to pull data or events from the source. Reactive systems are also expected to run asynchronously and to process long running (or even infinite) streams of data.

Streaming data with Java Streams

You're probably familiar with the Java Stream API which was added in Java 8. The Stream API provides a rich functional mechanism to process streams of data (typically Java Collections).
List<Person> guests = getWeddingGuestList();

// How much wine will we need to buy?
long adults = guests.stream()
    .map(Person::getAge)
    .filter(age -> age > 21)
    .count();

Processing data using the Stream API allows you to write simple, effective code, however, it does not work well for data that cannot be generated on demand. If the list of wedding guests in the above example were based on email responses then the processing thread would spend a huge amount of time blocked polling the email server!

Streaming event-based data using Push Streams

Event-based data is data that occurs when a specific action happens. This may be a clock tick, a user clicking on a web-page, or it may indicate a train passing a signal. The important thing about event-based data is that it is generated based on an external stimulus, and not because a consumer asked for the data.

Push Streams differ from the Java Stream API because they expect data to be generated and processed asynchronously. The API, however, remains very similar:
PushStream<Person> guests = getWeddingGuestList();

// How much wine will we need to buy?
Promise<Long> adults = guests
    .map(Person::getAge)
    .filter(age -> age > 21)
    .count();

The most important difference is that the return value of a Push Stream's terminal operation is a Promise. This allows the Stream of asynchronously arriving data to be processed in a non-blocking way.

Creating your own Push Streams

Push Streams can be created easily using a Push Stream Provider. A Push Stream Provider can be configured to use specific thread pools, queueing policies, back-pressure, and circuit breaker behaviors.
PushEventSource<Email> emails = getWeddingEmailResponses();

PushStreamProvider psp = new PushStreamProvider();

// How much wine will we need to buy?
Promise<Long> adults = psp.createStream(emails)
    .map(Email::getFromAddress)
    .map(this::lookupPersonByEmail)
    .map(Person::getAge)
    .filter(age -> age > 21)
    .count();

Once you have a Push Stream Provider you can use (and re-use) it to create a Push Stream from any Push Event Source. While it is possible (and sometimes desirable) to create your own Push Event Source implementation, in most cases, you can create a Simple Push Event Source from the Push Stream Provider. Events can then be pushed into the Simple Push Event Source as they occur, and they will be passed into any connected streams.
PushStreamProvider psp = new PushStreamProvider();

SimplePushEventSource<Email> spes = 
        psp.createSimpleEventSource(Email.class);

// When an "email" event occurs
spes.publish(email);

// When an error occurs
spes.error(anException);

// If/When the data stream finishes
spes.endOfStream();

Buffering and Back Pressure

In the Java Stream API the rate of data processing is determined by how fast the consumer can pull data from the source. This places a natural "brake" on the system, where the consumer cannot be overwhelmed by incoming data. In an event-based system this is no longer the case! Data events may occur far more rapidly than they can be consumed.

You can always attempt to consume events using more threads, but sooner or later your system will run out of capacity, and events will have to be queued until they can be processed. A queue of events is usually called a Buffer, and buffering is natively supported by the Push Stream specification.

Buffers are useful for dealing with short-term spikes in the flow of events, but they cannot help if the long-term event arrival rate is higher than the long-term consumption rate. In this case the only options are to:
  • Discard some of the events
  • Fail the stream
  • Communicate to the event source that it should slow down
Deciding whether to discard events or fail the stream is the job of the buffer's Queue Policy. There are several built-in Queue Policy Options which provide basic behaviors, but you can implement your own behaviors if desired. Telling the producer to slow down, however, is the job of back pressure.

Back pressure in a Push Stream is a long indicating the number of milliseconds for which the event source should stop sending events in order to give the consumer time to catch up. Back pressure can be provided in a number of ways, and is sent back to the event producer, which may (or may not) slow down as a result.
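As a hedged sketch, a buffered stream with an explicit queue policy and back-pressure policy can be configured through the stream builder; the buffer size and pause time are illustrative, and spes reuses the Simple Push Event Source from the earlier example:

import java.util.concurrent.ArrayBlockingQueue;
import org.osgi.util.pushstream.PushStream;
import org.osgi.util.pushstream.PushbackPolicyOption;
import org.osgi.util.pushstream.QueuePolicyOption;

PushStream<Email> stream = psp.buildStream(spes)
    .withBuffer(new ArrayBlockingQueue<>(64))           // bounded buffer absorbs short-term spikes
    .withQueuePolicy(QueuePolicyOption.FAIL)            // fail the stream if the buffer fills
    .withPushbackPolicy(PushbackPolicyOption.FIXED, 10) // always ask the source for a 10 ms pause
    .build();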

Using Push Streams with Real Data

The UK's rail network produces data events every time a train passes a monitoring point. This information is used to help manage signalling, report delays, and for all sorts of other reasons. This data is also part of the UK government's open data program, and has a feed streaming data to anyone who signs up for an account!

There is a public example on GitHub using Push Streams to consume this data. The data can be consumed live, however, to avoid users having to sign up for an account this example replays events recorded from the real stream.

The incoming events are JSON arrays containing batches of data events. The main stream pipeline can be seen here. It uses Jackson to read the JSON, filter the data for interesting event types, map the data into Java types and then turns those data events into train reporting locations.

What's New in Promises 1.1?

The 1.1 update to Promises addresses a number of different areas.

Time-based behaviors

The first version of the OSGi Promise API gave you no way to deal with the passage of time. For example, what should happen if a Promise isn't resolved for a very long time, or if it is resolved much faster than we expect?

The timeout and delay methods have been added to the Promise API to allow more sophisticated time-based behaviors to be built using Promises.

  • The timeout method returns a Promise which fails with a TimeoutException if the original Promise doesn't resolve before the supplied timeout elapses.
  • The delay method returns a Promise which does not resolve until the supplied delay has elapsed after the original promise resolves.
Using a combination of timeouts and delays can throttle a busy system, or ensure a live feel in a system that is receiving little input.
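A small sketch of both methods, reusing the getStringPromise placeholder from the examples below; the timeout and delay values are illustrative:

import org.osgi.util.promise.Promise;

Promise<String> p = getStringPromise();

// Fails with a TimeoutException if p does not resolve within 5 seconds
Promise<String> bounded = p.timeout(5000);

// Resolves no earlier than 500 ms after p resolves
Promise<String> delayed = p.delay(500);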

Thread management

Probably the largest change in the Promises specification is the introduction of the PromiseFactory. The Promise Factory is a powerful type which can be used as a factory for Deferred instances and resolved Promises. In addition, the Promise Factory can also be used to define the threads that should be used to execute callbacks, and to manage the time-based behaviors of the Promises. By changing the properties of the thread pool used by the Promise Factory you can ensure your callbacks are executed serially, or in parallel, or on the same thread that initiated the callback.
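For example, a Promise Factory backed by a single-threaded executor ensures that callbacks run serially. A minimal sketch:

import java.util.concurrent.Executors;
import org.osgi.util.promise.Deferred;
import org.osgi.util.promise.Promise;
import org.osgi.util.promise.PromiseFactory;

// All callbacks for promises created via this factory run on one thread
PromiseFactory factory = new PromiseFactory(Executors.newSingleThreadExecutor());

Deferred<String> deferred = factory.deferred();
Promise<String> promise = deferred.getPromise();
promise.onSuccess(System.out::println); // runs on the factory's callback thread
deferred.resolve("done");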

Usability improvements

Some of the less obvious improvements to Promises are the small usability enhancements that have been made to the API.
  • The OSGi callback functions now declare throws Exception so that your promise usage is more lambda friendly!
Promise<String> pString = getStringPromise();

// No need to worry about the URISyntaxException
Promise<URI> pURI = pString.map(URI::new);
Promise<String> pString = getStringPromise();

// Doing work onResolve used to be a bit messy
pString.onResolve(() -> {
    try {
      Throwable t = pString.getFailure();
      if (t == null) {
        finished(pString.getValue());
      } else {
        logFailure(t);
      }
    } catch (Exception e) {}
  });

// It is much simpler now!
pString.onSuccess(this::finished)
       .onFailure(this::logFailure);
  • A new thenAccept method for when you don't need the full power of then 
Promise<String> pString = getStringPromise();

// This behavior
pString.then(p -> {
    storeValue(p.getValue());
    return null;
  });

// becomes
pString.thenAccept(this::storeValue);

In Summary

OSGi Push Streams and Promises are powerful tools for asynchronous programming, even if you're not an OSGi user! If you're interested in using Push Streams or Promises then the implementations are freely available for you to start using now.



Want to find out more about OSGi R7?

This is one post in a series of 12 focused on OSGi R7 Highlights.  Previous posts are:
  1. Proposed Final Draft Now Available
  2. Java 9 Support
  3. Declarative Services
  4. The JAX-RS Whiteboard
  5. The Converter
  6. Cluster Information
  7. Transaction Control
  8. The Http Whiteboard Service
  9. Configuration Admin and Configurator
Be sure to follow us on Twitter or LinkedIn or subscribe to our Blog feed to find out when it's available.