Friday, December 28, 2012

Register by Dec 31 to Save for OSGi DevCon 2013

The preparations for OSGi DevCon 2013 are well underway. It's taking place in less than three months, from March 25 to 28, 2013, in Boston.

It is set to be an excellent event with a packed OSGi Program of 17 talks, 2 tutorials, a BOF and a Workshop over the 4 days. All attendees also get access to all of the EclipseCon 2013 sessions.

With a 40% reduction in registration fees compared with last year there has never been a better time to feast on the huge Smörgåsbord of content for learning new things and brushing up your skills. Not to forget the opportunity to network with your peers and many of the OSGi experts from around the world.

There are only a few days left to secure the best price for attending, as the Early Bird price of $800 expires on New Year's Eve. And if you are an OSGi member you can benefit from an additional $100 saving.

So why delay? Sign up today and secure your place to join us at one of the key OSGi events of next year.

If you have any questions you can reach the Program Committee by email.

Happy Holidays.

OSGi DevCon 2013 Program Committee
BJ Hargrave & Mike Francis

Monday, December 10, 2012

The Growing OSGi Ecosystem

Let’s step back for a second to look at OSGi, the big picture. We’re all so busy facing both everyday and strategic challenges that we don’t always see how many we are, how much OSGi is being deployed and, as a result, how much the OSGi ecosystem is growing.

Industry conferences are an excellent illustration of ecosystem growth and OSGi value.

We just wrapped the OSGi Community Event 2012 last quarter.  The buzz around OSGi was very visible. The event was co-located with EclipseCon Europe and some talks drew 80 attendees, a healthy snapshot of the OSGi ecosystem.  The OSGi keynote by John Duimovich from IBM showed just how well-known OSGi technology is and how it is deployed in other communities. It was exciting to see business deployments, such as the QIVICON solution from Deutsche Telekom or Cisco’s product solution, not only show the benefits of OSGi technology, but also attract additional market players to become solution partners and, accordingly, grow the ecosystem. Also, the technical presentations received positive feedback and covered a range of markets, from enterprise, cloud, and embedded topics to native OSGi. An engaged audience and an inspiring BoF showcased the high interest in the various topics and the increasing interest in OSGi-based solutions.

Excitement about OSGi is expected at the OSGi Community Event, but it also builds at other events and shows the breadth of the OSGi ecosystem as it presents the value of OSGi. For instance, there were 14 sessions with OSGi technology at Java One 2012. Importantly, they weren’t all “OSGi sessions,” but OSGi is a part of so many efforts that it works its way into diverse talks.

There is an increasing understanding and recognition that OSGi is the right technology when it comes to reducing complexity with modularity, whether it’s for large-scale distributed systems or small, embedded applications.

In the Smart Home market, which touches both large-scale utilities as well as embedded applications, OSGi adoption is gaining speed. OSGi is deployed in a variety of Smart Home devices and portal-based solutions because it provides a dynamic programming model for all applications and the capability to integrate and enhance multiple devices in a networked environment at runtime. There is an entire ecosystem building around Smart Home and Smart Energy solutions – and OSGi is integral to more and more of these solutions. You can expect that several operators and their partners will commercially launch OSGi based solutions in 2013.

Key standardization organizations are joining forces at their industry members’ request to further speed up the process for such end-to-end solutions. An OSGi workshop in October launched coordinated efforts regarding the device abstraction layer and follow-up meetings and action plans have followed.  It’s moving fast because the market wants a standardized device abstraction layer solution as soon as possible – and OSGi is considered to be a key piece of the puzzle. 

OSGi is also an industry standard for enterprise application server providers, and is embraced by open source projects within the Apache and Eclipse communities. Enterprise adopters don’t necessarily promote their use of OSGi yet, but it could become a notable competitive differentiator, as it has in the Smart Home market.

This momentum across industries promotes and builds the case for both OSGi adoption and Alliance membership. Open source as well as commercial projects propel the development of tooling and fuel steady development of the OSGi ecosystem itself, including a broad variety of industry players from enterprise software companies, operators and utility providers to software providers, manufacturers of Customer Premises Equipment (CPE), white goods, SoC vendors, Independent Software Vendors (including portal and application vendors), automotive manufacturers, and telematics providers. The cross-industry OSGi ecosystem allows companies to discover new partners and to share the workload while enhancing their product portfolio and service offerings – even in the aftermarket. That's why it's beneficial to become part of it – a self-fulfilling prophecy.

Inspiring times -- when you take a step back to notice. Of course, there is always more work to do and more ideas of where we can go, but we know the cycle between specification work and adoption is shrinking while the ecosystem grows. And that is to all of our benefit.

Susan Schwarze
OSGi VP Marketing

Thursday, November 8, 2012

OSGi DevCon 2013 CFP Closes 19 Nov, 2012

Thanks to everyone who made a submission for the Early Bird Talk selection for OSGi DevCon 2013.  As you have probably seen from @OSGiAlliance on Twitter, the talk selected was Modularity in the Cloud: a Case Study by P Bakker and Marcel Offermans from Luminis.

Well, we are quickly on to the next Call For Papers deadline... and this time it's your last chance if you want to be considered for a speaking slot at OSGi DevCon next year.

The CFP closes on Monday, November 19, so don't delay and be sure to make your submissions by then.  

We are especially looking for talks about OSGi systems that have been deployed in the embedded, enterprise or desktop worlds, along with talks about tools and frameworks that improve the OSGi experience for developers. In addition, any use cases or projects that are taking advantage of OSGi and the cloud are also of interest. Finally, we would like to hear from anyone who would like to give an OSGi tutorial, bearing in mind that this should ideally include some hands-on activities for the audience.

For more information about the CFP please visit the OSGi DevCon 2013 page.

We have an eager Selection Committee on hand to review all the submissions and select the program, and we are hoping to announce the final program before the end of this year.

If you have any questions feel free to contact us by email.

OSGi DevCon 2013 Program Committee

Friday, October 26, 2012

4.3 Companion Code for Java 7

Starting in version 4.3, OSGi began to use generics in some of its APIs, including the Core specification. Generics were introduced to the Java language in Java 5. However, OSGi needed to continue to support embedded use cases on the CDC/Foundation 1.1 runtime, which is still based upon the Java 1.4 language level and JVM. To address this, OSGi compiled the APIs with -target jsr14, an undocumented javac flag introduced before Java 5 was final. So we had the best of both worlds: we could use generics and still compile to run on Java 1.4-based runtimes.

This worked for Java 5 and Java 6. But when Java 7 shipped, two things changed: javac no longer understood the jsr14 option to -target, and javac refused to recognize the attributes containing the generics information in class files already compiled with -target jsr14. The change to no longer support creating -target jsr14 class files was fine; we could continue to compile with the Java 6 javac. But the change causing javac to cease recognizing the class file attributes holding the generics information in existing class files was a bigger problem. It meant that the 4.3 API jars published by OSGi were not usable by people who need to compile with the Java 7 javac. By not usable, I mean javac treated the classes as if they did not contain any generics information: they were raw. A bug was filed against Java to see if this was some mistake or oversight. The reply was that the change was intentional.
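What "raw" means here can be shown with plain reflection (a rough, non-OSGi illustration; the class and method names are invented): the erased type is recorded in the class file proper, while the generic type information lives in a separate signature attribute – and that attribute is exactly what Java 7's javac stopped reading from -target jsr14 class files.

```java
import java.lang.reflect.Method;
import java.util.List;

// Illustrative only: shows the erased (raw) type next to the generic
// signature that is stored in a separate class file attribute.
public class RawTypesDemo {
    public static List<String> names() { return List.of("a", "b"); }

    // Returns "raw -> generic" view of names() via reflection.
    static String describe() {
        try {
            Method m = RawTypesDemo.class.getMethod("names");
            return m.getReturnType().getName() + " -> " + m.getGenericReturnType();
        } catch (NoSuchMethodException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        // A compiler that ignores the signature attribute sees only the
        // left-hand, raw side of this output.
        System.out.println(describe());
    }
}
```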

At the time this was first noticed, Java 7 was new and not too widely used. OSGi also included the source code in the jars, so you could recompile the code yourself if you needed. Later, when it came time to ship Core R5, we changed to compiling the API classes with -target 1.5, so they work fine on Java 7. Problem solved; the new release's jars don't use -target jsr14! Except some of the current OSGi implementations (I'm looking at you, Felix and Karaf) are still based upon Core 4.3, and thus people using those implementations still need the Core 4.3 API. And if they also want to use Java 7, they need to recompile the OSGi API source. So after some prodding by a few folks, OSGi rebuilt the Core and Compendium API jars as Core 4.3.1 and Compendium 4.3.1. The new jars have the same packages at the same package versions with the same API signatures. They are just not compiled with -target jsr14, so they work fine with Java 7.

So if you need to use the 4.3 API with Java 7, pick up these new 4.3.1 jars. They should also be available on Maven shortly.

OSGi DevCon 2013 - Call For Papers

So with the OSGi Community Event finishing yesterday, the focus now shifts to OSGi DevCon 2013.

We are pleased to be co-locating OSGi DevCon 2013 with EclipseCon in Boston, MA from March 25 to 28, 2013.

The Call For Papers is open and runs until November 19 this year.

However, an Early Bird Selection of one talk for the program will be made from the submissions received by October 31.

Yes, that's right: the Early Bird Selection deadline is next Wednesday. So you have only 6 days left if you want to maximize your chances of selection and get a free pass to the OSGi DevCon and EclipseCon conference.

We want to hear about experiences from speakers who have deployed OSGi based systems, large and small, as well as embedded, enterprise, or desktop systems. We are also especially keen on tools and frameworks that make it easy to build, use, and deploy OSGi-based applications along with the growing number of uses of OSGi with cloud.

To find out more about how to make a talk submission please visit the OSGi DevCon 2013 conference page.

If you have any questions please contact the OSGi DevCon 2013 Program Committee by email.

We look forward to reviewing your submissions.

OSGi DevCon 2013 Program Committee

Monday, September 10, 2012

You Need to Attend the OSGi Community Event 2012

Do you want to get more engaged, meet with interesting people, share ideas, difficulties, and successes? When’s the last time you were able to combine work and pleasure very easily? Well, there is a good opportunity on the horizon.

It’s the time of the year when the OSGi community gets together to learn about and share new technology updates and the latest business deployments, and to discuss technical insights and further developments. All of this and more is on the agenda at the OSGi Community Event 2012: the program reflects the broad use of OSGi technology in diverse markets and product solutions.

In order to further leverage the benefits for all attendees and foster the exchange of information and activities in the constantly growing ecosystem, the OSGi Community Event 2012 is co-located with EclipseCon Europe. Mingle and benefit from the cool program … and during the evenings chill and digest the huge information flow while listening to a live band and having loads of fun, including a competition involving an Aibotix drone, which will also be presented in Jörg Lamprecht’s keynote.

So there’s a terrific program, a lot of opportunities to learn, share, and mingle and, certainly, also to party together. To top all past conferences, we look forward to you and your colleagues participating and getting involved. Discounts? First of all, enter the coupon code 'OSGi'; then benefit from the early bird (until September 30th), student, press, academic or group discounts.

Now it’s your turn to register – welcome (back) to exciting times!

Susan Schwarze
OSGi Alliance VP Marketing

Wednesday, August 29, 2012

OSGi Picking Up The Pieces

Jigsaw – the prototype project that was widely expected to become the Java™ SE 8 Module System – might now be deferred to at least Java 9. Given this uncertainty, the OSGi Alliance felt it was appropriate to state our position with respect to Java platform modularity.

Modular systems reduce maintenance and development time and so cost. For these reasons, the OSGi Alliance has dedicated more than a dozen years to the pursuit of modularity, creating the dynamic module system for Java. Now in 2012, OSGi is in daily use creating highly maintainable, extensible, and agile business applications and solutions. Application and systems developers already benefit from the mature OSGi dynamic module system for Java, proven with its adoption worldwide in Fortune Global 100 company products and services and in diverse markets including enterprise, mobile, home, telematics and consumer. The OSGi Alliance and its members continue to simplify the use of OSGi and invite the wider Java community to support this work, including documentation efforts.

Clearly, a modularized Java platform would complete this story: reducing both bloat and memory footprint, improving startup times, better accommodating embedded environments, and future-proofing Java with removable components would be highly beneficial for Java developers, architects and users.

Designing a module system takes time and experience, and so the OSGi Alliance supports Oracle’s consideration not to ship Java SE 8 with Jigsaw, especially as an incomplete modularity design would hurt the Java community. Yet we should not delay the work on the Java Module System JSR.

The solution

OSGi technology is based on open industry specifications and more than a decade of experience. The OSGi Alliance recommends that the Java community use OSGi as the cornerstone technology to modularize the Java platform; OSGi already provides not only the necessary technology foundation to successfully achieve modularization of Java SE, but also a growing ecosystem of developers and deployers of OSGi technology. The Module System JSR should be based on OSGi concepts and enable similar maintenance, extensibility, and agility benefits for the Java platform, creating a single, coherent modularization approach from the Java platform up through the application.

How do we move forwards?

Today the only module framework that exists for Java is that defined by the OSGi Alliance standards. Based on these foundations, the Alliance is willing to work with our existing ecosystem of experienced partners, the JCP and the wider Java community to collectively design the best Java Module System to secure the future of the Java platform. Collectively, we can accelerate the JCP process and related projects like Penrose; and most importantly deliver the module system design that the Java community deserves. The Java Module System JSR needs to be started right away to begin this challenging work.

Richard Nicolson
OSGi Alliance President

Friday, July 13, 2012

New RFPs available for feedback

For a while now we have been making RFPs available at OSGi for public comment before they are finalized and voted on. This has worked really well for the OSGi Cloud RFP and the OSGi/CDI integration RFP.

Four new RFPs have been made available as part of this process. The RFPs describe use cases and requirements which will ultimately feed into RFC work that will be the basis of future OSGi specifications.

RFP 143 OSGi Connect
The people at PojoSR have done some great work in showing how you can use OSGi Services in cases where you may not want to take on OSGi modularity (yet). RFP 143 captures requirements for creating a specification around this. The RFP can be found here: RFP 143.

RFP 150 HTTP Service Updates
The OSGi HTTP Service is widely used and provides a very nice programming model for servlets in an OSGi environment. However, the specification is in need of an update to modernize it in relation to the Servlet spec itself. Additionally, the whiteboard pattern is introduced for servlets. You can find the RFP here: RFP 150.
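The whiteboard idea can be sketched without any OSGi types (all names below are hypothetical): instead of each bundle calling a registerServlet() method on a central HTTP service, the servlet is simply registered as a service, and the HTTP runtime tracks registrations and dispatches to them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical mini "whiteboard": providers register themselves in a shared
// registry (standing in for the OSGi service registry), and the consumer
// (the HTTP runtime) tracks registrations instead of being called directly.
public class WhiteboardSketch {
    interface Servlet { String service(String request); }

    // stands in for the OSGi service registry
    static final List<Servlet> registry = new ArrayList<>();

    // the HTTP runtime consults the registry on every request
    static String dispatch(String request) {
        for (Servlet s : registry) {
            String response = s.service(request);
            if (response != null) return response;
        }
        return "404";
    }

    public static void main(String[] args) {
        // A bundle just registers its servlet as a service...
        registry.add(req -> req.equals("/hello") ? "Hello OSGi" : null);
        // ...and the runtime picks it up from the registry.
        System.out.println(dispatch("/hello"));
        System.out.println(dispatch("/missing"));
    }
}
```

The inversion matters in a dynamic environment: when the registering bundle goes away, its registration disappears with it, so the runtime never dispatches to a stale servlet.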

RFP 151 Declarative Service Updates
A variety of updates are proposed to the Declarative Services specification. They can be found in RFP 151.

RFP 152 EJB Integration
EJB integration with OSGi has been done at Apache Aries and GlassFish. Other app servers such as JBoss also provide some form of interaction between EJBs and OSGi. This RFP aims at writing a specification for this integration to make OSGi-enabled EJBs portable. You can find the RFP here: RFP 152.

If you have thoughts or opinions about the above topic, have a read of the RFPs and put your comments on the associated bugs.

Wednesday, June 13, 2012

Core Release 5 and Enterprise Release 5 specifications

The OSGi Alliance has just published the recently approved Core Release 5 and Enterprise Release 5 specifications and made them available for download.

Some highlights from the new specifications include:

OSGi Core Release 5

OSGi Enterprise Release 5

  • New Repository Service Specification provides declarative access to artifact repositories based on the generic capabilities and requirements model. Where traditional repositories have typically provided artifacts based on their name, version and group, the OSGi Repository can provide artifacts based on capabilities, such as packages exported, services provided, extender functionality provided or custom-defined capabilities.
  • New Resolver Service Specification. Based on the generic capabilities and requirements model, a management agent can use the Resolver service to compute the set of necessary resources needed to satisfy the given set of requirements. The Resolver is designed to work with the Repository Service, if available.
  • New Subsystems Service Specification provides the ability to group multiple bundles into a single manageable entity, allows for complete isolation as well as various sharing models of code, services, and resources through a management agent. The Subsystem Service Specification defines an archive format to package multiple bundles, the Enterprise Subsystem Archive (.esa).
  • New Service Loader Mediator Specification addresses common problems of bundles that rely on the java.util.ServiceLoader API to load custom Service Provider implementations. It describes how to use the service registry for lookup of Service Providers, as well as a solution for existing code to continue functioning using the Service Loader API in an OSGi environment.
  • New Common Namespaces Specification for use with the generic OSGi capabilities and requirements model.
    • The Extender Namespace allows a bundle that requires an extender, such as Declarative Services or Blueprint, to express this dependency.
    • The Contract Namespace provides a shorthand for many Import-Package statements for technologies which span multiple packages.
    • The Service Namespace allows a bundle to express that it provides or consumes a certain service.
  • Updated JMX Management Model Specification.
    • Object names now contain the framework name and UUID, which allows multiple frameworks to be represented side-by-side.
    • Updated the JMX API to reflect the latest Core API, specifically the bundle wiring API.
    • Many improvements as requested by users, often focused on limiting the amount of data communicated via JMX APIs.
  • Updated Configuration Admin Specification.
    • Added targeted PIDs, which can be useful when configuring multiple versions of the same bundle through Configuration Admin.
    • Added persistent change count to make it easier to detect changes.
    • Added Synchronous Configuration Listener.
The specification PDFs, companion code jars and javadoc are now all available for download from the OSGi website.
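Of the items above, the Service Loader Mediator is perhaps the easiest to motivate with a plain-Java sketch (the Codec interface is hypothetical): java.util.ServiceLoader scans META-INF/services provider files visible to a class loader, and since each OSGi bundle has its own class loader, providers packaged in other bundles are invisible without mediation.

```java
import java.util.ServiceLoader;

// Plain java.util.ServiceLoader usage (Codec is a hypothetical SPI).
// ServiceLoader looks for META-INF/services/<interface-name> resources on
// the caller's class path; no provider file exists here, so nothing is
// found -- analogous to a bundle that cannot see providers in other bundles.
public class LoaderDemo {
    public interface Codec { String name(); }

    static int countProviders() {
        int found = 0;
        for (Codec c : ServiceLoader.load(Codec.class)) found++;
        return found;
    }

    public static void main(String[] args) {
        System.out.println("providers found: " + countProviders());
    }
}
```

The mediator's job, per the spec summary above, is to back this same API with the OSGi service registry so such lookups succeed across bundle boundaries.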

Monday, May 21, 2012

OSGi/CDI integration RFP available for public comments

One of the new topics being discussed in the OSGi Enterprise Expert Group is integration with CDI (JSR 299). In short - the idea is that CDI beans in OSGi can receive their injections from the OSGi Service Registry. Additionally, CDI beans themselves can be registered in the OSGi Service Registry so that other bundles running in the OSGi framework can use them as OSGi Services.

Obviously, other component models already exist as OSGi specifications: we have Declarative Services and Blueprint. The CDI/OSGi integration will bring the CDI programming model into OSGi, and developers from the Java EE world especially could be interested in continuing to use CDI inside OSGi. In my opinion it's not one-size-fits-all. Developers have their own preferences, and having a choice of programming models is good.
Also, the nice thing about the OSGi Service Registry-based approach is that all these component models work together, so I can write a Blueprint component and consume that as an OSGi Service in my CDI bean. And any other combination is also possible.

The OSGi/CDI discussion takes place in RFP 146. The RFP is a requirements document; once we agree on the requirements, the details of the technical design will be discussed in an RFC.
As we did in the past with the Cloud Computing RFP 133, we're inviting everyone to comment or refine the requirements.
You can find the OSGi/CDI RFP here: OSGi RFP 146.

Repository Complex Requirements
Besides the OSGi/CDI RFP, another RFP has also been published for comments. This is RFP 148, Complex Repository Requirements. It is a much more narrowly focused RFP that aims at extending the capabilities of the Repository Service, which is part of the upcoming R5 Enterprise Release (already available as an EA draft). Currently the Repository query API only accepts requirements within a single namespace. RFP 148 aims to extend this to allow requirements that cross namespaces. An example would be: I need a bundle that provides the package javax.servlet and has the Apache License.
Why wasn't this part of the Repository spec in the first place? Very simple: we ran out of time. So rather than rushing this feature in or delaying the Repository spec, we decided to work on it after the initial Repository release. That's why we're looking at this RFP now; you can find it here: OSGi RFP 148.

Tuesday, May 8, 2012

Compendium 4.3 and Residential 4.3 published!

After some delays, I am happy to make the Compendium Version 4.3 and Residential Version 4.3 specification documents available for download.

Compendium Version 4.3

The Compendium Version 4.3 document adds all the specifications that were introduced in the Enterprise Version 4.2 document, including
  • Remote Services Admin
  • JTA
  • JDBC
  • JNDI
  • JPA
  • Web Applications
  • SCA Configuration
The Compendium Version 4.3 document also introduces the new Coordinator specification.
The Coordinator service provides a mechanism for multiple parties to collaborate on a common task without a priori knowledge of who will collaborate in that task. A collaborator can participate by adding a Participant to the Coordination. The Coordination will notify the Participants when it is ended or when it fails.
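The participant pattern behind this can be sketched in a few lines of plain Java (invented names, not the actual org.osgi.service.coordinator API): independent collaborators join a coordination without knowing about each other, and each is notified once when the shared task ends or fails.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the Coordinator idea with invented, simplified names.
public class CoordinationSketch {
    interface Participant {
        void ended();   // the common task completed successfully
        void failed();  // the common task was aborted
    }

    static class Coordination {
        private final List<Participant> participants = new ArrayList<>();
        void addParticipant(Participant p) { participants.add(p); }
        void end()  { participants.forEach(Participant::ended); }
        void fail() { participants.forEach(Participant::failed); }
    }

    public static void main(String[] args) {
        Coordination c = new Coordination();
        StringBuilder log = new StringBuilder();
        // Two collaborators join the same coordination independently:
        c.addParticipant(new Participant() {
            public void ended()  { log.append("cache-flushed;"); }
            public void failed() { log.append("cache-discarded;"); }
        });
        c.addParticipant(new Participant() {
            public void ended()  { log.append("batch-committed;"); }
            public void failed() { log.append("batch-rolled-back;"); }
        });
        c.end();  // everyone is told the outcome exactly once
        System.out.println(log);
    }
}
```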
And there were also enhancements to some of the existing specifications. Configuration Admin is updated to allow multiple bundles to access the same configuration, and to extend the security model to allow configuration "regions".

Declarative Services is updated to allow service references to receive service updates; to allow greedy service bindings; and support the use of compile time annotations to simplify authoring component descriptions.

Event Admin is also updated to support out-of-order asynchronous delivery of events.

Residential Version 4.3

We are also pleased to make available the first edition of the Residential specification.
The services of this Residential Specification have been designed with the residential market in mind. Requirements and management protocols for this environment are defined in specifications by consortia like the Home Gateway Initiative (HGI), the Broadband Forum (BBF) and the UPnP Forum. These specifications provide requirements for execution environments in Customer Premises Equipment (CPE) and other consumer devices, as well as protocols for the management of residential environments.
The DMT Admin service has been updated to version 2.0 with a set of major improvements including overlapping subtrees, mount points, and scaffold nodes. These changes provide the basis for use with the TR-069 protocol.

A new Residential Device Management specification defines a Residential Management Tree, the RMT. This tree provides a general DMT Admin object model that allows browsing and managing the OSGi platform remotely over different protocol adapters.

The TR-157 Amendment 3 Software Module Guidelines chapter provides guidelines for implementers of the TR-157a3 Internet Gateway Device Software Modules specification on an OSGi platform.

The DMT Admin service and the TR-069 protocol have different semantics and primitives. The new TR069 Connector Service specification provides an API based on the TR-069 Remote Procedure Calls concept that is implemented on top of DMT Admin. This connector supports data conversion and the object modeling constructs defined in the DMT Admin service.

The specification PDFs, companion code jars and javadoc are now all available for download from the OSGi website.

Wednesday, May 2, 2012

Follow-up on the 2nd Cloud Workshop

The second OSGi Cloud Workshop was held during EclipseCon/OSGi DevCon 2012 last March. It was a very interesting morning with some good presentations and some great discussion. You can still find the presentations linked from here:

We learned that people are already widely using OSGi in Cloud environments, and part of the morning was spent discussing what OSGi could do to make it even more suitable for use in the Cloud. As a result of that a number of topics were proposed for people active in the OSGi Alliance to look at. You can find a summary of these topics here:

Last week the OSGi Enterprise Expert Group and the Residential Expert Group met to discuss these topics and to find potential routes to address them. Below you can find the results of these discussions. In this list I'll start each topic with the requirement as posted earlier to the cloud-workshop mailing list. The follow-ups below describe the thinking that we came to during the recent EEG/REG meeting.

1. Topic: Make it possible to describe various Service unavailable States. A service may be unavailable in the cloud because of a variety of reasons:

  • Maybe the number of invocations available to you is exhausted for today.
  • Maybe your credit card expired.
  • Maybe the node running the service crashed.
  • etc...

It should be possible to model these various failure states and it should also be possible to register 'callback' mechanisms that can deal with these states in whatever way is appropriate (blacklist the service, wait a while, send an email to the credit card holder, etc).

1. Follow-up: A potential new RFP is under discussion around monitoring and management. This RFP is currently being discussed in the Residential Expert Group, but it should ultimately be useful to all contexts in which OSGi is run. The requirements in this RFP could address some of the service quality issues referred to in this topic.

Additionally, there was a discussion about whether it would make sense to extend the OSGi ServiceException so that various types of service failures could be reported (e.g. payment needed, quota exceeded, etc.).
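Such a typed failure report could look something like the following (purely illustrative; the class, enum and constant names are invented and do not come from any OSGi draft):

```java
// Purely illustrative: invented names, not from any OSGi specification.
// A service failure carries a machine-readable reason so a callback can
// react appropriately (blacklist, retry later, notify the card holder...).
public class CloudServiceException extends RuntimeException {
    public enum Reason { QUOTA_EXCEEDED, PAYMENT_REQUIRED, NODE_UNREACHABLE, UNKNOWN }

    private final Reason reason;

    public CloudServiceException(Reason reason, String message) {
        super(message);
        this.reason = reason;
    }

    public Reason getReason() { return reason; }

    public static void main(String[] args) {
        try {
            throw new CloudServiceException(Reason.QUOTA_EXCEEDED,
                    "daily invocation limit reached");
        } catch (CloudServiceException e) {
            // A handler can switch on the reason instead of parsing messages.
            System.out.println(e.getReason());
        }
    }
}
```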

2. Topic: WYTIWYR (what you test is what you run). It should be possible to quickly deploy and redeploy.

2. Follow-up: One of the requirements this expresses is the need to remotely run a test suite in an existing (remote) framework. There are OSGi test frameworks that support this kind of behavior today (Pax Exam, Arquillian and others), but they possibly need to be enhanced with a remote deployment/management solution that is cloud-friendly, for example the REST-based OSGi framework management being discussed in RFC 182.

2b. Topic: There was an additional use-case around reverting the data (and configuration) changes made during an upgrade. If we need to downgrade after an upgrade then we may need to convert the data/configuration back into its old state.
2b. Follow-up: It might be possible to achieve this by providing an OSGi API to snapshot the framework state. This API could allow the user to save the current state and to retrieve a past saved state. When reverting to a past deployment this operation could be combined with a pluggable compensation process that converts the data back, if applicable.
The idea of snapshotting the framework state will be explored in a new RFP that is to be created soon. The data compensation process itself is most likely out of scope for OSGi.
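The snapshot/revert idea could be sketched roughly like this (all names invented for illustration; "framework state" is reduced here to a simple map of configuration values):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of saving and reverting framework state.
public class SnapshotSketch {
    private final Map<String, String> state = new HashMap<>();
    private final Deque<Map<String, String>> snapshots = new ArrayDeque<>();

    void put(String key, String value) { state.put(key, value); }
    String get(String key) { return state.get(key); }

    // Save the current state so a later upgrade can be rolled back.
    void snapshot() { snapshots.push(new HashMap<>(state)); }

    // Revert to the last saved state; a pluggable compensation step could
    // run here to convert application data back as well.
    void revert() {
        state.clear();
        state.putAll(snapshots.pop());
    }

    public static void main(String[] args) {
        SnapshotSketch fw = new SnapshotSketch();
        fw.put("bundle.version", "1.0");
        fw.snapshot();                   // before the upgrade
        fw.put("bundle.version", "2.0"); // the upgrade
        fw.revert();                     // the downgrade
        System.out.println(fw.get("bundle.version"));
    }
}
```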

3. Topic: Come up with a common and agreed architecture for Discovery. This should include consideration of Remote Services, Remote Events and Distributed Configuration Admin.

3. Follow-up: This is the topic of the new RFC 183 Cloud Discovery.

4. Topic: Resource utilization. It should be possible to measure/report this for each cloud node. Number of threads available, amount of memory, power consumption etc… Possibly create OSGi Capability namespaces for this. 

4. Follow-up: This relates to the monitoring RFP mentioned above.

5. Topic: OBR scaling. Need to be able to use OBR in a highly available manner. Should support failover and should hook in with discovery. 

5. Follow-up: The Repository service as defined in OSGi Enterprise R5 spec chapter 132 (see the OSGi website for download instructions for the latest draft) provides a stateless API which can work with well-known HA solutions (replication, failover, etc). Additionally, the Repository supports the concept of referrals, allowing multiple, federated repositories to be combined into a single logical view.
The discovery piece is part of RFC 183.

6. Topic: We need subsystems across frameworks. Possibly refer to them as 'Ecosystems'. These describe a number of subsystems deployed across a number of frameworks. 

6. Follow-up: While the general usefulness of this isn't disputed, nobody is driving it at this point in time. If people strongly feel it should be addressed, they should come forward and help define a solution.

7. Topic: Asynchronous services and asynchronous remote services. 

7. Follow-up: This is the topic of RFP 132 which was recently restarted. RFP 132 is purely about asynchronous OSGi services. Once this is established, asynchronous remote services can be modeled as a layer on top.

8. Topic: Isolation and security for applications 
  • For multi-tenancy 
  • Protect access to file system 
  • Lifecycle handling of applications 
  • OBR - isolated OBR (multiple tenants should not see each other's OBR) 
This all needs to be configurable.

8. Follow-up: Clearly, separate VMs provide the best isolation, while separate Java VMs within a single OS-level VM also provide fairly strong isolation (however, be aware of possible side effects of native code and possible resource exhaustion). Nested OSGi frameworks and Subsystem Regions also provide isolation to a certain degree (see Graham's post on Subsystems), but the level of protection that is required clearly depends on the required security for the given application. The deployer can choose from these options as a target for deploying bundles and/or subsystems.

9. Topic: It should be possible to look at the cloud system state: 
  • where am I (type of cloud, geographical location)? 
  • what nodes are there and what is their state? 
  • what frameworks are available in this cloud system? 
  • where's my OBR? 
  • what state am I in? 
  • what do I need here in order to operate? 
  • etc… 
9. Follow-up: This is part of what is being discussed in RFC 183 Cloud Discovery.

10. Topic: There should be a management mechanism for use in the cloud 
  • JMX? Possibly not 
  • REST? Most likely 
Management of application state should also be possible in addition to bundle/framework state 

10. Follow-up: A cloud-friendly REST-based management API for the framework is currently being worked on in RFC 182. Once that is established it can also form the baseline for Subsystems management technology which can be used for application-level management.
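To give a flavor of the direction, a REST management API could expose framework state as resources along these lines (illustrative URIs only, not the actual RFC 182 design):

```
GET    /framework/state              retrieve the framework state
GET    /framework/bundles            list the installed bundles
POST   /framework/bundles            install a new bundle
GET    /framework/bundles/42/state   get the state of bundle 42
PUT    /framework/bundles/42/state   start or stop bundle 42
```

A resource-oriented design like this is easy to call from any language and fits well behind cloud load balancers, which is a large part of its appeal over JMX.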

11. Topic: Deployment: when deploying replicated nodes it should be possible to specify that a replica must not be deployed on certain nodes, to avoid all the replicas ending up on the same node.

11. Follow-up: This also relates to discovery as discussed in RFC 183. A management agent can use this information to enforce such a constraint.

12. Topic: Single Sign-on for OSGi.

12. Follow-up: One member company has done a project in relation to this on top of the User Admin Service. A new RFP will be created to discuss this requirement further.

So there you are: the ideas from the cloud workshop were greatly appreciated and provide very useful input into future work. If you're interested in following the progress, as usual we're planning to release regular early access drafts of the documents that are relatively mature. Or, if you're interested in taking part in the creation of these specs, join in! For more information see: or contact me (david at …) or anyone else active in OSGi.

Friday, April 20, 2012

Standard Applications on the Horizon

Those following the OSGi draft specifications will have seen that the OSGi Alliance has been working in the area of "bundle collections" for quite some time. By bundle collections, I mean the ability to define and deploy a set of bundles as a single entity. Many OSGi runtimes already provide such capabilities, and some examples include:
  • Apache Aries Applications
  • Apache Geronimo Applications
  • Apache Karaf Features
  • Eclipse Virgo Plans and PARs
  • Eclipse Platform Features
  • IBM WebSphere Application Server Applications, Composites and Features
  • Oracle GlassFish Applications
  • Paremus Service Fabric Systems
When you see a list that long, and I've no doubt there are many more, it's clear that this is an area crying out for standardization. Digging a little deeper into these bundle collections, we see a lot of commonality between them: they define collections of bundles (obviously), they give the collection an identity, they allow version ranges when identifying the content bundles, and they often also allow locking down of versions (e.g. for a specific tested deployment). There's also some variability; many, but not all, provide isolation to prevent undesirable interactions between collections deployed to the same runtime. This is similar to the isolation you get with Java EE applications on an application server, but rather than replicating the Java EE model, they also allow bundles to be shared by applications to remove duplication and reduce disk and memory footprint (common libraries and frameworks can be packaged and deployed just once). Other types of collections do not impose isolation and are often used as a way to simplify the assembly of runtimes.

So, where are we in the standards? Well, there are two parts to this:

The first part is an enabler in the OSGi Core Framework. To enable isolation to be applied to collections of bundles, the OSGi Core defined a set of framework hooks. Hooks hide things from bundles such that they cannot resolve to, or find, them. Core 4.2 had already added the Service hooks, and Core 4.3 complemented these with the Resolver and Bundle hooks. Hooks are low-level core features; it was never the intention that they would be used by most OSGi developers, and so something else was needed.

The second part is the specification that uses the core enabler, but makes it simple to develop, test and deploy these bundle collections. That's where the new Subsystems specification comes in. Subsystems defines a programming model that uses a format familiar to bundle developers for developing bundle collections. It simplifies the task further still by defining different types of bundle collection which have default isolation policies matching the most common isolation models. The subsystem types defined in the specification are:

  • Application - allows the collection to use packages and services from shared bundles, but does not share its own with others. This type fits closely with what people would typically think of as an application running in an application server
  • Composite - does not use or share any packages or services unless explicitly configured to do so. This enables the collection of bundles to keep some aspects of their internals private, only exposing what they want the rest of the runtime to see or provide. A common use case for composites is where you have teams providing parts for solutions and those parts are composed of multiple bundles and have public APIs and then implementation details they do not wish to be externalized.
  • Feature - freely uses and shares any packages and services it provides or that are visible to it. This simplifies the management of bundle collections but does not isolate them. A common use case for these is to make the definition of runtimes or solutions simpler.

The following is an example of an application subsystem containing three bundles:

Subsystem-ManifestVersion: 1
Subsystem-SymbolicName: com.acme.foo.application
Subsystem-Name: Acme Foo Application
Subsystem-Version: 1.2
Subsystem-Type: osgi.subsystem.application
Subsystem-Content: com.acme.foo.web,
 com.acme.foo.biz,
 com.acme.foo.persistence

If you've developed bundles in the past, then many of the headers will look familiar. This particular application has a name (Subsystem-SymbolicName) and a version (Subsystem-Version), including human readable information (Subsystem-Name). It has a type (Subsystem-Type) that identifies it as an application which means its packages and services are not exposed outside the subsystem. Finally, it lists its content (Subsystem-Content). The type of the content defaults to bundle, but other types are permitted (such as fragments, or other subsystems, e.g. to nest a feature within an application), and a subsystem implementation can also choose to enable its own custom types (not required to be supported by different subsystem implementations).

One other important area of management that the Subsystems specification addresses is that of provisioning. You'll see from the example that it lists content but not the specific versions, nor any bundles it might require in support of that content. If you install that definition and provide no further details, the subsystem implementation will resolve the subsystem, choosing the exact bundle versions and any necessary dependency bundles. A second option is to provide a pre-defined deployment that specifies the exact content and supporting bundles. That way, you can test a specific deployment before putting it into production.
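As a sketch, such a pre-defined deployment could pin everything down in a deployment manifest. The header names below follow my reading of the Subsystems draft, while the bundle names and versions are made up for illustration:

```
Subsystem-SymbolicName: com.acme.foo.application
Subsystem-Version: 1.2
Deployed-Content: com.acme.foo.web;deployed-version=1.0.1,
 com.acme.foo.biz;deployed-version=1.2.0
Provision-Resource: org.apache.commons.logging;deployed-version=1.1.1
```

The point is that the development artifact stays flexible with version ranges, while the tested, exact resolution travels separately into production.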

The Subsystem specification was made available in the OSGi Enterprise Release 5 Proposed Final Draft in March. As with all OSGi specifications, they're written for implementors, but as projects and vendors begin to provide implementations I expect to see user-level documentation emerge and then users can enjoy the flexibility of creating modular standard applications, composites and features for a number of target environments.

Monday, April 2, 2012


This is my last blog on the OSGi Web site ... After 237 blog posts it is time to say goodbye. I'd like to thank the OSGi Alliance for giving me a sounding board for these years. I'd like to thank everybody in the Alliance in the past years for their support and giving me a chance to work with them. I am excited to start my new work but there is sadness that my time as OSGi Technical Director has ended. I've been incredibly privileged to work on this technology, it was a fantastic adventure.

I wish the OSGi Alliance the best. I think the specifications are mostly rock solid and I think the increasing adoption reflects this. The last OSGi DevCon/EclipseCon was a big success and it was great to see how the OSGi service model is making inroads. Though there is never an opportune time to leave, I do feel that the specifications are mature now and the most crucial parts are in place. So to that extent my work is done.

I will continue blogging on, I do hope you will follow me there. I will also tweet as @pkriens. Also do not forget to connect on LinkedIn and/or provide a recommendation.

Thanks for having had the patience to read this blog,

  Peter Kriens

Friday, March 30, 2012

What happened to pre-release versions?

In RFC 175, we proposed to introduce two new things for Core Release 5. One was a change to the version syntax to support pre-release versions and the other was to introduce a VersionRange class to provide a common implementation of version range operations. When the Core Release 5 spec was completed, it included the VersionRange class but did not include pre-release version support. So, what happened?

Pre-release, aka. snapshot, versions seemed like a good idea when first proposed. From the RFC:
During development of a bundle (or package), there is a period of time where the bundle has not been declared final. That is, the bundle has a planned version number once final, but that version number is not practically consumed until the bundle has been declared final. However, during development of the bundle, it must have a version number. This version number must be larger than the version number of the previous final version of the bundle but less than the version number intended for the bundle once final.
There are several usage patterns for version numbers which have emerged to deal with this problem. For example, some use an odd/even version number for the minor version to differentiate between development versions and final (release) versions. Some also place the build timestamp in the qualifier to distinguish all built versions of a bundle, but there is no clear marking which is the final version so dependents cannot mandate a final version.
So we proposed a change to the version syntax to open up space between version numbers so that before the unqualified version (e.g. 1.2.3) there would be pre-release versions. So 1.2.3-snapshot would be a lower version number than 1.2.3. It would have a total ordering over all versions and be backwards compatible with existing version syntax.

1.2.3- < 1.2.3-x < 1.2.3 = 1.2.3. < 1.2.3.x

However, we also had to work properly with existing version range syntax. For example, is the version 1.2.3-snapshot included in the range [1.2.3,2.0.0)? We defined two rules for this.
  1. If a version range having an endpoint specified without a qualifier (e.g. [1.2.3,2.0.0)) would include a version with a release qualifier of the empty string (e.g. 1.2.3), then the version range must also include that version when identified as pre-release (e.g. 1.2.3-x).
  2. If a version range having an endpoint specified without a qualifier (e.g. [1.2.3,2.0.0)) would exclude a version with a release qualifier of the empty string (e.g. 2.0.0), then the version range must also exclude that version when identified as pre-release (e.g. 2.0.0-x).
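To make the proposed ordering concrete, here is a small plain-Java sketch. This is not the OSGi Version class; the class name, method names, and parsing are simplified to cover only the forms used in the examples above.

```java
// Illustrative sketch of the proposed (and later discarded) pre-release
// ordering. "1.2.3-q" is a pre-release of base version "1.2.3" and must
// sort before it; "1.2.3.q" is a released version with qualifier "q".
public class PreReleaseOrder {

    // Compare two version strings under the proposed total ordering.
    // Handles only "1.2.3", "1.2.3.q", and "1.2.3-q" style strings.
    public static int compare(String v1, String v2) {
        return key(v1).compareTo(key(v2));
    }

    // Build a sort key: base version, then a rank where pre-release (0)
    // sorts before release (1), then the qualifier itself.
    private static String key(String v) {
        int dash = v.indexOf('-');
        int dot3 = nthDot(v, 3);
        if (dash >= 0)             // pre-release: 1.2.3-q
            return v.substring(0, dash) + "|0|" + v.substring(dash + 1);
        if (dot3 >= 0)             // released with qualifier: 1.2.3.q
            return v.substring(0, dot3) + "|1|" + v.substring(dot3 + 1);
        return v + "|1|";          // released, empty qualifier: 1.2.3
    }

    // Index of the n-th '.' in v, or -1 if there are fewer than n dots.
    private static int nthDot(String v, int n) {
        int idx = -1;
        for (int i = 0; i < n; i++) {
            idx = v.indexOf('.', idx + 1);
            if (idx < 0) return -1;
        }
        return idx;
    }

    public static void main(String[] args) {
        // Reproduces: 1.2.3- < 1.2.3-x < 1.2.3 = 1.2.3. < 1.2.3.x
        System.out.println(compare("1.2.3-", "1.2.3-x") < 0);  // true
        System.out.println(compare("1.2.3-x", "1.2.3") < 0);   // true
        System.out.println(compare("1.2.3", "1.2.3.") == 0);   // true
        System.out.println(compare("1.2.3.", "1.2.3.x") < 0);  // true
    }
}
```

Even this tiny sketch hints at the problem discussed below: the rules are simple to state, but keeping the ordering straight in your head takes real effort.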
Altogether we had a complete design, and I was able to implement the changes to the Version and VersionRange classes and write compliance tests. We even implemented it in Equinox. So from a runtime point of view, things looked OK.

But the big concerns came up around interactions with existing tooling, management and provisioning systems. These systems would not understand a bundle having a pre-release version string and would require a lot of changes to properly support the pre-release version syntax.

Furthermore, we also became concerned about the mental complexity of pre-release versions. In numerous discussions within CPEG and with peers, people would get confused over the ordering of versions and whether some version was included in some range. If we, the "experts", couldn't keep it straight in our minds, we could hardly expect others to do better.

So in the end, given the mental complexity and the downstream impact on tools, repositories and management systems, CPEG decided that the benefit of the changes was not sufficient to justify the cost of the change. So we agreed, after some lengthy discussions, to discard the pre-release version proposal.

BJ Hargrave

Friday, March 23, 2012

Surprising Services

Services are arguably what OSGi is about, but at the same time they are also the least understood. When we set out to design OSGi, our key requirement was to create an ad-hoc collaborative programming model. Instead of components that are centrally managed and/or closely coupled to their siblings, we looked for a model of independent peers. To understand what I mean:

In a peer-to-peer model you need to be able to find out who your peers are and how you can collaborate with them. In the agent or actor model the peer is identified and messages are exchanged based on peer identity. Peer dependencies suffer from transitive dependency aggregation: quite soon almost all agents are transitively dependent on each other and there is no more component reuse. This is the big ball of mud problem. To address this problem we came up with the service model. OSGi services combine package-based contracts with an instance broker to provide a managed conduit between peers.

That said, what then is the actual value of services for application developers?

Well, the surprising effect of services is that the contracts can often be significantly simplified over traditional APIs, because in most Java APIs the collaborative parts are mixed with instance coupling techniques (hacks?) (DocumentBuilderFactory, InitialContextFactoryBuilder anyone?) and administrative aspects.

It turns out that with services you rarely need to define contracts for these aspects since they are taken care of by the service model (factories, but much more), or in most cases can stay inside the component. Many administrative aspects can be handled by the implementation. Service contracts can be limited to strictly the collaborative parts. And size does matter, the smaller the contract, the easier it is to use.

For example, assume you need to use a persistent queue that is shared by a set of workers. ActiveMQ, Amazon SQS, etc. have a significant number of API calls about maintaining the queues, setting their properties, and interacting with them. However, virtually all of those aspects can be handled by Configuration Admin; the only collaborative aspects are: how does the worker get its task, and how do you queue a task?
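To make the discussion concrete, here is a self-contained sketch of what such a minimal contract could look like, with a trivial in-memory queue standing in for SQS or ActiveMQ. All names here are hypothetical, not an OSGi specification.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of a minimal queueing contract: the only collaborative aspects
// are delivering a task to a worker and posting a new task. Everything
// administrative (queue creation, properties, ...) stays out of the API.
public class QueueContractSketch {

    // The worker registers this service; the queue implementation
    // calls work() for each task taken from the persistent queue.
    interface Worker<T> {
        void work(T task) throws Exception;
    }

    // The queue implementation registers this service; senders and
    // workers use it to enqueue new tasks.
    interface MessageQueue {
        void post(Class<?> taskType, Object task);
    }

    // Trivial in-memory implementation, standing in for SQS, ActiveMQ, ...
    static class InMemoryQueue implements MessageQueue {
        final Queue<Object> tasks = new ArrayDeque<>();
        public void post(Class<?> taskType, Object task) { tasks.add(task); }
    }

    public static void main(String[] args) throws Exception {
        InMemoryQueue queue = new InMemoryQueue();
        Worker<String> worker = task -> System.out.println("worked: " + task);
        queue.post(String.class, "hello");
        // A real dispatcher bundle would pump the queue; here we do one step:
        while (!queue.tasks.isEmpty())
            worker.work((String) queue.tasks.poll());
    }
}
```

Running this prints "worked: hello". The point is the size of the contract: two single-method interfaces are enough for the collaboration, and everything else belongs to the implementation.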

The by far simplest solution I know is to define a contract where the worker registers a Worker service and the queue implementation registers a MessageQueue service:

  public class MyWorker implements Worker<MyTask> {
      MessageQueue queue;

      public void work(MyTask task) {
          // ... process the task, then queue any follow-up work
          queue.post(AnotherTask.class, another);
      }

      void setQueue(MessageQueue queue) {
          this.queue = queue;
      }
  }
This queuing contract is sufficient to allow a wide variety of different implementations. Implementing this with the Amazon Simple Queue Service is quite easy: a puny little bundle can look for the services, use the queue service property to listen to queues, and dispatch the messages. In this case, the web-based AWS console can be used to manage the queues, no code required. A more comprehensive implementation can use Configuration Admin to manage queues, or it can create queues on demand. Implementing this on another persistent queue can be done quite differently without requiring any change in the components that act as senders and workers.

If there is one rule about simplifying software that works consistently, it is hiding. Something that you cannot see can't bug you. OSGi services are by far the most effective way to hide implementation details, minimizing what must be shared. Our industry is predicted to double the amount of code in the next 8 years; we had better get our services on or this avalanche will bury us.

 Peter Kriens

Tuesday, March 20, 2012


Last year we introduced the Coordinator in Compendium 4.3. Unfortunately, release 4.3 was held up over some legal issues. However, it will soon be available, in the 4.3 Compendium as well as in Enterprise 5.0.

The Coordinator service is a bit my baby. When we started with OSGi almost 14 years ago, one of the first things I proposed was to start with a transaction manager. I'd just read Transaction Processing by Gray & Reuter in three days and was thrilled; it is the best non-fiction book I ever read. Obviously the ACID properties were interesting, and it was very informative to see how they could be implemented, but the most exciting part was the coordination aspect of transactions. Transactions, as described in that seminal book, allowed different parties (the resource managers) to collaborate on a task without prior knowledge of each other. Resource managers, when called, could discover an ongoing transaction and join it. The transaction guaranteed them a callback before the task was finished. This of course is a dream in a component model like OSGi, where you call many different parties, most of which you have no knowledge of. Each called service could participate in the ongoing task and be informed when the task was about to be finished. When I proposed a transaction manager the guys around the table looked at me warily and further ignored me. Transactions in an embedded environment?

We got very busy, but I kept nagging and the rest kept wondering if I was silly in the head. In 2004 I finally wrote RFC 98, a modest proposal for a transaction manager light. Of course I immediately ran into the situation that, even though few if any had used it, there was already an existing Java Transaction API (JTA). Prosyst did some work on this since they saw some value, but in the end it was moved to a full-blown JTA service. This was not what I wanted, because JTA is weird (from my perspective then): it distinguishes too much between the container people and the application people. OSGi is about peer-to-peer; containers are about control from above. Try to act as a resource manager with XA (which would give you the coordination aspects) and it looks like it was made difficult on purpose.

Then it hit me: I always ran into opposition because I used a name that too many people associated with heavyweight complexity. Though a full-blown distributed high-performance robust transaction manager is, to say the least, a non-trivial beast, I was mostly interested in the coordination aspects inside an OSGi framework, a significantly simpler animal. So I chose to change the name! The Coordinator service was born!

The first RFC was a very simple thread-based Coordinator service. When your service is called you can get the current Coordination (or you can start one). Each thread has its own current Coordination. Anybody can then join the Coordination by giving it a participant. The Coordination can either fail or succeed, after which all the participants are called back and informed of the outcome. Anybody that has access to the Coordination can fail it; the Coordination will also fail with a timeout if it is not terminated before a deadline.

So how would you use it? The following code shows how a Coordination is started:

  Coordinator  coordinator = ...
  Coordination c = coordinator.create("work", 0);
  try {
      doWork();          // the actual work; hypothetical method
  } catch (Throwable t) {
      c.fail(t);         // mark the Coordination as failed
  } finally {
      c.end();           // throws if the Coordination has failed
  }
This template is very robust. The Coordination is created with a name and a timeout. The work is then done in a try/catch/finally block. The catch block will fail the Coordination. Calling end on a failed Coordination will throw an exception so the exception does not get lost. A worker would do the following to participate in the Coordination:

 Coordinator coordinator = ...
 void foo() {
     if (!coordinator.addParticipant(this)) {
         doWork();      // no current Coordination: do the work immediately
     }
 }
A worker can use the Coordinator service to add itself as a participant. It is then guaranteed to get a callback when the current Coordination is terminated.

An example use case is batching a number of updates. Normally you can significantly optimize if you can delay communications by batching a number of updates. However, how do you know when you have received the last update so you can initiate the batch? If there is no current Coordination, the updates are done immediately; with a Coordination they can be batched until it is terminated.
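The batching pattern can be sketched as follows. The Participant callback below is a tiny stand-in for the real Coordinator API so the example is self-contained, and all class names are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the batching use case. The Coordinator service is simulated
// with a boolean flag and a callback so the example runs standalone;
// the real API lives in the OSGi Coordinator specification.
public class BatchingSketch {

    // Minimal stand-in for the participant callback of the real API.
    interface Participant {
        void ended();
    }

    // A component that batches updates while a coordination is active.
    static class Cache implements Participant {
        final List<String> batch = new ArrayList<>();
        final List<String> flushed = new ArrayList<>();
        boolean coordinationActive;

        void update(String value) {
            batch.add(value);
            if (!coordinationActive)
                flush();           // no coordination: send immediately
        }

        void flush() {             // one communication for many updates
            flushed.addAll(batch);
            batch.clear();
        }

        public void ended() {      // callback when the coordination ends
            coordinationActive = false;
            flush();
        }
    }

    public static void main(String[] args) {
        Cache cache = new Cache();
        cache.coordinationActive = true;   // a coordination is in progress
        cache.update("a");
        cache.update("b");
        System.out.println(cache.flushed.size()); // prints: 0, still batched
        cache.ended();                            // coordination terminated
        System.out.println(cache.flushed.size()); // prints: 2
    }
}
```

The component itself never needs to know who else is participating; the coordination boundary tells it when the last update has arrived.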

During the specification process a number of features were added: direct Coordinations (not thread based), a variable store per Coordination, and a reflective API.

I guess it will take some time before the Coordinator is really taken advantage of since the model is quite foreign to most developers. However, I am convinced that the service is really what OSGi is all about: coordinated collaborative components.

Peter Kriens

Friday, February 24, 2012

OSGi DevCon 2012 / EclipseCon

Did you already make your arrangements for OSGi DevCon? The conference starts in about four weeks and you surely do not want to miss it. Not only do we have a very strong program this year, we also have an OSGi Cloud workshop on Thursday. And, last but I hope not least, it will be the last time to meet me in my role as OSGi Evangelist/Editor/Technical Director/Gopher!

I really hope to meet most of you at the conference!

   Peter Kriens

Friday, February 10, 2012

Cloud Workshop During OSGi DevCon/EclipseCon

The OSGi Alliance is organizing a second Cloud workshop during OSGi DevCon, on Thursday, March 29. The workshop will run from 9am to 1pm. Attendance is free but requires registration.

RFP 133, our document describing the areas we could look into, has matured over the past year. It is clear from this work that OSGi and the cloud are a natural fit. The next stage is working on actual specifications that will make building reliable and robust cloud based systems easier. For this, we need to set priorities.

We will start with a number of presentations from Paremus, JClouds, eBay and RedHat/JBoss about the work that has been done in this area. After this we will move on to a discussion about priorities since we want to start the RFC work.

We have a limited number of places, so if you want to attend, register quickly; we already have quite a list of registrations. Details can be found here.

I will definitely attend the Cloud workshop since it is close to my heart. It will, however, be one of my last activities as the OSGi Technical Director before moving on ...

Peter Kriens

Monday, January 23, 2012

Objects Revisited

Alan Kay is the inventor of Smalltalk, the first truly object-oriented language. I learned Smalltalk in the early eighties, and almost every day that I use Java I gnash my teeth that James Gosling did not steal more ideas from Smalltalk. About 20 years ago, during an OOPSLA, Alan Kay presented the idea that data should always carry its own methods to access that data. His example was a tape (!) that would contain the data as well as the code to interpret that data.

I think this idea was very much at the core of the Java Management standard first proposed around 1997. Each device would have a Java VM on board and the management system could send little management programs that would be executed on the device. However good it sounded at the time (and I tried to push this idea at Ericsson), the idea never became successful: it was just too complex to make it work reliably on a larger scale, where machines have different versions and are implemented in more languages than I could ever learn in this life. It was just too complicated, error prone, and risky. Exchanging, or relying on, arbitrary code between loosely coupled machines turned out to be a surprisingly bad idea. Objects, however useful they are in many places, seem to get more and more in the way when you build larger distributed systems.

The reason objects are so ill suited to go outside their process is that they force the objects to expose their innards, the very thing objects try so hard to hide. Even if we could encapsulate the data during the transition, as Alan Kay suggested, we would create a huge burden on the receiver to understand (and trust) the code that encapsulates the data. We would also create a huge dependency problem: can the code provided with the data actually run correctly on the receiver?

There has always been an impedance mismatch between persistence and object orientation. JPA does a decent job, but there is something fishy when you need such huge, complicated, and performance-intensive middleware only to simplify the life of the developer. Recently I've been doing some more thinking about this subject and I think that though objects work beautifully in a single process, they are ill suited for anything that involves crossing the process boundary, which obviously includes persistence.

Last week during an OSGi EG conference call the problem came up again during the discussion of a specification: do we support serialization for some of the domain objects or not? What is often not realized is that serialization is a public interface since it is shared with the world; it is not an internal implementation detail. This is the essence of modularity: there is an inside and there is an outside. What is on the inside can be changed; what escaped from the inside must be carefully (and thus more expensively) evolved since its dependencies are unknown.

The problem is acute with interface-based programming. Two systems running a service defined in interface S (maybe separated in time) that need to communicate their domain objects can only do so if the specification for S defines a serialization format. Putting a serialVersionUID in an interface is a total waste of bits (although they do occur!). The only solution I see is that we need to make the marshalling a first-class citizen in the contract, since the data representation is part of the public API.

However, what format should be used? The standard Java serialization format is quite awkward to parse except for implementation classes. There is good old XML, but JSON is increasing in popularity and there are enough other serialization standards out there to fill books. SQL is also a kind of serialization format. Picking one without making others unhappy will be hard.

I've come to the conclusion that the best format is actually ... Java.  I started to use what I call data classes. These are classes with only public fields of primitives (or their wrappers), strings, data classes, and collections or arrays of data classes. This subset is very easy to (un)marshal to almost any available marshalling technique using simple rules and reflection. These data classes can act as a very convenient schema for my public interface to other processes, including me in the future (a.k.a. persistence). Since they are part of the Java type system they are easy to use and the compiler can do a lot of sanity type checking. And they can easily be versioned in OSGi.
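A minimal sketch of such a data class with a reflective marshaller; the class and field names are invented for illustration:

```java
import java.lang.reflect.Field;

// Sketch of a "data class": only public fields of primitives, strings,
// other data classes, and collections/arrays thereof. With such a
// restricted shape, reflection can turn any instance into a key=value
// form (or JSON, XML, ...) with a handful of simple rules.
public class DataClassSketch {

    // A hypothetical data class acting as the public schema.
    public static class Person {
        public String name;
        public int age;
    }

    // Generic marshaller: walks the public fields by reflection.
    // A real one would recurse into nested data classes and collections.
    static String marshal(Object o) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder("{");
        for (Field f : o.getClass().getFields()) {
            if (sb.length() > 1) sb.append(", ");
            sb.append(f.getName()).append('=').append(f.get(o));
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) throws Exception {
        Person p = new Person();
        p.name = "Alice";
        p.age = 42;
        System.out.println(marshal(p));
    }
}
```

Because the schema is ordinary Java, the compiler checks every field access, and the class can be exported and versioned like any other OSGi package.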

The data classes are a solution to a problem I see becoming prevalent. It is against pure object orientation, but I honestly do not see another solution; the shared-code model just does not work very well. Sad, but I think it is time to declare defeat. Maybe Java 8 should not steal from Smalltalk but the struct from C?

Peter Kriens

Wednesday, January 11, 2012

Java Generics are a Lemon

After working with Java for almost 15 years, and with deep knowledge of Java generics at the class-format level, I learned something very basic the really, really hard way. I knew the collections in Java were not that good in comparison with what you find in other environments (immutable anyone?), but now I learned that even adding all that extra cruft to my classes is useless when you have a major refactoring.

This week I learned that for collections and maps the get, remove, containsKey, containsValue, and equals methods do not use the generic type parameter. This means you can call them with any type, and you do not get an error when the argument is not compatible with the generic type of the collection.
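A small example makes the trap concrete; this compiles without a single warning even though the lookup can never succeed (the refactoring scenario in the comments is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Map.get/remove/containsKey take Object, not K. So after changing the
// key type of a map, lookups using the old key type still compile and
// silently return null/false instead of causing a compile error.
public class GenericsLemon {
    public static void main(String[] args) {
        // Suppose the key type was String before a refactoring:
        Map<Long, String> users = new HashMap<>();
        users.put(42L, "Alice");

        String key = "42";  // leftover code still using the old key type
        System.out.println(users.get(key));         // compiles; prints: null
        System.out.println(users.containsKey(key)); // compiles; prints: false
    }
}
```

Only put and method parameters typed with K or V are flagged by the compiler, which is exactly why the refactoring described below slipped through.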

I found this out when I changed the key type of many Maps, expecting that Eclipse would nicely point out what to change. Well, it does not. The puts and parameter calls are nicely flagged, but a significant amount of code fails silently because the object is simply no longer found. Fortunately I was saved by having hundreds of solid test cases that told me where to look.

I understand these methods were not generified because things became too hairy. Why that did not raise concerns about the power of generics at the time beats me.

Well, guess I learned something.

Peter Kriens

Monday, January 2, 2012

Moving On

A bit more than 13 years ago I was asked to go to Linköping, Sweden to help out an Ericsson business unit to get the Java Embedded Server running on their e-box. This single appointment quickly cascaded into an almost full time job managing the OSGi specification process on behalf of Ericsson. In 2001 I switched to the OSGi Alliance to become the Technical Director and in that capacity the editor of the specifications. A hectic decade followed with too much travel, several economic booms and busts, various controversies, working with some really great people, and many rock solid specifications to show for it. When I look at my bookshelf I see a satisfying sight of two shelves with OSGi specifications and books. All said, it was a pretty good decade.

However, it is time to move on. Not because I feel OSGi is not the right answer; on the contrary. I think the OSGi service model is as important as structured programming and object orientation were in the previous decades to increase productivity in the software industry. The reason to leave is that I see a business opportunity in the gap between the mainstream Java developer and where OSGi is today. Working with the myriad of problems around modularity has given me a solid background to ease the transition of existing applications into more modular software. And after a decade of writing specifications, creating real systems again looks pretty attractive.

I will stay on until after OSGi DevCon 2012 in Reston, Virginia at the end of March. During that time I will finish the upcoming Core and Enterprise specifications that are currently in the pipeline. After that, well, to find out you have to follow me on Twitter (@pkriens) ...

If you want to show you like what I've done in the OSGi Alliance, I would appreciate it if you connected with me on LinkedIn and/or provided a recommendation.

Now back to work on my last two OSGi specifications.
Peter Kriens