Wednesday, August 21, 2013

The Perfect OSGi Persistence Model?

After such good experiences with MongoDB last year, it is a tad frustrating to have to dive into the messy and fuzzy Java persistence world. OK, I did miss transactions in MongoDB a lot, but for the rest it was a dream. However, there are good cases to be made for relational databases, not least because of their popularity and therefore widespread support. What I have found so far is that virtually no implementation is as easy to use as the OSGi specifications imply. I now have a configuration that works, but it is a far cry from the pick-and-choose model that OSGi promises. I've created a small blog application that consists of a web-based GUI and a Blog Manager service. The Blog Manager service is then implemented in multiple ways:
  • Dummy database
  • JDBC based on the OSGi DataSourceFactory
  • JPA
  • JDO
The intention is then to have configurations for different implementations of each of these persistence standards. So far I have the dummy version running, as well as an OpenJPA/H2-based one. Though H2 worked out of the box, OpenJPA was much harder; I could only get it to work by pulling in a number of Apache Aries bundles. Another struggle was getting the Transaction Manager working. After trying out different candidates I selected the JOnAS Transaction Manager (JOTM), but even this transaction manager, designed with OSGi in mind, required glue code.
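To make the JDBC variant above concrete: in the OSGi model, the driver bundle (H2, for example) registers an `org.osgi.service.jdbc.DataSourceFactory` service, and the client builds a `DataSource` from plain properties instead of calling `Class.forName`. A minimal sketch follows; the nested interface mirrors the spec's constants and method shape only so the example is self-contained here, and a real bundle would import the OSGi JDBC API and obtain the factory from the service registry:

```java
import java.sql.SQLException;
import java.util.Properties;
import javax.sql.DataSource;

public class DataSourceFactoryDemo {

    /** Local stand-in for org.osgi.service.jdbc.DataSourceFactory, with
     *  the same property constants and method shape as the OSGi JDBC spec. */
    public interface DataSourceFactory {
        String JDBC_URL = "url";
        String JDBC_USER = "user";
        String JDBC_PASSWORD = "password";
        DataSource createDataSource(Properties props) throws SQLException;
    }

    /** Client side: no Class.forName, no driver-specific code, just the
     *  properties (which would typically come from Configuration Admin). */
    public static DataSource connect(DataSourceFactory dsf, String url,
            String user, String password) throws SQLException {
        Properties props = new Properties();
        props.setProperty(DataSourceFactory.JDBC_URL, url);
        props.setProperty(DataSourceFactory.JDBC_USER, user);
        props.setProperty(DataSourceFactory.JDBC_PASSWORD, password);
        return dsf.createDataSource(props);
    }
}
```

Swapping H2 for another database then means swapping the bundle that registers the factory; the client code is untouched.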

The whole purpose of specifications is that you can pick and choose and not have to spend time writing silly glue code. Today, however, the implementations clearly do not properly support the OSGi specifications, even though in most cases latent support is present. I also see people struggling with the implementations, often doing things in far more complicated ways than necessary.

So what is the ideal OSGi model? An application depends on an Entity Manager service. This Entity Manager is configured through the Configuration Admin service and uses an OSGi DataSourceFactory service from the registry as the JDBC driver. If a Transaction Manager service is registered, then it must be used by the JPA and database implementations. Simply selecting different bundles should let you experiment with different configurations. Such a model is highly decoupled and allows for a lot of flexibility.
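One consequence of that model can be sketched in plain Java (the interface and names here are hypothetical stand-ins, not spec APIs; in a real bundle these would be OSGi services wired by something like Declarative Services): the mere presence or absence of a Transaction Manager service decides the transaction mode, without any change to the application code.

```java
public class IdealModelSketch {

    /** Hypothetical stand-in for a Transaction Manager service. */
    public interface TransactionManager {
        void begin();
        void commit();
    }

    /** If a Transaction Manager service is registered (non-null reference),
     *  the JPA provider should run in JTA mode; otherwise it falls back to
     *  resource-local transactions. */
    public static String transactionType(TransactionManager tm) {
        return tm != null ? "JTA" : "RESOURCE_LOCAL";
    }
}
```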

We need a community effort to fulfill the persistence promise of OSGi: plug & play. So, to move this forward: if you have a persistence configuration using JTA, JDBC, JPA, or JDO that works well under OSGi, please send this configuration to me or point me to a public project. If you're a committer (open or closed source) on a JTA, JPA, JDO, or JDBC implementation and want to work together to make your project work out of the box in OSGi, then send me a mail and I will see how I can help. I will report progress through this blog and Twitter.

Peter Kriens (@pkriens)


  1. Hi Peter,

    In Peergreen, and in the Peergreen WAS profile providing both Java EE and OSGi features, we're using JTA (OW2 JOTM), JDBC (for example the H2 OSGi JDBC driver, or any other), and JPA (Hibernate or EclipseLink), and we're running on top of an OSGi kernel.
    We've written glue to provide the OSGi JTA services with JOTM, and we use OW2 EasyBeans for the glue on top of JPA (Hibernate, EclipseLink and OpenJPA).
    We're committers on OW2 JOTM and OW2 EasyBeans.

    You can contact me at Florent.Benoit at peergreen dot com

  2. H2 committer here. Let us know if there is anything missing we can help with.

  3. This comment has been removed by the author.

  4. Peter, it is not so much that the Java persistence world is messy or fuzzy, just that you are doing things beyond what the OSGi JPA spec defines (and whose contents you helped decide :-). If a JPA application is written to that spec then it can run with any compliant JPA project, be it Aries or Gemini JPA. I can only speak for Gemini JPA, but I know a lot of people who use it and are quite happy with it.

  5. @Mike: Yes, I can see that individuals can get this to work, but what I experience (and hear from others) is that it becomes a case of selecting the right components and writing glue code. In OSGi, the goal is plug & play with compliant components.

  6. There are ample people using DataNucleus with JDO and JPA in OSGi environments, without any significant "glue code" being written that I'm aware of (more was needed in earlier releases, but much less now); clearly you'd have to solicit their input for details, but an entry on your own blog isn't likely the best place for that - try the DataNucleus forum or the Apache JDO mailing list. Same likely applies to users of other JPA implementations.

    In terms of JPA, DataNucleus implements "OSGi blueprint" for JPA (user contrib), but not "OSGi JPA Service Specification v1.0". Are you expecting a JPA implementation (for JPA1, JPA2, JPA2.1) to also implement this other spec of yours, or is that for the OSGi provider/container (Aries etc)? I'd strongly suggest that the separation should be like with JavaEE and a JPA implementation, where the container has its contract to implement, and can cope with any JPA implementation.

  7. @Andy: I am working on an OSGi skeleton that should be as easy to use as RoR or a PHP setup, but with the added power of OSGi. This means I need a service-based setup to make things truly modular and plug & play. I looked at DataNucleus and, as you confirm, such a setup is not supported. Once I have a working model with at least one out-of-the-box provider each for the JTA, JPA, and JDBC services, I will likely consult the other providers to help me make it work for other environments.

  8. Great article. The persistence layer has always been a blocker for me in every attempt to create a truly modular architecture.
    Thanks, Peter, for diving willingly into this fuzzy mess!

    As for our persistence setup: we are using Gemini JPA (EclipseLink), Gemini DBAccess and Derby. All that successfully deployed in Virgo and Felix.
    I had to write some glue code to pull JDBC properties from the ConfigAdmin, but I think it's not necessary anymore since the latest release.
    Gemini uses the OSGi JPA and JDBC spec so it should be plug & play out of the box (but I didn't try it myself).

    Another thing which is a big issue for me is the monolithic nature of JPA persistence units (that is, the limitations in the OSGi JPA spec, §127.3.2).
    That's another topic, but I wonder if there are plans to address this in the near future?

    /Thomas Gillet

  9. Quoting from your blog: "If a Transaction Manager service is registered, then this manager must be used by the JPA and database implementations".

    I agree this should be true in an ideal OSGi model, and this is more or less obvious to anyone familiar with Java EE or Spring, but sadly, the current OSGi JTA and JPA specs are completely unrelated. So even if you have a transaction manager service and an entity manager factory service, it is completely undefined how they should interact.

    I'm not aware of any OSGi standard dealing with declarative transactions, and JPA + JTA without declarative transactions is only half a story.

    Aries has a Blueprint extension for declarative transactions - I once built a demo with it which worked fine, but I don't really want to be forced to use Blueprint (aka Spring in disguise, with lots of ugly XML configuration).

    My preferred combination would be OSGi JDBC + JPA + CDI + a CDI-only transactional interceptor.

    I had this running some time ago based on Pax JDBC, Pax CDI, Pax JPA (unreleased) and a snapshot version of DeltaSpike JPA (implementing a @Transactional bean scope), but this stopped working after some changes in DeltaSpike that broke OSGi imports and I haven't tried again for a while.

    Anyway, it shouldn't be too hard to build a demo with that approach, possibly using a patched version of DeltaSpike.

  10. I think one part that is missing in the OSGi JPA and JTA specs is how they interact with the user code of a typical class using JPA.

    I see two cases:
    1. Using the JEE-like programming model, with @PersistenceContext to inject the EM and @Transactional to mark transaction boundaries.
    2. Using a programmatic approach where you mark transactions using a Template based approach. Closures make this feasible at last.

    Both approaches require defining transactional boundaries and how they behave when objects call each other. Most of this should already be specified in JEE, but some things might be OSGi-specific.
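    The second, programmatic style could be sketched like this with Java 8 closures (TransactionControl is a hypothetical stand-in for whatever begin/commit/rollback API the transaction manager exposes, not a spec type):

```java
import java.util.function.Supplier;

public class TxTemplateDemo {

    /** Hypothetical stand-in for a transaction manager's control API. */
    public interface TransactionControl {
        void begin();
        void commit();
        void rollback();
    }

    /** Template method: the closure runs inside transaction boundaries,
     *  committing on success and rolling back on a runtime exception. */
    public static <T> T inTransaction(TransactionControl tx, Supplier<T> work) {
        tx.begin();
        try {
            T result = work.get();
            tx.commit();
            return result;
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        }
    }
}
```

    A caller would then write something like `inTransaction(tx, () -> em.merge(post))` and never touch begin/commit directly.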

    I think we are also missing a nice model for injecting the EntityManager. The JEE model of directly injecting the EM looks a bit broken to me when one instance is used by several threads. I would prefer something like a Supplier to be injected, to cope with the fact that the EM is not thread-safe.
    I also think a managed EM could be defined as a service by publishing such a Supplier as a service per persistence unit.
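    That Supplier idea could be sketched as below (EntityManager is a one-method stand-in for javax.persistence.EntityManager so the example is self-contained, and the class name is illustrative): each persistence unit publishes one Supplier service, and each calling thread lazily gets its own EntityManager.

```java
import java.util.function.Supplier;

public class EmSupplierDemo {

    /** Minimal stand-in for javax.persistence.EntityManager. */
    public interface EntityManager {
        void close();
    }

    /** A Supplier that could be published as a service per persistence
     *  unit: each thread gets its own EntityManager on first use, since
     *  an EM is not thread-safe. */
    public static class PerThreadEmSupplier implements Supplier<EntityManager> {
        private final ThreadLocal<EntityManager> current;

        public PerThreadEmSupplier(Supplier<EntityManager> factory) {
            // e.g. factory = emf::createEntityManager in a real setup
            this.current = ThreadLocal.withInitial(factory);
        }

        @Override
        public EntityManager get() {
            return current.get();
        }
    }
}
```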
    I worked out some of this at