One of the new topics being discussed in the OSGi Enterprise Expert Group is integration with CDI (JSR 299). In short - the idea is that CDI beans in OSGi can receive their injections from the OSGi Service Registry. Additionally, CDI beans themselves can be registered in the OSGi Service Registry so that other bundles running in the OSGi framework can use them as OSGi Services.
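Since the RFP is still at the requirements stage, no API has been defined yet, so any code is speculative. As a self-contained sketch of the idea, the following mimics registry-backed injection: a bean declares a dependency, and the (hypothetical) CDI integration satisfies it from a service-registry stand-in. Every name below is made up for illustration; no real CDI or OSGi API is used.

```java
import java.util.HashMap;
import java.util.Map;

// Speculative, self-contained sketch of the CDI/OSGi idea: a bean's dependency
// is satisfied from the service registry. No real CDI or OSGi API is used here;
// the RFP has not defined any API yet, so every name below is made up.
public class CdiOsgiSketch {

    // Stand-in for the OSGi service registry: service type name -> instance.
    static final Map<String, Object> serviceRegistry = new HashMap<String, Object>();

    interface LogService {
        void log(String message);
    }

    // A "CDI bean" whose dependency would be injected from the registry.
    static class MyBean {
        LogService log; // in real CDI this field would carry @Inject

        void doWork() {
            log.log("work done");
        }
    }

    public static void main(String[] args) {
        // A bundle registers a service...
        final StringBuilder out = new StringBuilder();
        serviceRegistry.put(LogService.class.getName(), new LogService() {
            public void log(String message) { out.append(message); }
        });

        // ...and the hypothetical CDI integration injects it into the bean.
        MyBean bean = new MyBean();
        bean.log = (LogService) serviceRegistry.get(LogService.class.getName());
        bean.doWork();
        System.out.println(out); // prints "work done"
    }
}
```

The reverse direction, registering a CDI bean as an OSGi service, would simply be a `serviceRegistry.put(...)` performed by the integration layer on the bean's behalf.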
Obviously, other component models already exist as OSGi specifications: we have Declarative Services and Blueprint. The CDI/OSGi integration will bring the CDI programming model into OSGi, and developers from the Java EE world in particular may be interested in continuing to use CDI inside OSGi. In my opinion it's not one-size-fits-all: developers have their own preferences, and having a choice of programming models is good.
Also, the nice thing about the OSGi Service Registry-based approach is that all these component models work together, so I can write a Blueprint component and consume that as an OSGi Service in my CDI bean. And any other combination is also possible.
The OSGi/CDI discussion takes place in RFP 146. The RFP is a requirements document; once we agree on the requirements, the details of the technical design will be discussed in an RFC.
As we did in the past with the Cloud Computing RFP 133, we're inviting everyone to comment or refine the requirements.
You can find the OSGi/CDI RFP here: OSGi RFP 146.
Complex Repository Requirements
Besides the OSGi/CDI RFP, another RFP has also been published for comments: RFP 148 Complex Repository Requirements. This is a much more narrowly focused RFP that aims to extend the capabilities of the Repository Service, which is part of the upcoming R5 Enterprise Release (already available as an EA Draft). Currently the Repository query API only accepts requirements within a single namespace. RFP 148 aims to extend this to allow requirements that cross namespaces. An example would be: I need a bundle that provides the package javax.servlet and has the Apache License.
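Concretely, the example above needs two separate queries today, because each Repository requirement addresses exactly one namespace. A cross-namespace query could combine something like the following sketch (the `license` attribute in the second requirement is an assumption for illustration, not an attribute defined by the current draft):

```
Requirement 1: namespace = osgi.wiring.package
               filter    = (osgi.wiring.package=javax.servlet)
Requirement 2: namespace = osgi.identity
               filter    = (license=*Apache*)   <- 'license' attribute assumed
```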
Why wasn't this part of the Repository spec in the first place? Very simple: we ran out of time. So rather than rushing this feature in or delaying the Repository spec, we decided to work on it after the initial Repository release. That's why we're looking at this RFP now; you can find it here: OSGi RFP 148.
Monday, May 21, 2012
Tuesday, May 8, 2012
Compendium 4.3 and Residential 4.3 published!
After some delays, I am happy to make the Compendium Version 4.3 and Residential Version 4.3 specification documents available for download.
Compendium Version 4.3
The Compendium Version 4.3 document adds all the specifications that were introduced in the Enterprise Version 4.2 document, including:
- Remote Service Admin
- JTA
- JDBC
- JNDI
- JPA
- Web Applications
- SCA Configuration
The Coordinator service provides a mechanism for multiple parties to collaborate on a common task without a priori knowledge of who will collaborate in that task. A collaborator can participate by adding a Participant to the Coordination. The Coordination notifies the Participants when it ends or fails.

There were also enhancements to some of the existing specifications. Configuration Admin is updated to allow multiple bundles to access the same configuration, and to extend the security model to allow configuration "regions".
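The Coordinator pattern described above can be sketched in a few lines. Note that this is a minimal, self-contained illustration of the pattern only; the real API lives in org.osgi.service.coordinator and differs in detail (timeouts, thread association, etc.).

```java
import java.util.ArrayList;
import java.util.List;

// Minimal self-contained sketch of the Coordinator pattern. The names below
// are illustrative only -- the real API is org.osgi.service.coordinator and
// differs in detail.
public class CoordinationSketch {

    interface Participant {
        void ended();   // called when the coordination ends normally
        void failed();  // called when the coordination fails
    }

    static class Coordination {
        private final List<Participant> participants = new ArrayList<Participant>();

        void addParticipant(Participant p) { participants.add(p); }

        // Notify all participants of a normal end.
        void end() { for (Participant p : participants) p.ended(); }

        // Notify all participants of a failure.
        void fail() { for (Participant p : participants) p.failed(); }
    }

    public static void main(String[] args) {
        Coordination c = new Coordination();
        final StringBuilder log = new StringBuilder();
        c.addParticipant(new Participant() {
            public void ended()  { log.append("ended"); }
            public void failed() { log.append("failed"); }
        });
        c.end();
        System.out.println(log); // prints "ended"
    }
}
```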
Declarative Services is updated to allow service references to receive service updates; to allow greedy service bindings; and support the use of compile time annotations to simplify authoring component descriptions.
Event Admin is also updated to support out-of-order asynchronous delivery of events.
Residential Version 4.3
We are also pleased to make available the first edition of the Residential specification.
The services of this Residential Specification have been designed with the residential market in mind. Requirements and management protocols for this environment are defined in specifications from consortia such as the Home Gateway Initiative (HGI), the Broadband Forum (BBF) and the UPnP Forum. These specifications provide requirements for execution environments in Customer Premises Equipment (CPE) and other consumer devices, as well as protocols for the management of residential environments.

The DMT Admin service has been updated to version 2.0 with a set of major improvements, including overlapping subtrees, mount points, and scaffold nodes. These changes provide the basis for use with the TR-069 protocol.
A new Residential Device Management specification defines a Residential Management Tree, the RMT. This tree provides a general DMT Admin object model that allows browsing and managing the OSGi platform remotely over different protocol adapters.
The TR-157 Amendment 3 Software Module Guidelines chapter provides guidelines for implementers of the TR-157a3 Internet Gateway Device Software Modules specification on an OSGi platform.
The DMT Admin service and the TR-069 protocol have different semantics and primitives. The new TR069 Connector Service specification provides an API based on the TR-069 Remote Procedure Calls concept that is implemented on top of DMT Admin. This connector supports data conversion and the object modeling constructs defined in the DMT Admin service.
The specification pdfs, companion code jars and javadoc are now all available for download from the OSGi website.
Wednesday, May 2, 2012
Follow-up on the 2nd Cloud Workshop
The second OSGi Cloud Workshop was held during EclipseCon/OSGi DevCon 2012 last March. It was a very interesting morning with some good presentations and some great discussion. You can still find the presentations linked from here: http://www.osgi.org/Design/Cloud.
We learned that people are already widely using OSGi in Cloud environments, and part of the morning was spent discussing what OSGi could do to make it even more suitable for use in the Cloud. As a result of that a number of topics were proposed for people active in the OSGi Alliance to look at. You can find a summary of these topics here: https://mail.osgi.org/pipermail/cloud-workshop/2012-March/000112.html.
Last week the OSGi Enterprise Expert Group and the Residential Expert Group met to discuss these topics and to find potential routes to address them. Below you can find the results of these discussions. In this list I'll start each topic with the requirement as posted earlier to the cloud-workshop mailing list. The follow-ups below describe the thinking that we came to during the recent EEG/REG meeting.
1. Topic: Make it possible to describe various service-unavailable states. A service may be unavailable in the cloud for a variety of reasons:
- Maybe the number of invocations available to you is exhausted for today.
- Maybe your credit card expired.
- Maybe the node running the service crashed.
- etc.
It should be possible to model these various failure states and it should also be possible to register 'callback' mechanisms that can deal with these states in whatever way is appropriate (blacklist the service, wait a while, send an email to the credit card holder, etc).
1. Follow-up: A potential new RFP is under discussion around monitoring and management. This RFP is currently being discussed in the Residential Expert Group, but it should ultimately be useful to all contexts in which OSGi is run. The requirements in this RFP could address some of the service quality issues referred to in this topic.
Additionally, there was a discussion about whether it would make sense to extend the OSGi ServiceException so that various types of service failures could be reported (e.g. payment needed, quota exceeded, etc).
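To make that concrete, here is a self-contained sketch of what such an extension could look like. The real org.osgi.framework.ServiceException uses int type codes; the cloud-specific codes below are purely hypothetical and nothing like them has been agreed.

```java
// Self-contained sketch of how ServiceException-style failure types might be
// extended for cloud scenarios. The real org.osgi.framework.ServiceException
// uses int type codes; the cloud-specific codes below are purely hypothetical.
public class CloudServiceExceptionSketch extends RuntimeException {

    // Hypothetical cloud-specific failure type codes.
    public static final int PAYMENT_NEEDED   = 101;
    public static final int QUOTA_EXCEEDED   = 102;
    public static final int NODE_UNREACHABLE = 103;

    private final int type;

    public CloudServiceExceptionSketch(String msg, int type) {
        super(msg);
        this.type = type;
    }

    public int getType() {
        return type;
    }

    public static void main(String[] args) {
        try {
            throw new CloudServiceExceptionSketch(
                    "daily invocation quota exhausted", QUOTA_EXCEEDED);
        } catch (CloudServiceExceptionSketch e) {
            System.out.println(e.getType() == QUOTA_EXCEEDED); // prints "true"
        }
    }
}
```

A caller could then switch on `getType()` to blacklist the service, back off and retry, or notify the account owner, as appropriate.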
2. Topic: WYTIWYR (What You Test Is What You Run). It should be possible to quickly deploy and redeploy.
2. Follow-up: One of the requirements that this expresses is the need to remotely run a test suite in an existing (remote) framework. There are OSGi test frameworks that support this kind of behavior today (Pax Exam, Arquillian and others), but they may need to be enhanced with a remote deployment/management solution that is cloud-friendly, for example the REST-based OSGi Framework management as is being discussed in RFC 182.
2b. Topic: There was an additional use-case around reverting the data (and configuration) changes made during an upgrade. If we need to downgrade after an upgrade then we may need to convert the data/configuration back into its old state.
2b. Follow-up: It might be possible to achieve this by providing an OSGi API to snapshot the framework state. This API could allow the user to save the current state and to retrieve a past saved state. When reverting to a past deployment this operation could be combined with a pluggable compensation process that converts the data back, if applicable.
The idea of snapshotting the framework state will be explored in a new RFP that is to be created soon. The data compensation process itself is most likely out of scope for OSGi.
3. Topic: Come up with a common and agreed architecture for Discovery. This should include consideration of Remote Services, Remote Events and Distributed Configuration Admin.
3. Follow-up: This is the topic of the new RFC 183 Cloud Discovery.
4. Topic: Resource utilization. It should be possible to measure/report this for each cloud node: number of threads available, amount of memory, power consumption, etc. Possibly create OSGi Capability namespaces for this.
4. Follow-up: This relates to the monitoring RFP mentioned above.
5. Topic: OBR scaling. Need to be able to use OBR in a highly available manner. Should support failover and should hook in with discovery.
5. Follow-up: The Repository service as defined in the OSGi Enterprise R5 spec, chapter 132 (see http://www.osgi.org/News/20120326 for download instructions for the latest draft), provides a stateless API which can work with well-known HA solutions (replication, failover, etc). Additionally, the Repository supports the concept of referrals, allowing multiple, federated repositories to be combined into a single logical view.
The discovery piece is part of RFC 183.
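The referral-based federation mentioned in the follow-up above can be sketched as follows. The interface here is a simplified stand-in: the real org.osgi.service.repository.Repository API works with Requirement and Capability objects, not plain strings.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Minimal self-contained sketch of referral-style federation: several
// repositories are combined into a single logical view. The interface below is
// a stand-in -- the real org.osgi.service.repository.Repository API works with
// Requirement and Capability objects rather than plain strings.
public class FederatedRepositorySketch {

    interface Repository {
        List<String> findProviders(String requirement);
    }

    // A simple in-memory repository holding providers for one requirement.
    static class InMemoryRepository implements Repository {
        private final String requirement;
        private final List<String> providers;

        InMemoryRepository(String requirement, String... providers) {
            this.requirement = requirement;
            this.providers = Arrays.asList(providers);
        }

        public List<String> findProviders(String req) {
            return requirement.equals(req) ? providers
                                           : Collections.<String>emptyList();
        }
    }

    // Presents several member repositories as one logical repository,
    // aggregating the providers each member knows about.
    static class FederatedRepository implements Repository {
        private final List<Repository> members;

        FederatedRepository(Repository... members) {
            this.members = Arrays.asList(members);
        }

        public List<String> findProviders(String req) {
            List<String> result = new ArrayList<String>();
            for (Repository r : members) {
                result.addAll(r.findProviders(req));
            }
            return result;
        }
    }

    public static void main(String[] args) {
        Repository local  = new InMemoryRepository("javax.servlet", "servlet-api.jar");
        Repository remote = new InMemoryRepository("javax.servlet", "geronimo-servlet.jar");
        Repository view   = new FederatedRepository(local, remote);
        System.out.println(view.findProviders("javax.servlet"));
        // prints "[servlet-api.jar, geronimo-servlet.jar]"
    }
}
```

Because the view itself is stateless, any member repository can be replicated or failed over behind it without the client noticing.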
6. Topic: We need subsystems across frameworks. Possibly refer to them as 'Ecosystems'. These describe a number of subsystems deployed across a number of frameworks.
6. Follow-up: While the general usefulness of this isn't disputed, there is nobody at this point in time driving this. If people strongly feel it should be addressed, they should come forward and help define a solution.
7. Topic: Asynchronous services and asynchronous remote services.
7. Follow-up: This is the topic of RFP 132, which was recently restarted. RFP 132 is purely about asynchronous OSGi services. Once this is established, asynchronous remote services can be modeled as a layer on top.
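The exact API is still being worked out in RFP 132, so the following is only a rough, self-contained illustration of the idea: a synchronous service is wrapped so that callers receive a Future instead of blocking on the invocation. The service and facade names are invented for this sketch.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Rough illustration of asynchronous service invocation. The actual design is
// being defined in RFP 132; this sketch just wraps a synchronous service so
// that callers receive a Future instead of blocking on the call.
public class AsyncServiceSketch {

    // A plain synchronous OSGi-style service interface (illustrative).
    interface GreetingService {
        String greet(String name);
    }

    // A hypothetical async facade over the synchronous service.
    static class AsyncGreetingService {
        private final GreetingService delegate;
        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        AsyncGreetingService(GreetingService delegate) {
            this.delegate = delegate;
        }

        Future<String> greet(final String name) {
            return executor.submit(new Callable<String>() {
                public String call() {
                    return delegate.greet(name);
                }
            });
        }

        void shutdown() {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        GreetingService sync = new GreetingService() {
            public String greet(String name) { return "Hello " + name; }
        };
        AsyncGreetingService async = new AsyncGreetingService(sync);
        Future<String> f = async.greet("OSGi");   // returns immediately
        System.out.println(f.get());              // prints "Hello OSGi"
        async.shutdown();
    }
}
```

An asynchronous *remote* service would then only need to route `call()` over the wire; the caller-facing Future shape stays the same.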
8. Topic: Isolation and security for applications
- For multi-tenancy
- Protect access to file system
- Lifecycle handling of applications
- OBR - isolated OBR (multiple tenants should not see each other's OBR)
This all needs to be configurable.
8. Follow-up: Clearly separate VMs provide the best isolation, while separate Java VMs within a single OS-level VM also provide fairly strong isolation (however, be aware of possible side effects of native code and possible resource exhaustion). Nested OSGi Frameworks and Subsystem Regions also provide isolation to a certain degree (see Graham's post on Subsystems), but the level of protection that is required clearly depends on the required security for the given application. The deployer can choose from these options as a target for deploying bundles and/or subsystems.
9. Topic: It should be possible to look at the cloud system state:
- where am I (type of cloud, geographical location)?
- what nodes are there and what is their state?
- what frameworks are available in this cloud system?
- where's my OBR?
- what state am I in?
- what do I need here in order to operate?
- etc…
10. Topic: There should be a management mechanism for use in the cloud
- JMX? Possibly not
- REST? Most likely
Management of application state should also be possible, in addition to bundle/framework state.
10. Follow-up: A cloud-friendly REST-based management API for the framework is currently being worked on in RFC 182. Once that is established it can also form the baseline for Subsystems management technology which can be used for application-level management.
11. Topic: Deployment - when deploying replicated nodes it should be possible to specify that a replica must not be deployed on certain nodes, to avoid all the replicas ending up on the same node.
11. Follow-up: This also relates to discovery as discussed in RFC 183. A management agent can use this information to enforce such a constraint.
12. Topic: Single Sign-on for OSGi.
12. Follow-up: One member company has done a project in relation to this on top of the User Admin Service. A new RFP will be created to discuss this requirement further.
So there you are - the ideas from the cloud workshop were greatly appreciated and provide very useful input into future work. If you're interested in following the progress, as usual we're planning to release regular early access drafts of the documents that are relatively mature. Or, if you're interested in taking part in the creation of these specs, join in! For more information see: http://www.osgi.org/About/Join or contact me (david at redhat.com) or anyone else active in OSGi for more information.