Saturday, December 12, 2015

IoT & Standards

About 8 years ago Apple steamrolled the mobile telephony industry with the first iPhone. They drastically disrupted the symbiosis of operators and device manufacturers. Nokia and Motorola did not survive this mayhem, and knowing the operators of 15 years ago, they still have not recovered.
After visiting the ETSI OneM2M workshop in Nice for three days I am wondering if history will repeat itself. It feels like the telecom industry never analyzed why Apple ate its lunch, nor thought about how to defend itself against the next attack. The workshop's work is about standardizing protocols, abstract semantic reference models, and maybe some open source influence. The underlying rationale is the somewhat tired lesson that collaborating on protocols will enable interoperability, which will grow the pie many times over. True, but how do we prevent another Apple from coming along and stealing the pie from under our noses?

Apple succeeded so easily because it hit the soft underbelly of the mobile telecom industry: software. Software was proprietary in the telecom industry; protocols were paramount. Only after NTT Docomo succeeded in generating revenue from applications did the industry enable a severely crippled software model on the phones. I participated in an attempt by Motorola, Nokia, IBM and others to set a better software standard based on OSGi just before the iPhone hit. I can assure you that we didn't stand a chance because the focus was on irrelevant aspects like managing the device, constraining the application developer, and lowering the cost. Instead the focus should have been on what independent developers could do with a programmable device.

The rest is history.

The iPhone enabled Facebook, WhatsApp, Google Maps, and millions of other applications because anybody could write cool applications for it, which is the truest source of innovation.

The telecom industry is now sitting on the edge of a huge new market: the Internet of Things. The industry is eminently suited to provide the connectivity and has front-row access to the humongous pie of IoT services. Yet instead of learning the lessons of the mobile telephone era, it feels like history will repeat itself.

It is the software, stupid!

Peter Kriens

Thursday, December 3, 2015

Functional Modularity - Building Better APIs for the OSGi Ecosystem

Introduction


One of the new RFCs (RFC 216) being discussed in the Enterprise Expert Group is called PushStreams and is all about stream processing. I’m leading the RFC and whilst a final specification is still a way off, the open processes at the OSGi Alliance mean that people (including me) have already been talking publicly about this RFC.

My talk about PushStreams at the OSGi Community Event was well attended, and feedback on it led me to write a blog post about why we need PushStreams. Since then there have been even more questions, including a few about the design of the API and the theory behind it. I thought therefore I should talk a little about the reasoning behind the APIs from some OSGi specifications that I've worked on in the last few years.

Functional Programming at the OSGi Alliance


The OSGi framework is a module system for Java, and Java is inherently object oriented in its approach. Given that this is the case, why is OSGi starting to define APIs with a more functional usage pattern? In fact OSGi is far from alone here; open source projects, and even Java itself, are building more functional concepts into the libraries that we all use.

The reasons for this change are varied, but in general functional concepts are introduced to solve problems in a simpler way than is possible using Object Oriented programming techniques. OO programming provides a lot of benefits, but as systems become more distributed, more highly parallel, and more asynchronous they become much harder to manage and to reason about.

OSGi as a technology is used across a huge range of sectors, and across many layers of the technology stack. OSGi is used in embedded software, such as car entertainment systems and home gateways; it is used in desktop applications, such as Eclipse RCP apps; and it is used in server runtimes, such as Java EE application servers and distributed cloud runtimes.

In order to support such a large ecosystem it is vital that OSGi continues to evolve to better match the areas it is used. To better support distributed server platforms the OSGi compendium release 6 included Promises and the Async Service. The addition of PushStreams will help to support new use cases from IoT, where event filtering, forwarding, and aggregation are vital parts of allowing systems to work in the real world.

For those of you already familiar with functional programming, you will notice that the PushEventSource and PushEventConsumer are an example of an Enumerator and an Iteratee. Once a basic implementation is available I’ll put together some examples showing just how much they can simplify processing a stream of events.
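To make the pattern concrete, here is a toy sketch of the Enumerator/Iteratee idea. The interfaces below are simplified stand-ins invented for illustration; they are not the RFC 216 API, whose final shape is still under discussion. The key point is that the source pushes events at the consumer, and the consumer's return value provides back pressure:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class PushSketch {

    // The "Iteratee": consumes pushed events; the return value is back
    // pressure (negative = stop sending, zero or positive = keep going).
    interface EventConsumer<T> {
        long accept(T event);
    }

    // The "Enumerator": pushes events at a consumer until told to stop.
    interface EventSource<T> {
        void open(EventConsumer<? super T> consumer);
    }

    // A source backed by a simple list
    static EventSource<Integer> fromList(List<Integer> data) {
        return consumer -> {
            for (Integer i : data) {
                if (consumer.accept(i) < 0)
                    break; // the consumer asked us to stop
            }
        };
    }

    public static void main(String[] args) {
        AtomicLong sum = new AtomicLong();
        // Sum pushed events, asking the source to stop after a value > 3
        fromList(List.of(1, 2, 3, 4, 5)).open(i -> {
            sum.addAndGet(i);
            return i > 3 ? -1 : 0;
        });
        System.out.println(sum.get()); // 1 + 2 + 3 + 4 = 10
    }
}
```

Note how the inversion of control (push rather than pull) lets the consumer throttle or terminate the stream without any shared mutable state between the two sides.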

Tuesday, December 1, 2015

Using Requirements and Capabilities

by Ray Augé, Senior Software Architect, Liferay  


I really love how everyone at Liferay is taking to the OSGi development model. It makes me proud to see how much work has been done in this regard.
There's one very cool area I think is worth expanding on.

Requirements and Capabilities


What is it? Some history.


The Requirements and Capabilities model is a surprisingly powerful concept born from the 17 years of experience the OSGi Alliance has in designing modular software: defining strong contracts. In OSGi, contracts are described with metadata and enforced by a strict runtime.

The process involved in defining these contracts led to a frequently repeating pattern. First a contract was defined using special manifest headers and their specific semantics. An example of this is the Import-Package header, which is used to specify which Java packages should be made available so that your code can execute. This was followed by the need to implement the specific logic to enforce the definition. The result of this work manifested itself (excuse the pun) in what we recognize as OSGi bundle manifest headers and the OSGi frameworks that enforce the semantics of those headers.

Around 2005 some very smart people in the OSGi community and OSGi Alliance recognized a pattern and developed it into a constraint model called Requirements and Capabilities.

How does it work?


A contract is defined starting with a unique namespace. Within this namespace, semantics are defined for a set of attributes and directives. The entire definition forms a type of language from which instances of Capabilities and Requirements can be created.

Let's take an example.

Suppose I want to describe a service (which begins by defining a contract) where people can take their pets to be groomed. There are many types of pets, and many grooming agencies who can only groom certain kinds of pets because of the special skills, equipment or facilities required for each type. It can be a challenge to find an appropriate agency to groom your pet.

Let's imagine that every agency declared its capabilities using a single namespace called pet.grooming having four attributes:
  • type: a list of strings naming the type of pets groomed by the agency
  • length: a positive integer specifying the maximum size of the pet which the agency can groom
  • soap: a string naming the type of soap used by the agency
  • rate: a positive integer specifying the rate per hour charged by the agency

Here we have three example agencies using this contract in the syntax found within an OSGi Bundle manifest:

Agency A: Haute Pet Coiffure
Provide-Capability: pet.grooming;type:List="dog,cat";length:Long=800;soap="organic";rate:Long="50"
Agency B: Great Big Pets
Provide-Capability: pet.grooming;type:List="cat,horse";length:Long=3000;soap="commercial";rate:Long="20"
Agency C: Joe's Pets
Provide-Capability: pet.grooming;type:List="dog,cat";length:Long=1500;soap="regular";rate:Long="15"

Clients could then declare their Requirements using the pet.grooming namespace and a special LDAP filter.

Let's take a look at 5 clients:

Client A: I love my cat Cathy, but not rich!
Require-Capability: pet.grooming;filter:="(&(type=cat)(rate<=20))"
Which agencies do you think satisfy this requirement? (hint: B & C)

Client B: Huge Dog Doug
Require-Capability: pet.grooming;filter:="(&(type=dog)(length>=1000))"
Which agencies do you think satisfy this requirement? (hint: C)

Client C: Horse Haurice
Require-Capability: pet.grooming;filter:="(type=horse)"
Which agencies do you think satisfy this requirement? (hint: B)

Client D: Stylish Stan
Require-Capability: pet.grooming;filter:="(&(type=dog)(soap=organic))"
Which agencies do you think satisfy this requirement? (hint: A)

Client E: Cat lady Clara
Require-Capability: pet.grooming;filter:="(type=cat)"
Which agencies do you think satisfy this requirement? (hint: A & B & C)

Client F: Hiccup
Require-Capability: pet.grooming;filter:="(type=dragon)"
Which agencies do you think satisfy this requirement? (hint: Oh my!!!)

Observation: what happens for client F? This is a case where the requirement cannot be satisfied. What does this mean? It might translate directly into the resource containing this requirement not resolving; in other words, it might be completely blocked from doing whatever it intended to do. This is a remarkable characteristic. Knowing in a safe and reproducible way that a resource cannot be satisfied could prevent any number of catastrophic situations we would be hard pressed to recover from at runtime.

Once again, note the namespace and the filter used to query or match the attributes of the available Capabilities.

This language, which first materialized as OSGi RFC 112, is very powerful and can model a wide range of contracts. Its power was demonstrated by the fact that all bundle headers from prior OSGi specifications could be modeled by it. It also became possible to implement an engine which could calculate a set of resources given an initial set of requirements. This engine is known as the resolver, and all OSGi frameworks beginning with release R4.3 have such a resolver at their heart. Since then it has been possible, by specifying new namespaces, to model new contracts specific to your own needs. These new contracts play on par with any of the OSGi-defined namespaces.
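As an illustration of what the resolver does conceptually, here is a toy Java sketch (not the OSGi resolver or its filter implementation) that models the agencies' capabilities as attribute maps and hand-codes client A's filter `(&(type=cat)(rate<=20))` as a predicate:

```java
import java.util.List;
import java.util.Map;

public class PetGrooming {

    // The pet.grooming capabilities of the three example agencies
    static final Map<String, Map<String, Object>> AGENCIES = Map.of(
        "A", Map.of("type", List.of("dog", "cat"),   "length", 800L,  "soap", "organic",    "rate", 50L),
        "B", Map.of("type", List.of("cat", "horse"), "length", 3000L, "soap", "commercial", "rate", 20L),
        "C", Map.of("type", List.of("dog", "cat"),   "length", 1500L, "soap", "regular",    "rate", 15L));

    // Client A's requirement "(&(type=cat)(rate<=20))" written by hand;
    // a real resolver parses the LDAP filter string instead.
    static boolean satisfiesClientA(Map<String, Object> capability) {
        return ((List<?>) capability.get("type")).contains("cat")
            && (Long) capability.get("rate") <= 20L;
    }

    public static void main(String[] args) {
        AGENCIES.forEach((name, capability) -> {
            if (satisfiesClientA(capability))
                System.out.println("Agency " + name + " matches"); // B and C
        });
    }
}
```

The real model is far more general, of course: the resolver matches every requirement's filter against every capability in the matching namespace, across an entire repository of resources.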

At Liferay we have used this language to generalize contracts for JSP tag libraries to enable modularity around their use, and for Service Builder extender support, so that the correct version of the Service Builder framework is available to support your SB modules. We have also used it to create a prototype of RFP 171 - Web Resources to enable modular UI development.

One of the great benefits of having such a succinct way of defining a contract is that much of the information can be auto-generated through introspection, which makes it both easy to implement and easy to use. The majority of cases require little to no effort from the developers who are requiring capabilities, and the effort for developers who are providing them is greatly reduced.

Recently the bnd project was enhanced with a set of annotations to easily produce arbitrary Requirements and Capabilities automatically. These annotations can be seen in use in the OSGi enRoute project.

As a follow-up to the bnd proof of concept, an RFP was submitted to and accepted by the OSGi Alliance as RFP 167 - Manifest Annotations, to specify a standard set of annotations for simplifying and enhancing manifest management programmatically, including Requirements and Capabilities.

There is a lot of exciting work going on in this area and many opportunities to get involved, the simplest of which is giving feedback on or testing the current work.

How could you use the Requirements and Capabilities model in your project?

Monday, November 23, 2015

OSGi and IoT: The emergence of a continuously growing platform


An article was published by JAXenter back in August 2015 entitled "OSGi and IoT: The emergence of a continuously growing platform".

The original article was in German; however, we are pleased to say that JAXenter has agreed for us to provide an English translation. It can be downloaded in PDF format from http://bit.ly/1IbsDOF.

Happy reading.

Mike

Monday, October 19, 2015

Java 9 Module System

Mark Reinhold published 'The State of the Module System' a few weeks ago as a kick-off for the JSR 376 Expert Group. Since then, we've slowly started discussions in this expert group. Here is a quick update on how the proposal relates to OSGi.

The module system consists of a dependency model and a service model.

The dependency model is based on exported packages and required modules. It introduces a new namespace for modules so that modules can be required by name. A module can limit its exports to friend modules, and a module can re-export its dependencies. It is basically Require-Bundle with re-exports but without any versions. Modules are specified in the root of the JAR with a module-info.class file that is compiled from a module-info.java file. This file is not extensible and does not support annotations.
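As a sketch, a module declaration in the proposal's draft syntax might look like the following (the module and package names are hypothetical, and the draft syntax may still change):

```java
// module-info.java (draft JSR 376 syntax, hypothetical names)
module com.example.app {
    requires com.example.util;          // dependency by module name, no version
    requires public com.example.api;    // re-exported to users of this module
    exports com.example.app.service;    // readable by all modules
    exports com.example.app.internal
        to com.example.friend;          // friend-only export
}
```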

I just wish the dependency model were symmetric; that is, require should use package names instead of module names. Around 2004 we worked with Eclipse, and they insisted on a similar module-name-based model. Over time we learned that a symmetric model prevents a lot of problems. For example, if you split a bundle into two bundles, then the bundles that depended on the original bundle do not have to be changed; they will get the imported packages from the right bundle. The proposal introduces a brand new namespace for modules, but making the model symmetric would make this complexity unnecessary.

The most surprising part of the proposal for me was the lack of versions. No versions mean that the module path given to the VM must be free of duplicates, putting the onus on the build system to achieve this. This seems to imply that the build system will generate the module-info.java to prevent redundancy. When we make the module path an artifact created by the build system, we can probably make the module system even simpler, I think.

Modules are properly encapsulated in the VM, resource loading and class access has gotten proper module access rules. The rules OSGi implements with class loaders will now get proper VM support. (And OSGi will be able to take advantage of this transparently.)

It will be interesting to see how the industry reacts to this strong encapsulation. Over the last few years a lot of people complained about OSGi when in reality it was only telling them that their baby was unmodular. It looks like people will run into identical problems when they start to use JSR 376 modules. Very little code will work as a module without updates to some of the cornerstone specifications like JPA, CDI, etc. (or at least their implementations), since these specifications assume access to resources and classes to scan for annotations.

The JSR 376 service model is based on the existing Service Loader. A module can 'use' a service by its interface name and 'provide' a service by specifying the interface and implementation class tuple. This is of course a static model, unlike the OSGi service model. Service Loader services are like global variables: they are created only on request and without context. Very unfortunate if you know how the dynamics of OSGi µservices can simplify so many hard problems.
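In the draft syntax, this service wiring would be declared statically in the module descriptors, something like the following (module, package, and class names are hypothetical):

```java
// consumer module: obtains implementations via ServiceLoader at run time
module com.example.client {
    requires com.example.api;
    uses com.example.api.TrainManager;
}

// provider module: statically names its implementation class
module com.example.provider {
    requires com.example.api;
    provides com.example.api.TrainManager
        with com.example.provider.TrainManagerImpl;
}
```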

For example, in the OSGi Community Event IoT Contest we have a railway track with multiple trains. In the SDK's emulator the train bundles run local and are represented by a Train Manager service. The number of instances depends on the bundles and the configuration of those bundles. In the real world the trains run on a Raspberry Pi. This drastic change in topology is completely transparent for the rest of the software since we are using distributed OSGi to connect these computers. The static Service Loader model can of course not help in these scenarios.

It is good that the JSR 376 module system starts out with a very minimal design; simplicity is good. My main concern is that the current design misses an opportunity: by reifying the module path, the complexity of the module dependency model could be reduced even further.

For the service model, I find the Service Loader too simple. The trend toward microservices makes it clear that modern applications must be able to transparently interact with local as well as remote services, and this cannot be modeled with the Service Loader. Providing a proper service registry at the VM level would be more than worth the added complexity, as all OSGi users can testify.

@pkriens




Tuesday, October 13, 2015

An OSGi Scheduler

The OSGi enRoute initiative (just released!) is already acting as an incubator for future OSGi specifications. It has already generated ten RFPs that are available on GitHub. Each of these RFPs has an (often simple) implementation in enRoute. In this blog I want to discuss one of my favorites: the Scheduler.
There are many services that I've desired for a long time, but a scheduler that could use Unix cron-like expressions is definitely one of the oldest.
Quite often you need to run a task every hour, every third Wednesday, or on January 1st in odd years. From an OSGi perspective the design is quite straightforward: just register a service, specify the cron expression as a property, and wait to be called back. The service diagram is as follows:
An extensive description can be found in the OSGi enRoute service catalog. An example using this service would look something like this:
@Component(
    property = CronJob.CRON + "=1-30/2 * * * * ?"
)
public class CronComponent implements CronJob {
  @Override
  public void run(Object data) throws Exception {
    System.out.println("Cron Component");
  }
}
So what does that CRON expression mean? Well, there are 7 fields:
  • Second in minute
  • Minute in hour
  • Hour in day
  • Day of month
  • Month in year
  • Day of week
  • Year (optional)
The expression is matched against the actual time, and every time the time matches, the corresponding action is executed. For matching, each field can have quite a complex syntax. The simplest is the wildcard, the asterisk ('*'), which always matches. The second form is just a number, which matches only that value at that position. For example, 0 0 12 * * * will match noon every day. You can also specify a range like 0-30. To repeat, you can add a slash and a step value, like 0/5, which matches every fifth second (if used as the second field). And last but not least, you can combine alternatives with commas (','), like 1,5,15,45. The scheduler uses the syntax made popular in Java by Quartz. There are cron simulators to test these expressions since they can become quite complex.
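As an illustration only (this is not the scheduler's actual implementation), matching a single cron field against a value could be sketched like this:

```java
public class CronFieldMatch {

    // Returns true if `value` matches a single cron field expression
    // like "*", "15", "0-30", "0/5", "0-30/2", or comma-separated
    // lists of those forms.
    static boolean matches(String field, int value) {
        for (String part : field.split(",")) {
            if (matchesPart(part, value))
                return true;
        }
        return false;
    }

    private static boolean matchesPart(String part, int value) {
        // Split off an optional "/step" suffix
        int step = 1;
        int slash = part.indexOf('/');
        if (slash >= 0) {
            step = Integer.parseInt(part.substring(slash + 1));
            part = part.substring(0, slash);
        }
        int low, high;
        if (part.equals("*")) {
            low = 0;
            high = Integer.MAX_VALUE;
        } else {
            int dash = part.indexOf('-');
            if (dash >= 0) {                       // a range like "0-30"
                low = Integer.parseInt(part.substring(0, dash));
                high = Integer.parseInt(part.substring(dash + 1));
            } else {                               // a single number
                low = Integer.parseInt(part);
                // a bare number with a step ("0/5") means "from 0 upwards"
                high = slash >= 0 ? Integer.MAX_VALUE : low;
            }
        }
        return value >= low && value <= high && (value - low) % step == 0;
    }

    public static void main(String[] args) {
        System.out.println(matches("0-30/2", 4));     // true: even seconds 0..30
        System.out.println(matches("0-30/2", 31));    // false: out of range
        System.out.println(matches("1,5,15,45", 15)); // true: in the list
    }
}
```

A full implementation would additionally handle named months and weekdays, the '?' placeholder, and Quartz extras like 'L' and 'W', but the core idea is just this per-field match repeated across all seven fields.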
The primary use case is nice, but there are other scheduling problems. What about having a delay in your code? Especially in distributed middleware and IoT applications scheduling on time is a crucial aspect.
We've made this service a real OSGi service in a Java 8 world. We took advantage of the OSGi R6 Promises and the amazing Java 8 date and time support. We also made the service API friendlier to lambdas by allowing exceptions to be thrown. The scheduler supports fire-once and repeated schedules with various combinations of milliseconds and Java 8 temporals.
A few samples. First, a one-shot simple delay:
  scheduler.after( () ->
    System.out.println("fire!"), 100 ); // ms

  scheduler.after( () ->
    System.out.println("fire!"),
    Duration.ofMillis(100) );
We can also schedule an event at a given time:
  LocalDateTime localDateTime =
    LocalDateTime.parse("2017-01-13T09:54:42.820");
  ZonedDateTime zonedDateTime =
    localDateTime.atZone(ZoneId.of("UTC"));
  Instant instant = zonedDateTime.toInstant();
  scheduler.at( () -> System.out.println("fire!"), instant );
And using promises:
     scheduler.after( 100 ).
       then( this::start ).
       then( this::secondStage ).
       then( this::thirdStage, this::failure );
The scheduler also provides a wrapper that allows Promises to time out. If the provided promise does not resolve before the timeout, the resulting promise fails with a timeout exception.
  void foo( Promise p ) {
     CancellablePromise cp = scheduler.before( p, 
       Duration.ofMinutes(5) );
     cp.then( this::start ).
        then( this::secondStage ).
        then( this::thirdStage, this::failure );
  }
Instead of firing once, we also need to handle repeating schedules. Schedules should not always run until the end of the component's life cycle; sometimes you want to stop them earlier. Therefore all schedules return a Closeable: if this object is closed, the schedule stops. (If the component is stopped, all its schedules are of course stopped automatically.)
The simplest example is again a simple milliseconds repeat. You can specify any number of initial delays; the last delay is repeated until the schedule is stopped:

  Closeable rampUp = scheduler.schedule( 
    this::tick, 10, 20, 40, 80, 100 );
Last, but definitely not least, is the cron schedule. The following example ticks every two seconds during the first half of each minute.
  Closeable cron = scheduler.schedule( 
    this::cronTick, "0-30/2 * * * * *" );
OSGi enRoute also contains an example scheduler application. This application has a web-based GUI that allows you to exercise the API. Just check out the workspace and run or debug it in bndtools.
The experiences in OSGi enRoute with this service resulted in RFP 166 Scheduling. These RFPs are discussed in the Expert Groups, and this has already generated some new ideas and refinements to the existing ones. Liferay indicated that it would be really useful if the service could support persistent schedules. There was also a discussion about the guarantees one should get for repeating schedules. Currently the schedules are repeated by actual time and not by interval, as long as they do not overlap. However, we likely need to consider cases where cron events could be missed. Lots of work!
If you're interested in this area do not hesitate to read the RFP and provide comments on our public Bugzilla, or better, join the Expert Groups.
An interesting IoT example of the Scheduler service is in the OSGi IoT Contest for the upcoming OSGi Community Event 2015 in Ludwigsburg. The contest is about Lego® trains and their tracks. You can either write a Train Manager or a Track Manager.


During the Community Event we will run these bundles on an actual track in different combinations, best performers win a prize.
In the SDK you find an emulator, a sample Train, and an example Track Manager. This Track Manager makes extensive use of the Scheduler to control the color of the signals. For example:

  // set the signal to green for 10 seconds, then yellow, then red
  private void greenSignal(
      Optional<Signalhandler<Object>> signal) {
    if (signal.isPresent()) {
      setSignal(signal.get().segment.id, Color.GREEN);
      scheduler.after(() ->
        setSignal(signal.get().segment.id, Color.YELLOW), 10000);
      scheduler.after(() ->
        setSignal(signal.get().segment.id, Color.RED), 15000);
    }
  }

Ok, from a safety point of view you might have noticed that the timings for track segment signals might not be, well, safe. Alas, we do not have presence sensors in each track segment, and the changing signals look pretty cool in the emulator!
But if you think you can do better, then why not participate in the contest? There are plenty of opportunities to use this service in this context, and it is sure to be a lot of fun. You are not required to show up in Ludwigsburg; you can watch your train by video if you unfortunately cannot be there. The lucky ones that are present will be able to 'practice' during the conference hours. I do admit, I am awfully curious how those trains will interact with each other during the finale. I do expect a few spectacular crashes.

Thursday, October 1, 2015

OSGi enRoute 1.0!

On September 29, 2015, we finally released OSGi enRoute 1.0! The road has been longer than expected, but we expanded the scope to include IoT and a lot happened in the past year.

So what is OSGi enRoute 1.0?

If this blog is too long to read (they tell me millennials have a reading problem :-), then you could start with the quick start tutorial.

OSGi enRoute is an open source project that tries to make getting started with OSGi development easier and more accessible to newcomers. Both Java and OSGi suffer a bit from the fact that they have been around for a long time. This makes it difficult for newcomers to separate the wheat from the chaff; there is so much history out there, and even more software crimes committed in the name of backward compatibility. Really, a newcomer is quickly confused. At the OSGi Alliance, we could be proud of the number of "Hello World" tutorials for OSGi, were it not for the fact that they almost all demonstrate OSGi in the wrong way because they are old and unmaintained.

When a newcomer wants to build a simple application with a GUI in Java, they first have to evaluate a zillion confusing libraries (almost 1 million artifacts on Maven Central) and then figure out how to build and debug this system using a myriad of tools, often being confronted with bizarre APIs and patterns that are kept in the name of backward compatibility. Though there are some very interesting efforts, Karaf comes to mind, getting started is really quite daunting for any newcomer. The sad result is that many (especially the younger ones) will look elsewhere.

Therefore, OSGi enRoute does something Java developers generally have a hard time doing: committing ourselves. We wanted a developer to be able to have the skeleton of a working application up and running in minutes. That meant we needed to commit ourselves to certain choices. Shudder. Even more horrifying, we did not want to carry along 20 years of backward compatibility while making those choices. We decided to create a green-field taking advantage of OSGi Release 6 and Java 8. Though we realize this excludes a lot of potential design wins, it does allow us to compete with the non-Java worlds out there on a more even footing. It also allows us to showcase what happens when you use OSGi as it is intended. Quite awesome.

So we first created an API for enRoute. The idea of the OSGi enRoute Base API is to provide a common environment for everything from the most simple "Hello World" up to a REST or JSON-RPC server that plays nice with an HTML-5 front end. Now the OSGi enRoute Base API is an API, not an implementation. This means that it is not just Java code; it also contains web resources for Angular, Bootstrap, and other popular HTML-5 libraries. As stated, it is a limited but complete environment to actually get something done.

Though the OSGi specifications are very thorough, they tend to focus on the implementers of the specification and not focus that much on the users. We therefore started a service catalog that explains the services for users. The catalog explains where the service is useful and provides many snippets that can be copied and pasted. To further elucidate, we also provide a workspace with example projects that use OSGi enRoute to demonstrate how certain services actually work. Both the service catalog and the examples workspace more than welcome contributions. We hope we can make this the first place where people will go to see how a service should be used. And to be honest, the service catalog has a number of open spots so we can use some contributions.

We then also provided a number of tutorials. A quick start tutorial to get acquainted with the basic ideas or just to get some simple application done quickly. Then there is a more extensive tutorial that demonstrates best-practice ways of working in OSGi. It goes through the whole development chain from design to continuous integration releases. There is also an IoT tutorial that shows how you can use OSGi enRoute on a small machine.

Though we wanted to commit ourselves to a single API, we of course still wanted to allow different distros. A distro is a repository with implementations that provide all the capabilities that are required by the OSGi enRoute base API. The API promises and the distro provides.

Creating a distro is a major effort since its repository must be self-consistent. And since not all bundles are perfect, the distro has to correct for their flaws in the build, resolve, release, and runtime phases.

Since we wanted the developer to start quickly, we picked a distro based on open source projects. You will find bundles in the distro from Amdatu, Apache Felix, Eclipse Equinox, Knopflerfish and other open source projects. Some APIs had no (suitable) open source implementations, so we decided to create a GitHub repo with the missing bundles. Hopefully this is temporary; the goal is for these bundles to migrate to one of the open source foundations that dabble in OSGi. For this reason, some of the 'proprietary' OSGi enRoute APIs, for example Authentication and DTOs, are planned to be standardized in OSGi R7.

We decided to pick Eclipse for the IDE, with bndtools as the OSGi development plugin. Eclipse provides an amazing environment for developing Java applications, and bndtools extends this amazingness right into the OSGi world. If you're forced to use another environment, then do not try out the edit-debug cycle, because it will be impossible to go back ... And once you're used to version baselining, you wonder how people can live without it.

For source control we obviously selected GitHub. Needs no further comment?

Though the IDE can perform all the releases of a software cycle, there is a huge advantage in having a command line build. We selected Gradle to perform this task. Gradle has the bnd plugin that can read the identical build data as bndtools does, ensuring fidelity between Eclipse and the Gradle builds. Gradle can be used in any project or workspace directory. It supports the same functionality as is available in Eclipse.

One of the best practices in our industry is continuous integration. We therefore picked Travis CI because it integrates so well with GitHub. All OSGi enRoute workspaces are ready to build on Travis without any special effort except signing up and activating.

Overall, OSGi enRoute provides a complete solution based on open source to develop OSGi-based systems. This makes it an excellent environment to learn OSGi and start making applications. Though experienced OSGi users will undoubtedly miss their favorite components, we hope they do give it a chance. It is a really nice environment to quickly create applications. And after all this is OSGi, so it is straightforward to modify the environment with your own components. You will even be able to plug in Maven and IntelliJ IDEA in the near future.

Though we are at release 1.0, we need help. We've prepared a complete environment for OSGi development but we need feedback, articles, examples, and tutorials from the community. We'd like to be the central hub where many other communities collaborate. Don't hesitate to contact us or send us pull requests.

The first usage for OSGi enRoute is the OSGi IoT Contest 2015. We've created an SDK on top of OSGi enRoute for managing a railroad track or to manage a train on that track. At the OSGi Community Event in Ludwigsburg Nov 3-5 2015, we will have a contest to see who can write the best bundle to manage a track or train. If you want to explore OSGi enRoute, you might want to participate. How much more fun can a real developer have than playing with OSGi and Lego® trains?





Get involved! The OSGi IoT Contest 2015 Has Begun

So it's Oct 1, 2015, and we are pleased to announce that the SDK for the OSGi IoT Contest 2015 is available as promised. The Contest is open to all and you don't have to attend the OSGi Community Event to participate, although we would certainly love to see you there.

The OSGi Community Event is co-located with EclipseCon Europe and will be taking place from Nov 3 to 5, 2015 in Ludwigsburg, Germany. You can register for both conferences here.

The Contest SDK can be downloaded from the OSGi Alliance GitHub pages and it's provided under an Apache 2.0 license.

The release of this SDK signifies the launch of the OSGi IoT Contest for 2015, with this year's theme being trains.

The Contest is open from now until Wednesday November 4 if you are attending the OSGi Community Event in Ludwigsburg, or Oct 31 if you can't join us there.

We have also put together some pretty detailed guidelines (here, here and here) on how you can participate in the Contest, along with key dates and deadlines, the SDK architecture and some important hints and tips. All of this information should set you well on your way to showing your peers your coding prowess and proficiency with OSGi.

The Contest this year is fully integrated with OSGi enRoute, which has been created to improve the developer experience when working with OSGi. Using enRoute gives developers who are newer to OSGi the perfect opportunity to try their hand at it, and moreover be in with a chance of winning a 200 Euro Amazon Gift certificate, or one of two 50 Euro runner-up Amazon Gift certificates.

A dedicated OSGi IoT Contest forum has also been set up and we encourage everyone who is interested in the Contest to sign up and join the conversations. There is no such thing as a stupid question in this forum, and you can ask anything you like about the Contest - technical, logistics, process, contest rules, you name it.

We can't wait to see your ideas and submissions.

Thursday, September 24, 2015

OSGi Developer Certification Exams Announced - Europe and N America

We have just announced the next two dates for OSGi Developer Certification Exams. These exams are open to developers from OSGi Members and non-Members and are an excellent opportunity to demonstrate and validate your knowledge and experience with OSGi.

The keen-eyed of you will notice that we have renamed the exam and certification level to OSGi Developer Certification - Professional to better reflect the level of competency exhibited by the developers that pass the exam.

The next two exams will be held as follows:
  • Mon Nov 2, 2015 - 1.30pm to 5.00pm CET in Ludwigsburg, Germany.
  • Sun Nov 15, 2015 - 1.30pm to 5.00pm CT in Chicago, IL, USA.

Full details of the exam format, exam topics covered, expected experience and what you need to bring with you to the exam are provided here.

The price of the exam is $500 per person. Students are offered a discounted price of $200 (please contact us to obtain your student discount code).

The Ludwigsburg exam is taking place the day before the OSGi Community Event and EclipseCon Europe and will be held adjacent to the Forum am Schlosspark, where the conference is being held. Anyone who has purchased registration for the OSGi Community Event and EclipseCon Europe can obtain a 10% discount off the exam price.

The Chicago exam is taking place the day before the Liferay Symposium North America and is being held at the same location as the Liferay Symposium (Chicago Marriott Downtown Magnificent Mile). Many thanks to Liferay for their support and assistance in organising this exam. Delegates to the Liferay Symposium can also obtain a 10% discount off the exam price.

To obtain your 10% discount code for either exam please contact us by email stating the exam date you are interested in and provide the name and email you used to register for the respective conference.

Looking forward to seeing you in Ludwigsburg or Chicago in November.

Monday, August 24, 2015

Building the Internet of Things with OSGi



by Tim Verbelen, iMinds, Ghent University


One year ago, I was speaking at my first EclipseCon Europe about the results of my PhD research (Mobilizing the Cloud with AIOLOS). Since then, I have been working for iMinds as a research engineer. iMinds is a digital research centre in Belgium, which joins the 5 Flemish universities in conducting strategic and applied research in areas such as ICT, Media and Health.
Therefore, iMinds is uniquely positioned to bring together multi-disciplinary teams to work on various emerging topics. I myself am working within the iMinds IoT lab, which works on a wide area of topics related to IoT, ranging from antenna design and wireless MAC protocols, to software security and distributed computing, the last one being my main expertise.
In our research, we try to not only come up with theoretical solutions, but also strive to create tangible results in the form of demonstrations and proofs of concept. As a researcher, you get more freedom in choosing which technology to use in building your solutions, contrary to industry, where you are often tied to a lot of legacy software. For IoT, the choice of using OSGi was made early on, and it proved to be a good fit for a lot of IoT requirements.
One challenge in IoT environments is the hardware heterogeneity you have to cope with. You need to deploy (parts of) software on a wide variety of devices ranging from embedded devices up to high-end servers in the Cloud. As OSGi was initially designed for service gateways, it is well suited to run on even lower-end devices. The recent work in the Enterprise Expert group also equipped it with a lot of features to operate in a server environment. This makes OSGi a perfect fit as a base for an IoT platform, as the modularity allows you to pick and place the software modules you need on any of your devices.
A second challenge where OSGi really shines is the dynamics you have to cope with when developing IoT applications. As the physical world is constantly changing, you need to adapt to new devices coming online and other devices disappearing at runtime. This is incredibly hard to manage in software as the complexity increases. The OSGi bundle and service model already handles these dynamics and offers the developer a nice and easy way to cope with them using, for example, Declarative Services.
Third, OSGi offers a nice solution for software distribution with the Remote Services specification. This enables you to delay the decision on which part has to run on which device until deployment time or even at runtime, instead of having this fixed already at development time. This gives you a lot more flexibility in deploying complex applications on a distributed infrastructure.
In order to even better match IoT industry requirements, the OSGi Alliance has recently started an IoT Expert Group, which will build on the already available specification work and where additional IoT-specific RFPs can be submitted to become part of the OSGi specification.
In my talk, I will present and demo some IoT use cases we have developed, and illustrate how we really benefit from using OSGi. If you are interested in OSGi and/or IoT, you are invited to attend my session, OSGi for IoT: the good, the bad and the ugly, at EclipseCon Europe 2015 in Ludwigsburg.

Reposted with permission from Tim Verbelen; original post at EclipseCon Europe 2015.

Wednesday, August 19, 2015

OSGi Residential Release 6 Specification

The OSGi Residential Expert Group is excited about the new specifications introduced in the OSGi Residential Release 6 Specification. You can find it at: http://www.osgi.org/Specifications/HomePage. It contains a number of new specifications mostly dealing with device interoperability and configuration, monitoring and management services. As with the previous Residential 4.3 Specification, the OSGi Alliance and the Home Gateway Initiative (HGI) synchronized their work in yearly workshops.
We are proud to present the following new service specifications that are now part of the OSGi ecosystem:

Device Abstraction Layer - The Device Abstraction Layer specification provides a unified interface for application developers to interact with sensors, devices, etc. connected to a gateway. Application developers don't have to deal with protocol specific details. This greatly simplifies the development of applications. This abstraction layer is also the basis to support the integration of semantic technologies in future specifications.

Device Abstraction Layer Functions - The Device Abstraction Layer Functions specification defines a minimal set of basic device operations and the related properties. They can be extended or replaced to cover domain-specific scenarios. The set is not closed and can be augmented with vendor-specific functions. There is support for control, monitoring and metering information.

EnOcean Device Service Specification - This specification defines how OSGi bundles can both discover and control EnOcean devices, and act as EnOcean devices and interoperate with EnOcean clients. In particular, a Java mapping is provided for the standard representation of EnOcean devices called EnOcean Equipment Profile (EEP).

Network Interface Information Service - This service specification defines services that provide a standard way for bundles to receive notifications about changes in the IP network interfaces and IP addresses.

Resource Monitoring - The Resource Monitoring specification defines an API for applications to monitor resources consumed by any set of bundles. This includes hardware resources but also covers other resource types as well. Data derived from monitoring enables applications to take decisions on management actions to apply. Resource management actions are mentioned as examples in this specification, including actions on the lifecycle of components, bundles, the framework and the JVM, Java threads, and the raising of exceptions.

Serial Device Service Specification - The Serial Device Service specification defines an API to communicate with controllers, devices and other equipment that is connected via a serial port.

USB Information Category – This specification adds a new Device Access Category for USB devices to the Device Access specification in order to handle, for example, the integration of communication protocols such as ZigBee or Z-Wave via USB dongles.


Please share the news, review the specification and give us your feedback.

Andreas Kraft & Kai Hackbarth (co-chairs Residential Expert Group)

Tuesday, August 18, 2015

More jars on Maven Central and JCenter

In my last blog post, I announced the availability of the Release 6 specifications, including their companion code jars, which were released to Maven Central and JCenter. Those companion code jars were collections of companion code associated with the specification documents. For example, the osgi.enterprise jar contained all the packages for all the APIs specified in the OSGi Enterprise specification document. Those jars are meant for compile-time use only and are not meant for runtime use by being installed as a bundle in an OSGi framework. In fact, those jars now contain unresolvable requirements to prevent their use at runtime.

But what if you want to use the APIs at runtime? To support using the APIs at runtime, OSGi has now made the companion code for individual specifications available as individual companion code bundles. These bundles are now also available from Maven Central and JCenter. So if, for example, you need the Configuration Admin Service API at runtime (and you are not using a Configuration Admin Service implementation bundle which already exports the API), you can get the org.osgi:org.osgi.service.cm:1.5.0 bundle and install it. Each of the companion code bundles is versioned at the version of the specification they are for. Since the current release of the Configuration Admin Service Specification is version 1.5, that is also the version of its companion code bundle.
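As a sketch of what this looks like in practice, a Maven build that wants the Configuration Admin API available could declare the coordinates mentioned above (how you provision the bundle at runtime is up to your tooling):

```xml
<!-- Companion code bundle for the Configuration Admin Service API -->
<dependency>
  <groupId>org.osgi</groupId>
  <artifactId>org.osgi.service.cm</artifactId>
  <version>1.5.0</version>
</dependency>
```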

There are now individual companion code bundles for all of the specifications in the OSGi Compendium, OSGi Enterprise, and OSGi Residential Release 6 specifications:
org.osgi.application-1.0.0
org.osgi.jmx-1.1.0
org.osgi.namespace.contract-1.0.0
org.osgi.namespace.extender-1.0.1
org.osgi.namespace.implementation-1.0.0
org.osgi.namespace.service-1.0.0
org.osgi.service.application-1.1.0
org.osgi.service.async-1.0.0
org.osgi.service.blueprint-1.0.2
org.osgi.service.cm-1.5.0
org.osgi.service.component-1.3.0
org.osgi.service.component.annotations-1.3.0
org.osgi.service.coordinator-1.0.2
org.osgi.service.dal-1.0.0
org.osgi.service.dal.functions-1.0.0
org.osgi.service.deploymentadmin-1.1.0
org.osgi.service.device-1.1.0
org.osgi.service.dmt-2.0.1
org.osgi.service.enocean-1.0.0
org.osgi.service.event-1.3.1
org.osgi.service.http-1.2.1
org.osgi.service.http.whiteboard-1.0.0
org.osgi.service.io-1.0.0
org.osgi.service.jdbc-1.0.0
org.osgi.service.jndi-1.0.0
org.osgi.service.jpa-1.0.0
org.osgi.service.log-1.3.0
org.osgi.service.metatype-1.3.0
org.osgi.service.metatype.annotations-1.3.0
org.osgi.service.monitor-1.0.0
org.osgi.service.networkadapter-1.0.0
org.osgi.service.prefs-1.1.1
org.osgi.service.provisioning-1.2.0
org.osgi.service.remoteserviceadmin-1.1.0
org.osgi.service.repository-1.1.0
org.osgi.service.resolver-1.0.1
org.osgi.service.resourcemonitoring-1.0.0
org.osgi.service.rest-1.0.0
org.osgi.service.serial-1.0.0
org.osgi.service.serviceloader-1.0.0
org.osgi.service.subsystem-1.1.0
org.osgi.service.tr069todmt-1.0.1
org.osgi.service.upnp-1.2.0
org.osgi.service.usbinfo-1.0.0
org.osgi.service.useradmin-1.1.0
org.osgi.service.wireadmin-1.0.1
org.osgi.util.function-1.0.0
org.osgi.util.measurement-1.0.1
org.osgi.util.position-1.0.1
org.osgi.util.promise-1.0.0
org.osgi.util.tracker-1.5.1
org.osgi.util.xml-1.0.1

Note: the Core API is only available in the osgi.core jar for compile time use since the Core API is exported at runtime by the framework implementation and thus should not be exported by bundles.

Sunday, August 16, 2015

Taking Exception

Did you ever look at how we as developers are handling our exceptions? Open source lets us see that we've developed an intriguing number of ways of handling exceptions. Let's take a look at the myriad ways developers handle their exceptions.

First, the closeted C developer who was forced to use Java.

  int main(String[] args, int argc) {
    FileInputStream file_input_stream;
    int first_char;
    try {
      if (argc < 1)
        throw new IllegalArgumentException("Not enough arguments");
    } catch (IllegalArgumentException illegal_argument) {
      System.err.format("Exception: " + illegal_argument.getMessage());
      return -1;
    }
    try {
      file_input_stream = new FileInputStream(args[0]);
    } catch (FileNotFoundException e) {
      return ENOENT;
    }
    try {
      first_char = file_input_stream.read();
    } catch (IOException e) {
      try {
        file_input_stream.close();
      } catch (IOException ee) {
        return EIO;
      }
      return EIO;
    }
    if (first_char > 0) {
      System.out.format("first character is %c\n", first_char);
      try {
        file_input_stream.close();
      } catch (IOException e) {
        return EIO;
      }
    } else {
      try {
        file_input_stream.close();
      } catch (IOException ee) {
        return EIO;
      }
      return EEOF;
    }
    return EOK;
  }

Then the testosterone-driven developer who basically reasons that if the callers do not want his checked exceptions, they had better swallow RuntimeExceptions!
  
public void main(String args[]) {
    try (FileInputStream input = new FileInputStream(args[0]);) {
      int c = input.read();
      if (c > 0)
        System.out.format("first character is %c%n", c);
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }

There are of course the persnickety developers who feel that since exceptions are good, more exception classes must be better. They wrap the exception in their own, better, exception, thereby creating a profound stack trace. Their variation looks like:


  public void main(String args[]) throws MeTooException {
    try (FileInputStream input = new FileInputStream(args[0]);) {
      int c = input.read();
      if (c > 0)
        System.out.format("first character is %c%n", c);
    } catch (Exception e) {
      throw new MeTooException(e);
    }
  }

And then we have the financial developer who figured out that his productivity is measured by the lines of code he produces, regardless of how mindless they are. This style is especially vicious combined with the persnickety approach that wraps each exception in its own variation.

  public void main(String args[]) throws MeTooException {
    try (FileInputStream input = new FileInputStream(args[0]);) {
      int c = input.read();
      if (c > 0)
        System.out.format("first character is %c%n", c);
    } catch (FileNotFoundException e) {
      log("File not found Exception");
    } catch (EOFException e) {
      log("File EOF Exception");
    } catch (ClosedChannelException e) {
      log("Closed Channel Exception");
    } catch (ConnectIOException e) {
      log("Connect IO Exception");
    } catch (FileSystemException e) {
      log("File System Exception");
    } catch (FileLockInterruptionException e) {
      log("File Lock Interrupt Exception");
    } catch (InterruptedIOException e) {
      log("Interrupted IO Exception");
    } catch (MalformedURLException e) {
      log("Malformed URL Exception");
    } catch (IIOException e) {
      log("IIO Exception");
    } catch (RemoteException e) {
      log("Remote Exception");
    } catch (ProtocolException e) {
      log("Protocol Exception");
    } catch (SocketException e) {
      log("Socket Exception");
    } catch (SSLException e) {
      log("SSL Exception");
    } catch (SyncFailedException e) {
      log("Sync Failed Exception");
    } catch (UnknownHostException e) {
      log("Unknown Host Exception");
    } catch (JarException e) {
      log("Jar Exception");
    } catch (ZipException e) {
      log("Zip Exception");
    } catch (IOException e) {
      log("IO Exception");
    } catch (SecurityException e) {
      log("Security Exception");
    }
  }
Then we have the 'what checked exceptions?' developer who worked out how to bypass the type system to throw a non-runtime exception without the caller knowing it:

  public static void main(String args[]) {
    try (FileInputStream input = new FileInputStream(args[0]);) {
      int c = input.read();
      if (c > 0)
        System.out.format("first character is %c%n", c);
    } catch (Exception e) {
      Throw.asUncheckedException(e);
    }
  }
  public static class Throw {
    public static void asUncheckedException(Throwable throwable) {
      Throw.<RuntimeException> asUncheckedException0(throwable);
    }

    @SuppressWarnings("unchecked")
    private static <E extends Throwable> void asUncheckedException0(Throwable throwable) throws E {
      throw (E) throwable;
    }
  }

Fortunately we can all hate the ostrich developers who swallow exceptions. Any experienced Java developer knows what it means to trace a problem for hours only to find that some idiot had not reported an error. A better argument for licensing software professionals is hard to find.

  public static void main(String args[]) {
    try (FileInputStream input = new FileInputStream(args[0]);) {
      int c = input.read();
      if (c > 0)
        System.out.format("first character is %c%n", c);
    } catch (Exception e) {}
  }

And then we have the pragmatic developer who realizes that there is no difference between checked and runtime exceptions. Hated by consumers who still believe in the myth of checked exceptions:

  public static void main(String args[]) throws Exception {
    try (FileInputStream input = new FileInputStream(args[0]);) {
      int c = input.read();
      if (c > 0)
        System.out.format("first character is %c%n", c);
    }
  }

So in which camp am I? Well, you have probably guessed that I am in the pragmatic camp. My reasoning is that checked exceptions effectively do not exist; get over it. They were a bad idea.

Let me explain why.

I am a firm believer in contract-based design, and OSGi is imho the best example of this model. In such a world a function call succeeds when the contract is obeyed by the consumer and the provider. However, in the real world there are cases where the contract cannot be fulfilled. Exceptions are for signalling this failure to the consumer. Maybe the input arguments are wrong, one of the downstream calls fails, or a disk goes haywire. The number of things that can go wrong is infinite, so it is infeasible to figure out what to do about a failure except to ensure that the state of the current object remains correct.

In almost all cases, if anything could be done, the provider should already have done it. It is crucial to realize that exceptions are therefore by definition not part of the contract. For example, bnd does not see a change in the throws clause as a binary change.

When an exception happens, the consumer could try an alternative strategy, but it must never try to understand the reason for the failure, for this creates very brittle code. This is especially true in a component world like OSGi, where the actual implementations on a call stack can vary. The function succeeds when no exception is thrown, and the function fails when the contract could not be obeyed.
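As a sketch of this consumer-side discipline (the names here are mine, not from any OSGi API): treat any exception as "the contract could not be obeyed" and fall back, without ever branching on the failure's type or message.

```java
import java.util.function.Supplier;

public class Fallback {
    // Try the primary strategy; on any failure, switch to the alternative.
    // Deliberately no inspection of the exception: its details are for
    // humans diagnosing the problem, not for program logic.
    public static <T> T firstSuccessful(Supplier<T> primary, Supplier<T> alternative) {
        try {
            return primary.get();
        } catch (Exception e) {
            return alternative.get();
        }
    }

    public static void main(String[] args) {
        String v = firstSuccessful(
            () -> { throw new IllegalStateException("primary down"); },
            () -> "fallback");
        System.out.println(v); // prints "fallback"
    }
}
```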

Any information in the exception is for the human user, to figure out the root problem so that the software contracts can be adjusted to cover the exceptional case or some repair can be initiated. When an exception happens, it is the root cause that the user needs to know. Wrapping exceptions obscures this root cause, as we all realize when we see that the root happened 17 exceptions deep and our environment decided to only show 16.
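To make the point concrete, here is a small sketch (my own helper, not an OSGi or JDK API) that walks a wrapped exception chain back to the root cause, which is the only part the human user actually needs:

```java
public class RootCause {
    // Follow the cause chain to its end; the guard also stops on the
    // (pathological) case of a self-referential cause.
    public static Throwable rootCause(Throwable t) {
        Throwable cause = t;
        while (cause.getCause() != null && cause.getCause() != cause)
            cause = cause.getCause();
        return cause;
    }

    public static void main(String[] args) {
        // Three layers of wrapping; only the innermost exception matters.
        Exception root = new java.io.IOException("disk failure");
        Exception wrapped = new RuntimeException(new IllegalStateException(root));
        System.out.println(rootCause(wrapped).getMessage()); // prints "disk failure"
    }
}
```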

Handling checked exceptions creates a tight coupling between the consumer and provider for no reason, since the consumer should treat all exceptions equally: the contract could not be obeyed, and the cause is irrelevant to the consumer. The type, message, and other information of the exception are only intended for the end users to diagnose the problem.

Once you accept this way of thinking about exceptions, you realize that checked exceptions were a really bad idea, since they give the impression that the consumer should do something specific with them, while the best thing in almost all cases is to forward the original exception to the function that is responsible for error handling on that thread.

I started throwing Exception on all my methods a long time ago. I am often resented for this because users of my code are often still under the illusion that checked exceptions have utility. So they feel forced to obscure their code in the myriad ways described in this blog. Well, get over it, the emperor has no clothes.

Since the runtime does not distinguish between checked and unchecked exceptions, Oracle could probably provide a compiler annotation that would disable the checking of checked exceptions. It would be a relief to also get rid of the nonsensical throws Exception line in my code. To conclude, checked exceptions were a failed experiment. Maybe we should start accepting this.

Peter Kriens
@pkriens
