Oomph changes the way you handle multiple Eclipse installations

November 27, 2015 07:53 AM

I work on many Eclipse projects and projects based on Eclipse technologies. Of course there is the project I work on for our customer (my daily job), but I also try to contribute to different open source projects, and I like trying new stuff and playing around with new technologies.

All these different tasks require different Eclipse installations. When I experiment with Xtend I use “Eclipse IDE for Java and DSL Developers”. Sometimes I need an IDE with Maven support, but not always. After EclipseCon I just installed an Eclipse supporting Gradle and Groovy development.

So far, I have never tried to install everything in the same Eclipse IDE. I feared the RAM consumption, overloaded menu trees and an IDE containing too many plugins. So I ended up with a folder containing different Eclipse installations (one for each domain):

Such a folder structure consumes disk space, and maintaining each installation is time consuming and repetitive. Take the Mars.1 update release as an example. I have so many bad reasons for not updating my IDEs: it is boring, it takes time, it takes disk space, …

At the last EclipseCon Europe I finally found the time to get into Oomph, and the great news is: with Eclipse Oomph (a.k.a. Eclipse Installer), I am convinced that I can do much better than in the past. When you look at a classic Eclipse installation, it looks like this:

I wasn’t really aware of it, but this folder contains the result of 3 different mechanisms: an Eclipse installation containing the executable and some configuration, a p2 bundle pool where the plugins are effectively stored and a p2 agent that keeps everything together by storing the installation details into profiles. Oomph takes advantage of the flexibility offered by P2 and does not create a standalone installation.

The main advantage of this approach is that the bundle pool can be shared across your Eclipse installations. This drastically reduces the disk space needed for your Eclipse installations. It also reduces the time it takes to install and to update each of your Eclipse installations.

When you install something in one IDE, files are stored in the bundle pool. Installing the same plugin again in another IDE is straightforward. Oomph just updates the profile and reuses the files already present in your bundle pool.

In addition, Oomph is all about setting up your Eclipse environment. Automation is the key word here. You can easily define setup tasks that are shared across all your Eclipse installations. This way you no longer lose time modifying the default Eclipse settings to obtain the installation you want.

The Oomph approach is really great. In my opinion you can now have one Eclipse IDE setup for each project you work on. On this blog I will continue to explain the advantages of Oomph and what it changes when you work with it. The Scout team will of course also contribute a setup task to facilitate contributions on our framework.

Feedback: please use this forum thread.

Project Home, Forum, Wiki, Twitter, Google+

November 27, 2015 07:53 AM

Vert.x ES6 back to the future

by pmlopes at November 25, 2015 12:00 AM

On October 21st, 2015 we all rejoiced with the return from the past of Marty McFly with his flying car and so on; however, in the Vert.x world we were quite sad that the JavaScript support we have was still using a technology released in December 2009. The support for ES5 is not something that the Vert.x team controls but something that is inherited from running on top of Nashorn.

With all these nostalgic thoughts on my mind I’ve decided to bring us back to the future, and by future I mean: let’s start using a modern JavaScript, or more correctly, let’s start using ECMAScript 6.

It turned out to be quite simple to achieve this, so I’ll pick the hello world example and write it in ES6 just to show how you can port your code to ES6 and still use the current Vert.x APIs. Note that the Vert.x internals are still ES5 and have not been touched or modified to support any ES6 features.


Traditionally your main.js file would reside in the root of your module (this is where npm will look for it by default); however, as we are going to transpile to ES5, you’ll want to put your source file in /src/main.js.

However, because we are transpiling to ES5, your package.json's main entry should point to the transpiled main.js file in the /lib directory.

  "name": "vertx-es6",
  "version": "0.0.1",
  "private": true,

  "main": "lib/main.js",

  "scripts": {
    "build": "rm -Rf lib && ./node_modules/.bin/babel --out-dir lib src",
    "start": "./node_modules/.bin/vertx run lib/main.js"

  "dependencies": {
    "vertx3-full": "3.1.0",
    "babel-cli": "6.2.0",
    "babel-preset-es2015": "6.1.18"

As you can see, the main idea is to invoke the transpiler (Babel) when we build our project, and then run it using the generated files. This is roughly equivalent to the compilation step you would have with a compiled language.


If you’re planning to deploy your package to npm, either publicly or to a private registry, you should be aware that npm will exclude anything listed in your .gitignore. Since we want git to ignore the generated code, we need to tell npm to ignore that rule and keep the lib directory. The .gitignore should be something like:
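# minimal sketch: keep dependencies and the transpiled output out of git
node_modules
lib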


And the .npmignore:
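# again only a sketch: publish the transpiled lib directory but not the sources
src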


Hello fat arrows and let keywords

So all the heavy work has been done; in order to create our hello world we just need to write some ES6 in our src/main.js file:

var Router = require("vertx-web-js/router");
var server = vertx.createHttpServer();

var router = Router.router(vertx);

router.get("/").handler((ctx) => {

    let response = ctx.response();
    response.putHeader("content-type", "text/plain");

    response.end("Hello ES6 World!");
});

// make the router handle incoming requests and start the server (port 8080 assumed)
server.requestHandler(router.accept).listen(8080);


As you can see, we’re using fat arrows instead of writing function closures and scoped variables using the let keyword. If you now compile your project:

npm run build

And then start it:

npm start

You have your first back to the future ES6 verticle!

by pmlopes at November 25, 2015 12:00 AM

Eclipse Key Binding: Select Enclosing Element

by waynebeaton at November 24, 2015 04:03 PM

Here’s an Eclipse command that’s pretty handy: Select Enclosing Element (key binding: Shift+Alt+Up on Linux).


Every time you hit the key combination, it expands the selection to the enclosing element. In this example, it starts with the method selector, and expands to include the parameters, the receiver, the statement, the block, the method, … all the way up to the entire compilation unit.

To go the other way (back towards the original selection), use the Restore Last Selection command (key binding Shift+Alt+Down on Linux).

It is, of course, up to you to decide what to do with the selection once you have it just right: maybe use Alt+Up or Alt+Down to move the selection around in the file…

You can find this and more commands by hitting Ctrl+3 and just typing a bit of what you’re looking for, or hit Shift+Ctrl+L to open a pop-up with the full list of key bindings.

by waynebeaton at November 24, 2015 04:03 PM

Andmore 0.5-M3 available.

by kingargyle at November 24, 2015 03:33 PM


The third stable milestone is ready for you, the user community, to kick the tires and use for your development. This is primarily a bug-fix and stability milestone. The one big addition is that Andmore now supports multi-dexing of APK files. Also, starting with the next Neon milestone, an Android Developers EPP package will be available. This effort is being led by the newest committer, Kaloyan Raev.

This release of Andmore can also be installed on versions of Eclipse older than Mars. There is now an Eclipse Marketplace entry for the project as well, to make installing the tooling even easier.



The latest version can always be obtained from the following p2 URL:


by kingargyle at November 24, 2015 03:33 PM

Exploring TypeScript Support in Eclipse

by aaronlonin at November 24, 2015 02:34 PM

TypeScript is an open source superset of JavaScript that adds class-based objects that compile to plain JavaScript code. Currently there are two main options to support TypeScript in Eclipse. I’m going to discuss their features and the pros and cons of each. Palantir’s TypeScript (Version 1.6.0.v20151006): This plugin offers minimal support for TypeScript and uses the official TypeScript […]

The post Exploring TypeScript Support in Eclipse appeared first on Genuitec.

by aaronlonin at November 24, 2015 02:34 PM

AnyEdit 2.6.0 for beta testers

by Andrey Loskutov (noreply@blogger.com) at November 22, 2015 09:09 PM

I made some bigger changes in the AnyEdit plugin related to the "Compare To/Replace With" menu entries.

They are now contributed differently to avoid duplicated menu entries and to better support workspace-external files. Together with the latest nightly build of EGit, one can see these menus for the first time in the Git Repositories view:

This beta offline update site contains the not yet released 2.6.0 version of AnyEdit.

It would be really nice if you could test and report possible regressions. If no one complains, I will release this in a week or so.

P.S.: be aware that I’ve dropped support for Eclipse 3.7 in AnyEdit. While technically it should still work, I do not plan to support Eclipse 3.7 anymore. Eclipse 3.8 is now the minimal platform version for AnyEdit.

by Andrey Loskutov (noreply@blogger.com) at November 22, 2015 09:09 PM

IncQuery and VIATRA at EclipseCon Europe 2015

by István Ráth at November 21, 2015 06:53 PM

This year, István Ráth and Ákos Horváth have attended EclipseCon Europe and represented our team. IncQuery and VIATRA have been featured in two talks:

  • "IoT Supercharged: Complex Event Processing for MQTT with Eclipse Technologies" (session, slides) - featuring a live demo squeezed into a 10 minute lightning talk - luckily everything worked like a charm :-) This talk is about VIATRA-CEP, a new, high-level framework for complex event processing over runtime models. CEP is a hot topic in IoT for obvious reasons: it is one of the key technologies you want to use to process data/event streams - preferably as close to the data source as possible. In an IoT context, this would typically mean your gateway - which is now possible via new technologies such as Eclipse Kura. VIATRA-CEP is part of our Eclipse.org model transformation framework VIATRA, completely open source and licensed under the EPL.
  • "IncQuery gets Sirius: Faster and Better Diagrams" (session, slides, video). This talk is about a new innovation coming from the IncQuery team, namely the integration between EMF-IncQuery and Sirius. Once this is publicly released, you will be able to use IncQuery patterns in Odesign diagram definitions (next to the traditional options and the new AQL), and enjoy the performance benefits. Additionally, we also provide an integration with IncQuery Viewers, which allows you to define "live diagrams" that are synchronized to the semantic model automatically by incremental transformations.

Both talks were well received; we were asked quite a few interesting questions and even received suggestions on future development ideas. The entire conference schedule was very strong this year, with hot topics such as the "IoT Day", "Project Quality Day", and the "LocationTech Day" that all provided high quality talks on interesting topics. Here are my favorite picks:

  • The first day keynote by Stefan Ferber of Bosch Software Innovations was one of the best keynotes I have seen at EclipseCons. It is good to see such a powerful entity joining the Eclipse ecosystem.
  • I really liked two Xtext talks: Business DSLs in Web Applications and The Future of Xtext. As both IncQuery and VIATRA rely on Xtext, it is good to see this framework moving ahead at such a high pace and quality.
  • GEF4 - Sightseeing Mars was one of the highlights of the Conference to me. I was very pleased to see such an important piece of technology gaining new momentum and fresh ideas. The demos were impressive too!
  • Processing GeoSpatial Data at Scale was very useful to anyone interested in this topic, as it provided a thorough overview of the domain, including challenges and technologies. Stay tuned for some new innovation coming from the IncQuery team in this area in the near future.
  • Finally, I'm very happy to see GS Collections joining the Eclipse family under the name Eclipse Collections framework. The talk was one of the best at the conference. In fact, we are planning to evaluate this technology for use in IncQuery, to optimize the memory footprint.

We would like to thank the Budapest University of Technology and Economics, the MONDO and CONCERTO EU FP7 Projects, and IncQuery Labs Ltd. for supporting our talks at the EclipseCon Europe 2015 conference.

by István Ráth at November 21, 2015 06:53 PM

EMF Forms goes AngularJS

by Maximilian Koegel and Jonas Helming at November 20, 2015 01:13 PM

Over three years ago, we started the explicit development of EMF Forms as a sub component of the EMF Client Platform. The goal was to ease the development of data-centric form-based UIs based on a given EMF data model. Rather than manually coding UIs (e.g. in SWT), the approach is to describe them declaratively in a simple model language, which is focussed on the specification of forms – the View Model. A view model instance is then interpreted by a flexible and extensible rendering component to create a working UI at the end.
The approach has been a major success and shows significant advantages over manual UI programming or the usage of WYSIWYG editors. Besides the lower development effort and the higher quality of the resulting form-based UI, another advantage is the technological flexibility of the approach. The view model, which specifies the UI, does not depend on a certain UI toolkit (e.g. SWT, GWT or JavaFX). Implementing new renderers allows you to switch the UI technology without respecifying the concrete UI itself. With renderers for the Remote Application Platform and Vaadin, EMF Forms is already used for web applications.
EMF Forms has grown into a very active, frequently used project. This success motivates us to continuously drive the technology forward, extend its use cases, and, in doing so, attract new users. An obvious new field for applying the concepts of EMF Forms is the implementation of data-centric web applications. More and more complex business applications are developed as single-page web applications using JavaScript and frameworks such as AngularJS and EmberJS. These clients are then connected to backends using RESTful services. This JavaScript-based area opens the gate for a large group of new users and use cases. Therefore, it is a logical next step to implement an AngularJS-based renderer for EMF Forms.
The former eponym “EMF” is rather unknown in the web area. Additionally, it loses its central role for defining the data schema and the UI schema. Therefore, a new name was required: JSON Forms. However, it is not difficult to guess which technology takes over the role of EMF.

What happened before?

Before we talk about the world of JavaScript, we would like to take a brief look back at the main concepts of EMF Forms. While with JSON Forms we switch to a new technology stack, the basic idea and the basic concepts remain the same as in EMF Forms. If you are already familiar with EMF Forms, you can probably skip this section.

Both frameworks are based on the observation that typical UI toolkits are not focussed on the implementation of form-based UIs. Therefore, they make things unnecessarily complicated and require too much effort. For a typical input field displaying a String attribute, you need to implement a label and a text field, possibly validation, as well as the binding to the underlying data entity. This has to be repeated for all required fields in a form.

In a declarative language, like the one provided by EMF/JSON Forms, you only need to specify the existence of a control, which references an attribute of the data schema. The whole creation of a fully functional UI, including validation and binding is then done by a rendering component.

Analogously, layouts are specified, in which controls can be embedded. As for controls, the declarative approach focuses on concepts which are typically required in form-based UIs. Instead of complex layouts, such as GridLayout, the declarative language provides simple concepts such as groups or columns. This abstraction makes the specification of UIs much simpler and more efficient. Additionally, the central rendering component, which replaces a great deal of the manually written UI code, improves adaptability and maintenance. Any desired change to the UI must be applied only once, to the renderer. Further information on EMF Forms can be found here.

Last but not least, the declarative description of the UI is technology independent. Only the renderer component is bound to a specific toolkit. Therefore, it is possible to migrate existing UIs to new technologies. JSON Forms provides a complete new technology stack, based on HTML, CSS, JavaScript, JSON, JSON Schema and AngularJS. However, our goal is to keep the declarative description compatible, so it is also possible to reuse existing “view models” from EMF Forms on the new stack.

A new technology stack

As mentioned before, the general concept of EMF Forms has been maintained. The declarative way of describing UIs and the central rendering approach provide the same advantages and can be efficiently implemented in a client-server-oriented browser application. To display a form, JSON Forms still needs three artefacts: the definition of the displayed entity (Data Model), the declarative description of the UI (View Model) and finally the data entity to be displayed (Data Model Instance). In EMF Forms, all those artefacts are modelled in EMF. As the name already implies, we use JSON instead of EMF for JSON Forms. More precisely, we use JSON Schema for the data model, and JSON objects for the view model and the data entity. Therefore, in JSON Forms, we use the terms “data schema” and “ui schema” instead of “data model” or “View Model”.
The following example shows a very simple data schema, which defines a data object with only two attributes. The specified entity shall be displayed in a form-based UI later on. That means there are controls to enter both attributes, “Name” and “Gender”. The schema defines the possible values for “Gender” as an enumeration. The “Name” field is specified as mandatory. Both constraints are considered by JSON Forms.

  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    "gender": {
      "type": "string",
      "enum": [ "Male", "Female" ]

The schema only describes the data to be displayed; it does not specify how the data should be rendered in the form. This is specified in a second JSON-based artifact, the “ui schema” (see listing below). The ui schema references the data schema, more precisely the attributes defined in the data schema. The ui schema element “Control” specifies that a certain attribute shall be rendered at a specific location in the UI. Therefore, it references the specific attribute from the data schema (see the JSON attribute “scope” in the following example). The renderer is responsible for displaying the attribute; accordingly, it also selects a UI element, e.g. a text field for a string attribute. Therefore, you do not need to specify that there should be a drop down for the “Gender” attribute or which values it should contain. Also, you do not need to specify additional labels if the labels should just show the name of the attribute. If you want to show another label, you can optionally specify it in the ui schema. As you can see, JSON Forms derives a lot of information directly from the data schema and therefore reduces duplicate specification.

The ui schema also allows you to structure controls with container elements. This allows you to specify a logical layout. A simple example is a HorizontalLayout, which displays all children elements next to each other. As in EMF Forms, there are more complex layouts in practice, for example groups, stacks, or multi-page forms. However, the basic concept remains the same: the ui schema specifies, in a simple way, how the form-based UI is structured, and the rendering component takes care of creating a concrete UI. The following example ui schema shows two controls structured in a horizontal layout:

  "type": "HorizontalLayout",
  "elements": [
      "type": "Control",
      "scope": {
        "$ref": "#/properties/name"
      "type": "Control",
      "scope": {
        "$ref": "#/properties/gender"

This simple and concise specification of the data schema and the ui schema is already enough to render a fully functional form-based UI. This is done by the JSON Forms rendering component described in the following section.


The two artifacts, the data schema and the ui schema, are now rendered to a UI. That is done by the rendering component. In JSON Forms, it uses HTML for defining the visible elements of the UI and JavaScript for the implementation of any behavior, such as data binding and validation. To ease the implementation and also offer state-of-the-art features such as bi-directional data binding, we additionally use the AngularJS framework. It has had a growing user base since its publication in 2012, which is already a long period in the volatile area of web frameworks.

The rendering component consists of several registered renderers. Every renderer is responsible for rendering a specific element of the ui schema. This allows the modular development of new renderers. Additionally, specific renderers can be replaced.


A frequently raised question is why AngularJS alone is not enough to efficiently develop form-based web UIs. AngularJS definitely provides a lot of support for the development of web UIs; however, implementing the desired features manually is still required. To implement the example above, you have to complete several manual steps in AngularJS. First you have to create an HTML template defining all UI elements such as the text box, the labels and the drop down element. This can lead to complex HTML documents, especially for larger forms. Then, you have to manually set the AngularJS directives on those elements to bind the data and to add validations. Finally, to achieve a homogenous look & feel, you need to lay out and align all created elements, typically with CSS. That means, even for defining a very simple form, you have to deal with three different languages: JavaScript, HTML and CSS. In contrast, when using JSON Forms, you just have to define the data schema and the ui schema, both in a simple JSON format. The rendered form just needs to be embedded into an existing web page using a custom directive.

The declarative approach of JSON Forms especially pays off in case the underlying data schema is extended or changed. In this case, you just need to slightly adapt the ui schema, e.g. by adding a new control. Another advantage is that there is one implementation per UI element, enabled by the renderer registry. This allows you to adapt the form-based UI at a central place in a homogenous way. As an example, if you want to add a specific behavior to text fields, you just need to adapt the renderer for string attributes.

Therefore, JSON Forms allows you to adapt and replace existing renderers. Renderers can be registered for the complete application, e.g. for all text fields, or alternatively for specific attributes, e.g. only for the string attribute “name”. As an example, if you want to show the enumeration “Gender” from before as radio buttons instead of a drop down box, you could adapt the renderer for enumerations. Alternatively, you can even make a renderer depend on a certain condition, e.g. you can adapt the renderer for all enumerations with only two possible values.


After the renderer has created a ready-to-use UI, the question remains how this can be used in an existing application. In the following section, we describe how to embed JSON Forms into any web application.


To use the rendered form at the end, it is typically embedded into an existing application. As for EMF Forms, our focus is on a non-invasive integration. That means it is possible to easily integrate the framework into an existing application. Additionally, it should integrate well with existing frameworks, such as Bootstrap.

For embedding JSON Forms into an HTML page, we use a specific directive (as often done in AngularJS). The JSON Forms directive specifies the data schema, the data entity as well as the ui schema to be shown (see the following code example). The values of the attributes “schema”, “ui-schema” and “input” must be in the scope of the current controller. Therefore, they can be retrieved from any kind of source, which allows full flexibility in connecting a backend. In a typical application, the data schema and the ui schema would be static content, while the data entity is retrieved from a REST service.


<jsonforms schema="mySchema" ui-schema="myUiSchema" input="myDataObject"/>


In contrast to EMF Forms, JSON Forms uses JSON to represent the ui schema. The default JSON serialization is much easier to read than the EMF one. Further, JSON is a de-facto standard in the JavaScript world. A first ui schema can easily be created using any kind of text editor. However, good tooling for creating and modifying view models (ui schemata in JSON Forms) has been an important success factor for EMF Forms. As an example, the EMF Forms tooling allows one to generate a complete default view model based on a given data schema. This default model can then be iteratively adapted. This feature reduces the initial effort for creating a form-based UI even more.

Of course we also want to support the user as much as possible in specifying ui schemata in JSON Forms. The first good news is that you can directly export existing view models from EMF Forms to JSON Forms. That allows you to reuse any existing view model as well as to use the existing and well-known tooling for the creation of new ui schemata. Besides the ui schema, we also provide an export from an existing EMF model to a JSON data schema. Therefore, you can reuse the existing features of the EMF Forms and EMF/Ecore tooling.

To export view models and Ecore models to JSON Forms, EMF Forms provides a new option in the right-click menu on both artefacts. This has been introduced in the 1.8.x development stream. Further documentation on how the ui schema can be used with JSON Forms can then be found on the JSON Forms homepage and the EMF Forms / JSON Forms integration guide.

In the medium term, we of course also want to address user groups outside of the Eclipse ecosystem with JSON Forms. Therefore, we are working on a ui schema editor as a pure web application. As for EMF Forms, the editor is of course based on JSON Forms itself. Therefore the framework is bootstrapping itself.


With JSON Forms, EMF Forms enters the JavaScript world and the area of typical single-page web applications. The well-proven concepts of EMF Forms are transferred seamlessly. The implementation is adapted to the new context and the requirements of typical web applications. Technically, JSON Forms is based on HTML, CSS, JavaScript, JSON, JSON Schema and AngularJS. JSON Forms can be used independently of Eclipse. However, you can reuse the data models and view models from EMF Forms. Therefore it should be easy for existing EMF Forms users to get started with JSON Forms. The most important concepts of EMF Forms are already supported in JSON Forms, and we are actively working on completing the framework.

The development of JSON Forms does not mean we will lose our focus on EMF Forms. We plan on continuously driving it forward and extending its vital user base.

One advantage of the declarative approach is especially relevant in the web area, i.e. for JSON Forms: the independence from a UI toolkit. If you manually develop web UIs, you bind the application to a specific JavaScript framework, e.g. AngularJS or Ember.js. This is a technical risk, especially in the area of web frameworks. Angular has been pretty popular for the past 3 years, which is already a long time for a JavaScript framework. However, at the end of 2015, the next major version will be published, which is not fully backwards compatible. With JSON Forms, you partially mitigate this risk. The actual specification of your custom forms is done in a declarative and UI-technology-independent JSON format. By adding new renderers, you can reuse this specification with new JavaScript frameworks or new versions of them.

If you want to try JSON Forms, please visit our website; it provides tutorials and a running live example, where you can specify a ui schema online. Further information about EMF Forms and first steps with JSON Forms can also be found on the EMF Forms website. Finally, we will soon start a blog series, which provides detailed tutorials on how to implement complex forms with JSON Forms based on an example application. If you want to learn about the ongoing development of JSON Forms, please follow us on Twitter.


3 Comments. Tagged with emf, emfforms, JSON, jsonfo

by Maximilian Koegel and Jonas Helming at November 20, 2015 01:13 PM

New Releases of Eclipse IoT Projects Advance IoT Open Source Technology

November 19, 2015 02:00 PM

These projects and the Eclipse IoT ecosystem provide open source IoT technology for developers to build IoT solutions.

November 19, 2015 02:00 PM

Service Update for the Clean Sheet Eclipse IDE Look and Feel

by Frank Appel at November 18, 2015 09:20 AM

Written by Frank Appel

Roughly two weeks ago, I published an introduction to the Clean Sheet Eclipse IDE look and feel, an ergonomic theme extension for Windows 10. Happily, the feature got a surprisingly good reception considering its early development stage.

Thanks to the participation of dedicated early bird users, we were able to spot and resolve some nuisances with respect to the tiny scrollbar overlay feature of tables and trees. Please refer to the issues #43, #39, #38, #37, #33, #32, #31, #30, and #29 for more details.

These achievements offer a good opportunity to release a service update (version 0.1.2), containing all fixes and some minor enhancements like displaying the feature in the about dialog. The new version can be installed/updated by dragging and dropping the ‘Install’ icon into a running Eclipse workbench.

Drag to your running Eclipse installation to install Clean Sheet


Select Help > Install New Software…/Check for Updates.
P2 repository software site @ http://fappel.github.io/xiliary/
Feature: Code Affine Theme

So, don’t be shy and give it a try 😉

Of course, it is interesting to hear suggestions or find out about further potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

Clean Sheet Eclipse IDE Look and Feel

In case you’ve missed out on the topic and you are wondering what I’m talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge).

Eclipse IDE Look and Feel: Clean Sheet Screenshot

For more information please refer to the features landing page at http://fappel.github.io/xiliary/clean-sheet.html or read the introductory Clean Sheet feature description blog post.

The post Service Update for the Clean Sheet Eclipse IDE Look and Feel appeared first on Code Affine.

by Frank Appel at November 18, 2015 09:20 AM

Benchmarking JavaScript parsers for Eclipse JSDT

by gorkem (noreply@blogger.com) at November 17, 2015 06:36 PM

The JavaScript parser for the Eclipse JSDT project is outdated. It lacks support for the latest ECMAScript 2015 (ES6) standard and has quality issues. Moreover, the parser in JSDT is derived from JDT’s Java parser, hence it is not adopted by the JavaScript community at large, leaving the JSDT committers as the sole maintainers. Luckily, there are good quality JavaScript parsers that already have a large number of tools built around them. However, these parsers, like most of the JavaScript tools, are developed in JavaScript and require additional effort to integrate with Eclipse JSDT, which runs on a Java VM. In the last few weeks, I have been experimenting with alternatives that enable such an integration.


Before I go into the details of integration let me quickly introduce the parsers that I have tried.


Acorn is a tiny parser written in JavaScript that supports the latest ES6 standard. It is one of the most adopted parsers and used by several popular JavaScript tools. It parses JavaScript to ESTree (SpiderMonkey) AST format and is extensible to support additional languages such as JSX, QML etc.


Esprima is also a fast, tiny parser written in JavaScript, which also supports the latest ES6. Its development has recently moved to the jQuery Foundation and it has been in use on Eclipse Orion for a while. Just like Acorn it also uses the ESTree AST format.


Shift (Java) is the only Java-based parser on my list. It is a relatively new parser. It uses the Shift AST as its model, which is different from the widely adopted ESTree.

Why does the AST model matter?

The AST model is actually what tools operate on. For instance, a JavaScript linter first uses a parser to generate an AST model and then operates on that model to find possible problems. As one can imagine, an IDE that uses a widely adopted AST model can utilize the ecosystem of JavaScript tools more efficiently.

Eclipse JSDT already comes with a JSDT AST model that is used internally and is very hard to replace. Therefore, regardless of the AST model generated by the parser, it will be converted to JSDT’s own model before being used, which renders discussions around the AST models moot in JSDT’s context.


The parsers other than Shift, which already runs on the Java VM, need a mechanism to play nice with the Java VM. I have experimented with three mechanisms for running Acorn and Esprima for JSDT so far.


node.js

This mechanism utilizes node.js to run the parser code. node.js runs as an external process, receives the content to be parsed and returns the results. I have chosen to use console I/O to communicate between node.js and the Java VM. There are also other techniques, such as running an HTTP or a socket based server for communication. In order to avoid the startup time for node.js, which does affect the performance significantly, the node.js process is actually kept running.
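As a rough illustration (not the actual JSDT code; the script name and the line-based protocol are assumptions), keeping a node.js parser process alive and talking to it over standard I/O from Java could look like this:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;

public class NodeParserBridgeSketch {
    public static void main(String[] args) throws Exception {
        // Start a single long-running node.js process (parse-server.js is a made-up script
        // that reads source code from stdin and writes the AST as one JSON line to stdout).
        Process node = new ProcessBuilder("node", "parse-server.js").start();
        Writer toNode = new OutputStreamWriter(node.getOutputStream(), "UTF-8");
        BufferedReader fromNode = new BufferedReader(
                new InputStreamReader(node.getInputStream(), "UTF-8"));

        // Send the content to be parsed and read back the resulting AST.
        toNode.write("var x = 42;\n");
        toNode.flush();
        String astJson = fromNode.readLine();
        System.out.println(astJson);
    }
}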


J2V8

J2V8 is a JNI-based wrapper that bundles the V8 JavaScript VM. It provides a low-level Java API to execute JavaScript on a bare V8 engine. Although it uses V8, it does not provide the full functionality of node.js and can only be used to execute selected scripts; fortunately, the Acorn and Esprima parsers can be run with J2V8.


Nashorn

Nashorn is the JavaScript engine that is nowadays built into Java 8. It provides a simple high-level API to run JavaScript.
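A minimal sketch of what this could look like with the javax.script API (illustrative only, not the benchmark code; the esprima.js file location is an assumption):

import java.io.FileReader;
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornParserSketch {
    public static void main(String[] args) throws Exception {
        // Nashorn ships with Java 8, so no external process is needed.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
        // Load the parser sources into the engine (file location is illustrative).
        engine.eval(new FileReader("esprima.js"));
        // Esprima exposes a global object with a parse() function; call it on some source.
        Object esprima = engine.eval("esprima");
        Object ast = ((Invocable) engine).invokeMethod(esprima, "parse", "var x = 42;");
        System.out.println(ast);
    }
}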

Performance Benchmarks

The criteria for choosing a parser may vary from the feature set, to the AST model used, to even community size. However, performance is the one criterion that makes all the others relevant. So, in order to compare the performance of the different alternatives, I have developed a number of benchmark tests comparing the parsers and the mechanisms.

All benchmark tests produce a result with an AST model, either in JSON form or as a Java object model. Tests avoid the startup time for their environments; for instance, the startup time for the node.js process affects the results significantly but is discarded by the tests. The current test sets use AngularJS 1.2.5 and jQuery Mobile 1.4.2 (JQM) as the JavaScript code to be parsed.

Table 1. Average time for each benchmark

Acorn (AngularJS)      118.229 ms   ± 1.453
Acorn (JQM)            150.250 ms   ± 4.579
Acorn (AngularJS)      181.617 ms   ± 6.421
Acorn (JQM)            177.265 ms   ± 9.074
Acorn (AngularJS)       59.115 ms   ± 0.698
Acorn (JQM)             34.670 ms   ± 0.250
Esprima (AngularJS)     98.399 ms   ± 0.77
Esprima (JQM)          114.753 ms   ± 1.007
Esprima (AngularJS)     73.542 ms   ± 0.450
Esprima (JQM)           73.848 ms   ± 0.885
Shift (Angular)         16.369 ms   ± 1.019
Shift (JQM)             15.900 ms   ± 0.325

As expected, the Shift parser, which runs directly on top of the Java VM, is the quickest solution. To be fair, the Shift parser is missing several features such as source locations, tolerant parsing and comments, which may affect the parsing performance. However, even after these features are added it may remain the quickest. I feel that the performance for J2V8 can also improve with more creative use of the low-level APIs; however, there is so much memory copying from the Java heap to JNI to the V8 heap and back that I am not sure the gain would be significant.

The surprise for me is Esprima’s performance with Nashorn. It is unexpected in two ways: it is actually the third quickest option, yet Acorn does not give the same level of performance.

by gorkem (noreply@blogger.com) at November 17, 2015 06:36 PM

EMF Client Platform 1.7.0 Feature: EMF Change Broker

by Maximilian Koegel and Jonas Helming at November 17, 2015 08:49 AM

With Mars.1, we released EMF Client Platform and EMF Forms 1.7.0. EMF Forms is a framework focused on the creation of form-based UIs. If you are not yet familiar with EMF Forms, please refer to this tutorial for an introduction. EMF Forms is a sub component of the EMF Client Platform, which is designed for general support of the development of applications based on an EMF data model. While we focused our development activities a lot on EMF Forms during the last releases, EMF Client Platform is still under active development, too. In this post, we would like to introduce a new feature of the EMF Client Platform, the EMF Change Broker.

One of the core and most valuable features of EMF is change notification. By default, it is possible to register change listeners on EObjects and therefore get notified of changes on attributes and references. The EContentAdapter even enables listening to complete trees of objects, so you can register a listener for all changes in a model instance. However, when implementing an EContentAdapter, you typically need to filter the notifications the adapter receives to only react to changes of interest. As an example, you might only be interested in changes on a certain EClass or on a certain EFeature within your model. Of course, you could add conditions to every listener, but for this kind of generic filter, a central broker architecture is typically more efficient. Additionally, one wants to avoid the registration of several EContentAdapters, as this usually slows down performance. Therefore, a central broker for change notifications of an EMF model instance is very helpful. Such a broker is provided by the EMF Client Platform ChangeBroker. The following diagram shows a conceptual overview.


The ChangeBroker receives notifications from model instances and dispatches them to registered observers. There are specific types of observers; for example, you can register an observer only for changes on a specific EClass or a specific EFeature.

The change broker is a stand-alone component, which can be used in combination with any EMF model. There is a ready-to-use integration with the persistence layer of the EMF Client Platform, but the Change Broker can also be used stand-alone without other components of the EMF Client Platform. Please see this tutorial for a detailed introduction to the change broker.


Leave a Comment. Tagged with eclipse, emf, emfcp

by Maximilian Koegel and Jonas Helming at November 17, 2015 08:49 AM

Why I love Open Source

by Gunnar Wagenknecht at November 17, 2015 08:22 AM

Here is another example of why I love working with open source software and its communities. Last Thursday, one of my team members reached out and asked for advice with regards to a build issue he was observing. We use Maven & Tycho at Tasktop for building almost all of our products. One of its unique features is automated stable versioning based on source code history and dependencies.

Turns out that there was a shortcoming in the implementation with regards to building an aggregated version based on component dependencies. After finding out, I opened a defect report, prepared a patch and posted to the mailing list for awareness and feedback on Friday. This took me less than an hour of my time. By Monday – only 72 hours later – the patch was accepted and merged into the main code base, unblocking my colleague.

  1. Discover the root cause.
  2. File a defect report.
  3. Prepare a patch.
  4. Communicate about the change and collect feedback.
  5. Cross fingers – or – hold thumbs.

In my opinion, the key criterion for such a change being successful is the patch size. Yes, I could have added configurability to the change, but that would have increased the patch size. The larger a change is, the more time committers need to set aside for understanding and reviewing it. It’s about the minimal viable solution – keeping things simple. Don’t overwhelm committers with your first change.

Well, and it pays off going to conferences and having a beer or two with project committers.

by Gunnar Wagenknecht at November 17, 2015 08:22 AM

EclipseCon NA 2016 - Early-bird talks accepted

November 17, 2015 07:53 AM

Five talks have been accepted early for EclipseCon NA. The final deadline for proposals is November 23.

November 17, 2015 07:53 AM

Accessing PreferenceStore - JFace Preferences or Eclipse Preferences

by Its_Me_Malai (noreply@blogger.com) at November 17, 2015 05:28 AM

Normally, when asked to access the PreferenceStore, we access it via the Activator class of the plugin.

IPreferenceStore prefStore = Activator.getDefault().getPreferenceStore();

But sometimes there are use cases where you may not have access to the Activator of the plugin whose PreferenceStore you need to access. In that case you may use the Eclipse IPreferencesService:
Platform.getPreferencesService().getInt(Activator.PLUGIN_ID, preferenceKey, defaultValue, null);
For more details on Preferences, please read an AWESOME Document "How Eclipse runtime preferences and defaults work"

by Its_Me_Malai (noreply@blogger.com) at November 17, 2015 05:28 AM

Eclipse IDE on JDK 9 Early Access with Project Jigsaw

by waynebeaton at November 16, 2015 07:50 PM

I wrote a few weeks ago about getting Eclipse Neon running on Java 9 (though, I had mistakenly and embarrassingly left “Mars” in the title of the post). It’s worth noting that the steps that I laid out also apply to the JDK 9 Early Access with Project Jigsaw (Java modularity) builds. Eclipse Neon works on Jigsaw. I’ve been using this combination for real development on some new plug-ins that I’ve been tinkering with (more on that later).

Eclipse Neon running on JDK 9 + Jigsaw

Developing some new plug-ins using Eclipse Neon M2 running on JDK 9 + Jigsaw.

In its current form, Jigsaw provides a well-defined visibility model that manages what pieces of a module are accessible from other modules. As part of this, it prevents you from accessing internal code. We’ve been warned for years, for example, that using com.sun.* packages is verboten and Jigsaw aims to do something about it. The modularized JDK hides these internal packages from dependent modules and throws a fit when you try to access them (both the compiler and runtime).

As a “legacy” Java application running on the classpath, the Eclipse IDE runs as what’s called an unnamed module (Voldemodule? the module that must not be named?). Unnamed modules have a special status in the runtime, but are still subject to the visibility restrictions. I’ll save a more detailed discussion of this for another post. My point today is that the Eclipse IDE just works on JDK 9 Jigsaw builds. This is true, at least, on the Fedora 22 and Windows 8 systems I’ve tested; I’m interested to learn of your experience.

The Jigsaw builds come with a handy tool, jdeps, which does all sorts of things related to module dependencies (note that this tool is only included with the Jigsaw builds). Included with the functionality is the ability to scan Java code to determine if it violates any of the restrictions enforced by the modularity model.

I ran jdeps on the Mars.1 repository to get a sense for how much work we might have ahead of us and was delightfully surprised by how few references Eclipse Project code has to internal APIs. Perhaps my biggest concern is that there is a reference to an internal class in the SWT_AWT bridge (bug 482318). I’ll open additional bugs as I investigate the other hits.

In the meantime, if you want to check out your own code for violations you can run jdeps yourself. The JDK 9 Early Access with Project Jigsaw builds are just archive files that you can decompress into the directory of your choice (it doesn’t update any paths or configuration on your system) and execute:

~/jdk1.9.0> bin/jdeps -jdkinternals /path/file.jar

Where /path/file.jar points to one or more files (e.g. ~/.p2/plugins/*.jar).

Correction: jdeps is included in Java 8 and 9 builds.

While I have your attention: be sure to propose a talk for EclipseCon 2016!

by waynebeaton at November 16, 2015 07:50 PM

Eclipse Hackathon Hamburg – 2015 Q41

by eselmeister at November 15, 2015 10:42 AM

Last Friday we had again an Eclipse Hackathon in Hamburg! Folks, do Hackathons; it’s a great experience to be a part of the Eclipse community!



by eselmeister at November 15, 2015 10:42 AM

Make Retrofit ready for usage in OSGi

by Simon Scholz at November 13, 2015 09:04 PM

Retrofit is a really great library for accessing REST APIs. It is often used for Android apps, because it is really lightweight and easy to use.

I’d also love to use this library for my Eclipse 4 RCP applications, so let’s make use of retrofit also here.

So download the retrofit artefacts and make use of them. But wait! For Eclipse applications we need OSGi bundles rather than plain Java artefacts, and when looking at the MANIFEST.MF file of the retrofit jar archive there isn’t any OSGi bundle metadata.

Fortunately there are many tools out there to convert plain Java artefacts into OSGi bundles, e.g., p2-maven-plugin (Maven) or bnd-platform (Gradle).

Since I am involved in the Buildship development (Gradle tooling for Eclipse) and we now also offer Gradle trainings besides our Maven trainings, I chose the bnd-platform plugin for Gradle.

The build.gradle file then looks like this:

buildscript {
	repositories {
		jcenter() // repository is an assumption; any repository providing the plugin works
	}
	dependencies {
		classpath 'org.standardout:bnd-platform:1.2.0'
	}
}

apply plugin: 'org.standardout.bnd-platform'

repositories {
	jcenter()
}

platform {
	bundle 'com.squareup.retrofit:retrofit:2.0.0-beta2'

	bundle 'com.squareup.retrofit:converter-gson:2.0.0-beta2'
}

When Gradle has been set up properly, the desired bundles can be converted with the bundles task from the org.standardout.bnd-platform plugin:

/retrofit-osgi-convert$ ./gradlew bundles

By running the bundles task, retrofit, a JSON converter (in this case GSON) and the transitive dependencies are made available as converted OSGi bundles in the /retrofit-osgi-convert/build/plugins folder.

See Gradle tutorial for further information.

When adding these converted bundles to the target platform of an Eclipse RCP application, they should usually work out of the box.

But …! After adding the retrofit and gson converter bundles as dependencies to my plugin’s MANIFEST.MF file I still get compile errors. :-(

So what went wrong? Basically two things! The first problem is obvious: when looking into the generated MANIFEST.MF metadata of retrofit, there is an import for the android.os package. This import was added automatically during the conversion. The readme of the bnd-platform plugin explains how to configure the imports.

The second thing is that retrofit and its converter bundles have split packages, which is fine for plain Java projects, but not for OSGi bundles. So the split package problem also has to be resolved. See https://github.com/SimonScholz/retrofit-osgi#make-use-of-retrofit-in-osgi

Fortunately this can also be configured in the build.gradle file:

platform {

	// Convert the retrofit artifact to OSGi, make android.os optional and handle the split package problems in OSGi
	bundle('com.squareup.retrofit:retrofit:2.0.0-beta2') {
		bnd {
			optionalImport 'android.os'
			instruction 'Export-Package', 'retrofit;com.squareup.retrofit=split;mandatory:=com.squareup.retrofit, retrofit.http'
		}
	}

	// Convert the retrofit gson converter artifact to OSGi and handle the split package problems in OSGi
	bundle('com.squareup.retrofit:converter-gson:2.0.0-beta2') {
		bnd {
			instruction 'Require-Bundle', 'com.squareup.retrofit'
			instruction 'Export-Package', 'retrofit;com.squareup.retrofit.converter-gson=split;mandatory:=com.squareup.retrofit.converter-gson'
		}
	}

	// You can add other converters similar to the gson converter above...
}

The actual build.gradle file can be found on Github.

After resolving these problems, no compile errors are left. :-) But when running the application, a java.lang.IllegalAccessError: tried to access class retrofit.Utils from class retrofit.GsonResponseBodyConverter error occurred.

The cause for this is the modifiers used in the retrofit.Utils class: the class itself is package private and its closeQuietly method is package private as well. So even with the split package rules applied, these package-private access rules prohibit the usage of the closeQuietly method from the converter bundles (gson, jackson etc.).

Now comes the part why I love open source that much. I checked out the retrofit sources, made some changes, built retrofit locally, tried my local OSGi version with my changes and finally provided a fix for this. See https://github.com/square/retrofit/pull/1266. Thanks a lot @JakeWharton for merging my pull request that fast.

Retrofit and its GSON converter can already be obtained from Bintray as a p2 update site: https://dl.bintray.com/simon-scholz/retrofit-osgi/

For further information and a complete example please refer to https://github.com/SimonScholz/retrofit-osgi

This repository contains the conversion script for making OSGi bundles from retrofit artifacts and a sample application, which shows how to make use of retrofit in an Eclipse 4 RCP application. Just clone the repository into an Eclipse workspace, activate the target platform from the retrofit-osgi-target project and start the product in the de.simonscholz.retrofit.product project.

Retrofit and Eclipse 4

Feedback is highly appreciated.

Happy retrofitting in your OSGi applications 😉

by Simon Scholz at November 13, 2015 09:04 PM

What will you build for the Open IoT Challenge 2.0?

by Benjamin Cabé at November 13, 2015 03:20 PM

I am really excited that we are having a second edition of the Open IoT Challenge. Last year, more than 60 teams entered and we ended up with some seriously cool projects being built!

So we are back, with more prizes, more sponsors, and more importantly even more cool ideas of what you could build if you decide to participate. You will find below some project ideas that you may want to explore, but I am pretty sure many of you will just surprise the jury by coming up with even cooler ideas! :-)

Hardware integration

I would really like to see integrations of your projects with new kinds of sensors. Measuring temperature and humidity is so mainstream now, is it not? :) How about solutions for health monitoring, using blood pressure monitors, or even EKG?

Last year we’ve seen integrations with cars, over OBD-II, and I’m sure there is still a lot to do in that area (how about a solution to help people drive more efficiently by allowing them to compare their gas consumption with others on a given road?).

Another interesting idea would be to explore how to improve the User Experience (UX) for IoT, by providing new ways to interact with IoT devices: gesture interactions, sound control (look at e.g. Amazon Alexa), etc.

Wireless technologies

  • LoRa? Sigfox, anyone? Those wireless technologies definitely get lots of traction these days and I am looking forward to seeing projects that will highlight how they can be combined with existing open source technologies. It would for example be interesting to see teams adding LoRaWAN support in Eclipse Kura.
  • Bluetooth Smart/BLE is becoming more and more mainstream, and lots of IoT starter kits now include BLE (TI’s SensorTag obviously comes to mind). Not only will BLE provide wireless sensing capabilities to your project, but you can certainly get more creative and start building solutions that use beacons to do indoor positioning…

Data Analytics

What if you could predict when an event is going to occur by analyzing your IoT data stream?

I am hoping that some of you will be using technology like Apache Spark to process IoT data and extract patterns that can be used to predict when e.g. the battery of a device is going to be depleted. You probably want to look at things like the Apache Spark Streaming API, the MLlib machine learning engine, and this code example for doing Apache Spark Streaming with MQTT.
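As a rough sketch of the kind of pipeline meant here (not an official challenge example; the broker URL, the topic and the Spark 1.x spark-streaming-mqtt module are assumptions), a streaming job consuming MQTT data could start like this:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.mqtt.MQTTUtils;

public class MqttStreamingSketch {
    public static void main(String[] args) throws Exception {
        SparkConf conf = new SparkConf().setAppName("iot-mqtt-demo").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Subscribe to an MQTT topic carrying device telemetry (broker URL and topic are made up)
        JavaReceiverInputDStream<String> telemetry =
                MQTTUtils.createStream(jssc, "tcp://iot.eclipse.org:1883", "devices/+/battery");

        // Count messages per 10 second batch as a stand-in for real analytics
        // (this is where MLlib-based prediction of e.g. battery depletion would plug in)
        telemetry.count().print();

        jssc.start();
        jssc.awaitTermination();
    }
}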

Enter now!

You have 10 days left to submit your application, so don’t wait any longer and let us know what you will be building.  If your proposal is particularly cool, you will be awarded a $150 gift card that you will be able to use to buy the hardware parts needed for building your project.
You can find more details on the challenge on the dedicated web page.

Thanks to our sponsors bitreactive, Eurotech, IS2T, Red Hat and Zolertia for supporting the challenge by providing prizes for the winners, as well as access to special offers for all the challengers.


by Benjamin Cabé at November 13, 2015 03:20 PM

OSGi and Java 9 Modules Working Together

November 13, 2015 12:00 PM

The best thing about the Java Platform Module System – commonly known as Project Jigsaw, or JSR 376 – is that it has been used to break apart the monolithic Java runtime into smaller modules. This makes Java a more attractive implementation technology in many scenarios where it has historically been weak. For example, modern desktop or mobile applications can’t rely on having a pre-installed copy of Java on the user’s device, so you need to ship Java as part of your application. IoT is also driving developers to seek smaller installed size and memory footprint.

Unfortunately, Jigsaw has some shortcomings as a module system for applications. I don’t want this blog post to be a critique of Jigsaw, but suffice to say there are many developers who will prefer to use OSGi for application modularity.

Still, it would be very useful if we could run a modular OSGi application on a modular Java runtime. This would allow us to benefit from a minimal Java install, and at the same time use the full power of OSGi. Also it would help to hide much of Jigsaw’s complexity from application developers.

I have just spent a day hacking together a demo that proves you can do this.

I must emphasise that this is a throw-away prototype. You should absolutely not start building on it. The OSGi Core Platform Expert Group (CPEG) will be working to specify official support for Java 9 features… I just wanted to show that a solution can exist, and perhaps feed into their discussions.

NB: The following description may get confusing because of the overlapping concepts used by Jigsaw and OSGi, so I have to be very careful with terms. Whenever I say “module” I always mean a Jigsaw/Java 9 module, and when I say “bundle” I’m always talking about OSGi.

Use Case: OSGi Bundles in a Known Platform Configuration

Suppose we have a Java runtime containing a known set of modules. We would like to start an OSGi Framework within the context of a module, such that the bundles in that framework can import packages from the platform modules.

As background, OSGi has long used the concept of an Execution Environment (EE), which models the Java platform on which the framework is running. These have names such as JavaSE-1.7. When an OSGi framework starts up, it detects what kind of platform it is running on and declares the EE as a capability that bundles can depend on.

If you are unfamiliar with the term “capability” in this context, it is basically how all dependencies work in OSGi. Any bundle can declare a capability, which has a namespace and a set of attributes. Then any other bundle can require a capability, optionally with a filter expression. The OSGi Framework ensures that for every requirement, there exists a bundle providing a matching capability, otherwise the requirement will not resolve. OSGi additionally defines the meaning of some special capability namespaces. One of these is osgi.ee, which is declared by the OSGi system bundle to indicate the Java platform version.

Why can’t we work out a bundle’s platform dependency based on its imported packages? That’s not possible for two reasons. First, the packages from the Java platform are not versioned. Second, a large subset of packages – namely those starting with java. (e.g. java.net, java.lang etc) cannot be imported; they have to be loaded from the boot class loader. So bundles have to declare a requirement on the EE in order to ensure they can safely call methods like Files.readAllBytes (added in Java 7) or String.isEmpty (added in Java 6).
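
For example, a tool such as bnd typically turns a bundle’s dependency on Java 7 into a requirement on the osgi.ee capability; the exact header below is illustrative:

Require-Capability: osgi.ee; filter:="(&(osgi.ee=JavaSE)(version=1.7))"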

An EE is basically a monolith: Java 7 is Java 7 and there are no other flavours. Things got a bit more complicated in Java 8 with the introduction of Compact Profiles… but these are basically still monolithic platforms. With just three profiles defined in the Java 8 specification, it was no problem for OSGi to define an EE for each profile. However in Java 9 there is a very large set of potential module combinations – too many to model with EEs.

In my prototype I posit a capability namespace of jmodule to describe the Java modules available in the platform. At runtime, the launcher inspects the set of available modules and uses them to declare the jmodule capability. The relevant code looks like this:

// Calculate the system capabilities from the platform modules
List<String> systemCapabilities = new ArrayList<>();
for (Module module : Main.class.getModule().getLayer().modules()) {
    String cap = String.format("jmodule;jmodule=%s;version:Version=1.9", module.getName());
    systemCapabilities.add(cap);
}

// Configure the framework
// (assumption: the capabilities reach the framework via the standard
// "extra" system capabilities property)
Map<String,String> configMap = new HashMap<>();
configMap.put(Constants.FRAMEWORK_SYSTEMCAPABILITIES_EXTRA,
        String.join(",", systemCapabilities));
Framework framework = frameworkFactory.newFramework(configMap);

An OSGi bundle can now declare a requirement for a Jigsaw module as follows:

Require-Capability: jmodule; filter:="(jmodule=java.xml)"

… and the OSGi Framework will refuse to resolve the above bundle if the required module (in this case, java.xml) is not present in the platform. This is valuable because it protects against NoClassDefFoundErrors and ClassNotFoundExceptions occurring at some unknown time during the life of the bundle. OSGi provides the assurance that the bundle will actually work on the platform that it finds itself on.

But how do we know that our bundle depends on the java.xml module? At the moment we have to use the jdeps tool from the JDK:

$ jdeps -module org.example.xml.jar
org.example.xml.jar -> java.base
org.example.xml.jar -> java.xml
org.example.xml.jar -> not found
   org.example.xml (org.example.xml.jar)
      -> java.io                                            java.base
      -> java.lang                                          java.base
      -> javax.xml.parsers                                  java.xml

… which tells us that our bundle depends on java.base and java.xml. The java.base module is implicit so we can ignore it, but we do need to add java.xml to the requirements.

This process is manual and cumbersome right now, but in the future bnd should be able to generate the requirement automatically.

We’re not done yet though, because the Java platform provides a whole bunch of packages that are not in the java.* namespace. For example: javax.xml, javax.sql.rowset, org.w3c.dom, org.omg.CORBA etc. We don’t need to do anything different in our OSGi bundles for these, because bnd will generate imports as normal. However we need the OSGi launcher to work out which packages are available from the platform modules. It can do this again by iterating over the modules using reflection:

// NB this sample ignores package versions for now
private static List<String> calculateExports(Module module) {
    String[] packageNames = module.getPackages();
    List<String> exports = new ArrayList<>(packageNames.length);
    for (String packageName : packageNames) {
        // ignore java.* packages
        if (packageName.startsWith("java."))

        if (module.isExported(packageName))
    return exports;
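
The aggregated export list then has to reach the framework somehow. A minimal sketch, assuming the launcher reuses the configMap shown earlier and the standard org.osgi.framework.system.packages.extra property:

// Assumption: expose the platform's exported packages to bundles by appending
// them to the system bundle exports via the standard "extra" packages property.
List<String> systemPackages = new ArrayList<>();
for (Module module : Main.class.getModule().getLayer().modules()) {
    systemPackages.addAll(calculateExports(module));
}
configMap.put(Constants.FRAMEWORK_SYSTEMPACKAGES_EXTRA, String.join(",", systemPackages));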

This is actually a massive boon to the OSGi Framework implementers. In previous Java releases, the Framework had to maintain a list of all the known packages provided by the platform; there was no standard API for enumerating classes or packages on the classpath (even if there were, it would reveal non-standard packages such as sun.misc, com.sun.* and so on). Take a look at the properties file that Felix maintains for this purpose… it’s larger than you might expect.

Implementation Issues

In the demo I have copied the source code for Apache Felix into my module. I did this to give myself freedom to hack the implementation, but so far have hardly needed to — more on that shortly.

Most of the code is in nbartlett.osgi_jigsaw.Main, and it’s mostly just a vanilla OSGi launcher. The main differences are the jmodule capabilities and system bundle exports as already described. The module-info.java looks like this:

module nbartlett.jigsaw_osgi {	
	exports org.osgi.framework;
	exports org.osgi.framework.dto;
	exports org.osgi.framework.hooks.bundle;
	exports org.osgi.framework.hooks.resolver;
	exports org.osgi.framework.hooks.service;
	exports org.osgi.framework.hooks.weaving;
	exports org.osgi.framework.launch;
	exports org.osgi.framework.namespace;
	exports org.osgi.framework.startlevel;
	exports org.osgi.framework.startlevel.dto;
	exports org.osgi.framework.wiring;
	exports org.osgi.framework.wiring.dto;
	exports org.osgi.resource;
	exports org.osgi.resource.dto;
	exports org.osgi.service.packageadmin;
	exports org.osgi.service.startlevel;
	exports org.osgi.service.url;
	exports org.osgi.service.resolver;
	exports org.osgi.util.tracker;
	exports org.osgi.dto;
}

All of the packages of the OSGi Framework have to be exported by the module wrapping it. This is necessary because the bundles actually load into a different module – the “unnamed” module – and so we need Jigsaw to permit access to the OSGi API packages. If we fail to do this, the following error occurs:

java.lang.IllegalAccessError: superinterface check failed: class org.example.Activator
(in module: Unnamed Module) cannot access class org.osgi.framework.BundleActivator (in
module: nbartlett.jigsaw_osgi), org.osgi.framework is not exported to Unnamed Module

After fixing this, I encountered the following error when the OSGi Framework tried to instantiate the BundleActivator from any bundle:

java.lang.IllegalAccessException: class org.apache.felix.framework.Felix (in module
nbartlett.jigsaw_osgi) cannot access class org.example.Activator because module
nbartlett.jigsaw_osgi does not read <unnamed module @5cee5251>

This one was quite a puzzle! It turns out that if you create an instance of ClassLoader and use it to load classes dynamically, those classes will always belong to the unnamed module. In order to load classes into a real module you have to use ModuleClassLoader. However this class is final, so an OSGi framework cannot override loadClass to implement the OSGi loading rules.

Incidentally, the upshot is that any code that looks like the following – and this is a very common code pattern – will fail from inside a module other than the unnamed module:

URLClassLoader loader = new URLClassLoader(urls, this.getClass().getClassLoader());
Class<?> clazz = loader.loadClass("blah.Blah");
clazz.newInstance(); // <-- this line fails from within a named module

It fails because blah.Blah belongs to the unnamed module, and our module doesn’t require unnamed. You can’t write requires unnamed in the module-info.java, but you can dynamically add the requirement as follows:
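
A sketch of that call, assuming the Java 9 Module.addReads API; the variable names here are hypothetical:

// Hypothetical: give the framework's module a read edge to the unnamed module
// that hosts the dynamically loaded bundle classes.
felixModule.addReads(bundleClassLoader.getUnnamedModule());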


For now, we have to accept that OSGi bundles will load into the unnamed module. It’s an open question whether we can map them into the OSGi module, or even a module for each bundle. By adding the above line to Felix (BundleWiringImpl line 731), the Framework is able to load the bundle activator, and OSGi starts up:

g! lb
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (0.0.0)
    1|Active     |    1|Apache Felix Gogo Command (0.14.0)
    2|Active     |    1|Apache Felix Gogo Runtime (0.12.1)
    3|Active     |    1|Apache Felix Gogo Shell (0.10.0)
    4|Active     |    1|org.example.xml (

In my demo I installed and started the three bundles of Felix Gogo, which gives me an interactive shell, and a fourth bundle that contains some XML parsing code. This code forces a dependency on the java.xml module and the javax.xml.parsers package from the platform, both of which are satisfied by the system bundle.

Use Case: Configuring the Platform for a Set of Bundles

Above we had a known platform and wanted OSGi to validate the bundles we wanted to run on it. Flipping this around, what if we have a set of bundles – our application – and want to work out the minimal platform that will run it?

You can already do this – kind of – with existing tools. The jdeps tool can print the module dependencies of each of your OSGi bundles, so the modules you need in your platform are just the aggregate of those jdeps outputs. The OSGi framework module I defined above doesn’t have any dependencies aside from java.base.
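
As a rough sketch (the bundles/ directory is hypothetical), the aggregate could be collected with a small shell loop over jdeps output like that shown above:

$ for f in bundles/*.jar; do jdeps -module "$f"; done \
    | awk '/\.jar -> / && $3 != "not" { print $3 }' | sort -u
java.base
java.xml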

If you build your bundles using the future version of bnd that generates jmodule requirements for you, then you don’t need to run jdeps. It will be very simple to generate the aggregate dependencies just by looking at the bundle manifests. We can create a tool that builds an application artifact, including the minimal Java runtime modules plus OSGi bundles, as part of bnd/Bndtools. We already have an exporter for “standalone” JAR files; this would just be an extension of the same principle. But that’s for the future.

Open Questions

Here are some of the issues I haven’t even touched on, that will need a good answer before OSGi and Java 9 Module interop is a reality:

  1. Versions! Jigsaw chose to ignore versions, so they will have to be layered on top somehow. This sounds like a recipe for lots of competing and incompatible schemes… the Maven way, the Gradle way, the OSGi way, etc. Should OSGi infer versions for the exports from the platform? If so, where should those come from?
  2. Bundle-Module Mapping. The bundle classes load into the unnamed module, meaning that we have exactly as much protection and access control between bundles as OSGi has always had – no more, no less – and that’s fine. However, by using ModuleClassLoader we may be able to map each bundle to a Jigsaw module and benefit from increased isolation. I didn’t have time to figure out whether this would be possible.
  3. Dynamics. With this prototype we can install, update and uninstall OSGi bundles dynamically. But if a newly installed bundle has requirements that are not satisfied by the current set of platform modules, can we install the required module into the platform dynamically? It looks like probably not, that’s just not within the scope of Jigsaw.
  4. Uses Constraints. This is a tricky problem to describe and solve… see my post on the subject of uses constraints. Jigsaw module descriptors do not provide any information on the interdependencies between packages (e.g. the fact that javax.sql uses javax.transaction.xa, so if you import both of these packages then you really have to import them from the same source). Without this info, it will not be safe for OSGi bundles to offer exports that substitute for exports from the platform.
  5. Services. Jigsaw has a service model, though quite different from OSGi services. It would probably be useful for services from the platform to be bridged into the OSGi service registry; a rough sketch of such a bridge follows this list.
  6. Bidirectional Dependencies. Can we ever build a bidirectional link between OSGi and Jigsaw, so that bundles and modules can sit side-by-side in cooperative harmony? I don’t particularly care about this use case – I believe OSGi is the better application module system so it makes sense for the dependencies to flow downwards but not upwards. However, somebody might want to do this I suppose. It will be tricky though because of the dynamics in OSGi and lack of versions in Jigsaw.
  7. Evolving EEs. The CPEG will need to decide what to do with EEs. Maybe they will be supplemented with something like the jmodule capability, or perhaps something completely different. We probably still need an EE to define the version and content of java.base.
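
For point 5, a rough sketch of what such a bridge might look like, assuming we simply re-register every ServiceLoader provider of a known service type (the method name and where it would be called from are hypothetical):

// Hypothetical bridge: publish all java.util.ServiceLoader providers of a given
// service type from the platform into the OSGi service registry.
static <S> void bridgePlatformServices(BundleContext context, Class<S> serviceType) {
    for (S provider : ServiceLoader.load(serviceType)) {
        context.registerService(serviceType, provider, null);
    }
}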

Getting It

The prototype code is on GitHub at njbartlett/osgi_jigsaw.

To build the code, execute ./build.sh from the nbartlett-jigsaw-osgi directory. To run, execute ./run.sh.

Thanks to my employer Paremus for funding this research. We want to help our customers to build and deploy manageable, modular software, whatever the implementation technology.

Thanks to Sander Mak’s blog post for helping me to set up a Jigsaw workspace (though I can’t let this pass without commenting on his last paragraph… “vastly less complex”? Seriously dude, are you feeling okay?)

November 13, 2015 12:00 PM