The Containerization of Dev Environments

by Doug Schaefer at May 24, 2017 03:38 PM

As a veteran tools architect working for a veteran embedded systems platform vendor, I can say we’re getting pretty good at building cross-development environments. You get all the speed and integration with other native tools that today’s rich host platforms can provide. Combine that with a good installer and software maintenance tool, and it’s super simple for users to get set up and keep their installs updated with the latest fixes and patches. It’s worked for many years.

So of course, I was a bit taken aback by recent talk about delivering development environments in containers and distributing them to users for use with cloud IDEs. The claim is that the installation process is simpler. But I have to ask: while yes, it is simpler for the provider, is it also simpler for the user?

I work with embedded software engineers. Their systems are complex, and the last thing they want to do is fight with their tools. That doesn’t pay the bills. And that’s why we work so hard to make that management simpler. If you don’t have the experience of creating cross-development environments, it is certainly appealing to only have to worry about one host platform, 64-bit Linux, as you do with Docker, which, BTW, just happens to be the easiest platform to support, especially relative to Windows.

But do I really have to teach my embedded developer customers about Docker? How to clean up images as updates are delivered? How to start and stop containers, and in the case of Windows and Mac, the VMs that run those containers? And that’s not to mention cloud environments, which are a whole new level requiring server management, especially as the developer community scales. Embedded development tools require a lot of horsepower. How many users can a server actually support, and how do customers handle the burstiness of demand?

So, while I get it, and as vendors take this path and users get used to it, I do need to be prepared to support such environments. I’ll just feel a bit sad that we are giving up on providing our users the great experiences that native cross-development tools provide.


Fuse and BRMS Tooling Maintenance Release for Neon.3

by pleacu at May 24, 2017 02:03 PM

Try our complete Eclipse Neon-capable, Devstudio 10.4.0-compatible integration tooling.


JBoss Tools Integration Stack 4.4.3.Final / JBoss Developer Studio Integration Stack 10.3.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.4 and Developer Studio 10.

What’s new for this release?

This release syncs up with Devstudio 10.4.0, JBoss Tools 4.4.4 and Eclipse Neon.3. It is also a maintenance release for Fuse Tooling, SwitchYard and the BRMS tooling.

Released Tooling Highlights

JBoss Fuse Development Highlights

Fuse Tooling Highlights

See the Fuse Tooling 9.2.0.Final Resolved Issues Section of the Integration Stack 10.3.0.GA release notes.

SwitchYard Highlights

See the SwitchYard 2.3.1.Final Resolved Issues Section of the Integration Stack 10.3.0.GA release notes.

JBoss Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.3.3.Final Known Issues Section of the Integration Stack 10.3.0.GA release notes.

Drools/jBPM6 Known Issues

Data Virtualization Highlights

Teiid Designer Known Issues

See the Teiid Designer 11.0.1.Final Resolved Issues Section of the Integration Stack 10.1.0.GA release notes.

What’s an Integration Stack?

Red Hat JBoss Developer Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by JBoss Developer Studio, with plug-ins specifically for use when developing for other Red Hat JBoss products. It’s where the Fuse Tooling, DataVirt Tooling and BRMS tooling are aggregated. The following frameworks are supported:

JBoss Fuse Development

  • Fuse Tooling - JBoss Fuse Development provides tooling for Red Hat JBoss Fuse. It features the latest versions of the Fuse Data Transformation tooling, Fuse Integration Services support, SwitchYard and access to the Fuse SAP Tool Suite.

  • SwitchYard - A lightweight service delivery framework providing full lifecycle support for developing, deploying, and managing service-oriented applications.

JBoss Business Process and Rules Development

JBoss Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat JBoss BRMS and Red Hat JBoss BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic Integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing, including KIE.

  • jBPM6 - A flexible Business Process Management (BPM) suite.

JBoss Data Virtualization Development

JBoss Data Virtualization Development plug-ins provide a graphical interface to manage various aspects of Red Hat JBoss Data Virtualization instances, including the ability to design virtual databases and interact with associated governance repositories.

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services without programming using the Teiid runtime framework.

JBoss Integration and SOA Development

JBoss Integration and SOA Development plug-ins provide tooling for developing, configuring and deploying BRMS, SwitchYard and Fuse applications to Red Hat JBoss Fuse and Fuse Fabric containers, Apache ServiceMix, and Apache Karaf instances.

  • All of the Business Process and Rules Development plugins, plus…

  • Fuse Apache Camel Tooling - A graphical tool for integrating software components that works with Apache ServiceMix, Apache ActiveMQ, Apache Camel and the FuseSource distributions.

  • SwitchYard - A lightweight service delivery framework providing full lifecycle support for developing, deploying, and managing service-oriented applications.

The JBoss Tools website features tab

Don’t miss the Features tab for up-to-date information on your favorite Integration Stack components.


The easiest way to install the Integration Stack components is through the stand-alone installer. If you’re interested specifically in Fuse, we have the all-in-one installer: JBoss Fuse Tooling + JBoss Fuse/Karaf runtime.

For a complete set of Integration Stack installation instructions, see the Integration Stack Installation Instructions.

Give it a try!

Paul Leacu.


JBoss Tools and Red Hat Developer Studio Maintenance Release for Eclipse Neon.3

by jeffmaury at May 24, 2017 12:22 PM

JBoss Tools 4.4.4 and Red Hat JBoss Developer Studio 10.4 for Eclipse Neon.3 are here waiting for you. Check it out!



JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our Red Hat Developers site and run it like this:

java -jar devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.6.3 (Neon.3), but we recommend using the latest Eclipse 4.6.3 Neon JEE bundle, since it comes with most of the dependencies preinstalled.

Once you have installed Eclipse, you can find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

What is new?

Our main focus for this release was improvements for container-based development and bug fixing.

Improved OpenShift 3 and Docker Tools

We continue to work on providing a better experience for container-based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here.

OpenShift Server Adapter enhanced flexibility

The OpenShift server adapter is a great tool that allows developers to synchronize local changes in the Eclipse workspace with running pods in the OpenShift cluster. It also allows you to remote-debug those pods when the server adapter is launched in Debug mode. The supported stacks are Java and NodeJS.

As pods are ephemeral OpenShift resources, the server adapter definition was based on an OpenShift service resource, with the pods dynamically computed from the service selector.

This had a major drawback: the feature could be used only for pods that are part of a service. That may be logical for web-based applications, since a route (and thus a service) is required in order to access the application, but it leaves out other kinds of deployments.

So, it is now possible to create a server adapter from the following OpenShift resources:

  • service (as before)

  • deployment config

  • replication controller

  • pod

If a server adapter is created from a pod, it will be based on the associated OpenShift resource, chosen in the following order of preference:

  • service

  • deployment config

  • replication controller
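As a rough illustration (this is not the actual JBoss Tools code, just a sketch of the documented preference order), the selection of the backing resource for a pod could look like this:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Illustrative sketch only: given the kinds of OpenShift resources
// associated with a pod, choose the resource the server adapter should be
// based on, in the documented order of preference.
public class AdapterResourceChooser {

    private static final List<String> PREFERENCE =
            Arrays.asList("Service", "DeploymentConfig", "ReplicationController");

    // Returns the preferred kind present among the pod's associated resources.
    static Optional<String> choose(List<String> associatedKinds) {
        return PREFERENCE.stream()
                .filter(associatedKinds::contains)
                .findFirst();
    }

    public static void main(String[] args) {
        // A pod owned by a replication controller that is driven by a
        // deployment config, with no service selecting it:
        System.out.println(
                choose(Arrays.asList("ReplicationController", "DeploymentConfig")).get());
        // prints DeploymentConfig
    }
}
```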

The OpenShift Explorer, which used to display only OpenShift resources linked to a service, has been enhanced as well: it now also displays resources linked to a deployment config or replication controller. Here is an example of a deployment with no service, i.e. a deployment config:

server adapter enhanced

So, as an OpenShift server adapter can be created from different kinds of resources, the kind of the associated resource is displayed when creating the OpenShift server adapter:

server adapter enhanced1

Once created, the kind of OpenShift resource adapter is also displayed in the Servers view:

server adapter enhanced2

This information is also available from the server editor:

server adapter enhanced3

Security vulnerability fixed in certificate validation database

When you use the OpenShift tooling to connect to an OpenShift API server, the certificate of the OpenShift API server is first validated. If the issuer authority is a known one, the connection is established. If the issuer is unknown, a validation dialog is shown to the user with the details of the OpenShift API server certificate as well as the details of the issuer authority. If the user accepts it, the connection is established. There is also an option to store the certificate in a database, so that the next time a connection is attempted to the same OpenShift API server, the certificate will be considered valid and no validation dialog will be shown again.

certificate validation dialog

We found a security vulnerability in how the certificate was stored: only some of its attributes were persisted, so a different certificate could be wrongly treated as already validated.

We had to change the format of the certificate database. As the certificates in the previous database were not stored in their entirety, there was no way to provide a migration path. As a result, the certificate database will be empty after the upgrade: if you had previously accepted some certificates, you will need to accept them again.
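One simple way to avoid this class of bug is to key the trust database on a digest of the certificate's complete DER encoding rather than on a subset of its attributes. The sketch below is illustrative only, not the actual JBoss Tools fix:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Illustrative sketch: key the trust database on a digest of the
// certificate's complete DER encoding, so two different certificates can
// never be confused with each other.
public class CertDbKey {

    // Computes a stable key from the full encoded certificate bytes
    // (e.g. the result of X509Certificate.getEncoded()).
    static String keyFor(byte[] derEncoded) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return Base64.getEncoder().encodeToString(md.digest(derEncoded));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }

    public static void main(String[] args) {
        byte[] certA = {1, 2, 3}; // stand-ins for two real DER encodings
        byte[] certB = {1, 2, 4}; // that differ in a single attribute
        System.out.println(keyFor(certA).equals(keyFor(certB))); // prints false
    }
}
```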

CDK 3 Server Adapter

The CDK 3 server adapter has been around for quite a long time. It used to be Tech Preview, as CDK 3 was not officially released; it is now officially available. While the server adapter itself has limited functionality, it is able to start and stop the CDK virtual machine via its minishift binary. Simply hit Ctrl+3 (Cmd+3 on OSX) and type CDK, which will bring up a command to set up and/or launch the CDK server adapter. You should see the old CDK 2 server adapter along with the new CDK 3 one (labeled Red Hat Container Development Kit 3).

cdk3 server adapter5

All you have to do is set the credentials for your Red Hat account, the location of the CDK’s minishift binary, and the type of virtualization hypervisor.

cdk3 server adapter1

Once you’re finished, a new CDK Server adapter will then be created and visible in the Servers view.

cdk3 server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new OpenShift application and begin developing their AwesomeApp in a highly replicable environment.

cdk3 server adapter3
cdk3 server adapter4

OpenShift Container Platform 3.5 support

OpenShift Container Platform (OCP) 3.5 has been announced by Red Hat. JBossTools 4.4.4.Final has been validated against OCP 3.5.

OpenShift server adapter extensibility

The OpenShift server adapter has long supported EAP/WildFly and NodeJS based deployments. Much of what it does, synchronizing local workspace changes to remote deployments on OpenShift, has been standardized through image metadata (labels), but each runtime has its own specifics. As an example, WildFly/EAP deployments require that a redeploy trigger be sent after the files have been synchronized.

In order to reduce the technical debt and allow support for other runtimes (there are lots of them in the microservice world), we have refactored the OpenShift server adapter so that the runtime-specific parts are now isolated, making it easy and safe to add support for new runtimes.

For a full in-depth description, see the following wiki page.

Pipeline builds support

Pipeline-based builds are now supported by the OpenShift tooling. When creating an application from a template, if one of the builds is pipeline-based, you can view the details of the pipeline:

pipeline wizard

When your application is deployed, you can see the details of the build configuration for the pipeline-based builds:

pipeline details

More to come as we are improving the pipeline support in the OpenShift tooling.

Update of Docker Client

The underlying com.spotify.docker.client library used to access the Docker daemon has been upgraded to 3.6.8.

Run Image Network Support

A new page has been added to the Docker Run Image wizard and Docker Run Image launch configuration that allows the end user to specify the network mode to use. A user can choose from Default, Bridge, Host, None, Container, or Other. If Container is selected, the user must choose an active container whose network the new container will share. If Other is selected, a named network can be specified.
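As a hypothetical sketch (not the actual wizard code), the mapping from the wizard's choices to the network-mode value Docker ultimately receives, i.e. the equivalent of `docker run --net=<value>`, might look like:

```java
// Hypothetical sketch: map each choice offered on the new network page onto
// the network-mode value handed to the Docker daemon.
public class NetworkModeMapping {

    static String toDockerValue(String choice, String detail) {
        switch (choice) {
            case "Default":   return "bridge";              // the daemon's default on Linux
            case "Bridge":    return "bridge";
            case "Host":      return "host";
            case "None":      return "none";
            case "Container": return "container:" + detail; // share a running container's stack
            case "Other":     return detail;                // a user-named network
            default:
                throw new IllegalArgumentException("unknown choice: " + choice);
        }
    }

    public static void main(String[] args) {
        System.out.println(toDockerValue("Container", "mydb"));   // prints container:mydb
        System.out.println(toDockerValue("Other", "my-overlay")); // prints my-overlay
    }
}
```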

Network Mode
Network Mode Configuration

Refresh Connection

Users can now refresh the entire connection from the Docker Explorer view. Refresh can be performed in two ways:

  1. using the right-click context menu from the Connection

  2. using the Refresh menu button when the Connection is selected

Refresh Connection

Server Tools

API Change in JMX UI’s New Connection Wizard

While hardly something most users will care about, extenders may need to be aware that the API for adding connection types to the 'New JMX Connection' wizard in the 'JMX Navigator' has changed. Specifically, the extension point has been changed: where it previously took a child element called 'wizardPage', it now requires a 'wizardFragment'.

A 'wizardFragment' is part of the 'TaskWizard' framework first used in WTP’s ServerTools, which has, for many years, been used throughout JBossTools. This framework allows wizard workflows where the set of pages to be displayed can change based on what selections are made on previous pages.

This change was made as a direct result of a bug caused by the addition of the Jolokia connection type in which some standard workflows could no longer be completed.

This change only affects adopters and extenders, and should cause no noticeable change for the user, other than that the bug below has been fixed.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.


The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.12.Final and Hibernate Tools version 5.0.5.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.4.Final and Hibernate Tools version 5.1.3.Final.

The Hibernate 5.2 runtime provider now incorporates Hibernate Core version 5.2.8.Final and Hibernate Tools version 5.2.2.Final.

Forge Tools

Forge Runtime updated to 3.6.1.Final

The included Forge runtime is now 3.6.1.Final. Read the official announcement here.


What is next?

With JBoss Tools 4.4.4 and Developer Studio 10.4 out, we are already working on the next release for Eclipse Oxygen.


Jeff Maury


Registration opens now! – Eclipse DemoCamp Oxygen 2017

by Maximilian Koegel and Jonas Helming at May 24, 2017 12:00 PM

Eclipse DemoCamp Oxygen 2017 on June 28th 2017 – Registration opens now!

We are pleased to invite you to participate in the Eclipse DemoCamp Oxygen 2017. The DemoCamp Munich is one of the biggest DemoCamps worldwide and therefore an excellent opportunity to showcase all the cool, new and interesting technology being built by the Eclipse community. This event is open to Eclipse enthusiasts who want to show demos of what they are doing with Eclipse.

Registration open now!
Please click here for detailed information and the registration.

Seating is limited, so please register soon if you plan to attend.
We look forward to welcoming you to the Eclipse DemoCamp Oxygen 2017!

The last DemoCamps have always been sold out. Due to legal reasons, we have a fixed limit for the room and cannot overbook. However, every year some places unfortunately remain empty. Please unregister if you find you are unable to attend so we can invite participants from the waiting list. If you are not attending and have not unregistered, we kindly ask for a donation of 10 Euros to “Friends of Eclipse”. Thank you in advance for your understanding!

If the event is sold out, you will be placed on the waiting list. You will be informed if a place becomes available, and you will need to confirm your attendance after we contact you.
If you are interested in giving a talk, please send your presentation proposal to for consideration. There are always more proposals than slots in the agenda, so we will make a selection from the submissions.

We look forward to meeting you at the Eclipse DemoCamp Oxygen 2017!




Eclipse Newsletter - Language Server Protocol 101

May 24, 2017 09:35 AM

Everything you need to know about the Language Server Protocol (aka LSP) is in this month's newsletter!


What is it like to work in Open-source?

by maggierobb at May 24, 2017 09:19 AM

Open-source software (OSS) is computer software with its source code made available with a license in which the copyright holder provides the rights to study, change, and distribute the software to anyone and for any purpose. – Wikipedia


I am Yannick Mayeur, a French computer science student currently gaining work experience at Kichwa Coders in the UK, and this is how I feel about working with Open-source.

Why Open-source ?

Let me tell you a story. A company asks someone on its software team to build some software for a certain task. It takes him a lot of time, but he manages to do it. He is the only one working on the project, so there are no comments in the code nor any documentation to help maintain it. He later leaves the company, and the software slowly becomes useless as nobody else knows how to use it.

If this company had created an Open-source project instead, this problem wouldn’t have occurred.

Help spread Open-source – or ensure a job for life by using this guerrilla guide on how to write unmaintainable code. But seriously don’t. 

Get started

Getting started with Open-source is fairly easy because a lot of people write guides or blog posts to help people tackle the tricky stuff like git. All you need is a computer and motivation!

My feeling on the subject

I am now into my fifth week at Kichwa Coders, during which time I have worked on two different projects for the Eclipse Foundation:

  • Eclipse January, which is a set of libraries to help process large amounts of data. It is mainly used by Diamond Light Source scientists to process the data they get from their particle accelerator.
  • Eclipse CDT, which is an IDE for C and C++ used by a lot of programmers.

Knowing that I contributed to two big projects that a lot of people use every day makes me kinda proud. And the truth is, it wasn’t even that hard. With my not-that-big knowledge of how to work on big projects, I have contributed some changes that will have a lot of visibility, such as bug 515296 (for more details on the bug and on how Pierre and I solved it, you can read his blog post about it). If you are experiencing a problem, the chances are that other people within the community are also experiencing it, and their knowledge of it can help you solve it for everyone.


Knowing that other people can see exactly what code you wrote puts pressure on you – but in a good way. In my opinion it pushes you to give the best of yourself. But this community can also help you out when you are stuck with a problem. All this and more makes being part of an Open-source project a very satisfying experience.


Generate Traced Code with Xtext

by Miro Spönemann at May 24, 2017 08:54 AM

Xtext 2.12 will be released on May 26th. As described in its release notes, a main novelty is an API for tracing generated code.

Why Tracing?

Whenever you transpile code from one language to another, you need some kind of mapping that instructs the involved tools how to navigate from a piece of source code to the respective target code and back. In a debugging session, for example, developers can get quite frustrated if they have to step through the generated code and try to understand where the problem is in their original source. Allowing developers to debug directly in the source code saves time and frustration, so it’s definitely the way to go.

For transpiled JVM languages such as Xtend, the JSR 45 specification defines how to map byte code to the source. For languages that target JavaScript, such as TypeScript or CoffeeScript, source maps are generated by the compiler and then processed by the development tools of the web browser. The DSL framework Xtext offers an API to access tracings between source and target code for any language created with Xtext. Xbase languages such as Xtend make use of such tracings automatically, but for other languages the computation of tracing information had to be added manually to the code generator with Xtext 2.11 and earlier versions.

Tracing Xtend Templates

The examples shown in this post are available on GitHub. They are based on a simple Xtext language that describes classes with properties and operations.

The examples are implemented in Xtend, a language that is perfectly suited to writing code generators. Among other things, it features template expressions with smart whitespace handling and embedded conditionals and loops. Here’s an excerpt of the generator implementation of our example, where the target language is C:

The entry point of the new API is TracingSugar, which provides extension methods to generate traced text. The code above uses generateTracedFile to create a file and map its contents to model, the root of our AST. The generateHeader method is shown below. It defines another template, and the resulting text is mapped to the given ClassDeclaration using the @Traced active annotation.

The _name extension method in the code above is another part of the new API. Here it writes the name property of the ClassDeclaration into the output and maps it to the respective source location. This method is generated from the EMF model of the language using the @TracedAccessors annotation. Just pass the EMF model factory class as a parameter to the annotation, and it creates a tracing method for each structural feature (i.e. property or reference) of your language.

The Generator Tree

The new tracing API creates output text in two phases: first it creates a tree of generator nodes from the Xtend templates, then it transforms that tree into a character sequence with corresponding tracing information. The base interface of generator nodes is IGeneratorNode. There are predefined nodes for text segments, line breaks, indentation of a subtree, tracing a subtree, and applying templates.

The generator tree can be constructed via templates, or directly through methods provided by TracingSugar, or with a mixture of both. The direct creation of subtrees is very useful for generating statements and expressions, where lots of small text segments need to be concatenated. The following excerpt of our example code generator transforms calls to class properties from our source DSL into C code:

The parts of the TracingSugar API used in this code snippet are

  • trace to create a subtree traced to a source AST element,
  • append to add text to the subtree, and
  • appendNewLine to add line breaks.

The resulting C code may look like this:

Employing the Traces

Trace information is written into _trace files next to the generator output. For example, if you generate a file persons.c, you’ll get a corresponding .persons.c._trace in the same output directory. Xtext ships a viewer for these files, which is very useful to check the result of your tracing computation. In the screenshot below, we can see that the property reference bag is translated to the C code Bag* __local_0 = &this->bag;


The programmatic representation of such a trace file is the ITrace interface. An instance of ITrace points either in the source-to-target or the target-to-source direction, depending on how it was obtained. In order to get such a trace, inject ITraceForURIProvider and call getTraceToTarget (for a source-to-target trace) or getTraceToSource (for a target-to-source trace).

Xtext provides some generic UI for traced generated code: If you right-click some element of your source file and select “Open Generated File”, you’ll be directed to the exact location to which that element has been traced. In the same way, you can right-click somewhere in the generated code and select “Open Source File” to navigate to the respective source location. This behavior is shown in the animation below.


Enhancing Existing Code Generators

In many cases it is not necessary to rewrite a code generator from scratch in order to enhance it with tracing information. The new API is designed so that it can be woven into existing Xtend code with comparatively little effort. The following hints, summarizing what we have learned in the previous sections of this post, might help you with such a task.

  • Use generateTracedFile to create a traced text file. There are two overloaded variants of that method: one that accepts a template and traces it to a root AST element, and one that accepts a generator node. If you are already using Xtend templates, just pass them to this method.
  • Add the @Traced annotation to methods that transform a whole AST element into text. In some cases it might be useful to extract parts of a template into local methods so this annotation can be applied.
  • Use the @TracedAccessors annotation to generate extension methods for tracing single properties and references. For example, if you have an expression such as property.name in your template, you could replace that with property._name so that the expression is properly traced.
  • Use the TracingSugar methods to construct a generator subtree out of fine-grained source elements such as expressions. If you have previously used other string concatenation tools like StringBuilder or StringConcatenation, you can replace them with CompositeGeneratorNode (see e.g. generateExpression in our example code).

It’s Time to Trace!

With the new Xtext version 2.12, generating traced code has become a lot simpler. If such traces are relevant in any way for your languages, don’t hesitate to try the API described here! We also welcome any feedback, so please report problems on GitHub and meet us on Gitter to discuss things, or just to tell us how cool tracing is 🙂


It’s time to organise Eclipse Oxygen DemoCamps

May 23, 2017 08:35 AM

What is an Eclipse DemoCamp and why should I organise one?


It’s time to organise Eclipse Oxygen DemoCamps

by Antoine THOMAS at May 23, 2017 08:27 AM

The next major release, Eclipse Oxygen, is coming up on June 28, and that means the start of this year’s Eclipse DemoCamp season. If you or your colleagues are considering a DemoCamp for 2017, we would like to help!

What’s a DemoCamp?

You may be asking yourself what the heck a DemoCamp is and why you should care. Eclipse DemoCamps are typically one-day or even evening events organized by Eclipse community members all over the world. The organizers bring together a set of expert speakers and attendees from their local community. In other words, it’s a free event where you get to meet fellow Eclipsians and learn from each other in the form of demos/talks about Eclipse technology.

How do I get started?

This is the best part: wherever you are, you can organize an Eclipse DemoCamp! You choose the place, set the time, organize the venue (maybe a local pub or company office), provide a screen and projector, and arrange for refreshments.

To tell us that you are planning an Eclipse DemoCamp:

  • Send us an email on to ask about support, speaker ideas or possible goodies
  • Add it to the DemoCamp 2017 wiki page

To add it, simply create a page with the program and venue information. And if you use another service like Meetup, just add a link to it from the Eclipse wiki. We will be pleased to list it on

How does Eclipse Foundation help?

We, as the Eclipse Foundation, will contribute to the cost of food, beverages, and room rental, up to $300. We encourage organizers to find outside corporate sponsors to help organize their event. Sponsors usually contribute money, food, or the space. Please acknowledge your sponsors on the DemoCamp & Hackathon wiki page and at the event itself.

We will help you promote it through the Eclipse Foundation’s social media network and website. To read more about organizing an event, visit the page “Organise an Eclipse DemoCamp or Hackathon”.

Eclipse Foundation staff also tries to attend the DemoCamps. This is obviously not always possible, but who knows… we could be coming to yours!

In 2016, DemoCamps took place in 19 different cities, from 10 different countries: Austria, Canada, China, Germany, Guatemala, Hungary, India, Norway, Poland, and Switzerland! We need you to reach new places in 2017 and that place could be near you!

Looking forward to hearing from you 🙂


Presentation of the Vert.x-Swagger project

by phiz71 at May 22, 2017 12:00 AM

This post is an introduction to the Vert.x-Swagger project, and describes how to use the Swagger-Codegen plugin and the SwaggerRouter class.

Eclipse Vert.x & Swagger

Vert.x and Vert.x Web are very convenient for writing REST APIs, especially the Router, which is very useful for managing all the resources of an API.

But when I start a new API, I usually use the “design-first” approach, and Swagger is my best friend for defining what my API is supposed to do. And then comes the “boring” part of the job: converting the swagger file content into Java code. It’s always the same: resources, operations, models…

Fortunately, Swagger provides a codegen tool: Swagger-Codegen. With this tool, you can generate a server stub based on your swagger definition file. However, even though this generator supports many different languages and frameworks, Vert.x is missing.

This is where the Vert.x-Swagger project comes in.

The project

Vert.x-Swagger is a Maven project providing two modules.


It’s a Swagger-Codegen plugin, which adds to the generator the capability of generating a Java Vert.x web server.

The generated server mainly contains:

  • POJOs for definitions
  • one Verticle per tag
  • one MainVerticle, which manages the other APIVerticles and starts an HttpServer

The MainVerticle uses vertx-swagger-router.


The main class of this module is SwaggerRouter. It’s more or less a factory (and maybe I should rename the class) that can create a Router, using the Swagger definition file to configure all the routes. For each route, it extracts parameters from the request (query, path, header, body, form) and sends them on the event bus, using either the operationId as the address or a computed id (chosen via a parameter in the constructor).

Let’s see how it works

For this post, I will use a simplified Swagger file, but you can find a more complex example here, based on the petstore Swagger file.

Generating the server

First, choose your Swagger definition. Here’s a YAML file, but it could be a JSON file:
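The original post embeds the definition at this point. As a hypothetical stand-in (the title and the `/bottles` paths are assumptions inferred from the examples later in the post, not the author's actual file), a minimal Swagger 2.0 definition might look like:

```yaml
swagger: "2.0"
info:
  title: Wine Cellar API   # title assumed for illustration
  version: 1.0.0
paths:
  /bottles:
    get:
      operationId: getBottles   # the operationId can become the event bus address
      responses:
        "200":
          description: list of bottles
  /bottles/{bottle_id}:
    get:
      operationId: getBottle
      parameters:
        - name: bottle_id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: one bottle
```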

Then, download these libraries:

Finally, run this command:

java -cp /path/to/swagger-codegen-cli-2.2.2.jar:/path/to/vertx-swagger-codegen-1.0.0.jar io.swagger.codegen.SwaggerCodegen generate \
  -l java-vertx \
  -o path/to/destination/folder \
  -i path/to/swagger/definition \
  --group-id \

For more information about how Swagger-Codegen works, you can read this.

You should have something like that in your console:

[main] INFO io.swagger.parser.Swagger20Parser - reading from ./wineCellarSwagger.yaml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/swagger.json
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/pom.xml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/.swagger-codegen-ignore

And this in your destination folder:

Generated sources

What has been created?

As you can see in 1, the vertx-swagger-codegen plugin has created one POJO per definition in the Swagger file.

Example : the bottle definition

In 2a and 2b you can find:

  • an interface which contains a function per operation
  • a verticle which defines all operationIds and creates EventBus consumers

Example : the Bottles interface

Example : the Bottles verticle

… and now ?

At line 23, you can see this:

BottlesApi service = new BottlesApiImpl();
This line will not compile until the BottlesApiImpl class is created.

In all XXXAPIVerticles, you will find a variable called service. It has an XXXAPI type and is instantiated with an XXXAPIImpl constructor. This class does not exist yet, since it contains the business logic of your API.

And so you will have to create these implementations.

Fine, but what if I don’t want to build my API like this ?

Well, Vert.x is unopinionated but the way the vertx-swagger-codegen creates the server stub is not. So if you want to implement your API the way you want, while enjoying dynamic routing based on a swagger file, the vertx-swagger-router library can be used standalone.

Just import this jar into your project :

You will be able to create your Router like this :

FileSystem vertxFileSystem = vertx.fileSystem();
vertxFileSystem.readFile(YOUR_SWAGGER_FILE, readFile -> {
    if (readFile.succeeded()) {
        Swagger swagger = new SwaggerParser().parse(readFile.result().toString(Charset.forName("utf-8")));
        Router swaggerRouter = SwaggerRouter.swaggerRouter(Router.router(vertx), swagger, vertx.eventBus(), new OperationIdServiceIdResolver());
    } else {
        // something went wrong while reading the swagger file
    }
});
You can ignore the last parameter in SwaggerRouter.swaggerRouter(...). As a result, addresses will be computed instead of using operationId from the swagger file. For instance, GET /bottles/{bottle_id} will become GET_bottles_bottle-id
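The exact renaming rule lives inside vertx-swagger-router, but judging from that single example (slashes become underscores, braces are dropped, underscores in parameter names become hyphens), the computation looks roughly like this hypothetical helper:

```java
public class AddressSketch {
    // Hypothetical reconstruction of the computed-address scheme, inferred only
    // from the example above ("GET /bottles/{bottle_id}" -> "GET_bottles_bottle-id").
    static String computeAddress(String httpMethod, String path) {
        String normalized = path
                .replaceAll("[{}]", "")   // drop the curly braces around path parameters
                .replace('_', '-')        // underscores in parameter names become hyphens
                .replace('/', '_');       // path separators become underscores
        return httpMethod + normalized;   // the leading '/' supplies the separator after the verb
    }
}
```

Again, this is an inference from one example, not the library's documented behavior.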


Vert.x and Swagger are great tools for building and documenting an API, but using both in the same project can be painful. The Vert.x-Swagger project was made to save that time, letting developers focus on business code. It can be seen as an API framework over Vert.x.

You can also use the SwaggerRouter in your own project without using Swagger-Codegen.

In future releases, more information from the Swagger file will be used to configure the router, and other languages will certainly be supported.

Though Vert.x is polyglot, the Vert.x-Swagger project only supports Java for now. If you want to contribute to support more languages, you’re welcome :)

Thanks for reading.


N4JS Becomes an Eclipse Project

by Brian Smith ( at May 19, 2017 03:34 PM

We’re proud to announce that N4JS has been accepted as an Eclipse project and the final official steps are underway. Our team has been working very hard to wrap up the Initial Contribution and is excited to be part of Eclipse. The project will be hosted at, although this currently redirects to the project description while our pages are being created. In the meantime, N4JS is already open source - our GitHub project pages are located at which contains articles, documentation, the source for N4JS and more.

Some background information about us:
N4JS was developed by Enfore AG, founded in 2009 as NumberFour AG by Marco Boerries. Enfore’s goal is to build an open business platform for 200+ million small businesses and to provide those businesses with the tools and solutions they need to stay competitive in a connected world.

Initially, JavaScript was intended as the main language for third-party developers to contribute to our platform; it runs directly in the browser and it’s the language of the web! One major drawback is the absence of a static type system; this turned out to be an essential requirement for us. We wanted to ensure reliable development of our platform and our own applications, as well as making life easier for third-party contributors to the Enfore platform. That’s the reason why we developed N4JS, a general-purpose programming language based on ECMAScript 5 (commonly known as JavaScript). The language combines the dynamic aspects of JavaScript with the strengths of Java-like types to facilitate the development of flexible and reliable applications.

N4JS is constantly growing to support many new modern language features as they become available. Some of the features already supported are concepts introduced in ES6 including arrow functions, async/await, modules and much more. Our core team are always making steady improvements and our front end team make use of the language and IDE daily for their public-facing projects. For more information on how the N4JS language differs from other JavaScript variants introducing static typing, see our detailed FAQ.

Why Eclipse?
For us, software development is much more than simply writing code, which is why we believe in IDEs and Eclipse in particular. We were looking for developer tools which leverage features like live code validation, content assist (aka code completion), quick fixes, and a robust testing framework. Contributors to our platform can benefit from these resources for their own safe and intuitive application development.

We tried very hard to design N4JS so that Java developers feel at home when writing JavaScript without sacrificing JavaScript’s support for dynamic and functional features. Our vision is to provide an IDE for statically-typed JavaScript that feels just like JDT. This is why we strongly believe that N4JS could be quite interesting in particular for Eclipse (Java) developers. Aside from developers who are making use of N4JS, there are areas in the development of N4JS itself which would be of particular interest to committers versed in type theory, semantics, EMF, Xtext and those who generally enjoy solving the multitude of challenges involved in creating new programming languages.

What’s next?
While we are moving the project to Eclipse, there are plenty of important checks that must be done by the Eclipse Intellectual Property Team. The Initial Contribution is under review with approximately thirty Contribution Questionnaires created. This is a great milestone for us and reflects the huge effort involved in the project to date. We look forward to joining Eclipse, taking part in the ecosystem in an official capacity and seeing what the community can do with N4JS. While we complete these final requirements, we want to extend many thanks to all at Eclipse who are helping out with the process so far!


Open Testbeds, DB Case Study, and IoT Events

by Roxanne on IoT at May 19, 2017 01:02 PM

The Eclipse IoT community has been working hard on some pretty awesome things over the past few months! Here is a quick summary of what has been happening.

Open Testbeds

We recently announced the launch of Eclipse IoT Open Testbeds. Simply put, they are collaborations between vendors and open source communities that aim to demonstrate and test commercial and open source components needed to create specific industry solutions.

The Asset Tracking Management Testbed is the very first one! It is a collaboration between Azul Systems, Codenvy, Eurotech, Red Hat, and Samsung’s ARTIK team. It demonstrates how assets with various sensors can be tracked in real-time, in order to minimize the cost of lost or damaged parcels. You can learn more about the Eclipse IoT Open Testbeds here.

Watch Benjamin Cabé present the Asset Tracking testbed demo in the video below. It was recorded at the Red Hat Summit in Boston this month.⬇

Case Study

We have been working with Deutsche Bahn (DB) and DB Systel to create a great case study that demonstrates how open source IoT technology is being used on their German railway system. They are currently using two Eclipse IoT projects, Eclipse Paho and Eclipse Mosquitto, among other technologies. In other words, if you’ve taken a DB train in Germany, you might have witnessed the “invisible” work of Eclipse IoT technology at the station or on board. How awesome is that?!

Case Study — Eclipse IoT and DB

Upcoming IoT Events

I am currently working on the organization of two upcoming Eclipse IoT Days that will take place in Europe this fall! 🍂 🍁 🍃 We are currently accepting talks for both events. Go on, submit your passion! I am excited to read your proposal :)

Eclipse IoT Day @ Thingmonk
September 11 | London, UK
📢 Email us your proposal iot at eclipse dot org

Eclipse IoT Day @ EclipseCon Europe
October 24 | Ludwigsburg, Germany
📢 Propose a talk

I look forward to meeting you in person at both events!

— Roxanne (Yes, I decided to sign this blog post.)


Installing Red Hat Developer Studio 10.2.0.GA through RPM

by jeffmaury at May 19, 2017 12:23 PM

With the release of Red Hat JBoss Developer Studio 10.2, it is now possible to install Red Hat JBoss Developer Studio as an RPM, available as a tech preview. The purpose of this article is to describe the steps you should follow in order to install it.

Red Hat Software Collections

The JBoss Developer Studio RPM relies on Red Hat Software Collections. You don’t need to install Red Hat Software Collections itself, but you do need to enable the Red Hat Software Collections repositories before you start the installation of Red Hat JBoss Developer Studio.

Enabling the Red Hat Software Collections base repository

The identifier for the repository is rhel-server-rhscl-7-rpms on Red Hat Enterprise Linux Server and rhel-workstation-rhscl-7-rpms on Red Hat Enterprise Linux Workstation.

The command to enable the repository on Red Hat Enterprise Linux Server is:

sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms

The command to enable the repository on Red Hat Enterprise Linux Workstation is:

sudo subscription-manager repos --enable rhel-workstation-rhscl-7-rpms

For more information, please refer to the Red Hat Software Collections documentation.

JBoss Developer Studio repository

As this is a tech preview, you need to manually configure the JBoss Developer Studio repository.

Create a file /etc/yum.repos.d/rh-eclipse46-devstudio.repo with the following content:


Install Red Hat JBoss Developer Studio

You’re now ready to install Red Hat JBoss Developer Studio through RPM.

Enter the following command:

sudo yum install rh-eclipse46-devstudio

Answer 'y' when the transaction summary is ready, to continue the installation.

Answer 'y' one more time when you see the request to import the GPG public key:

Public key for rh-eclipse46-devstudio .rpm is not installed
      Retrieving key from
      Importing GPG key 0xA5787476:
       Userid     : "Red Hat, Inc. (development key) <>"
       Fingerprint: 2d6d 2858 5549 e02f 2194 3840 08b8 71e6 a578 7476
       From       :
      Is this ok [y/N]:

After all required dependencies have been downloaded and installed, Red Hat JBoss Developer Studio is available on your system through the standard update channel!

You should see messages like the following:

rh eclipse46 devstudio.log

Launch Red Hat JBoss Developer Studio

From the system menu, mouse over the Programming menu, and the Red Hat Eclipse menu item will appear.

programming menu

Select this menu item, and the Red Hat JBoss Developer Studio user interface will then appear:



Jeff Maury


EcoreTools: user experience revamped thanks to Sirius 5.0

by Cédric Brun ( at May 19, 2017 12:00 AM

Every year, the Eclipse M7 milestone acts as a very strong deadline for the projects which are part of the release train: it’s then time for polishing and refining!

When your company is responsible for a number of inter-dependent projects, some of them core technologies like EMF Services and the GMF Runtime, others user-facing tools like Acceleo, Sirius or EcoreTools, plus packaging- and integration-oriented projects like Amalgam or the Eclipse Packaging project, and all of these releases need to be coordinated, then May is a busy month.

I’m personally involved in EcoreTools, which puts me in the position of a consumer of the other technologies, and my plan for Oxygen was to make use of the Property Views support included in Sirius. This support allows me, as the maintainer of EcoreTools, to specify directly through the .odesign every tab displayed in the properties view. Just like the rest of Sirius it is 100% dynamic: no need for code generation or compilation, and complete flexibility thanks to the ability to use queries in every part of the definition.

Before Oxygen, EcoreTools already had property editors. Some of them were coded by hand and were developed more than 8 years ago; when I replaced the legacy modeler with Sirius, I made sure to reuse those highly tuned property editors. Others I generated using the first generation of the EEF framework, so that I could cover every type in Ecore and benefit from the dialogs to edit properties using double-click. The intent at that time was to make the modeler usable in fullscreen, when no other view is visible.

Because of this requirement I had to wait for the Sirius team to make its magic: the properties views support was ready for production with Sirius 4.1, but this was not including any support for dialogs and wizards yet.

Then magic happened: the support for dialogs and wizards is now completely merged in Sirius, starting with M7. In EcoreTools, the code responsible for those property editors represents more than 70% of the total code, which peaks at 28K lines.

Lines of Java code subject to deletion in EcoreTools

In gray are the plugins which are subject to removal once I use this new feature; as a developer, one can only rejoice at the idea of deleting so much code!

I went ahead and started working on this. The schedule was tight, but thanks to the ability to define reflective rules using Dynamic Mappings, I could quickly cover everything in Ecore and get those new dialogs working.

New vs old dialogs

Just by using a dozen reflective rules and adding specific Pages or Widgets when needed.

The tooling definition in ecore.odesign

It went so fast I could add new tools for the Generation Settings through a specific tab.

Genmodel properties exposed through a specific tab

And even introduce a link to directly navigate to the Java code generated from the model:

Link opening the corresponding generated Java code.

Even support for EAnnotations could be implemented in a nice way:

Tab to add, edit or delete any EAnnotation

As a tool provider, I could focus on streamlining the experience: providing tabs and actions so that end users don’t have to leave the modeler to adapt the generation settings or launch the code generation, and giving visual clues when something is invalid. I went through many variants of these UIs just to get the feel of it; as I get instant feedback, I only need minutes to rule out an option. I have a whole new dimension I can use to make my tool super effective.

This is what Sirius is about, empowering the tool provider to focus on the user experience of its users.

It is just one of the many changes we’ve been working on since last year to improve the user experience of modeling tools. Mélanie and Stéphane will present a talk on this very subject during EclipseCon France in Toulouse: “All about UX in Sirius”.

All of these changes are landing in Eclipse Oxygen starting with M7. As they are newly introduced, I have no doubt I’ll have some polishing and refining to do, and I’m counting on you to report anything suspicious.

EcoreTools: user experience revamped thanks to Sirius 5.0 was originally published by Cédric Brun at CTO @ Obeo on May 19, 2017.


Case Study: Deploying Eclipse IoT on Germany's DB Railway System

May 18, 2017 08:55 AM

We worked with Deutsche Bahn (DB) to find out how they use Eclipse IoT technology on their railway system!


New blog location

by Kim Moir ( at May 17, 2017 09:12 PM

I moved my blog to WordPress.

New location is here


What can Eclipse developers learn from Team Sky’s aggregation of marginal gains?

by Tracy M at May 17, 2017 01:36 PM

The concept of marginal gains, made famous by Team Sky, has revolutionized some sports. The principle is that if you make 1% improvements in a number of areas, in the long run the cumulative gains will be hugely significant. And in that vein, a 1% decline here-and-there will lead to significant problems further down the line.
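This is not a figure from the post or from Team Sky, just an illustration of the standard compounding arithmetic behind the principle: gains and declines multiply rather than add, so many small changes dominate in the long run.

```java
public class MarginalGains {
    public static void main(String[] args) {
        // One hundred independent 1% improvements compound multiplicatively.
        double improved = Math.pow(1.01, 100);  // ~2.70x better overall
        // Likewise, one hundred 1% declines compound downward.
        double declined = Math.pow(0.99, 100);  // ~0.37x, i.e. almost two-thirds worse
        System.out.println(improved + " vs " + declined);
    }
}
```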

So how could we apply that principle to the user experience (UX) of the Eclipse C/C++ Development Tooling (CDT)? What would happen if we continuously improved lots of small things in Eclipse CDT, such as the build console speed, or a really annoying message in the debugger source window? It is still too soon to analyse the impact of these changes, but we believe even the smallest positive change will be worth it. Plus, it is a great way to get new folks involved with the project. Here’s a guest post from Pierre Sachot, a computer science student at IUT Blagnac who is currently doing open-source work experience with Kichwa Coders. Pierre has written an experience report on fixing his very first CDT UX issue.


This week I worked with Yannick on fixing the CDT CSourceNotFoundEditor problem – the unwanted error message that Eclipse CDT shows when users are running the debugger and jumping into a function which is in another project file. When Eclipse CDT users were running the debugger on a C project, a window would open on screen. This window was both alarming in appearance and obtrusive. In addition, the message itself was unclear. For example, it could display “No source available for 0x02547”, which is irrelevant to the user because he/she does not have access to this memory address. Several users had complained about it and expressed a desire to disable the window (see: stack overflow: “Eclipse often opens editors for hex numbers (addresses?) then fails to load anything”). In this post I will show you how we replaced CSourceNotFoundEditor with a better user experience.

Problem description:

1- The problem we faced was that CSourceNotFoundEditor was displayed on several occasions. For example:

  • When the source file was not found
  • When the memory address was known but not the function name
  • When the function name was known

2- We also wanted to tackle that red link! Red lettering is synonymous with big problems – yet the error message was merely informing the user that the source could not be found, so we felt a less alarmist style of text would be more appropriate.

CSourceNotFoundEditor Dialog:

Previous version New version

CSourceNotFoundEditor Preferences:

Previous version New version

How to resolve the problem?


CSourceNotFoundEditor is the class called by the openEditor() function; Yannick added a link to the debug preferences page inside it:

  • The first thing to do was to create the “Preferences…” button and a text to go with it. Yannick did this in the createButtons() function.
  • Next, we made it possible for the user to open the Preferences on the correct page – in our case, the Debug page – using this code:
PreferencesUtil.createPreferenceDialogOn(parent.getShell(), "org.eclipse.cdt.debug.ui.CDebugPreferencePage", null, null).open();

“org.eclipse.cdt.debug.ui.CDebugPreferencePage” is the ID of the preference page we want to open in the debug preferences.


This class is the one which contains the debug preferences page. I set about modifying it so that the CSourceNotFoundEditor preferences could be re-set and access to them enabled. This included modifying the file which contains the String values of the buttons, declaring them and using them. The last thing we did was to create a global value in CCorePreferenceConstants to get and set the display preferences. This we did in 4 stages:

  • First we created a group for the radio buttons. This is in the function createContents().
  • Second we created the variable intended to store the preference value. This value is a String stored in the CCorePreferenceConstants class. To get a preference String value, you need to use:
DefaultScope.INSTANCE.getNode(CDebugCorePlugin.PLUGIN_ID).get(CCorePreferenceConstants.YOUR_PREFERENCE_NAME, null);

And to store it:

InstanceScope.INSTANCE.getNode(CCorePlugin.PLUGIN_ID).put(CCorePreferenceConstants.YOUR_PREFERENCE_NAME, "Your text");

Here we created a preference named SHOW_SOURCE_NOT_FOUND_EDITOR, which can take 3 values, defined at the beginning of the CDebugPreferencePage class:

/**
 * Use to display the source not found editor by default
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_DEFAULT = "all_time"; //$NON-NLS-1$

/**
 * Use to display the source not found editor all the time
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_ALL_THE_TIME = "all_time"; //$NON-NLS-1$

/**
 * Use to display the source not found editor only sometimes
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_SOMETIMES = "sometimes"; //$NON-NLS-1$

/**
 * Use to never display the source not found editor
 * @since 6.3
 */
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_NEVER = "never"; //$NON-NLS-1$
  • Third, we needed to find where to set the values and where to get them. To set the values on your components, use the setValues() function. To store a value, you will need to add your code in storeValues(); as its name suggests, it stores the value inside the global preferences variable.
  • The fourth and final stage is really important: you need to put the default value of the preference you want to add in setDefaultValues(), to allow access to the original value of the preferences.


This is the class which calls CSourceNotFoundEditor, so here, in the openEditor() function, we needed to check the preferences in order to know whether to display CSourceNotFoundEditor. These checks need to be carried out in the openEditor() function because it is the function which opens CSourceNotFoundEditor. To do that, we handled two cases:

  • a first case in which the user wants to display the editor all the time
  • a second case for when the user only wants to display it if the source file is not found

The last case (“never”) doesn’t need to be checked, because nothing is done in it.

To do that, we did it like this:
how to display CSourceNotFoundEditor
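Since the screenshot of the actual code is not reproduced here, this is a condensed, hypothetical sketch of that check (the preference values come from the constants listed above; the method name and shape are assumptions, not the real CDT code):

```java
public class SourceNotFoundCheck {
    // Hypothetical condensation of the check performed in openEditor():
    // decide whether the CSourceNotFoundEditor should be shown, given the
    // preference value and whether the source file was actually found.
    static boolean shouldShowEditor(String preference, boolean sourceFileFound) {
        if ("all_time".equals(preference)) {
            return true;                 // user wants the editor every time
        }
        if ("sometimes".equals(preference)) {
            return !sourceFileFound;     // only when the source file is missing
        }
        return false;                    // "never": nothing is done
    }
}
```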


Now users have the ability to disable the CSourceNotFoundEditor window altogether, or to choose for themselves when to display it, thus saving time and improving the user experience of the Eclipse debugger. This is a great example of how working on an open source project can really benefit a whole community of users. But, a word of warning: the CDT project isn’t the easiest program to develop for or the easiest to master; you need to understand other people’s code, and if you change it you need to retain its original logic and style. Fiddly perhaps, but well worth it! The user community will appreciate your efforts, and future coding work will flow more smoothly and efficiently. A better user experience for everyone.


EclipseCon Europe 2017 | Call for Papers Open

May 17, 2017 01:29 PM

Submissions are now open for EclipseCon Europe 2017, October 24 - 26, in Ludwigsburg, Germany.


Theia – One IDE For Desktop & Cloud

by Sven Efftinge at May 17, 2017 11:54 AM

Today, I want to point you at a GitHub repository we have been contributing to for the last couple of weeks. Theia is a collaborative and open effort to build a new IDE framework in TypeScript.

“Yet another IDE?”, you might think. Let me explain the motivation behind it and how its scope is unique compared to existing open-source projects.

Single-Sourcing Desktop & Browser (Cloud) Tools

Let’s start with the unique selling point: Theia targets IDEs that should run as native desktop applications (using Electron) as well as in modern browsers (e.g. Chrome).

So you would build one application and run it in both contexts. Theia even supports a third mode, which is a native desktop app connecting to a remote workspace. No matter if you target primarily desktop or cloud, you can leverage the goodness of web technology and will be well prepared for the future. Although implemented using web technologies, neither VSCode nor Atom support execution in a browser with a remote backend.


Theia is an open framework that allows users to compose and tailor their Theia-based applications as they want. Any functionality is implemented as an extension, so it is using the same APIs a third-party extension would use. Theia uses the dependency injection framework Inversify.js to compose and configure the frontend and backend application, which allows for fine-grained control of any used functionality.

Since in Theia there is no two-class treatment between core code and extensions, any third-party code runs in the main application processes with the same rights and responsibilities the core application has. This is a deliberate decision to support building products based on Theia.

Dock Layout

Theia focusses on IDE-like applications. That includes developer tools but extends to all kinds of software tools for engineers. We think a split editor alone is not enough: for such applications, you want to represent data in different ways (not only textually) and give the user more freedom in using the screen estate.

Theia uses the layout manager library phosphor.js. It supports side panels similar to what JetBrains’ products offer, and allows the user to lay out editors and views as they want in the main area.


Language Server Protocol

Another goal of this effort is to reuse existing components when sensible. The language server protocol (LSP) is, therefore, an important, central concept. Theia uses Microsoft’s Monaco code editor, for which I already found some positive words last week. That said, Theia has a thin generic editor API that shields extensions from using Monaco-specific APIs for the most common tasks. Also, other components, like Eclipse Orion’s code editor, could be utilized as the default editor implementation in Theia as well.

To showcase the LSP support, Theia comes with Eclipse’s Java language server, which also nicely shows how to add protocol extensions. For instance, the Java LS has a particular URI scheme to open source files from referenced jars, which Theia supports.



The JavaScript (JS) language is evolving, but the different targeted platforms lag behind. The solution to this is to write code in tomorrow’s language and then use a transpiler to ‘down-level’ the source code to what the targeted platforms require. The two popular transpilers are Babel and TypeScript. In contrast to Babel, which supports the latest versions of JavaScript (ECMAScript), TypeScript goes beyond that and adds a static type system on top.

Furthermore, the TypeScript compiler exposes language services to provide advanced tool support, which is crucial to read and maintain larger software systems. It allows navigating between references and declarations, gives you smart completion proposals and much more. Finally, we are not the only ones believing TypeScript is an excellent choice (read ‘Why TypeScript Is Growing More Popular’).

Status Quo & Plans

Today we have the basic architecture in place and know how extensions should work. In the Theia repository there are two examples (one runs in a browser, the other on Electron), which you can try yourself. They allow you to navigate within your workspace and open files in code editors. We also have a command registry with the corresponding menu and keybinding services. Depending on whether you run in Electron or a browser, the menus will be rendered natively (Electron) or using HTML. The language server protocol is working well, and two language servers are integrated already: Java and Python. We are going to wrap the TypeScript language service in the LSP, so we can start using Theia to implement Theia. Furthermore, a terminal gives you access to the workspace’s shell.

Don’t treat this as anything like a release as this is only the beginning. But we have laid out a couple of important fundamentals and now is a good time to make it public and get more people involved. The CDT team from Ericsson have already started contributing to Theia and more parties will join soon.

Although Theia might not be ready for production today, if you are starting a new IDE-like product or looking into migrating the UI technology of an existing one (e.g. Eclipse-based), Theia is worth considering. Let me know what you think or whether you have any questions.

by Sven Efftinge at May 17, 2017 11:54 AM

OSGi Event Admin – Publish & Subscribe

by Dirk Fauth at May 16, 2017 06:49 AM

In this blog post I want to write about the publish & subscribe mechanism in OSGi, provided via the OSGi Event Admin Service. Of course I will show this in combination with OSGi Declarative Services, because this is the technology I currently like very much, as you probably know from my previous blog posts.

I will start with some basics and then show an example as usual. Finally, I will give some information about how to use the event mechanism in Eclipse RCP development, especially related to the combination of OSGi services and the GUI.

If you want to read further details on the Event Admin Service Specification have a look at the OSGi Spec. In Release 6 it is covered in the Compendium Specification Chapter 113.

Let’s start with the basics. The Event Admin Service is based on the Publish-Subscribe pattern. There is an event publisher and an event consumer. Neither knows the other in any way, which provides a high degree of decoupling. Simplified, you could say the event publisher sends an event to a channel, not knowing whether anybody will receive that event. On the other side there is an event consumer ready to receive events, not knowing whether there is anybody available for sending events. This simplified view is shown in the following picture:


Technically, both sides use the Event Admin Service in some way. The event publisher uses it directly to send an event to the channel. The event consumer uses it indirectly, by registering an event handler with the EventAdmin to receive events. This can be done programmatically, but with OSGi DS it is very easy to register an event handler by using the whiteboard pattern.
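To illustrate the decoupling outside of OSGi, here is a minimal plain-Java sketch of the publish & subscribe idea (the Channel and PubSubSketch classes are made up for illustration; this is of course not the real Event Admin implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal illustration of publish & subscribe:
// publisher and subscribers only know the channel, never each other.
class Channel {
    private final List<Consumer<Map<String, Object>>> handlers = new ArrayList<>();

    // the consumer side registers a handler, not knowing who publishes
    void subscribe(Consumer<Map<String, Object>> handler) {
        handlers.add(handler);
    }

    // the producer side sends an event, not knowing who (if anybody) receives it
    void publish(Map<String, Object> event) {
        handlers.forEach(h -> h.accept(event));
    }
}

public class PubSubSketch {
    public static void main(String[] args) {
        Channel channel = new Channel();
        List<String> received = new ArrayList<>();
        channel.subscribe(event -> received.add("got " + event.get("target")));
        channel.publish(Map.of("target", "Angelo"));
        System.out.println(received); // prints [got Angelo]
    }
}
```

The Event Admin Service plays the role of the Channel here, with topics and filters added on top.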


An Event object has a topic and some event properties. It is an immutable object to ensure that every handler gets the same object with the same state.

The topic defines the type of the event and is intended to serve as a first-level filter for determining which handlers should receive the event. It is a String arranged in a hierarchical namespace. The recommendation is to use a convention similar to the Java package name scheme, using reverse domain names (fully/qualified/package/ClassName/ACTION). Doing this ensures uniqueness of events. This is of course only a recommendation, and you are free to use pseudo class names to make the topic more readable.

Event properties are used to provide additional information about the event. The key is a String and the value can be technically any object. But it is recommended to only use String objects and primitive type wrappers. There are two reasons for this:

  1. Other types cannot be passed to handlers that reside outside the Java VM.
  2. Other classes might be mutable, which means any handler that receives the event could change the values. This breaks the immutability rule for events.
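The second point can be demonstrated with a small plain-Java sketch (illustration only, not OSGi API; the class name is made up): if a mutable value like a StringBuilder is used as a property value, a modification made by one handler leaks into every other handler that receives the same event.

```java
import java.util.HashMap;
import java.util.Map;

public class MutablePropertySketch {
    public static void main(String[] args) {
        // a mutable value (StringBuilder) used as an event property value
        Map<String, Object> properties = new HashMap<>();
        properties.put("target", new StringBuilder("Angelo"));

        // the first handler mutates the value...
        StringBuilder seenByFirstHandler = (StringBuilder) properties.get("target");
        seenByFirstHandler.append(" (already handled)");

        // ...and a second handler no longer sees the original state
        System.out.println(properties.get("target")); // prints Angelo (already handled)
    }
}
```

With String values and primitive type wrappers this cannot happen, as they are immutable.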

Common Bundle

It is a kind of best practice to place common artifacts in a common bundle on which both the event publisher bundle and the event consumer bundle can depend. In our case this will only be the definition of the supported topics and property keys in a constants class, to ensure that both implementations share the same definitions without depending on each other.

  • Create a new project org.fipro.mafia.common
  • Create a new package org.fipro.mafia.common
  • Create a new class MafiaBossConstants
public final class MafiaBossConstants {

    private MafiaBossConstants() {
        // private default constructor for constants class
        // to avoid someone extends the class
    }

    public static final String TOPIC_BASE = "org/fipro/mafia/Boss/";
    public static final String TOPIC_CONVINCE = TOPIC_BASE + "CONVINCE";
    public static final String TOPIC_ENCASH = TOPIC_BASE + "ENCASH";
    public static final String TOPIC_SOLVE = TOPIC_BASE + "SOLVE";
    public static final String TOPIC_ALL = TOPIC_BASE + "*";

    public static final String PROPERTY_KEY_TARGET = "target";
}

  • PDE
    • Open the MANIFEST.MF file and on the Overview tab set the Version to 1.0.0 (remove the qualifier).
    • Switch to the Runtime tab and export the org.fipro.mafia.common package.
    • Specify the version 1.0.0 on the package via Properties…
  • Bndtools
    • Open the bnd.bnd file
    • Add the package org.fipro.mafia.common to the Export Packages

In MafiaBossConstants we specify the topic base with the pseudo class org.fipro.mafia.Boss, which results in the topic base org/fipro/mafia/Boss/. We specify action topics that start with the topic base and end with the actions CONVINCE, ENCASH and SOLVE. Additionally we specify a topic that starts with the base and ends with the wildcard ‘*’.

These constants will be used by the event publisher and the event consumer soon.

Event Publisher

The Event Publisher uses the Event Admin Service to send events synchronously or asynchronously. Using DS this is pretty easy.

We will create an Event Publisher based on the idea of a mafia boss. The boss simply commands a job execution and does not care who is doing it. Also it is not of interest if there are many people doing the same job. The job has to be done!

  • Create a new project org.fipro.mafia.boss
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.boss project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.boss project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common

Adding org.osgi.service.event to the Imported Packages with PDE on a current Equinox target will provide the package version 1.3.1. You need to change this to 1.3.0 if you intend to run the same bundle with a different Event Admin Service implementation. In general it is bad practice to rely on a bugfix version, especially when thinking about interfaces, as any change to an interface typically is a breaking change.
To clarify the statement above: as the package org.osgi.service.event contains more than just the EventAdmin interface, the bugfix version increase is surely correct in Equinox, as there was probably a bugfix in some code inside the package. The only bad thing is to restrict the package wiring on the consumer side to a bugfix version, as this would restrict your code to run only with the Equinox implementation of the Event Admin Service.

  • Create a new package org.fipro.mafia.boss
  • Create a new class BossCommand
@Component(
    property = {
        "osgi.command.function=boss" },
    service = BossCommand.class)
public class BossCommand {

    @Reference
    EventAdmin eventAdmin;

    @Descriptor("As a mafia boss you want something to be done")
    public void boss(
        @Descriptor("the command that should be executed. "
            + "possible values are: convince, encash, solve")
        String command,
        @Descriptor("who should be 'convinced', "
            + "'asked for protection money' or 'finally solved'")
        String target) {

        // create the event properties object
        Map<String, Object> properties = new HashMap<>();
        properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
        Event event = null;

        switch (command) {
            case "convince":
                event = new Event(MafiaBossConstants.TOPIC_CONVINCE, properties);
                break;
            case "encash":
                event = new Event(MafiaBossConstants.TOPIC_ENCASH, properties);
                break;
            case "solve":
                event = new Event(MafiaBossConstants.TOPIC_SOLVE, properties);
                break;
            default:
                System.out.println("Such a command is not known!");
        }

        // send the event asynchronously
        if (event != null) {
            eventAdmin.postEvent(event);
        }
    }
}

The code snippet above uses the annotation @Descriptor to specify additional information for the command. This information will be shown when executing help boss in the OSGi console. To make this work with PDE you need to import the package org.apache.felix.service.command with status=provisional. Because the PDE editor does not support adding additional information to package imports, you need to do this manually in the MANIFEST.MF tab of the Plugin Manifest Editor. The Import-Package header would look like this:

Import-Package: org.apache.felix.service.command;status=provisional;version="0.10.0",

With Bndtools you need to add org.apache.felix.gogo.runtime to the Build Path in the bnd.bnd file so the @Descriptor annotation can be resolved.

There are three things to notice in the BossCommand implementation:

  • There is a mandatory reference to EventAdmin which is required for sending events.
  • The Event objects are created using a specific topic and a Map<String, Object> that contains the additional event properties.
  • The event is sent asynchronously via EventAdmin#postEvent(Event)

The BossCommand will create an event using the topic that corresponds to the given command parameter. The target parameter will be added to a map that is used as event properties. This event will then be sent to a channel via the EventAdmin. In the example we use EventAdmin#postEvent(Event), which sends the event asynchronously. That means we send the event but do not wait until available handlers have finished the processing. If it is required to wait until the processing is done, you need to use EventAdmin#sendEvent(Event), which sends the event synchronously. But sending events synchronously is significantly more expensive, as the Event Admin Service implementation needs to ensure that every handler has finished processing before it returns. It is therefore recommended to prefer asynchronous event processing.
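The difference between postEvent and sendEvent can be sketched in plain Java with an executor standing in for the delivery thread (an analogy only; the DeliverySketch class and its post/send methods are made up, this is not the real EventAdmin implementation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class DeliverySketch {
    private static final ExecutorService DELIVERY = Executors.newSingleThreadExecutor();

    // analogous to postEvent: hand the event to the delivery thread and return immediately
    static void post(Runnable handler) {
        DELIVERY.submit(handler);
    }

    // analogous to sendEvent: do not return before the handler has finished processing
    static void send(Runnable handler) throws Exception {
        DELIVERY.submit(handler).get();
    }

    public static void main(String[] args) throws Exception {
        AtomicBoolean handled = new AtomicBoolean(false);

        send(() -> handled.set(true));
        // after a synchronous send the handler is guaranteed to have run
        System.out.println("after send: " + handled.get()); // prints after send: true

        handled.set(false);
        post(() -> handled.set(true));
        // after an asynchronous post there is no such guarantee at this point;
        // we have to wait for the delivery thread explicitly
        DELIVERY.shutdown();
        DELIVERY.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The blocking get() in send is exactly why synchronous delivery is more expensive: the caller is held up until every handler has finished.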

The code snippet uses the Field Strategy for referencing the EventAdmin. If you are using PDE this will work with Eclipse Oxygen. With Eclipse Neon you need to use the Event Strategy instead. In short, you need to write the bind-event-method for referencing EventAdmin, because Equinox DS supports only DS 1.2 and the annotation processing in Eclipse Neon also supports only the DS 1.2 style annotations.

Event Consumer

In our example the boss does not have to tell someone explicitly to do the job. He just mentions that the job has to be done. Let’s assume we have a small organization without hierarchies, so we skip the captains etc. and simply implement some soldiers. They are specialized, so we have three soldiers, each listening to one specific topic.

  • Create a new project org.fipro.mafia.soldier
  • PDE
    • Open the MANIFEST.MF file of the org.fipro.mafia.soldier project and switch to the Dependencies tab
    • Add the following dependencies on the Imported Packages side:
      • org.fipro.mafia.common (1.0.0)
      • org.osgi.service.component.annotations (1.3.0)
      • org.osgi.service.event (1.3.0)
    • Mark org.osgi.service.component.annotations as Optional via Properties…
    • Add the upper version boundaries to the Import-Package statements.
  • Bndtools
    • Open the bnd.bnd file of the org.fipro.mafia.soldier project and switch to the Build tab
    • Add the following bundles to the Build Path
      • org.apache.felix.eventadmin
      • org.fipro.mafia.common
  • Create a new package org.fipro.mafia.soldier
  • Create the following three soldiers Luigi, Mario and Giovanni
@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_CONVINCE)
public class Luigi implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Luigi: "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
            + " was 'convinced' to support our family");
    }
}

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ENCASH)
public class Mario implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Mario: "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)
            + " payed for protection");
    }
}

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_SOLVE)
public class Giovanni implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Giovanni: We 'solved' the issue with "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }
}

Technically we have created specialized EventHandler implementations for different topics. You should notice the following facts:

  • We are using OSGi DS to register the event handler using the whiteboard pattern. On the consumer side we don’t need to know the EventAdmin itself.
  • We need to implement org.osgi.service.event.EventHandler
  • We need to register for a topic via service property event.topics, otherwise the handler will not listen for any event.
  • Via Event#getProperty(String) we are able to access event property values.

The following service properties are supported by event handlers:

  • event.topics: Specifies the topics of interest to an EventHandler service. This property is mandatory.
  • event.filter: Specifies a filter to further select events of interest to an EventHandler service. This property is optional.
  • event.delivery: Specifies the delivery qualities requested by an EventHandler service. This property is optional.

The property keys and some default keys for event properties are specified in org.osgi.service.event.EventConstants.

Launch the example

Before moving on and explaining further, let’s start the example and verify that each command from the boss is only handled by one soldier.

With PDE perform the following steps:

  • Select the menu entry Run -> Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Event Mafia
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.mafia.boss
      • org.fipro.mafia.common
      • org.fipro.mafia.soldier
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.util
    • Equinox Event Admin
      • org.eclipse.equinox.event
  • Ensure that Default Auto-Start is set to true
  • Click Run

With Bndtools perform the following steps:

  • Open the launch.bndrun file in the org.fipro.mafia.boss project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.mafia.boss
    • org.fipro.mafia.common
    • org.fipro.mafia.soldier
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Click Run OSGi

Execute the boss command to see the different results. This can look similar to the following:

osgi> boss convince Angelo
osgi> Luigi: Angelo was 'convinced' to support our family
boss encash Wong
osgi> Mario: Wong payed for protection
boss solve Tattaglia
osgi> Giovanni: We 'solved' the issue with Tattaglia

Handle multiple event topics

It is also possible to register for multiple event topics. Say Pete is a tough guy who is good at CONVINCE and SOLVE issues. So he registers for those topics.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_CONVINCE,
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_SOLVE })
public class Pete implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        System.out.println("Pete: I took care of "
            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET));
    }
}

As you can see, the service property event.topics is declared multiple times via the @Component annotation type element property. This way an array of Strings is configured for the service property, so the handler reacts to both topics.

If you execute the example now and call boss convince xxx or boss solve xxx you will notice that Pete is also responding.

It is also possible to use the asterisk wildcard as last token of a topic. This way the handler will receive all events for topics that start with the left side of the wildcard.
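The wildcard rule boils down to a simple prefix check, which can be sketched in plain Java (an illustration of the matching rule only; the TopicWildcardSketch class is made up, and the real matching is done inside the Event Admin Service implementation):

```java
public class TopicWildcardSketch {

    // returns true if the topic matches the handler's registration,
    // supporting '*' only as the last token, as described above
    static boolean matches(String registered, String topic) {
        if (registered.endsWith("/*")) {
            // strip only the '*', keeping the trailing '/' in the prefix
            String prefix = registered.substring(0, registered.length() - 1);
            return topic.startsWith(prefix);
        }
        return registered.equals(topic);
    }

    public static void main(String[] args) {
        System.out.println(matches("org/fipro/mafia/Boss/*", "org/fipro/mafia/Boss/CONVINCE")); // prints true
        System.out.println(matches("org/fipro/mafia/Boss/*", "org/fipro/other/Topic"));         // prints false
        System.out.println(matches("org/fipro/mafia/Boss/SOLVE", "org/fipro/mafia/Boss/SOLVE")); // prints true
    }
}
```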

Let’s say we have a very motivated young guy called Ray who wants to prove himself to the boss. So he takes every command from the boss. For this we set the service property event.topics=org/fipro/mafia/Boss/*

@Component(
    property = EventConstants.EVENT_TOPIC
        + "=" + MafiaBossConstants.TOPIC_ALL)
public class Ray implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        String topic = event.getTopic();
        Object target = event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET);

        switch (topic) {
            case MafiaBossConstants.TOPIC_CONVINCE:
                System.out.println("Ray: I helped in punching the shit out of " + target);
                break;
            case MafiaBossConstants.TOPIC_ENCASH:
                System.out.println("Ray: I helped getting the money from " + target);
                break;
            case MafiaBossConstants.TOPIC_SOLVE:
                System.out.println("Ray: I helped killing " + target);
                break;
            default:
                System.out.println("Ray: I helped with whatever was requested!");
        }
    }
}

Executing the example again will show that Ray is responding on every boss command.

It is also possible to filter events based on event properties by setting the service property event.filter. The value needs to be an LDAP filter. For example, although Ray is a motivated and loyal soldier, he refuses to handle events that target his friend Sonny.

The following snippet shows how to specify a filter that excludes event processing if the target is Sonny.

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "=" + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "=" + "(!(target=Sonny))" })
public class Ray implements EventHandler {

Execute the example and call two commands:

  • boss solve Angelo
  • boss solve Sonny

You will notice that Ray will respond on the first call, but he will not show up on the second call.

The filter expression can only be applied to event properties. It is not possible to use that filter on service properties.
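To make explicit what the LDAP filter (!(target=Sonny)) evaluates to, here is the hand-coded equivalent in plain Java (illustration only; the EventFilterSketch class is made up, and the real evaluation is done by the framework against the event properties):

```java
import java.util.Map;

public class EventFilterSketch {

    // hand-coded equivalent of the LDAP filter (!(target=Sonny)),
    // evaluated against the event properties
    static boolean accept(Map<String, Object> eventProperties) {
        return !"Sonny".equals(eventProperties.get("target"));
    }

    public static void main(String[] args) {
        System.out.println(accept(Map.of("target", "Angelo"))); // prints true  -> handler is called
        System.out.println(accept(Map.of("target", "Sonny")));  // prints false -> event is filtered out
    }
}
```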

At last it is possible to configure in which order an event handler wants events to be delivered: either in the same order as they were posted, or unordered. The service property event.delivery can be used to change the default behavior, which is to receive the events from a single thread in the same order as they were posted.

If an event handler does not need to receive events in the same order as they were posted, you need to specify the service property event.delivery with the value async.unordered:

@Component(
    property = {
        EventConstants.EVENT_TOPIC + "="
            + MafiaBossConstants.TOPIC_ALL,
        EventConstants.EVENT_FILTER + "="
            + "(!(target=Sonny))",
        EventConstants.EVENT_DELIVERY + "="
            + EventConstants.DELIVERY_ASYNC_UNORDERED })

The value for ordered delivery is async.ordered which is the default. The values are also defined in the EventConstants.


By using the event mechanism the code is highly decoupled. In general this is a good thing, but it also makes it hard to identify issues. One common issue in Eclipse RCP development, for example, is forgetting to automatically start the bundle org.eclipse.equinox.event. Things will simply not work in such a case, without any errors or warnings shown on startup.

The reason for this is that the related interfaces like EventAdmin and EventHandler are located in a bundle that contains only the API, not an implementation. The bundle wiring therefore shows that everything is ok on startup, because all interfaces and classes are available. But we additionally require a bundle that contains an implementation of EventAdmin. If you remember my Getting Started Tutorial, such a requirement can be specified by using capabilities.

To show the implications, let’s play with the Run Configuration:

  • Uncheck org.eclipse.equinox.event from the list of bundles
  • Launch the configuration
  • execute lb on the command line (or ss on Equinox if you are more familiar with that) and check the bundle states
    • Notice that all bundles are in ACTIVE state
  • execute scr:list (or list on Equinox < Oxygen) to check the state of the DS components
    • Notice that org.fipro.mafia.boss.BossCommand has an unsatisfied reference
    • Notice that all other EventHandler services are satisfied

That is of course the correct behavior. The BossCommand service has a mandatory reference to EventAdmin and there is no such service available, so it has an unsatisfied reference. The EventHandler implementations do not have such a dependency, so they are satisfied. And that is even fine when thinking in the publish & subscribe pattern: they can be active and waiting for events to process, even if there is nobody available to send an event. But it makes it hard to find the issue. And when using Tycho and the Surefire Plugin to execute tests, it will never work, because nobody tells the test runtime that org.eclipse.equinox.event needs to be available and started in advance.

This can be solved by adding the Require-Capability header to require an osgi.service for objectClass=org.osgi.service.event.EventAdmin.

Require-Capability: osgi.service;
 filter:="(objectClass=org.osgi.service.event.EventAdmin)"

By specifying the Require-Capability header like above, the capability will be checked when the bundles are resolved. So starting the example after the Require-Capability header was added will show an error and the bundle org.fipro.mafia.boss will not be activated.

If you add the bundle org.eclipse.equinox.event again to the Run Configuration and launch it again, there are no issues.

As p2 still does not support OSGi capabilities, the p2.inf file needs to be created in the META-INF folder with the following content:

requires.1.namespace = osgi.service
requires.1.name = org.osgi.service.event.EventAdmin

Typically you would specify the Require-Capability for the EventAdmin service with the directive effective:=active. This implies that the OSGi framework will resolve the bundle without checking whether another bundle provides the capability. The header then serves more as documentation of which services are required, visible by looking into the MANIFEST.MF.

Important Note:
Specifying the Require-Capability header and the p2 capabilities for org.osgi.service.event.EventAdmin will only work with Eclipse Oxygen. I contributed the necessary changes to Equinox for Oxygen M1 with Bug 416047. With an org.eclipse.equinox.event bundle in a version >= 1.4.0 you should be able to specify the capabilities. In previous versions the necessary Provide-Capability header and p2 capability configuration in that bundle are missing.

Handling events in Eclipse RCP UI

When looking at the architecture of an Eclipse RCP application, you will notice that the UI layer is not created via OSGi DS (actually that is not a surprise!). And we cannot simply say that our view parts are created via DS, because the lifecycle of a part is controlled by other mechanics. But as an Eclipse RCP application is typically an application based on OSGi, all the OSGi mechanisms can be used, of course not as conveniently as using OSGi DS directly.

The direction from the UI layer to the OSGi service layer is pretty easy. You simply need to retrieve the service you want to use. With Eclipse 4 you simply get the desired service injected using @Inject, or @Inject in combination with @Service since Eclipse Oxygen (see OSGi Declarative Services news in Eclipse Oxygen). With Eclipse 3.x you needed to retrieve the service programmatically via the BundleContext.

The other way round, communicating from a service to the UI layer, is something different. From my point of view there are two ways to consider: the Observer pattern and the Publish & Subscribe pattern.

This blog post is about the event mechanism in OSGi, so I don’t want to go in detail with the observer pattern approach. It simply means that you extend the service interface to accept listeners to perform callbacks. Which in return means you need to retrieve the service in the view part for example, and register a callback function from there.

With the Publish & Subscribe pattern we register an EventHandler that reacts on events. It is a similar approach to the Observer pattern, with some slight differences. But this is not a design pattern blog post, we are talking about the event mechanism. And we already registered an EventHandler using OSGi DS. The difference to the scenario using DS is that we need to register the EventHandler programmatically. For OSGi experts that used the event mechanism before DS came up, this is nothing new. For all others that learn about it, it could be interesting.

The following snippet shows how to retrieve a BundleContext instance and register a service programmatically. In earlier days this was done in an Activator, as there you have access to the BundleContext. Nowadays it is recommended to use the FrameworkUtil class to retrieve the BundleContext when needed, and to avoid Activators to reduce startup time.

private ServiceRegistration<?> eventHandler;

...

// retrieve the bundle of the calling class
Bundle bundle = FrameworkUtil.getBundle(getClass());
BundleContext bc = (bundle != null) ? bundle.getBundleContext() : null;
if (bc != null) {
    // create the service properties instance
    Dictionary<String, Object> properties = new Hashtable<>();
    properties.put(EventConstants.EVENT_TOPIC, MafiaBossConstants.TOPIC_ALL);
    // register the EventHandler service
    eventHandler = bc.registerService(
        EventHandler.class.getName(),
        new EventHandler() {

            @Override
            public void handleEvent(Event event) {
                // ensure to update the UI in the UI thread
                Display.getDefault().asyncExec(() -> handlerLabel.setText(
                        "Received boss command "
                            + event.getTopic()
                            + " for target "
                            + event.getProperty(MafiaBossConstants.PROPERTY_KEY_TARGET)));
            }
        },
        properties);
}
This code can technically be added anywhere in the UI code, e.g. in a view, an editor or a handler. But of course you should be aware that the event handler also needs to be unregistered once the connected UI class is destroyed. For example, say you implement a view part that registers a listener similar to the above to update the UI every time an event is received. That means the handler has a reference to a UI element that should be updated. If the part is destroyed, the UI element is destroyed as well. If you don’t unregister the EventHandler when the part is destroyed, it will still be alive and react to events, and probably cause exceptions without proper disposal checks. It is also a cause of memory leaks, as the EventHandler references a UI element instance that is already disposed but cannot be cleaned up by the GC as it is still referenced.

The event handling is executed in its own event thread. Updates to the UI can only be performed in the main or UI thread, otherwise you will get an SWTException for invalid thread access. Therefore it is necessary to ensure that UI updates performed in an event handler are executed in the UI thread. For further information have a look at Eclipse Jobs and Background Processing.
For the UI synchronization you should also consider using asynchronous execution via Display#asyncExec() or UISynchronize#asyncExec(). Using synchronous execution via syncExec() would block the event handler thread until the UI update is done.

If you stored the ServiceRegistration object returned by BundleContext#registerService() as shown in the example above, the following snippet can be used to unregister the handler if the part is destroyed:

if (eventHandler != null) {
    eventHandler.unregister();
}

In Eclipse 3.x this needs to be done in the overridden dispose() method. In Eclipse 4 it can be done in the method annotated with @PreDestroy.

Ensure that the bundle that contains the code is in ACTIVE state so there is a BundleContext. This can be achieved by setting Bundle-ActivationPolicy: lazy in the MANIFEST.MF.

Handling events in Eclipse RCP UI with Eclipse 4

In Eclipse 4 the event handling mechanism is provided to RCP development via the EventBroker. The EventBroker is a service that uses the EventAdmin and additionally provides injection support. To learn more about the EventBroker and the event mechanism provided by Eclipse 4, you should read the related tutorials.

We are focusing on the event consumer here. In addition to registering the EventHandler programmatically, Eclipse 4 makes it possible to specify a method that is called on event handling via method injection.

Such an event handler method looks similar to the following snippet:

@Inject
@Optional
void handleConvinceEvent(
        @UIEventTopic(MafiaBossConstants.TOPIC_CONVINCE) String target) {
    e4HandlerLabel.setText("Received boss CONVINCE command for " + target);
}
By using @UIEventTopic you ensure that the code is executed in the UI thread. If you don’t care about the UI thread, you can use @EventTopic instead. The handler that is registered behind the scenes will also be automatically unregistered when the containing instance is destroyed.

While the method gets directly invoked as event handler, the injection does not work without modifications on the event producer side. For this the data that should be used for injection needs to be added to the event properties for the key This key is specified as the constant DATA in IEventBroker. But using the constant would also introduce a dependency to the bundle that contains IEventBroker, which is not always intended for event producer bundles. Therefore modifying the generation of the event properties map in BossCommand as follows will make the E4 event handling injection work:

// create the event properties object
Map<String, Object> properties = new HashMap<>();
properties.put(MafiaBossConstants.PROPERTY_KEY_TARGET, target);
properties.put("", target);

The EventBroker additionally adds the topic to the event properties for the key event.topics. In Oxygen setting that property additionally does not seem to be necessary anymore.

The sources for this tutorial are hosted on GitHub in the already existing projects.

The PDE version also includes a sample project org.fipro.mafia.ui which is a very simple RCP application that shows the usage of the event handler in a view part.
