Eclipse Committer Emeritus

by Kim Moir at July 29, 2016 07:06 PM

I received this very kind email in my inbox this morning.

"David Williams has expired your commit rights to the
eclipse.platform.releng project.  The reason for this change is:

We have all known this day would come, but it does not make it any easier.
It has taken me four years to accept that Kim is no longer helping us with
Eclipse. That is how large her impact was, both on myself and Eclipse as a
whole. And that is just the beginning of why I am designating her as
"Committer Emeritus". Without her, I humbly suggest that Eclipse would not
have gone very far. Git shows her active from 2003 to 2012 -- longer than
most! She is (still!) user number one on the build machine. (In Unix terms,
that is UID 500). The original admin, when "Eclipse" was just the Eclipse

She was not only dedicated to her job as a release engineer, she was
passionate about doing all she could to make other committers' jobs easier
so they could focus on their code and specialties. She did (and still does)
know that release engineering is a field of its own; a specialized
profession (not something to "tack on" at the end that just anyone can do),
and good, committed release engineers are critical to the success of any project.

For anyone reading this that did not know Kim, it is not too late: you can
follow her blog at

You will see that she is still passionate about release engineering and
influential in her field.

And, besides all that, she was (I assume still is :) a well-rounded, nice
person who was easy to work with! (Well, except she likes running for
exercise. :)

Thanks, Kim, for all that you gave to Eclipse and my personal thanks for
all that you taught me over the years (and I mean before I even tried to
fill your shoes in the Platform).

We all appreciate your enormous contribution to the success of Eclipse and
are happy to see your successes continuing.

To honor your contributions to the project, David Williams has nominated
you for Committer Emeritus status."

Thank you David! I really appreciate your kind words. I learned so much working with everyone in the Eclipse community. I had the intention to contribute to Eclipse when I left IBM, but really felt that I had given all I had to give. Few people have the chance to contribute to two fantastic open source communities during their career. I'm lucky to have had that opportunity.

My IBM friends made this neat Eclipse poster when I left.  The Mozilla dino displays my IRC handle.


Disabling time synchronization in VirtualBox + Ubuntu 16.04

by Stefan Winkler at July 29, 2016 08:01 AM

Sometimes I need to test something in a VM which has an independent system time. Usually, this has to do with server applications which react time-based (think of a system in which users may enter information until a certain day is reached, after which only read-only is allowed). 

On a standard installation of VirtualBox and Ubuntu 16.04, the system time is automatically synced to the host. When trying to change the date using date -s 2015-01-01, the date will be updated but will revert to the host's date after a few seconds.

Since I end up googling the correct way to disable time synchronization in an Ubuntu VirtualBox client every time I set up my VM from scratch, I am posting the steps here as my personal reminder (it is always nice to find one's own article in a Google result after having forgotten ;)) - and maybe someone else also finds this useful:

  1. Disable host time access (this is mentioned here - no idea if this is really required...):
    On the host, execute: 
    VBoxManage setextradata "vm-name" "VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled" 1
  2. Disable client time sync:
    On the client, edit /opt/VBoxGuestAdditions-5.1.2/init/vboxadd-service and in start() replace
    daemon $binary --pidfile $PIDFILE > /dev/null
    with
    daemon $binary --disable-timesync --pidfile $PIDFILE > /dev/null

    and also look for the definition of daemon() and add the fourth argument:
    if which start-stop-daemon >/dev/null 2>&1; then
      daemon() {
        start-stop-daemon --start --exec $1 -- $2 $3 $4
      }
    fi
  3. Just to be sure, remove the Ubuntu timesyncd as well:
    On the client, execute:
    rm /etc/systemd/system/

Reboot and from now on, changed system dates stay as intended.

BTW, I would like to be able to decouple the system time in docker as well, but from my understanding this is not possible because docker uses the host RTC directly and the only way to differ here is to set a different timezone, which has its limitations. But if someone knows a way to simulate a different system time in docker, I would love to hear about it!




Building a VS Code Extension with Xtext and the Language Server Protocol

by Miro Spönemann at July 27, 2016 01:24 PM

In the upcoming Version 2.11, Xtext will support the Language Server Protocol defined by Visual Studio Code. This is a very important step, as the protocol is generic and is going to be supported by other editors such as Eclipse or Che as well. In this post I want to give the early adopters among us a head start and explain how to use this exciting new feature.

Try the Example Language

Installing a language extension in VS Code is easy: open the “Extensions” view on the left sidebar and search for “typefox”. Install and activate the “mydsl” language, create a new file with that extension (e.g. test.mydsl), and explore the editor support for this simple DSL. Here’s an example snippet:

type A {
    int x
}
type B extends A {
    A ref
    string name
}

The source of this example is available on GitHub. It has two main components: an Xtext example language consisting of a base project (io.typefox.vscode) and an IDE support project (io.typefox.vscode.ide), and a VS Code extension project (vscode-extension). You can compile and run the example with the following steps:

  1. Run ./gradlew installServer
  2. Open the vscode-extension project with VS Code.
  3. Run npm install in the integrated terminal (View → Integrated Terminal).
  4. Press F5 to start a second instance of Code.
  5. Test the language as described above.

Create Your Own Language Extension

In case you haven’t done that yet, start by creating an Xtext project and choosing Gradle as build system. Make sure your Xtext version in the gradle build files is bumped to 2.11-SNAPSHOT. In order to create a VS Code extension for your language you need to build an executable application from it. I recommend the Gradle application plugin for this, which gives you a bundle with all required libraries and a startup script. Just add the following lines to the build.gradle of the ide project of your language:

apply plugin: 'application'
mainClassName = 'org.eclipse.xtext.ide.server.ServerLauncher'
applicationName = 'xtext-server'

The command gradle installDist generates the executable in the subfolder build/install.

As a next step, create a VS Code extension following the documentation. The official example uses a Node.js module to implement the server. You can change that to start your language server application by using the following code in extension.ts to create the server options:

let executable = process.platform == 'win32' ? 'xtext-server.bat' : 'xtext-server';
let serverLauncher = context.asAbsolutePath(path.join(
        'xtext-server', 'bin', executable));
let serverOptions: ServerOptions = {
    run : { command: serverLauncher }, debug: { command: serverLauncher }
}

If you have set up your VS Code project properly, you should now be able to start a second Code instance that includes your extension by pressing F5. Open a folder in that new instance and create a file according to the file extension of your language. Now language support should be active, and the debug console of the host instance of Code should show a message like “Loading development extension at …” – You’re done!

How Xtext Integrates with VSCode

In VS Code a language server is a process that is started and used by an extension, i.e. a plug-in for VS Code. The process can be implemented in any programming language, and VS Code speaks to it through an input and an output stream (i.e. standard in/out or a socket).

Starting and Initializing

After launching the process, VS Code initializes the language server by sending a message. This message includes a path to the root directory the editor is looking at (unless a file is opened without a root directory). In Xtext we take that directory and do a quick build that includes indexing. In order to tell what kind of project structure we are looking at, the Xtext language server will be capable of using different project description providers. One for instance could ask Gradle for the modules and dependencies, another could simply read '.project' and '.classpath' files. At the time of writing we only have a simple default that treats the directory as a single project without dependencies. However, this will change in the coming weeks.
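The initialization message is an ordinary JSON-RPC request. As a rough sketch (the path is illustrative; at the time of writing the protocol carries the workspace directory in a rootPath parameter):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "rootPath": "/path/to/workspace",
    "capabilities": {}
  }
}
```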

During the first build, Xtext might already find problems in your source code. In that case the language server will send notifications to the editor reporting those diagnostics.

Many Languages per Server

Usually a language server is responsible for one language. However, in order to allow cross-language linking and transitive dependency analyses, the Xtext language server can host as many languages as you want. For VS Code it will look like one language with many different file extensions. The language server is a common reusable component that you don’t need to configure besides the project description provider mentioned above. The participating languages are loaded through a Java ServiceLoader for the type ISetup. The necessary entry under META-INF is generated for you if you use the latest nightly build.
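Concretely, following the standard Java ServiceLoader convention, that entry is a plain text file under META-INF/services named after the ISetup interface, listing the fully qualified name of your language's setup class (the class name below is a made-up example):

```
# file: META-INF/services/org.eclipse.xtext.ISetup
com.example.mydsl.MyDslStandaloneSetup
```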


The Xtext 2.11 release is planned for October 2016. This version will already allow you to create language support extensions for VS Code, but you can expect more interesting integrations in the future, e.g. with the web IDE Che.

The current language server implementation of Xtext builds on the ls-api library, which I described in a previous post. This library is going to be moved to LSP4J, a new project proposed under the Eclipse umbrella.


The forthcoming second edition of the Xtext book

by Lorenzo Bettini at July 27, 2016 08:28 AM

The second edition of the Xtext book should be published soon! In the meantime it is already available for preorder. At the time of writing, you can benefit from discounts and preorder it for $10.


I’ll detail the differences and novelties of this second edition.

But, first things first! A huge thank you to , for reviewing this second edition, and a special thank you to Sven Efftinge, for writing the foreword to this second edition. I am also grateful to itemis Schweiz, and in particular, to Serano Colameo for sponsoring the writing of this book.

While working on this second edition, I updated all the contents of the previous edition in order to make them up to date with respect to what Xtext provides in the most recent release (at the time of writing, it is 2.10).

All the examples have been rewritten from scratch. The main examples, Entities, Expressions and SmallJava, are still there, but many parts of the DSLs, including their features and implementations, have been modified and improved, focusing on efficient implementation techniques and the best practices I learned in these years. Thus, while the features of most of the main example DSLs of the book are the same as in the first edition, their implementation is completely new.

Moreover, in the last chapters, many more examples are introduced.

Chapter 11 on Continuous Integration, which in the previous edition was called “Building and Releasing”, has been completely rewritten and is now based on Maven/Tycho and on Gradle, since Xtext now provides a project wizard that also creates a build configuration for these build tools. Building with Maven/Tycho is described in more detail in the chapter, and Gradle is briefly described. This new chapter also briefly describes the new Xtext features: the DSL editor on the web and on IntelliJ.

I also added a brand new chapter at the end of the book, Chapter 13 “Advanced Topics”, with much more advanced material and techniques that are useful when your DSL grows in size and features. For example, the chapter will show how to manually maintain the Ecore model for your DSL in several ways, including Xcore. This chapter also presents an advanced example that extends Xbase, including the customization of its type system and compiler. An introduction to Xbase is still presented in Chapter 12, as in the previous edition, but with more details.

As in the previous edition, the book fosters unit testing a lot. An entire chapter, Chapter 7 “Testing”, is still devoted to testing all aspects of an Xtext DSL implementation.

Most chapters, as in the previous edition, still have a tutorial nature.

Summarizing, while the title and the subject of most chapters are still the same, their contents have been completely reviewed, extended and, hopefully, improved.
If you enjoyed the first edition of the book and found it useful, I hope you’ll like this second edition even more.



Oomph 02: A setup in action

by Christian Pontesegger at July 27, 2016 06:41 AM

During our first tutorial we started an installation using the Oomph installer. Now we will take a closer look at the applied tasks, how to monitor and relaunch them, and where these settings get persisted.

Oomph Tutorials

For a list of all Oomph related tutorials see my Oomph Tutorials Overview.

Workspace Setup

Right after the installation Oomph prepares your workspace. While it is busy you can see a spinning icon in the status bar at the bottom of your eclipse installation. A double click reveals a progress dialog where you can investigate all actions Oomph performs.
Oomph provides a toolbar, which is hidden by default. Enable it in Preferences / Oomph / Setup Tasks by checking Show tool bar contributions. Now we can repeat setup tasks or add additional project setups to our installation using the Import Projects... wizard from the toolbar setup entry.

Preferences Recorder

One of the most interesting features of Oomph is the preferences recorder. It can be enabled in the preferences window by selecting the record item in the bottom left corner. Once enabled it records all preference changes and stores them for you. When switching to another workspace these settings are applied directly. In practice this means: change a setting once and as long as you stick to Oomph you never have to think about it anymore.

Generally setup tasks (like setting preferences) may be stored to one of three different locations:
  1. User
    This is a global storage on your local machine shared for all installations and workspaces. Most of your changes will go here.
  2. Installation
    Settings get stored in the configuration folder of your current eclipse installation. These settings apply as long as you stick to the current eclipse binary.
  3. Workspace
    These settings get stored in the .metadata folder of your current workspace. So they are workspace specific, no matter which eclipse binary you use to access this workspace.
Personally, I have not found a use case for options 2 or 3 yet.

Investigate Oomph Setups

Now that we know of the different storage locations, we can have a look at their content. The second toolbar item allows you to open each of them in the Setup Editor (setups are also available from the Navigate menu under Open Setup).

The editor displays a tree structure of all Oomph tasks. As it is based on EMF, we have to open the Properties view to display details of each tree element. If an element has a [restricted] annotation next to its name, this means that the definition of this item is referenced by the current setup file. Typically this refers to a setup stored on the web. Such entries are readable, but cannot be changed without opening the original setup file.

Now that you are familiar with the basic ingredients we are ready to start building our own project setups.


Eclipse IoT Day @ ThingMonk

July 26, 2016 08:10 AM

Join us September 12 in London for the Eclipse IoT Day at ThingMonk!


Eclipse IoT Day @ Thingmonk

by Ian Skerrett at July 25, 2016 02:40 PM

One of my favourite IoT events is the Thingmonk conference produced by Redmonk. The speakers and attendees are always amazing and provide great insight into the IoT community in the UK and Europe.   This year the speaker line-up for Thingmonk is looking awesome so I expect to learn lots again this year.

A new addition for Thingmonk this year is that we are organizing an Eclipse IoT Day @ Thingmonk on Day 0. We are planning an equally awesome line-up of speakers that will showcase how open source and Eclipse IoT are changing the IoT industry. The Eclipse IoT Day speakers will include:

  • Kamil Baczkowicz from DeltaRail will be talking about their experiences of using MQTT and Eclipse IoT for building signal-control systems for railways. This will be real IoT in action!
  • Patrizia Gufler from IBM Watson will showcase her work for integrating Eclipse Kura with IBM Watson.
  • Kai Hudalla from Bosch will continue an IoT cloud theme in his talk about an Open IoT stack for IoT@cloud-scale.
  • Our very own Benjamin Cabe will also be talking about the Eclipse IoT open strategy.

We plan to announce a few more speakers over the next couple of weeks. It should be pretty awesome.

After the Eclipse IoT Day will be the Thingmonk HackDay. I fully expect to see further hacks on integrating Eclipse Kura with IBM Watson, Eclipse IoT running on Cloud Foundry and IBM Watson, and I am sure Benjamin will bring along some new boards.

This is going to be a great way to kick off Thingmonk. Eclipse IoT Day @ Thingmonk is September 12 and Thingmonk is September 13-14. The cost for the Eclipse IoT Day is £50.00. You will want to stay for the full 3 days, which costs only £200.00. This is a great event that you won’t want to miss.


Hackathon Q2 2016 – Hamburg

by eselmeister at July 22, 2016 08:28 AM

Here’s a picture from our Hackathon last evening:


Please subscribe to the mailing list if you’d like to get informed about next Hackathon events:


From Xcore/ecore to OmniGraffle

by Jens v.P. at July 21, 2016 12:53 PM

Some years ago I wrote a small tool for creating OmniGraffle UML diagrams directly from Java source code. Visualizing Java is nice, but since I often use ecore/Xcore to define my models, I wanted a tool that also nicely visualizes EMF-based models.

I have now extended my tool, j2og, to also create UML class diagrams from ecore or Xcore models. Below you see a (manually laid out) version of an automatically generated diagram of the ecore library example.

j2og does not lay out the diagram, since OmniGraffle provides some nice layout algorithms anyway. When creating the diagram, you can tweak the output with several settings. For example:

  1. show or hide attribute and operation compartments
  2. show context, optionally grayed out -- the context are classifiers defined in an external package
  3. show package names, omit common package prefixes etc.
  4. and more

Note that besides OmniGraffle, you can open the diagrams with other tools (Diagrammix, Lucidchart) as well. See the j2og github page for details. You can install the tool via update site or Eclipse marketplace link.

The following image (click to enlarge) is the result of exporting a large Xcore model defining the AST of N4JS, a statically typed version of JavaScript. I have exported it and applied the hierarchy layout algorithm -- no other manual tweaks were applied. Of course, this diagram is probably too large to be really usable, but it is a great start to document (parts) of the model. Well, in case of an AST you probably prefer using an EBNF grammar ;-)

PS: Of course you could use ecoretools to create a UML diagram. I usually need the diagrams for documentation purposes. In that case, OmniGraffle is simply so much better since it is easier to use and the diagrams look so much nicer (sorry, ecoretools).


Eclipse Newsletter - Neon Lights Everywhere

July 21, 2016 12:05 PM

Read great articles about Cloud Foundry and Docker Tooling, Buildship, and Automated Error Reporting.


Oomph 01: A look at the eclipse installer

by Christian Pontesegger at July 21, 2016 11:30 AM

This will be the start of a new series of posts on Oomph. It is the basis for the eclipse installer, but with the right configuration it can do much more:
  • serve your own RCP applications
  • provide fully configured development environments
  • store and distribute your settings over multiple installations
to name a few. This first part will have a look at the installer itself. Further posts of this series will focus on custom setups and how we can leverage the power of Oomph.

Oomph Tutorials

For a list of all Oomph related tutorials see my Oomph Tutorials Overview.

Step 1: The Eclipse Installer

It all starts with the Eclipse Installer, a tool we will need throughout this series. Download and unzip it. The installer is an eclipse application by itself, so it provides the familiar folder structure with plugins, features, ini file and one executable. As we will need the installer continuously, find a safe home for it.

After startup you will be in Simple Mode, something we will not cover here. Use the configuration icon in the top right corner to switch to Advanced Mode. The first thing we are presented with is a catalog of products to install.
The top right menu bar allows us to add our own catalogs and to select which catalogs are displayed. For now we will ignore these settings; they will be treated in a separate tutorial. After selecting a product, the bottom section allows us to select from different product versions, 32/64 bit, the Java runtime to use and whether we want to use bundle pools.

Bundle Pools

A bundle pool is a place that stores - among some other things - plugins and features. Basically everything that a typical eclipse application would host in its plugins/features folders. Further it may host the content of target platforms.

Using a shared bundle pool saves you from a lot of redundant downloads from eclipse servers and can provide offline capabilities. For everything available in the bundle pool you do not require an internet connection anymore. A nice feature if you are sitting behind a company firewall. While it is not required to use them, bundle pools save you a lot of time and are safe and convenient to use. At first I was quite hesitant to split my installations and move stuff to bundle pools, but after giving it a try I do not want to go back anymore.

To have some control over the used bundle pools, click on the icon next to the location and set up a New Agent... in a dedicated location. Further eclipse installations will use this pool, so do not alter the directory content manually. The Bundle Pool Management view will allow you to analyze, clean up and even repair the directory content.
Step 2: Project Selection

The 2nd page of the installer presents eclipse projects we want to add to our installation. Selecting a project typically triggers actions after the plain eclipse installation:
  • automatically checkout projects
  • import into the workspace
  • set the target platform
  • apply preference settings
  • setup Mylyn
  • install additional components
The target is that you get everything pre-configured to start coding on the selected projects.

Step 3: Installer Variables

Installations do need some user input for the install location, repository checkout preferences, credentials and more. All these accumulated variables will be presented on the next page of the installer. By default the installer creates three folders below the Root install folder:
  • eclipse
    to host the eclipse binary and configuration. If you use bundle pools, plugins and features go to the pool; otherwise they will be located here.
  • ws
    the default workspace for this installation
  • git
    the default folder for git checkouts
You may go with these defaults or change them to your needs. While developing a setup (which we will start in an upcoming tutorial) I would recommend using these settings. For a final installation I prefer to host my workspace elsewhere.

Oomph stores all your settings in a global profile. So the next time you install something, it will reuse your previously entered values. You may always revisit your choices by enabling Show all variables in the bottom left corner.

The last page finally allows you to enable offline mode and to enable/disable download mirrors. In the next tutorial we will have a closer look at setup tasks and where these settings get persisted.

Optional: Preferences

The icons at the bottom allow you to set two kinds of important preferences: proxy and ssh settings. If you are behind a proxy, activate those settings and they will automatically be copied to any installation done by Oomph.

Ssh might be needed for git checkouts depending on your repository choices. If you do not use the default ssh settings you might need to wait for Neon.1 to have these settings applied (see bug 497057).


A new interpreter for EASE (5): Support for script keywords

by Christian Pontesegger at July 20, 2016 10:22 AM

EASE scripts registered in the preferences support a very cool feature: keyword support in script headers. While this does not sound extremely awesome, it allows binding scripts to the UI and will allow for more fancy stuff in the near future. Today we will add support for keyword detection in registered BeanShell scripts.

Read all tutorials from this series.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online. 

Step 1: Provide a code parser

Code parser is a big word. Currently all we need to detect in given script code are comments. As there already exists a corresponding base class, all we need to do is to provide a derived class indicating comment tokens:
package org.eclipse.ease.lang.beanshell;

import org.eclipse.ease.AbstractCodeParser;

public class BeanShellCodeParser extends AbstractCodeParser {

    @Override
    protected boolean hasBlockComment() {
        return true;
    }

    @Override
    protected String getBlockCommentEndToken() {
        return "*/";
    }

    @Override
    protected String getBlockCommentStartToken() {
        return "/*";
    }

    @Override
    protected String getLineCommentToken() {
        return "//";
    }
}

Step 2: Register the code parser

Similar to registering the code factory, we also need to register the code parser. Open the plugin.xml and select the scriptType extension for BeanShell. There, register the code parser from above. Now EASE is able to parse script headers for keywords and interpret them accordingly.


Running Node.js on the JVM

by Ian Bull at July 20, 2016 06:47 AM

Gone are the days of single vendor lock-in, where one technology stack is used across an entire organization. Even small organizations and hobbyists will find themselves mixing technologies in a single project. For years, Java reigned king on the server. Today Node.js is everywhere.


But even with the rise of Node.js and popularity of JavaScript, Java continues to shine. Furthermore, few organizations can afford to migrate their entire platform from the JVM to Node.js. This means organizations must either continue with their current technology stack or run multiple stacks with networked APIs to communicate.

Another option is to run Node.js and the JVM in a single process, and J2V8 finally makes this possible.



J2V8 is a set of V8 bindings for Java. J2V8 bundles V8 as a dynamic library and provides a Java API for the engine through the Java Native Interface (JNI). With J2V8 you can execute JavaScript using V8 in a similar fashion to how you would with Rhino or Nashorn.

J2V8 was originally developed to bring highly performant JavaScript to Tabris.js, a cross-platform mobile framework.

Over the past few months I’ve managed to build Node.js as a dynamic library and provide a Java API for it as well. Now you can execute Node scripts directly from Java. Unlike other approaches which try to implement Node.js using other JavaScript engines, this is true Node.js — bug for bug, feature for feature. Node.js runs in the same process as the JVM and all communication is done synchronously through JNI.

Combining Node and the JVM

J2V8 provides an API for executing Node.js scripts, registering Java callbacks, calling JavaScript functions, requiring NPM modules and running the Node.js message loop. The Node.js core modules have also been compiled in.

Running Node.js on the JVM provides an easy migration path for anyone with a large Java stack who wishes to start using Node.js. For example, you could run a Node.js server (such as Express.js) and call existing Java methods to handle certain requests.

static String NODE_SCRIPT = "var http = require('http');\n"
  + ""
  + "var server = http.createServer(function (request, response) {\n"
  + " response.writeHead(200, {'Content-Type': 'text/plain'});\n"
  + " response.end(someJavaMethod());\n"
  + "});\n"
  + ""
  + "server.listen(8000);\n"
  + "console.log('Server running at');";

public static void main(String[] args) throws IOException {
  final NodeJS nodeJS = NodeJS.createNodeJS();
  JavaCallback callback = new JavaCallback() {
    public Object invoke(V8Object receiver, V8Array parameters) {
      return "Hello, JavaWorld!";
    }
  };
  nodeJS.getRuntime().registerJavaMethod(callback, "someJavaMethod");
  File nodeScript = createTemporaryScriptFile(NODE_SCRIPT, "example");
  nodeJS.exec(nodeScript);
  while (nodeJS.isRunning()) {
    nodeJS.handleMessage();
  }
  nodeJS.release();
}


In addition to calling existing Java methods from Node.js, J2V8 provides the ability to call JavaScript functions (and by extension, NPM modules) directly from Java. With this integration, Java users can now start using NPM modules directly on the JVM. For example, you could use the jimp image processing library from Java.

public static void main(String[] args) {
  final NodeJS nodeJS = NodeJS.createNodeJS();
  final V8Object jimp = nodeJS.require(new File("path_to_jimp_module"));
  V8Function callback = new V8Function(nodeJS.getRuntime(), new JavaCallback() {
    public Object invoke(V8Object receiver, V8Array parameters) {
      final V8Object image = parameters.getObject(1);
      executeJSFunction(image, "posterize", 7);
      executeJSFunction(image, "greyscale");
      executeJSFunction(image, "write", "path_to_output");
      return null;
    }
  });
  executeJSFunction(jimp, "read", "path_to_image", callback);
  while (nodeJS.isRunning()) {
    nodeJS.handleMessage();
  }
  nodeJS.release();
}

Getting J2V8

Node.js integration is now available in J2V8 (version 4.4.0). You can use it on Windows (32 and 64 bit), MacOS and Linux (64 bit). Use the following pom dependency to get it from Maven Central (this example is for Windows 64 bit — Change the OS / Arch for other platforms).
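A dependency along these lines should work (the artifact id encodes the OS and architecture; the coordinates shown here are for the Windows 64-bit build and should be verified against Maven Central):

```xml
<dependency>
    <groupId>com.eclipsesource.j2v8</groupId>
    <artifactId>j2v8_win32_x86_64</artifactId>
    <version>4.4.0</version>
</dependency>
```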


If you find this useful, please let me know. You can find me on Twitter @irbull, or give the GitHub repo a star!




Now Available: The Eclipse C++ IDE for Arduino

by Doug Schaefer at July 18, 2016 04:34 PM

Back in October, I released the preview edition of the Arduino C++ IDE and the response has been fantastic. I had something like 50 bug reports and lots of questions on every forum imaginable. That great feedback gave me a lot of incentive to fix those bugs and get a release out based on the work we’ve done in CDT for the Eclipse Neon release. And that is now done and available on the Eclipse Marketplace.

What’s new in this release? Well, a name change for one. I wanted to highlight that this is an Eclipse CDT project effort, not necessarily an Arduino one, so I’ve renamed it to the “Eclipse C++ IDE for Arduino.” This fits in with our strategy moving forward of providing more vertical stack support for different platforms. Expect another marketplace entry for the Eclipse C++ IDE for Qt in the next release or two, for example.

But what matters to users is usability, of course. The main new feature in this release is the Arduino Download Manager available in the Help menu. It provides a dialog that guides you through download and install of Arduino Platforms and Libraries. The metadata provided by the Arduino community has been hugely beneficial in letting me build Arduino support into CDT in such a way that new boards and libraries can easily be added. And this new dialog is your gateway into that community.

[Screenshot: the Arduino Download Manager dialog]

I’ve also done a video as an introduction. It’s only 11 minutes, but it walks you through everything from installation to having an Arduino sketch running on your board and printing to the Serial Console.

As always, I love to hear from users either through forums or bug reports, especially bug reports. I have things set up to get fixes to users quickly through the IDE’s own p2 update site. Always try Help -> Check for Updates to get the latest.


An overview on the evolution of VIATRA

by Ábel Hegedüs at July 18, 2016 01:48 PM

An open-access article, entitled 'Road to a reactive and incremental model transformation platform: three generations of the VIATRA framework', has been published in the latest issue of Software and Systems Modeling. It was written by Dániel Varró, Gábor Bergmann, Ábel Hegedüs, Ákos Horváth, István Ráth and Zoltán Ujhelyi, major contributors and co-leads of VIATRA.

The paper summarizes the history of the VIATRA model transformation framework by highlighting key features and illustrating main transformation concepts along an open case study influenced by an industrial project.

The same issue includes another VIATRA related paper, entitled 'Query-driven soft traceability links for models', that discusses the application of model queries for robust traceability between fragmented model artifacts.


NatTable: messed up scrolling

by Stefan Winkler at July 18, 2016 07:24 AM

Today was not the first time I made a common mistake with NatTable layers. And since it always takes me a few minutes to identify the problem, I'll post it here (as a note to myself, and maybe because it is helpful for someone else ...).

The symptom is that when scrolling in a NatTable, it is not (or not only) the NatTable itself which scrolls; instead, each cell seems to be dislocated within itself, leading to this:

NatTable messed up scrolling

The problem lies in my misinterpretation of the constructor JavaDoc of ColumnHeaderLayer (or RowHeaderLayer), which states for the second argument:

horizontalLayerDependency – The layer to link the horizontal dimension to, typically the body layer

It turns out that I usually confuse the body data layer with the body layer. For my typical tables, the main part of the table is composed of the body data layer, the selection layer, and the viewport layer on top.

The image shown above is usually the result of passing the body data layer as the horizontalLayerDependency parameter instead of the viewport layer. The viewport layer is the correct choice because, as the topmost layer of the body layer stack, it plays the role of the body layer in the sense of the ColumnHeaderLayer constructor's horizontal layer dependency.

So, should you ever encounter the above symptom, check your ColumnHeaderLayer and RowHeaderLayer constructor for the correct layer arguments.
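As a sketch, the correct wiring looks roughly like this (the data providers and variable names are illustrative; the three-argument ColumnHeaderLayer constructor is the usual NatTable idiom):

```java
// Body layer stack: data layer at the bottom, viewport on top.
DataLayer bodyDataLayer = new DataLayer(bodyDataProvider);
SelectionLayer selectionLayer = new SelectionLayer(bodyDataLayer);
ViewportLayer viewportLayer = new ViewportLayer(selectionLayer);

DataLayer columnHeaderDataLayer = new DataLayer(columnHeaderDataProvider);
// Correct: link the header to the viewport layer (top of the body stack) ...
ColumnHeaderLayer columnHeaderLayer =
    new ColumnHeaderLayer(columnHeaderDataLayer, viewportLayer, selectionLayer);
// ... and NOT to the body data layer, which causes the messed-up scrolling:
// new ColumnHeaderLayer(columnHeaderDataLayer, bodyDataLayer, selectionLayer);
```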




P2, Maven, and Gradle

by @nedtwigg Ned Twigg at July 15, 2016 09:30 PM


If you work with eclipse, then you know all about p2. It's an ambitious piece of software with broad scope, and it gives the eclipse ecosystem several unique features (bundle pools!). The downside of its ambition and scope is its complexity - it can be daunting for newcomers to use. Based on google search traffic, let's see how many people are searching the term p2 repository:

It looks like, for now, p2 has reached its full audience. To put the y-axis into perspective, let's compare the number of people searching for p2 repository with the number of people searching for maven repository.

This tells us two things:

  1. If you publish software in a p2 repository, most of the java world doesn't know how to get it.
  2. Unless some event happens, that's not going to change - the trends, such as they exist currently, are not in p2's favor.

The tragedy of this is that the eclipse ecosystem has lots of valuable bits to offer the broader community, and the broader community has lots of valuable contributions that aren't happening because they just can't get our stuff in the first place.

If we were looking for a strategy to get eclipse and p2 into the hands of more users and potential contributors, where could we go?

There are still way more Maven users than Gradle users - the p2 and maven results are reduced by having "repository" on the end. But Gradle is on a monstrous growth trajectory, and it already sees millions of downloads per month.

So, in an attempt to put Eclipse, p2, and its associated ecosystem onto the Gradle rocketship, I'm proud to present Goomph. Goomph is a gradle plugin which can do two things:

1) Put a little snippet inside your build.gradle file, and it will provision your IDE as a disposable build artifact, using techniques stolen from Oomph.

apply plugin: 'com.diffplug.gradle.oomph.ide'
oomphIde {
	jdt {}
	eclipseIni {
		vmargs('-Xmx2g')    // IDE can have up to 2 gigs of RAM
	}
	style {
		classicTheme()  // oldschool cool
		niceText()      // with nice fonts and visible whitespace
	}
}

2) It can download artifacts from a p2 repository, run PDE build, run eclipse ant tasks, and handle all kinds of eclipse build system miscellany.

If you're curious about these claims, you can quickly find out more by cloning the Gradle and Eclipse RCP demo project. Run gradlew ide and you'll have a working IDE with a targetplatform ready to go. Run gradlew assemble.all and you'll have native launchers for win/mac/linux.

If you'd like to know more about Gradle and p2, here's a youtube video of a talk I presented at Gradle Summit 2016.

Future blog posts will dive deeper into these topics. If you'd like to be notified, you can follow the project's development channels.



News about DiffPlug's open source eclipse projects

by @nedtwigg Ned Twigg at July 15, 2016 09:23 PM


News about DiffPlug's open source eclipse projects.



Introducing the Query by Example addon of VIATRA

by Gábor Bergmann at July 15, 2016 01:09 PM

We present an illustrated introduction to Query By Example (QBE), an exciting new addon of VIATRA. QBE is a tool that helps you write queries simply by selecting model elements in your favorite editor. This automatic feature is intended to help users who are learning the VIATRA Query Language and/or are unfamiliar with the internal structure of the modeling language (metamodel) they are working with.

The problem: querying what you do not know

Model queries are used for a multitude of reasons. Often, they are developed by modeling tool authors to implement built-in functionalities of the language or tool, such as well-formedness checking, derived features or declarative views. But sometimes it is not the developer of the modeling language who specifies the query: e.g. users may define queries themselves to enforce company-specific design rules, or 3rd parties may provide transformation plugins that map a model into a different representation.

There is a hidden obstacle here: usually only the language developer has intimate knowledge of the metamodel (the abstract syntax), while others are familiar with the language merely through views presented to users (the concrete syntax). It is, however, the abstract syntax which is necessary for defining queries in the traditional way.

Motivating case study: defining UML design rules

Imagine, for instance, that you are an engineer at a company that creates UML models using Papyrus, and you wish to define model queries in order to implement validation for an in-house design rule that all your UML Sequence Diagrams should adhere to: "Engine objects can invoke UI methods only in a non-blocking way".

The first challenge would be formulating a query that identifies blocking calls between objects on Sequence Diagrams - such a situation would look like this in the Papyrus editor:

what a synchronous call looks like in the concrete syntax


Expressing this query in the .vql syntax would require you to know the names of the relevant EClasses of the UML metamodel and their features.

There are some easier hurdles to jump - the editor palette tells you that the vertical lines are not actually called "objects" but rather Lifelines. You might also understand from the default name offered by Papyrus that the contents of the diagram are actually represented by a model object of type Interaction.

Sometimes, you will find a bit more difficulty in formulating the query. The Papyrus editor palette tells you that the "arrow" thingy representing a blocking method invocation is a "Message Sync", but the actual model object is of the class Message, and the synchronous nature is expressed by its messageSort attribute being set to MessageSort::synchCall.

However, some aspects turn out to be much more difficult to guess. The UML graphical syntax offers no clues that would let you realize that the message does not directly refer to the lifelines, or vice versa. Instead, there are two invisible objects (of type MessageOccurrenceSpecification) at play that represent the sending or receiving of the message by a lifeline:

A schematic representation of a UML model fragment in abstract syntax


Really, this whole thing is a mess. It is quite difficult to understand the abstract structure and come up with the right type names when writing a query, unless you are an expert in the relevant modeling language (the UML standard, in this case).

The solution: Query by Example at a glance

Wouldn't it be much easier if you could just create an example in concrete syntax using a regular model editor, and then instruct the model query framework to "fetch me stuff that looks like this"? You are in luck - this is pretty much what the new Query By Example (QBE) tool lets you do! (Available from the update sites as a VIATRA Addon since v1.3.)

To get started, you have to select a few elements in the model and initiate the QBE process. The QBE tool will perform a model exploration on the given EMF model to find out how the selected anchor elements relate to each other. The Query by Example view will present the results of the model discovery, where you can follow up on the status of the pattern being generated and perform some fine-tuning on it (via the Properties view). The pattern code can be exported either to the clipboard or to a .vql file. After subsequent fine-tuning, the Update button can be used to propagate any changes made to the previously saved .vql file.


(View video with subtitles/CC turned on.)

Case study walk-through: creating your first query by example

Select the two lifelines and the message in the Papyrus editor:

Sequence Diagram, with message and its source and target lifelines selected


Now it is time to press the "Start" button on the Query By Example tool:

Pressing start on the QBE View


A quick glance at the contents of the QBE view (which will be explained in greater detail soon) immediately tells you that the tool has discovered that the three selected model elements are connected via three additional objects - an Interaction and two MessageOccurrenceSpecifications:

QBE View after selecting an example


The QBE tool has also explored all the attribute values of these six objects, but has no way to know which of them are actually relevant to the query. Most attribute values in the example are incidental, such as the name of the lifeline. So you need to go through the list manually and find the one where the messageSort attribute of the Message has the value MessageSort::synchCall (it is quite apparent from the list that none of the other attributes have anything to do with the synchronous nature of the invocation). Then you can simply indicate that it should be added as a condition to the query by selecting it from the list and marking it in the Properties view as included.

Finally, another button on the QBE UI lets you export the pattern to the clipboard, or a .vql file in a Viatra Query project.

Saving the finished query to the clipboard


If you save the generated query to a .vql file, you will notice that it does not compile at first.

A compiler error comes from the fact that the QBE tool cannot (yet) guess the name of the Java package where you save the file. You can fix this by manually specifying your package: select the "package" entry in the QBE view, and use the Properties view to change the name. While you are at it, you may even change the name of the pattern to something meaningful. If you have previously opted to save the generated query to a file in the workspace, you can now overwrite that file with the new content by a single click on the Update button:

Package and pattern names, and the Update button


There is one more likely reason for your query not compiling: the query project might not have the UML types on its classpath. You can fix this easily by adding the metamodel bundle org.eclipse.uml2.uml to your dependencies.
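For an Eclipse plug-in project, that typically means a Require-Bundle entry in the project's MANIFEST.MF along these lines:

```
Require-Bundle: org.eclipse.uml2.uml
```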

The generated query should look something like this (small variations possible), with pattern variables created for anchors and intermediate objects (the former appearing as pattern parameters); reference constraints created for the paths connecting them; and the additional attribute constraint that was manually requested:

package org.example.uml.designrules

import "http://www.eclipse.org/uml2/5.0.0/UML"

pattern blockingCall(
    lifeline0 : Lifeline,
    lifeline1 : Lifeline,
    message0 : Message
) {
    Lifeline.coveredBy(lifeline0, messageoccurrencespecification0);
    Lifeline.coveredBy(lifeline1, messageoccurrencespecification1);
    Interaction.lifeline(interaction0, lifeline1);
    Lifeline.interaction(lifeline0, interaction0);
    Message.sendEvent(message0, messageoccurrencespecification0);
    MessageOccurrenceSpecification.message(messageoccurrencespecification1, message0);
    Interaction.message(interaction0, message0);
    Message.receiveEvent(message0, messageoccurrencespecification1);
    MessageOccurrenceSpecification.message(messageoccurrencespecification0, message0);

    Message.messageSort(message0, MessageSort::synchCall);
}

You can now load the query into Query Explorer (or the new Query Results view) to verify that it does the right thing - i.e. it matches exactly those elements that you want it to match. If it does not, you can use the QBE UI to make adjustments, and fine-tune the query (e.g. adding or removing additional constraints, see below) to meet your goals.

How it works under the hood

When you select some elements in an editor or viewer, and press the "Start" button of QBE, the tool needs to recognize the selection as a set of EObjects. VIATRA ships with integration components for several popular editor frameworks (in particular, it works out of the box with Papyrus, which is GMF-based), but you might need to contribute a model connector plug-in in order to be able to use QBE with a custom editor.

The model discovery will start separately from each selected EObject (anchor element), and will traverse EMF reference links up to a given exploration depth limit, in order to collect all paths (not longer than the given depth) connecting two anchors.

Initially, the tool automatically selects the smallest exploration depth that makes all anchors connected by the paths discovered. You can use the Exploration depth slider in the QBE view to manually increase this depth limit, so that the tool notices additional, less direct connections between the selected anchors. Changing the exploration depth will re-trigger model discovery, so that the tool can gather new paths.

After model discovery, a first attempt at a pattern will be formed by all paths that were found to connect the anchors. Pattern variables will consist of anchor points (given by the user as part of the selection) as well as the additional objects discovered as intermediate points along the paths. By default, only variables corresponding to anchors will be used as pattern parameters, while the rest will be local variables. References traversed along paths will contribute edge constraints to the pattern body.   
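The discovery step above can be sketched as a bounded depth-first search. The following toy illustration of the idea (plain strings for objects, an adjacency map for references) is my own sketch, not VIATRA's actual implementation:

```java
import java.util.*;

public class AnchorExploration {

    // All simple paths (at most maxDepth edges) leading from one anchor to another.
    static List<List<String>> connectingPaths(Map<String, List<String>> refs,
                                              Set<String> anchors, int maxDepth) {
        List<List<String>> result = new ArrayList<>();
        for (String anchor : anchors) {
            Deque<String> path = new ArrayDeque<>();
            path.addLast(anchor);
            explore(refs, anchors, anchor, maxDepth, path, result);
        }
        return result;
    }

    private static void explore(Map<String, List<String>> refs, Set<String> anchors,
                                String current, int depthLeft,
                                Deque<String> path, List<List<String>> result) {
        if (path.size() > 1 && anchors.contains(current)) {
            result.add(new ArrayList<>(path)); // reached another anchor: record the path
            return;
        }
        if (depthLeft == 0) {
            return; // depth limit exhausted
        }
        for (String next : refs.getOrDefault(current, List.of())) {
            if (path.contains(next)) {
                continue; // keep paths simple (no cycles)
            }
            path.addLast(next);
            explore(refs, anchors, next, depthLeft - 1, path, result);
            path.removeLast();
        }
    }

    public static void main(String[] args) {
        // Toy version of the UML example: the Message reaches each Lifeline
        // only through an invisible MessageOccurrenceSpecification.
        Map<String, List<String>> refs = Map.of(
                "message", List.of("mos0", "mos1"),
                "mos0", List.of("lifeline0"),
                "mos1", List.of("lifeline1"));
        Set<String> anchors = Set.of("message", "lifeline0", "lifeline1");
        // Depth 1 finds nothing; depth 2 uncovers both message->mos->lifeline paths.
        System.out.println(connectingPaths(refs, anchors, 1).size()); // prints 0
        System.out.println(connectingPaths(refs, anchors, 2).size()); // prints 2
    }
}
```

In this toy model, 2 is exactly the smallest depth that connects all anchors, mirroring how the tool picks the initial exploration depth.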

Fine tuning

Sometimes, the QBE tool will not immediately stumble upon the query that you are interested in. You can identify problems with the proposed pattern by directly inspecting the generated query code, or by examining query results on a test model. In these cases, there are still ways you can fine-tune the pattern through the QBE (and Properties) view, to make sure the generated query is useful.

For instance, recall how we originally designated the two Lifelines and the Message as part of the example. Had you selected the two Lifelines only, QBE would have found them connected by a path of length 2 - as they are both lifelines in the same Interaction. The fact that they exchange a Message turns out to be a less direct relationship between the two anchors. In order to arrive at the correct query, you have to manually increase the exploration depth to 4, thereby forcing QBE to search for connections between anchors in a wider context.

Depending on other particularities of the model you use as an example, this wider context may include connections between the Lifelines that are incidental in the example model, and not essential parts of the query. In this case, it is still the responsibility of the query developer to determine which details are relevant; the QBE view allows you to mark connecting paths as well as intermediate objects as excluded from the result. In the previous case study, having a common Interaction was not actually necessary as part of the query - but it is not important to remove it now, as it will not influence the results.

Additional fine-tuning options include:

  • Promoting intermediate objects found during the discovery to act as pattern parameters (along with the anchors).
  • Renaming pattern variables.
  • Adding attribute constraints based on the attribute values of the discovered objects (as demonstrated before).
  • Similarly, adding negative application conditions. By pressing the 'Find negative constraints' button, the tool will search for references between pairs of variables (anchors or intermediate objects) that are permitted in the metamodel, but not present in the example instance model. The absence of such references will be offered as additional, opt-in 'neg find' constraints that can be individually selected to be included in the query.

The rest of the case study and conclusion

You might have noticed that the above case study is far from complete. We have only managed to identify the relationship between the sender and receiver Lifelines of a blocking call; we still need to

  1. express using QBE that a given Lifeline represents an instance of a certain Class
  2. express using QBE that a certain Class resides in a Package with the name "Engine" or "GUI"
  3. write a final query that uses pattern composition to combine the previously created QBE queries in order to match a violation of the constraint in question (i.e. "an Engine object invoking a method on a UI class in a blocking way").

Here is what the Class diagram looks like:

Snippet of Class diagram in the concrete syntax


For the first task, one only needs to select a Lifeline and its associated Class. Whoops - the Papyrus UI becomes an obstacle here, as there seems to be no way to simultaneously display these two elements. Fortunately, we can just select one of the two elements as a single anchor, switch over the view to the other element, and then tell the QBE tool to expand the previous selection with the new element by selecting the "Expand" alternative action available from the little drop-down arrow at the "Start" button.

The second task requires familiar steps only: selecting a Class and its Package as the two anchors, and then adding the name of the UML package as an attribute constraint. Note that in a more complex real-world example, the query would probably include a more complex condition (such as a regular expression) on the name of the package. As of now, QBE can only generate constraints for exact attribute value filtering; if you need anything more advanced than that, you will have to consider the query generated by QBE merely as a starting point that you have to modify manually.

The third task is best solved by simply manually writing a query that composes the previously obtained patterns, and not by applying QBE, so it is left as an exercise to the reader :) The main purpose of QBE is to help the user discover connections in a model with an unfamiliar abstract syntax; it will not replace regular query engineering in the generic case.

To put it simply, you should think of Query By Example as a tool for abstracting away the ugly, unknown details of a modeling language before developing more complex queries for that language as usual.


ZeroTurnaround Releases RebelLabs Developer Productivity Report

by Alex Blewitt at July 14, 2016 08:33 AM

Today, ZeroTurnaround's RebelLabs released their biannual developer productivity report, which asked over 2000 respondents what their tools of the trade were. InfoQ has been given access to the report and summarises its findings.

