Hackathon Q2 2016 – Hamburg

by eselmeister at July 22, 2016 08:28 AM

Here’s a picture from our Hackathon last evening:



Please subscribe to the mailing list if you'd like to be informed about upcoming Hackathon events:


From Xcore/ecore to OmniGraffle

by Jens v.P. (noreply@blogger.com) at July 21, 2016 12:53 PM

Some years ago I wrote a small tool for creating OmniGraffle UML diagrams directly from Java source code. Visualizing Java is nice, but since I often use ecore/Xcore to define my models, I wanted a tool to also nicely visualize EMF-based models.

I have now extended my tool, j2og, to also create UML class diagrams from ecore or Xcore models. Below you see a (manually laid-out) version of an automatically generated diagram of the ecore library example.

j2og does not lay out the diagram, since OmniGraffle provides some nice layout algorithms anyway. When creating the diagram, you can tweak the output with several settings. For example:

  1. show or hide attribute and operation compartments
  2. show context, optionally grayed out -- the context are classifiers defined in an external package
  3. show package names, omit common package prefixes etc.
  4. and more

Note that besides OmniGraffle, you can open the diagrams with other tools (Diagrammix, Lucidchart) as well. See the j2og github page for details. You can install the tool via update site or Eclipse marketplace link.

The following image (click to enlarge) is the result of exporting a large Xcore model defining the AST of N4JS, a statically typed version of JavaScript. I have exported it and applied the hierarchy layout algorithm -- no other manual tweaks were applied. Of course, this diagram is probably too large to be really usable, but it is a great start for documenting (parts of) the model. Well, in the case of an AST you probably prefer using an EBNF grammar ;-)

PS: Of course you could use ecoretools to create a UML diagram. I usually need the diagrams for documentation purposes, and in that case OmniGraffle is simply so much better, since it is easier to use and the diagrams look so much nicer (sorry, ecoretools).


Eclipse Newsletter - Neon Lights Everywhere

July 21, 2016 12:05 PM

Read great articles about Cloud Foundry and Docker Tooling, Buildship, and Automated Error Reporting.


Oomph 01: A look at the eclipse installer

by Christian Pontesegger (noreply@blogger.com) at July 21, 2016 11:30 AM

This will be the start of a new series of posts on Oomph. Oomph is the basis for the eclipse installer, but with the right configuration it can do much more:
  • serve your own RCP applications
  • provide fully configured development environments
  • store and distribute your settings over multiple installations
to name a few. This first part will have a look at the installer itself. Further posts of this series will focus on custom setups and how we can leverage the power of Oomph.

Oomph Tutorials

For a list of all Oomph related tutorials see my Oomph Tutorials Overview.

Step 1: The Eclipse Installer

It all starts with the Eclipse Installer, a tool we will need throughout this series. Download and unzip it. The installer is an eclipse application itself, so it provides the familiar folder structure with plugins, features, an ini file and one executable. As we will need the installer continuously, find a safe home for it.

After startup you will be in Simple Mode, something we will not cover here. Use the configuration icon in the top right corner to switch to Advanced Mode. The first thing we are presented with is a catalog of products to install.
The top right menu bar allows us to add our own catalogs and to select which catalogs are displayed. For now we will ignore these settings; they will be covered in a separate tutorial. After selecting a product, the bottom section allows us to select from different product versions, 32/64 bit, the Java runtime to use and whether we want to use bundle pools.

Bundle Pools

A bundle pool is a place that stores - among some other things - plugins and features. Basically everything that a typical eclipse application would host in its plugins/features folders. Further it may host the content of target platforms.

Using a shared bundle pool saves you a lot of redundant downloads from eclipse servers and can provide offline capabilities: for everything available in the bundle pool you no longer require an internet connection. A nice feature if you are sitting behind a company firewall. While it is not required to use them, bundle pools save you a lot of time and are safe and convenient to use. At first I was quite hesitant to split my installations and move stuff to bundle pools, but after giving it a try I do not want to go back anymore.
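Conceptually, a bundle pool behaves like a shared on-disk artifact cache keyed by bundle id and version: every installation asks the pool first and only downloads on a miss. This is a minimal illustrative sketch, not Oomph's actual implementation (class and method names here are invented for the illustration):

```java
import java.util.*;

/** Conceptual sketch of a bundle pool as a shared artifact cache. */
public class BundlePoolSketch {
    private final Map<String, String> pool = new HashMap<>(); // key -> on-disk location
    private int downloads = 0;

    /** Resolve an artifact: hit the pool first, download only on a miss. */
    public String resolve(String id, String version) {
        String key = id + "_" + version;
        return pool.computeIfAbsent(key, this::download);
    }

    private String download(String key) {
        downloads++; // network access happens only here
        return "/bundlePool/plugins/" + key + ".jar";
    }

    public int downloadCount() {
        return downloads;
    }

    public static void main(String[] args) {
        BundlePoolSketch pool = new BundlePoolSketch();
        // Two installations requesting the same bundle share one download.
        pool.resolve("org.eclipse.core.runtime", "3.12.0");
        pool.resolve("org.eclipse.core.runtime", "3.12.0");
        System.out.println(pool.downloadCount()); // prints 1
    }
}
```

The second `resolve` call is served from the pool, which is exactly why shared pools cut download time across installations.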

To have some control over the used bundle pools, click on the icon next to the location and set up a New Agent... in a dedicated location. Further eclipse installations will use this pool, so do not alter the directory content manually. The Bundle Pool Management view allows you to analyze, clean up and even repair the directory content.
Step 2: Project Selection

The 2nd page of the installer presents eclipse projects we want to add to our installation. Selecting a project typically triggers actions after the plain eclipse installation:
  • automatically checkout projects
  • import into the workspace
  • set the target platform
  • apply preference settings
  • setup Mylyn
  • install additional components
The goal is that you get everything pre-configured to start coding on the selected projects.

Step 3: Installer Variables

Installations need some user input: the install location, repository checkout preferences, credentials and more. All these accumulated variables are presented on the next page of the installer. By default the installer creates three folders below the Root install folder:
  • eclipse
    to host the eclipse binary and configuration. If you use bundle pools, plugins and features go to the pool; otherwise they will be located here.
  • ws
    the default workspace for this installation
  • git
    the default folder for git checkouts
You may go with these defaults or change them to your needs. While developing a setup (which we will start in an upcoming tutorial) I would recommend using these settings. For a final installation I prefer to host my workspace elsewhere.

Oomph stores all your settings in a global profile, so the next time you install something it will reuse your previously entered values. You may always revisit your choices by enabling Show all variables in the bottom left corner.

The last page finally allows you to enable offline mode and to enable/disable download mirrors. In the next tutorial we will have a closer look at setup tasks and where these settings get persisted.

Optional: Preferences

The icons at the bottom allow you to set two kinds of important preferences: proxy and SSH settings. If you are behind a proxy, activate those settings and they will automatically be copied to any installation done by Oomph.

SSH might be needed for git checkouts depending on your repository choices. If you do not use the default SSH settings you might need to wait for Neon.1 to have these settings applied (see bug 497057).


A new interpreter for EASE (5): Support for script keywords

by Christian Pontesegger (noreply@blogger.com) at July 20, 2016 10:22 AM

EASE scripts registered in the preferences support a very cool feature: keywords in script headers. While this does not sound extremely awesome, it allows binding scripts to the UI and will enable more fancy stuff in the near future. Today we will add support for keyword detection in registered BeanShell scripts.
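For orientation, such a header is just a block of comments at the top of the script. The keyword names shown below (name, toolbar, description) follow EASE's script-registration tutorials; treat them as an illustrative sketch and check the EASE documentation for the exact set supported by your version:

```java
// name        : Say Hello
// toolbar     : Project Explorer
// description : Adds a sample button to the Project Explorer toolbar.
print("Hello from BeanShell");
```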

Read all tutorials from this series.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online. 

Step 1: Provide a code parser

Code parser is a big word. Currently all we need to detect in given script code are comments. As there already exists a corresponding base class, all we need to do is to provide a derived class indicating comment tokens:
package org.eclipse.ease.lang.beanshell;

import org.eclipse.ease.AbstractCodeParser;

public class BeanShellCodeParser extends AbstractCodeParser {

	@Override
	protected boolean hasBlockComment() {
		return true;
	}

	@Override
	protected String getBlockCommentEndToken() {
		return "*/";
	}

	@Override
	protected String getBlockCommentStartToken() {
		return "/*";
	}

	@Override
	protected String getLineCommentToken() {
		return "//";
	}
}

Step 2: Register the code parser

Similar to registering the code factory, we also need to register the code parser. Open the plugin.xml, select the scriptType extension for BeanShell, and register the code parser from above there. Now EASE is able to parse script headers for keywords and interpret them accordingly.


Running Node.js on the JVM

by Ian Bull at July 20, 2016 06:47 AM

Gone are the days of single vendor lock-in, where one technology stack is used across an entire organization. Even small organizations and hobbyists will find themselves mixing technologies in a single project. For years, Java reigned king on the server. Today Node.js is everywhere.


But even with the rise of Node.js and popularity of JavaScript, Java continues to shine. Furthermore, few organizations can afford to migrate their entire platform from the JVM to Node.js. This means organizations must either continue with their current technology stack or run multiple stacks with networked APIs to communicate.

Another option is to run Node.js and the JVM in a single process, and J2V8 finally makes this possible.



J2V8 is a set of V8 bindings for Java. J2V8 bundles V8 as a dynamic library and provides a Java API for the engine through the Java Native Interface (JNI). With J2V8 you can execute JavaScript using V8 in a similar fashion to how you would using Rhino or Nashorn.
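As a quick taste, executing a script via J2V8's public API looks like this (a minimal fragment; running it requires the j2v8 native library on the classpath):

```java
// Execute a JavaScript expression on V8 from Java, Rhino/Nashorn-style.
V8 runtime = V8.createV8Runtime();
int result = runtime.executeIntegerScript("7 * 6");  // result == 42
runtime.release();
```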

Over the past few months I’ve managed to build Node.js as a dynamic library and provide a Java API for it as well. Now you can execute Node scripts directly from Java. Unlike other approaches which try to implement Node.js using other JavaScript engines, this is true Node.js — bug for bug, feature for feature. Node.js runs in the same process as the JVM and all communication is done synchronously through JNI.

Combining Node and the JVM

J2V8 provides an API for executing Node.js scripts, registering Java callbacks, calling JavaScript functions, requiring NPM modules and running the Node.js message loop. The Node.js core modules have also been compiled in.

Running Node.js on the JVM provides an easy migration path for anyone with a large Java stack who wishes to start using Node.js. For example, you could run a Node.js server (such as Express.js) and call existing Java methods to handle certain requests.

static String NODE_SCRIPT = "var http = require('http');\n"
  + ""
  + "var server = http.createServer(function (request, response) {\n"
  + " response.writeHead(200, {'Content-Type': 'text/plain'});\n"
  + " response.end(someJavaMethod());\n"
  + "});\n"
  + ""
  + "server.listen(8000);\n"
  + "console.log('Server running at');";
public static void main(String[] args) throws IOException {
  final NodeJS nodeJS = NodeJS.createNodeJS();
  JavaCallback callback = new JavaCallback() {
    public Object invoke(V8Object receiver, V8Array parameters) {
      return "Hello, JavaWorld!";
    }
  };
  nodeJS.getRuntime().registerJavaMethod(callback, "someJavaMethod");
  // Helper (not shown) that writes NODE_SCRIPT to a temporary file.
  File nodeScript = createTemporaryScriptFile(NODE_SCRIPT, "example");
  nodeJS.exec(nodeScript);
  while (nodeJS.isRunning()) {
    nodeJS.handleMessage();
  }
  nodeJS.release();
}


In addition to calling existing Java methods from Node.js, J2V8 provides the ability to call JavaScript functions (and by extension, NPM modules) directly from Java. With this integration, Java users can now start using NPM modules directly on the JVM. For example, you could use the jimp image processing library from Java.

public static void main(String[] args) {
  final NodeJS nodeJS = NodeJS.createNodeJS();
  final V8Object jimp = nodeJS.require(new File("path_to_jimp_module"));
  V8Function callback = new V8Function(nodeJS.getRuntime(), new JavaCallback() {
    public Object invoke(V8Object receiver, V8Array parameters) {
      final V8Object image = parameters.getObject(1);
      executeJSFunction(image, "posterize", 7);
      executeJSFunction(image, "greyscale");
      executeJSFunction(image, "write", "path_to_output");
      image.release();
      return null;
    }
  });
  executeJSFunction(jimp, "read", "path_to_image", callback);
  while (nodeJS.isRunning()) {
    nodeJS.handleMessage();
  }
  callback.release();
  jimp.release();
  nodeJS.release();
}

Getting J2V8

Node.js integration is now available in J2V8 (version 4.4.0). You can use it on Windows (32 and 64 bit), MacOS and Linux (64 bit). Use the following pom dependency to get it from Maven Central (this example is for Windows 64 bit; change the OS/arch for other platforms).
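A minimal dependency block; the coordinates below (group com.eclipsesource.j2v8, artifact j2v8_win32_x86_64) are assumed from Maven Central and should be verified for your platform:

```xml
<dependency>
    <groupId>com.eclipsesource.j2v8</groupId>
    <artifactId>j2v8_win32_x86_64</artifactId>
    <version>4.4.0</version>
</dependency>
```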


If you find this useful, please let me know. You can find me on Twitter @ irbull, or give the GitHub repo a star!




Now Available: The Eclipse C++ IDE for Arduino

by Doug Schaefer at July 18, 2016 04:34 PM

Back in October, I released the preview edition of the Arduino C++ IDE and the response has been fantastic. I had something like 50 bug reports and lots of questions on every forum imaginable. That great feedback gave me a lot of incentive to fix those bugs and get a release out based on the work we’ve done in CDT for the Eclipse Neon release. And that is now done and available on the Eclipse Marketplace.

What’s new in this release? Well, a name change for one. I wanted to highlight that this is an Eclipse CDT project effort, not necessarily an Arduino one, so I’ve renamed it to the “Eclipse C++ IDE for Arduino.” This fits in with our strategy moving forward of providing more vertical stack support for different platforms. Expect another marketplace entry for the Eclipse C++ IDE for Qt in the next release or two, for example.

But what matters to users is usability, of course. The main new feature in this release is the Arduino Download Manager available in the Help menu. It provides a dialog that guides you through download and install of Arduino Platforms and Libraries. The metadata provided by the Arduino community has been hugely beneficial in letting me build Arduino support into CDT in such a way that new boards and libraries can easily be added. And this new dialog is your gateway into that community.


I’ve also done a video as an introduction. It’s only 11 minutes but walks you through installation to having an Arduino sketch running on your board outputting to the Serial Console.

As always, I love to hear from users either through forums or bug reports, especially bug reports. I have things set up to quickly get fixes to users through its own p2 update site. Always try Help -> Check for Updates to get the latest.


An overview on the evolution of VIATRA

by Ábel Hegedüs at July 18, 2016 01:48 PM

An open-access article, entitled 'Road to a reactive and incremental model transformation platform: three generations of the VIATRA framework' has been published in the latest issue of Software and Systems Modeling written by Dániel Varró, Gábor Bergmann, Ábel Hegedüs, Ákos Horváth, István Ráth and Zoltán Ujhelyi, major contributors and co-leads of VIATRA.

The paper summarizes the history of the VIATRA model transformation framework by highlighting key features and illustrating main transformation concepts along an open case study influenced by an industrial project.

The same issue includes another VIATRA related paper, entitled 'Query-driven soft traceability links for models', that discusses the application of model queries for robust traceability between fragmented model artifacts.


NatTable: messed up scrolling

by Stefan Winkler (stefan@winklerweb.net) at July 18, 2016 07:24 AM

Today was not the first time I made a common mistake with NatTable layers. And since it always takes a few minutes until I identify the problem, I'll post it here (as a note to myself, and maybe because it is helpful for someone else ...).

The symptom is that when scrolling in a NatTable, it is not (or not only) the NatTable which is scrolling; instead, each cell seems to be dislocated in itself, leading to this:

NatTable messed up scrolling

The problem lies in my misinterpretation of the constructor JavaDoc of ColumnHeaderLayer (or RowHeaderLayer), which states for the second argument:

horizontalLayerDependency – The layer to link the horizontal dimension to, typically the body layer

It turns out that I usually confuse the body data layer with the body layer. For my typical tables, the main part of the table is composed of the body data layer, the selection layer, and the viewport layer on top.

The image shown above is usually the result of passing the body data layer as the horizontalLayerDependency parameter instead of the viewport layer. The viewport layer is the correct choice because, as the topmost layer of the body layer stack, it plays the role of the body layer in the sense of the ColumnHeaderLayer constructor's horizontal layer dependency.
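To make that concrete, here is a minimal sketch of a typical layer composition (class and constructor names as in NatTable core; the two data providers are assumed to exist in your code):

```java
// Body layer stack: data layer -> selection layer -> viewport layer (top).
DataLayer bodyDataLayer = new DataLayer(bodyDataProvider);
SelectionLayer selectionLayer = new SelectionLayer(bodyDataLayer);
ViewportLayer viewportLayer = new ViewportLayer(selectionLayer);

// Correct: the horizontal dependency is the viewport layer (the "body layer"),
// NOT the body data layer.
DataLayer columnHeaderDataLayer = new DataLayer(columnHeaderDataProvider);
ColumnHeaderLayer columnHeaderLayer =
        new ColumnHeaderLayer(columnHeaderDataLayer, viewportLayer, selectionLayer);
```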

So, should you ever encounter the above symptom, check your ColumnHeaderLayer and RowHeaderLayer constructor for the correct layer arguments.




P2, Maven, and Gradle

by @nedtwigg Ned Twigg at July 15, 2016 09:30 PM


If you work with eclipse, then you know all about p2. It's an ambitious piece of software with broad scope, and it gives the eclipse ecosystem several unique features (bundle pools!). The downside of its ambition and scope is its complexity: it can be daunting for newcomers to use. Based on google search traffic, let's see how many people are searching for the term p2 repository:

It looks like, for now, p2 has reached its full audience. To put the y-axis into perspective, let's compare the number of people searching for p2 repository with the number of people searching for maven repository.

This tells us two things:

  1. If you publish software in a p2 repository, most of the java world doesn't know how to get it.
  2. Unless some event intervenes, that's not going to change - the trends, such as they are, are not in p2's favor.

The tragedy of this is that the eclipse ecosystem has lots of valuable bits to offer the broader community, and the broader community has lots of valuable contributions that aren't happening because they just can't get our stuff in the first place.

If we were looking for a strategy to get eclipse and p2 into the hands of more users and potential contributors, where could we go?

There are still way more Maven users than Gradle users - the p2 and maven results are reduced by having "repository" on the end. But Gradle is on a monstrous growth trajectory, and it already sees millions of downloads per month.

So, in an attempt to put Eclipse, p2, and its associated ecosystem onto the Gradle rocketship, I'm proud to present Goomph. Goomph is a gradle plugin which can do two things:

1) Put a little snippet inside your build.gradle file, and it will provision your IDE as a disposable build artifact, using techniques stolen from Oomph.

apply plugin: 'com.diffplug.gradle.oomph.ide'
oomphIde {
	jdt {}
	eclipseIni {
		vmargs('-Xmx2g')    // IDE can have up to 2 gigs of RAM
	}
	style {
		classicTheme()  // oldschool cool
		niceText()      // with nice fonts and visible whitespace
	}
}

2) It can download artifacts from a p2 repository, run PDE build, run eclipse ant tasks, and all kinds of eclipse build system miscellany.

If you're curious about these claims, you can quickly find out more by cloning the Gradle and Eclipse RCP demo project. Run gradlew ide and you'll have a working IDE with a target platform ready to go. Run gradlew assemble.all and you'll have native launchers for win/mac/linux.

If you'd like to know more about Gradle and p2, here's a youtube video of a talk I presented at Gradle Summit 2016.

Future blog posts will dive deeper into these topics. If you'd like to be notified, you can follow its development on any of these channels:



News about DiffPlug's open source eclipse projects

by @nedtwigg Ned Twigg at July 15, 2016 09:23 PM


News about DiffPlug's open source eclipse projects.



Introducing the Query by Example addon of VIATRA

by Gábor Bergmann at July 15, 2016 01:09 PM

We present an illustrated introduction to Query By Example (QBE), an exciting new addon of VIATRA. QBE is a tool that helps you write queries simply by selecting model elements in your favorite editor. This automatic feature is intended to help users who are learning the VIATRA Query Language and/or are unfamiliar with the internal structure of the modeling language (metamodel) they are working with.

The problem: querying what you do not know

Model queries are used for a multitude of reasons. Often, they are developed by modeling tool authors to accomplish built-in functionalities of the language or tool, such as well-formedness checking, derived features or declarative views. But sometimes it is not the developer of the modeling language who specifies the query: e.g. users may define queries themselves to enforce company-specific design rules, or 3rd parties may provide transformation plugins that map a model into a different representation.

There is a hidden obstacle here: usually only the language developer has intimate knowledge of the metamodel (the abstract syntax), while others are familiar with the language merely through views presented to users (the concrete syntax). It is, however, the abstract syntax which is necessary for defining queries in the traditional way.

Motivating case study: defining UML design rules

Imagine, for instance, that you are an engineer at a company that creates UML models using Papyrus, and you wish to define model queries in order to implement validation for an in-house design rule that all your UML Sequence Diagrams should adhere to: "Engine objects can invoke UI methods only in a non-blocking way".

The first challenge would be formulating a query that identifies blocking calls between objects on Sequence Diagrams - such a situation would look like this in the Papyrus editor:

what a synchronous call looks like in the concrete syntax

Expressing this query in the .vql syntax would require you to know the names of the relevant EClasses of the UML metamodel and their features.

There are some easier hurdles to jump - the editor palette tells you that the vertical lines are not actually called "objects" but rather Lifelines. You might also understand from the default name offered by Papyrus that the contents of the diagram are actually represented by a model object of type Interaction.

Sometimes, you will find a bit more difficulty in formulating the query. The Papyrus editor palette tells you that the "arrow" thingy representing a blocking method invocation is a "Message Sync", but the actual model object is of the class Message, and the synchronous nature is expressed by its messageSort attribute being set to MessageSort::synchCall.

However, some aspects turn out to be much more difficult to guess. The UML graphical syntax offers no clues that would let you realize that the message does not directly refer to the lifelines, or vice versa. Instead, there are two invisible objects (of type MessageOccurrenceSpecification) at play that represent the sending and receiving of the message by a lifeline:

A schematic representation of a UML model fragment in abstract syntax

Really, this whole thing is a mess. It is quite difficult to understand the abstract structure and come up with the right type names when writing a query, unless you are an expert of the relevant modeling language (the UML standard, in this case).

The solution: Query by Example at a glance

Wouldn't it be much easier if you could just create an example in concrete syntax using a regular model editor, and then instruct the model query framework to "fetch me stuff that looks like this"? You are in luck - this is pretty much what the new Query By Example (QBE) tool lets you do! (Available from the update sites as a VIATRA Addon since v1.3.)

To get started, you have to select a few elements in the model and initiate the QBE process. The QBE tool will perform a model exploration on the given EMF model to find out how the selected anchor elements relate to each other. The Query by Example view presents the results of the model discovery; there you can follow up on the status of the pattern being generated and perform some fine-tuning on it (via the Properties view). The pattern code can be exported either to the clipboard or to a .vql file. After subsequent fine-tuning, the Update button can be used to propagate any changes to the previously saved .vql file.


(View video with subtitles/CC turned on.)

Case study walk-through: creating your first query by example

Select the two lifelines and the message in the Papyrus editor:

Sequence Diagram, with message and its source and target lifelines selected

Now it is time to press the "Start" button on the Query By Example tool:

Pressing start on the QBE View

A quick glance on the contents of the QBE view (which will be explained in greater detail soon) immediately tells you that the tool has discovered that the three selected model elements are connected via three additional objects - an Interaction and two MessageOccurrenceSpecifications:

QBE View after selecting an example

The QBE tool has also explored all the attribute values of these six objects, but has no way to know which of them are actually relevant to the query. Most attribute values in the example are incidental, such as the name of the lifeline. So you need to manually go through the list, find the one that has MessageSort::synchCall as the value of the messageSort of the Message (it is quite apparent from the list that none of the other attributes have anything to do with the synchronous nature of the invocation). Then you can simply indicate that it should be added as a condition to the query, by selecting it from the list, and marking it in the Properties view as included.

Finally, another button on the QBE UI lets you export the pattern to the clipboard, or a .vql file in a Viatra Query project.

Saving the finished query to the clipboard

If you save the generated query to a .vql file, you will notice that it does not compile at first.

A compiler error comes from the fact that the QBE tool cannot (yet) guess the name of the Java package where you save the file. You can fix this by manually specifying your package: select the "package" entry in the QBE view, and use the Properties view to change the name. While you are at it, you may even change the name of the pattern to something meaningful. If you have previously opted to save the generated query to a file in the workspace, you can now overwrite that file with the new content by a single click on the Update button:

Package and pattern names, and the Update button

There is one more likely reason for your query not compiling: the query project might not have the UML types on its classpath. You can fix this easily by adding the metamodel bundle org.eclipse.uml2.uml to your dependencies.

The generated query should look something like this (small variations possible), with pattern variables created for anchors and intermediate objects (the former appearing as pattern parameters); reference constraints created for the paths connecting them; and the additional attribute constraint that was manually requested:

package org.example.uml.designrules

import "http://www.eclipse.org/uml2/5.0.0/UML"

pattern blockingCall(
    lifeline0 : Lifeline,
    lifeline1 : Lifeline,
    message0 : Message
) {
    Lifeline.coveredBy(lifeline0, messageoccurrencespecification0);
    Lifeline.coveredBy(lifeline1, messageoccurrencespecification1);
    Interaction.lifeline(interaction0, lifeline1);
    Lifeline.interaction(lifeline0, interaction0);
    Message.sendEvent(message0, messageoccurrencespecification0);
    MessageOccurrenceSpecification.message(messageoccurrencespecification1, message0);
    Interaction.message(interaction0, message0);
    Message.receiveEvent(message0, messageoccurrencespecification1);
    MessageOccurrenceSpecification.message(messageoccurrencespecification0, message0);

    Message.messageSort(message0, MessageSort::synchCall);
}

You can now load the query into Query Explorer (or the new Query Results view) to verify that it does the right thing - i.e. it matches exactly those elements that you want it to match. If it does not, you can use the QBE UI to make adjustments, and fine-tune the query (e.g. adding or removing additional constraints, see below) to meet your goals.

How it works under the hood

When you select some elements in an editor or viewer, and press the "Start" button of QBE, the tool needs to recognize the selection as a set of EObjects. VIATRA ships with integration components for several popular editor frameworks (in particular, it works out of the box with Papyrus, which is GMF-based), but you might need to contribute a model connector plug-in in order to be able to use QBE with a custom editor.

The model discovery will start separately from each selected EObject (anchor element), and will traverse EMF reference links up to a given exploration depth limit, in order to collect all paths (not longer than the given depth) connecting two anchors.

Initially, the tool automatically selects the smallest exploration depth that makes all anchors connected by the paths discovered. You can use the Exploration depth slider in the QBE view to manually increase this depth limit, so that the tool notices additional, less direct connections between the selected anchors. Changing the exploration depth will re-trigger model discovery, so that the tool can gather new paths.

After model discovery, a first attempt at a pattern will be formed by all paths that were found to connect the anchors. Pattern variables will consist of anchor points (given by the user as part of the selection) as well as the additional objects discovered as intermediate points along the paths. By default, only variables corresponding to anchors will be used as pattern parameters, while the rest will be local variables. References traversed along paths will contribute edge constraints to the pattern body.   
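The discovery step described above is essentially a bounded-depth search for all paths connecting the anchors. The following self-contained sketch illustrates the idea on a plain string graph (a hypothetical stand-in for EMF reference traversal; the real tool walks EReferences and handles far more detail):

```java
import java.util.*;

/** Sketch of QBE-style model discovery: collect all paths (up to a depth
 *  limit) that connect two anchor elements in a reference graph. */
public class PathDiscovery {

    /** Returns every cycle-free path from 'from' to 'to' with at most maxDepth edges. */
    public static List<List<String>> pathsBetween(Map<String, List<String>> refs,
                                                  String from, String to, int maxDepth) {
        List<List<String>> result = new ArrayList<>();
        Deque<String> path = new ArrayDeque<>();
        path.push(from);
        dfs(refs, from, to, maxDepth, path, result);
        return result;
    }

    private static void dfs(Map<String, List<String>> refs, String current, String to,
                            int remaining, Deque<String> path, List<List<String>> result) {
        if (current.equals(to) && path.size() > 1) {
            List<String> found = new ArrayList<>(path);
            Collections.reverse(found);   // the deque stacks the path last-first
            result.add(found);
            return;
        }
        if (remaining == 0)
            return;
        for (String next : refs.getOrDefault(current, Collections.emptyList())) {
            if (path.contains(next))
                continue;                 // avoid cycles
            path.push(next);
            dfs(refs, next, to, remaining - 1, path, result);
            path.pop();
        }
    }

    public static void main(String[] args) {
        // Toy graph mirroring the case study: two lifelines joined directly via
        // an interaction (2 edges) and via a message + occurrence specs (4 edges).
        Map<String, List<String>> refs = new HashMap<>();
        refs.put("lifeline0", Arrays.asList("interaction0", "mos0"));
        refs.put("interaction0", Arrays.asList("lifeline1"));
        refs.put("mos0", Arrays.asList("message0"));
        refs.put("message0", Arrays.asList("mos1"));
        refs.put("mos1", Arrays.asList("lifeline1"));

        System.out.println(pathsBetween(refs, "lifeline0", "lifeline1", 2).size()); // prints 1
        System.out.println(pathsBetween(refs, "lifeline0", "lifeline1", 4).size()); // prints 2
    }
}
```

Note how raising the depth limit from 2 to 4 surfaces the second, less direct connection, which is exactly what the Exploration depth slider does in the QBE view.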

Fine tuning

Sometimes, the QBE tool will not immediately stumble upon the query that you are interested in. You can identify problems with the proposed pattern by directly inspecting the generated query code, or by examining query results on a test model. In these cases, there are still ways you can fine-tune the pattern through the QBE (and Properties) view, to make sure the generated query is useful.

For instance, recall how we originally designated the two Lifelines and the Message as part of the example. Had you selected the two Lifelines only, QBE would have found them connected by a path of length 2 - as they are both lifelines in the same Interaction. The fact that they exchange a Message turns out to be a less direct relationship between the two anchors. In order to arrive at the correct query, you have to manually increase the exploration depth to 4, thereby forcing QBE to search for connections between anchors in a wider context.

Depending on other particularities of the model you use as an example, this wider context may include connections between the Lifelines that are incidental in the example model and not essential parts of the query. In this case, it is still the responsibility of the query developer to determine which details are relevant; the QBE view allows you to mark connecting paths, as well as intermediate objects, as excluded from the result. In the previous case study, having a common Interaction was not actually necessary as part of the query - but it is not important to remove it now, as it will not influence the results.

Additional fine-tuning options include:

  • Promoting intermediate objects found during the discovery to act as pattern parameters (along with the anchors).
  • Renaming pattern variables.
  • Adding attribute constraints based on the attribute values of the discovered objects (as demonstrated before).
  • Similarly, adding negative application conditions. By pressing the 'Find negative constraints' button, the tool will search for references between pairs of variables (anchors or intermediate objects) that are permitted in the metamodel, but not present in the example instance model. The absence of such references will be offered as additional, opt-in 'neg find' constraints that can be individually selected to be included in the query.

The rest of the case study and conclusion

You might have noticed that the above case study is far from complete. We have only managed to identify the relationship between the sender and receiver Lifelines of a blocking call; we still need to

  1. express using QBE that a given Lifeline represents an instance of a certain Class
  2. express using QBE that a certain Class resides in a Package with the name "Engine" or "GUI"
  3. write a final query that uses pattern composition to combine the previously created QBE queries in order to match a violation of the constraint in question (i.e. "an Engine object invoking a method on a UI class in a blocking way").

Here is what the Class diagram looks like:

Snippet of Class diagram in the concrete syntax


For the first task, one only needs to select a Lifeline and its associated Class. Whoops - the Papyrus UI becomes an obstacle here, as there seems to be no way to simultaneously display these two elements. Fortunately, we can just select one of the two elements as a single anchor, switch over the view to the other element, and then tell the QBE tool to expand the previous selection with the new element by selecting the "Expand" alternative action available from the little drop-down arrow at the "Start" button.

The second task requires familiar steps only: selecting a Class and its Package as the two anchors, and then adding the name of the UML package as an attribute constraint. Note that in a more complex real-world example, the query would probably include a more complex condition (such as a regular expression) on the name of the package. As of now, QBE can only generate constraints for exact attribute value filtering; if you need anything more advanced than that, you will have to treat the query generated by QBE merely as a starting point that you then modify manually.

The third task is best solved by simply writing a query manually that composes the previously obtained patterns, rather than by applying QBE, so it is left as an exercise to the reader :) The main purpose of QBE is to help the user discover connections in a model with an unfamiliar abstract syntax; it will not replace regular query engineering in the general case.

To put it simply, you should think of Query By Example as a tool for abstracting away the ugly, unknown details of a modeling language before developing more complex queries for that language as usual.

by Gábor Bergmann at July 15, 2016 01:09 PM

ZeroTurnaround Releases RebelLabs Developer Productivity Report

by Alex Blewitt at July 14, 2016 08:33 AM

Today, ZeroTurnaround's RebelLabs released their biannual developer productivity report, which asked over 2000 respondents what their tools of the trade were. InfoQ has been given access to the report and summarises its findings.

by Alex Blewitt at July 14, 2016 08:33 AM

EMF Thread Safety

by javahacks at July 13, 2016 05:41 PM

In a past customer project we designed an RCP application that was supposed to monitor and manage a distributed flight simulation. The simulation consists of dozens of independent simulation nodes that interchange messages and signals over a local network.

Basically the RCP application manages a huge and dynamic model graph that reflects all relevant information of the underlying flight simulation. All that information is displayed simultaneously in multiple views and editors.

The domain model is based on EMF and comprises about 100 classes with hundreds of thousands of runtime instances. The application continuously receives network messages on multiple background jobs and updates parts of the domain model accordingly.

Since a flight simulation is a highly dynamic system with dozens of model updates per second, we stumbled upon two major problems:

  • An unresponsive user interface
  • Concurrent domain model modifications

To avoid unresponsive user interfaces you should:

  • Use the SWT.VIRTUAL style for all your viewers
  • Omit EMF touch events
  • Never block the UI Thread
  • Read the post about EMF.Edit performance

EMF models are not thread-safe by default, and writing multithreaded applications is not that simple. The more complex our application became, the more often we got concurrent modification exceptions and had problems with filtering and sorting operations.

As you can see in the table below, the only thread-safe operation on EMF instances is reading a single-valued attribute.

  Single-Valued Attribute    Multi-Valued Attribute
  + Read                     – Iterate
  – Write                    – Add/Remove

(+ = thread-safe, – = not thread-safe)

It’s not obvious that writing a single-valued attribute from multiple threads is forbidden. However, attaching an EAdapter in one thread while a set event is being propagated in another thread causes concurrent modification exceptions as well.
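EMF's default list implementations are backed by plain, unsynchronized Java collections, so the familiar fail-fast behavior of java.util.ArrayList illustrates the iteration problem. Here is a minimal plain-Java sketch (no EMF dependencies; single-threaded for determinism, but the same fail-fast check fires when a second thread mutates the list mid-iteration):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class UnsafeIteration {

    // Returns true if mutating the list while iterating it triggers the
    // fail-fast check - the same kind of check that makes unsynchronized
    // EMF lists blow up under concurrent access.
    static boolean mutationDuringIterationFails() {
        List<Integer> signals = new ArrayList<>(List.of(1, 2, 3));
        try {
            for (Integer s : signals) {
                signals.add(s + 1); // structural modification mid-iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("fail-fast triggered: " + mutationDuringIterationFails());
    }
}
```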

There are two established solutions to avoid multithreading problems in EMF.

EMF Transaction

EMF Transaction uses transactional editing domains to synchronize read and write operations across different threads. All write operations are triggered by commands that are executed on the domain’s transactional command stack.

final TransactionalEditingDomain domain = TransactionalEditingDomain.Registry.INSTANCE.getEditingDomain("instanceName");


All read operations have to run in an exclusive context, and the result is passed back as follows:

try {

    // runExclusive() acquires a read lock; the value passed to setResult()
    // inside run() is returned to the caller
    Double r = (Double) editingDomain.runExclusive(new RunnableWithResult.Impl<Double>() {

        public void run() {
            setResult(computeSum()); // computeSum() is a placeholder for the actual model read
        }
    });
    label.setText("Sum = " + r);

} catch (InterruptedException e) {
    // the exclusive read was interrupted
}

In addition, the framework contains several classes for EMF.Edit programming.

//EMF.edit support
viewer.setContentProvider(new TransactionalAdapterFactoryContentProvider(domain, af));
viewer.setLabelProvider(new TransactionalAdapterFactoryLabelProvider(domain, af));

I gave up trying to figure out how EMF Transaction actually works behind the scenes. It’s really complex, and the framework has to be used consistently throughout the entire application. While synchronizing write operations using commands is doable, encapsulating all read operations on a shared editing domain ends up requiring many lines of additional boilerplate code. UI-related code in particular becomes quite a bit more complex, and it’s very likely that team members will simply skip synchronization when just bringing up a simple info dialog that shows parts of your domain model.

That’s why we decided to use a simpler solution.

Model synchronization on the UI Thread

According to the EMF FAQ, EMF itself does not ensure thread safety in application model implementations. Data structures are unsynchronized; the expectation is that a complete instance of the model (including resources and resource set, if present) should only be accessed by one thread at a time.

“One thread at a time” can also be realized by always using the same single thread. An excellent candidate for such a thread is the UI/Display thread.

The advantage of using this thread is that all model operations triggered from inside the UI thread (e.g. initializing views, data binding, event handlers) run correctly without any further synchronization. Since more than 80% of all our application code runs in the UI thread, we didn’t have to think about concurrency too much.
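The confinement idea can be sketched in plain Java with a single-threaded executor standing in for the SWT Display thread. The class and method names below are illustrative, not part of the real application:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConfinedModel {

    // Stand-in for the Display thread: a single-threaded executor.
    // In the real RCP application this role is played by Display#syncExec.
    private static final ExecutorService MODEL_THREAD =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "model-thread");
                t.setDaemon(true); // don't keep the JVM alive
                return t;
            });

    private final List<String> signals = new ArrayList<>();

    // Every access is funneled through the one confined thread, so the
    // unsynchronized list is only ever touched sequentially.
    void addSignal(String s) throws Exception {
        MODEL_THREAD.submit(() -> signals.add(s)).get();
    }

    int size() throws Exception {
        return MODEL_THREAD.submit(signals::size).get();
    }
}
```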

However, all applications require some long-running – mostly I/O-intensive – background operations that affect the application’s domain model. Fortunately, a solution for that is really straightforward.

Write operations in background jobs

Below is an example of how to modify the domain model in a background job. By the way, never use Java threads directly; prefer a higher-level abstraction like the Eclipse Jobs API or executor services.

new Job("Model Update") {

    protected IStatus run(IProgressMonitor monitor) {

        //create model in long running operation
        Signal signal = fetchFromWebserviceOrWhatEver();

        //merge instance in UI thread afterwards
        Display.getDefault().syncExec(() -> model.getSignals().add(signal));
        return Status.OK_STATUS;
    }
}.schedule();

After the long-running operation has finished, the new model instance has to be merged immediately. Model updates are pretty short-running operations, so it’s no problem to run them in the UI thread – this thread is actually idling and waiting for user input most of the time. However, using the display object directly is not recommended, because not all OSGi bundles should necessarily have dependencies on SWT.

That’s why we used a simple EMFTransactionHelper utility class that provides methods to modify EMF models in a thread-safe way. In our RCP application the class is initialized on startup as shown below; for headless unit tests running on the server, it is initialized with another synchronizer.

public Object start(IApplicationContext context) throws Exception {

    Display display = PlatformUI.createDisplay();
    EMFTransactionHelper.setSynchronizer(runnable -> display.syncExec(runnable));

    // ... continue with the regular application startup
}

The previous example can then be written as:

new Job("Model Append") {

    protected IStatus run(IProgressMonitor monitor) {

        Signal signal = ModelFactory.eINSTANCE.createSignal();
        signal.setValue((int) (Math.random() * 100));

        EMFTransactionHelper.addElementExclusive(model, ModelPackage.Literals.MODEL__SIGNALS, signal);
        return Status.OK_STATUS;
    }
}.schedule();
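The helper class itself is not shown in the post. Here is a minimal sketch of what such a class might look like, using a plain List in place of the EObject/EStructuralFeature pair; the method names follow the snippets above, everything else is an assumption:

```java
import java.util.List;
import java.util.function.Consumer;

// Hedged sketch of the EMFTransactionHelper described above; the real,
// EMF-specific variant would take an EObject plus an EStructuralFeature
// instead of a plain List.
public final class EMFTransactionHelper {

    // Defaults to direct execution, which suits headless unit tests;
    // the RCP application swaps in display::syncExec at startup.
    private static volatile Consumer<Runnable> synchronizer = Runnable::run;

    private EMFTransactionHelper() {}

    public static void setSynchronizer(Consumer<Runnable> s) {
        synchronizer = s;
    }

    // All mutations are routed through the configured synchronizer,
    // so they always run on the single confined thread.
    public static <T> void addElementExclusive(List<T> collection, T element) {
        synchronizer.accept(() -> collection.add(element));
    }
}
```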

Read operations in background jobs

Many long-running operations only need read access to the domain model – for instance, generating reports or PDF documents. Unlike reading single-valued attributes from multiple threads, iterating over multi-valued collections in background jobs is really problematic.

One option is cloning (EcoreUtil.copy()) parts of the application model in the UI thread and using the cloned model for background processing.

If cloning is not possible, the collection must be copied in a synchronized way in the job itself. Here is an example:

new Job("Read Only") {

    protected IStatus run(IProgressMonitor monitor) {

        //copy the list thread-safe
        List<Signal> copy = EMFTransactionHelper.cloneCollectionExclusive(model.getSignals());

        //iterate over the copy; process() is a placeholder for the actual work
        copy.forEach(s -> process(s));
        return Status.OK_STATUS;
    }
}.schedule();

Demo Application

The video below shows a very simple application that demonstrates both synchronization techniques. Each viewer contains 10,000 model instances that are modified continuously. The source code is available here.

As you can see, the user interface still responds very smoothly. Both solutions have their pros and cons; for our use case, the second solution was the better option, while for other use cases EMF Transaction might be the better choice.

by javahacks at July 13, 2016 05:41 PM

Developing a custom code generator by example of SCXML

by Andreas Mülder (noreply@blogger.com) at July 13, 2016 10:40 AM

This blog post explains how to develop a custom code generator for the open source framework YAKINDU Statechart Tools. As an example, we will generate Statechart XML (SCXML) from a simple stop watch state machine. The stop watch use case is taken from the Apache Commons SCXML site. The stop watch model consists of 4 different states (ready, running, stopped and paused) and the state transitions are triggered by events that represent the stop watch buttons (watch.start, watch.stop, watch.split, watch.unsplit, watch.reset). 

Stop Watch example
Note that this example is not a full-fledged Statechart XML code generator; only the basic concepts of states, transitions and events are supported, to showcase how to get started. We expect the following XML fragment to be generated from the model above:
<?xml version="1.0" encoding="UTF-8"?>
<scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="ready">
  <state id="ready">
    <transition event="watch.start" target="running" />
  </state>
  <state id="running">
    <transition event="watch.split" target="paused" />
    <transition event="watch.stop" target="stopped" />
  </state>
  <state id="paused">
    <transition event="watch.stop" target="stopped" />
    <transition event="watch.unsplit" target="running" />
  </state>
  <state id="stopped">
    <transition event="watch.reset" target="ready" />
  </state>
</scxml>

SExec and SGraph meta models

Before starting with a code generator, we have to decide which of the two available meta models is most suitable for the use case. In model-driven software development, a model always conforms to another model, commonly referred to as its meta model. This meta model defines the building blocks a model can consist of. When developing a generator for YAKINDU SCT you can choose between two different meta models, SGraph and SExec. In broad terms, SGraph represents the structure of the modeled state machine (States, Transitions, Events), while SExec contains sequences that encapsulate our execution semantics (for example Checks, Calls and StateSwitches).
The SGraph meta model is similar to the state machine meta model known from the UML. We have a top level element Statechart that contains a set of States and Transitions. Transitions may have events and a source and a target State. The following shows a very simplified version of SGraph.
In contrast, the SExec meta model defines an ExecutionFlow that contains a set of ExecutionStates. Every ExecutionState has an entry and an exit sequence of several Steps. Each Step can be of a different type, for example a StateSwitch or a Check.
If you want to generate code for a runtime environment that interprets a statechart structure (for example SCXML or Spring Statemachine), you should pick the SGraph meta model. If you want to generate standalone code for another programming language, and you want to ensure that it behaves exactly the way all the other code generators behave, you should pick the SExec meta model.

The example Statechart XML generator

Creating custom code generators is really easy; it is a first-level concept in YAKINDU Statechart Tools. You can develop them directly within your project workspace, which has the nice benefit that changes to the model or the generator templates take effect instantly, so you don't have to bother with long code-compile-test cycles. How to set up a new code generator project is out of the scope of this blog post and can be found here.
When choosing the language for developing a new generator, you should consider using Xtend instead of Java. The syntax is familiar to every Java developer, but it adds some nice features like template expressions and dispatch methods that are really useful for developing code generators.
The example Statechart XML generator implemented with Xtend could look like this:
The SCXMLGenerator class extends AbstractWorkspaceGenerator to inherit some utility functions and implements the ISGraphGenerator interface (line 17). This interface defines which meta model will be used for the code generator. If you wanted to implement a code generator based on the ExecutionFlow meta model, you would implement the IExecutionFlowGenerator interface instead.
The generate function in line 23 uses Xtend's template strings to generate the SCXML header. Most of the text is static, except for the value of the initial attribute – this is calculated in the initialState function (line 30). Below the SCXML header, the generate function for the head region (parallel regions are not implemented) is called (line 26). This function (line 34) simply iterates over all states in the region and calls the generate function for states (line 40). Other vertex types, for example Histories, Synchronizations or Final States, are filtered out. Finally, the generate function for states creates a new XML element and iterates over all outgoing transitions to generate the nested transition elements.
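The generator listing itself is shown as an image above. As a rough plain-Java approximation of the same logic – the Xtend original uses template expressions instead of a StringBuilder, and the State/Transition records here are simplified stand-ins for the SGraph types:

```java
import java.util.List;
import java.util.Map;

public class ScxmlSketch {

    // Simplified stand-in for an SGraph transition: an event and a target state.
    record Transition(String event, String target) {}

    // Emits the SCXML skeleton described above: an <scxml> root carrying the
    // initial state, one <state> element per state, and one <transition>
    // element per outgoing transition.
    static String generate(String initial, Map<String, List<Transition>> states) {
        StringBuilder sb = new StringBuilder();
        sb.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        sb.append("<scxml xmlns=\"http://www.w3.org/2005/07/scxml\" version=\"1.0\" initial=\"")
          .append(initial).append("\">\n");
        states.forEach((id, transitions) -> {
            sb.append("  <state id=\"").append(id).append("\">\n");
            for (Transition t : transitions) {
                sb.append("    <transition event=\"").append(t.event())
                  .append("\" target=\"").append(t.target()).append("\" />\n");
            }
            sb.append("  </state>\n");
        });
        sb.append("</scxml>\n");
        return sb.toString();
    }
}
```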
You can download the example generator project from our new examples repository. However, this is just a first step towards a full-fledged SCXML code generator. Feel free to take this example as a starting point, but don't forget to contribute your extensions to our repository! If you have any questions or need further support implementing custom code generators, feel free to contact us.

by Andreas Mülder (noreply@blogger.com) at July 13, 2016 10:40 AM

Presentation: Pair Programming in the Cloud with Eclipse Che, Eclipse Flux, Orion, Eclipse IDE and Docker

by Sun Tan at July 13, 2016 03:15 AM

Sun Tan demos a prototype showing multi-editing and real-time collaboration from 3 different editors -Che, Orion, Eclipse- using a Flux microservice running inside a Che Docker runner.

by Sun Tan at July 13, 2016 03:15 AM

Getting rid of ugliness in the Eclipse IDE

by Lars Vogel at July 12, 2016 08:55 PM

Continuing our work to make the Eclipse IDE nicer we also removed some “weird things of the past” in our default styling.

Thanks to Patrik Suzzi for removing the yellow underline in the Mac theme (why did we have a YELLOW line?????). I also removed it in the Linux and Windows variants to get rid of some ugliness there as well. This was done via a custom painter, which I also plan to remove to improve performance.


by Lars Vogel at July 12, 2016 08:55 PM

Vert.x 3.3.2 is released !

by cescoffier at July 12, 2016 12:00 AM

We have just released Vert.x 3.3.2, the first bug fix release of Vert.x 3.3.x.

We first released 3.3.1, which fixed a few bugs. However, a couple of new bugs were discovered after 3.3.1 was tagged but before it was announced; since these bugs were preventing the usage of Vert.x, we decided to release 3.3.2 to fix them.

Vert.x 3.3.1 release notes:

Vert.x 3.3.2 release notes:

These releases do not contain breaking changes.

The event bus client using the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding !

by cescoffier at July 12, 2016 12:00 AM

Making the Eclipse IDE prettier on Windows

by Lars Vogel at July 11, 2016 08:14 PM

My company is starting a new customer project for which I have to use the Windows OS. I noticed that the main toolbar styling looks really bad due to its inconsistent use of colors for the perspective switcher and the main toolbar.

So we adjusted that, see the result here:

Old Styling:


New Styling:


If you look closely, you can also see a broken underline under the Quick Access box. This has also been fixed in the meantime.

by Lars Vogel at July 11, 2016 08:14 PM

VIATRA wins the offline contest at TTC 2016

by IncqueryLabs at July 11, 2016 12:56 PM

This year the 9th Transformation Tool Contest was organized in Vienna at the STAF 2016 conference, where each submission had to solve a complex task related to model transformation.

This year's task was a simplified version of the Class Responsibility Assignment problem, where methods and attributes have to be assigned to classes in a way that is optimal with respect to certain software metrics. Our team solved this problem using VIATRA's rule-based design space exploration framework (VIATRA-DSE), and our solution was awarded first prize out of the nine submitted solutions. The solution was authored by Gábor Szárnyas and András Szabolcs Nagy. The source is available on GitHub.

The corresponding research was supported by the MTA-BME Lendület 2015 Research Group on Cyber Physical Systems (original article).

by IncqueryLabs at July 11, 2016 12:56 PM