Upcoming Virtual IoT meetups

by Benjamin Cabé at November 30, 2015 04:05 PM

We have some great Virtual IoT meetups lined up for the next couple of months! They are a great opportunity to learn about existing and emerging IoT tech, as well as to engage with some of the key people who are actually building the Internet of Things. Please make sure to check them out and register so that you're reminded in time to attend!

Also, if you’re interested in presenting something cool that you’re doing in the field of IoT, please contact me!

Introducing the Arduino C++ IDE for Eclipse

Wednesday, Dec 2, 2015, 8:00 AM

No location yet.

47 IoT enthusiasts Attending

This is a virtual Meetup occurring at 8AM Pacific time (11am Eastern, 5pm Central European Time). For help with your timezone calculation, refer to this. The meetup will be held on Google Hangouts and you will be able to watch the live stream directly on YouTube: http://www.youtube.com/watch?v=4gCprxHFeuw The Arduino IDE from arduino.cc provides a …

Check out this Meetup →


Smart Charging of Electric Vehicles with RISE V2G Project

Wednesday, Dec 16, 2015, 8:00 AM

No location yet.

2 IoT enthusiasts Attending

This is a virtual Meetup occurring at 8AM Pacific time (11am Eastern, 5pm Central European Time). For help with your timezone calculation, refer to this. The meetup will be held on Google Hangouts and you will be able to watch the live stream directly on YouTube: http://www.youtube.com/watch?v=ImXnDLHyZbE RISE V2G is a Reference Implementation Supp…

Check out this Meetup →

by Benjamin Cabé at November 30, 2015 04:05 PM

How to Clone Git Repositories with JGit

by Rüdiger Herrmann at November 30, 2015 08:00 AM

Written by Rüdiger Herrmann

Whatever you plan to do with an existing repository, first a clone has to be created. Whether you plan to contribute or just want to peek at its history, a local copy of the repository is needed.

While cloning a repository with JGit isn’t particularly difficult, there are a few details that might be worth noting. And because there are few online resources on the subject, this article summarizes how to use the JGit API to clone from an existing Git repository.

Cloning Basics

To make a local copy of a remote repository, the CloneCommand needs at least to be told where the remote is to be found:

Git git = Git.cloneRepository()
  .setURI( "https://github.com/eclipse/jgit.git" )
  .call();

The Git factory class has a static cloneRepository() method that returns a new instance of a CloneCommand. setURI() advises it where to clone from and, as with all JGit commands, the call() method actually executes the command.

Though remote repositories – as the name suggests – are usually stored on a remote host, the location given in setURI() can also be a path to a local resource.

If no more information is given, JGit will choose the directory in which the cloned repository will be stored for you. Based on the current directory and the repository name that is derived from its URL, a directory name is built. In the example above it would be ‘/path/to/current/jgit’.

But usually you would want to have more control over the destination directory and explicitly state where to store the local clone.

The setDirectory() method specifies where the work directory should be, and with setGitDir() the location of the metadata directory (.git) can be set. If setGitDir() is omitted, the .git directory is created directly underneath the work directory.

The example below

Git git = Git.cloneRepository()
  .setURI( "https://github.com/eclipse/jgit.git" )
  .setDirectory( new File( "/path/to/repo" ) )
  .call();

will create a local repository whose work directory is located at ‘/path/to/repo’ and whose metadata directory is located at ‘/path/to/repo/.git’.

However the destination location is chosen, explicitly through your code or by JGit, the designated directory must either be empty or must not exist. Otherwise an exception will be thrown.

The settings for setDirectory(), setGitDir() and setBare() (see below) are forwarded to the InitCommand that is used internally by the CloneCommand. Hence, more details on these are explained in Initializing Git Repositories with JGit.

The Git instance that is returned by CloneCommand.call() provides access to the repository itself (git.getRepository()) and can be used to execute further commands targeting this repository. When finished using the repository, it must be closed (git.close()), otherwise the application may leak file handles.
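
Since Git implements AutoCloseable in recent JGit versions, a try-with-resources block is a convenient way to ensure that the repository gets closed. A minimal sketch, assuming such a JGit version:

try( Git git = Git.cloneRepository()
    .setURI( "https://github.com/eclipse/jgit.git" )
    .setDirectory( new File( "/path/to/repo" ) )
    .call() ) {
  // work with the repository, e.g. list its commits
  git.log().call();
}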

To later regain a Repository (or Git) instance, the path to the work directory or .git directory is sufficient. The article How to Access a Git Repository with JGit has detailed information on the subject.
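
To illustrate, a clone created earlier can be reopened from its work directory like this (a small sketch reusing the path from the examples above):

Git git = Git.open( new File( "/path/to/repo" ) );
Repository repository = git.getRepository();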

Upstream Configuration

As a last step, the clone command updates the configuration file of the local repository to register the source repository as a so-called remote.

When looking at the configuration file (.git/config) the remote section looks like this:

[remote "origin"]
  url = https://github.com/eclipse/jgit.git
  fetch = +refs/heads/*:refs/remotes/origin/*

If no remote name is given, the default ‘origin’ is used. In order to have the CloneCommand register the remote repository under a different name, use setRemote().
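
For example, the following sketch registers the source repository under the name ‘upstream’ (the remote name is chosen here just for illustration):

Git git = Git.cloneRepository()
  .setURI( "https://github.com/eclipse/jgit.git" )
  .setDirectory( new File( "/path/to/repo" ) )
  .setRemote( "upstream" )
  .call();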

The refspec given by ‘fetch’ determines which branches should be exchanged when fetching from or pushing to the remote repository by default.

Cloning Branches

By default, the clone command creates a single local branch. It looks at the HEAD ref of the remote repository and creates a local branch with the same name as the remote branch referenced by it.

But the clone command can also be told to clone and check out certain branches. Assuming that the remote repository has a branch named ‘extra’, the following lines will clone this branch.

Git git = Git.cloneRepository()
  .setURI( "https://github.com/eclipse/jgit.git" )
  .setDirectory( new File( "/path/to/repo" ) )
  .setBranchesToClone( singleton( "refs/heads/extra" ) )
  .setBranch( "refs/heads/extra" )
  .call();

With setBranchesToClone(), the command clones only the specified branches. Note that the setBranch() directive is necessary to also check out the desired branch. Otherwise, JGit would attempt to check out the ‘master’ branch. While this isn’t a problem from a technical point of view, it is usually not what you want.

If all branches of the remote repository should be cloned, you can advise the command like so:

Git git = Git.cloneRepository()
  .setURI( "https://github.com/eclipse/jgit.git" )
  .setDirectory( new File( "/path/to/repo" ) )
  .setCloneAllBranches( true )
  .call();

To prevent the current branch from being checked out at all, the setNoCheckout() method can be used.

Listing Remote Branches

If you want to know which branches a remote repository has to offer, the LsRemoteCommand comes to the rescue. To list all branches of a JGit repository, use Git’s lsRemoteRepository() as shown below.

Collection<Ref> remoteRefs = Git.lsRemoteRepository()
  .setHeads( true )
  .setRemote( "https://github.com/eclipse/jgit.git" )
  .call();

If you also want to list tags, advise the command with setTags( true ).

For reasons I’d rather not know, JGit requires a local repository for certain protocols in order to be able to list remote refs. In this case Git.lsRemoteRepository() will throw a NotSupportedException. The workaround is to create a temporary local repository and use git.lsRemote() instead of Git.lsRemoteRepository(), where git wraps the temporary repository.
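
Roughly, the workaround could look like the sketch below. The temporary directory handling is simplified and, as in the other snippets, exception handling is omitted:

File tempDir = Files.createTempDirectory( "ls-remote" ).toFile();
try( Git git = Git.init().setDirectory( tempDir ).call() ) {
  Collection<Ref> remoteRefs = git.lsRemote()
    .setHeads( true )
    .setRemote( "https://github.com/eclipse/jgit.git" )
    .call();
}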

Cloning Bare Repositories

If the local repository does not need a work directory, the clone command can be instructed to create a bare repository.

By default, non-bare repositories are created, but with setBare( true ) a bare repository is created as shown below:

Git git = Git.cloneRepository()
  .setBare( true )
  .setURI( "https://github.com/eclipse/jgit.git" )
  .setGitDir( new File( "/path/to/repo" ) )
  .call();

Here the destination directory is specified via setGitDir() instead of using setDirectory().
The resulting repository’s isBare() will return true, getDirectory() will return /path/to/repo, and since there is no work directory, getWorkTree() will throw a NoWorkTreeException.

Note that ‘bare’ here only applies to the destination repository. Whether the source repository is bare or not doesn’t make a difference when cloning.

Cloning Submodules

If the remote repository is known to have submodules or if you wish to include submodules in case there are any, the clone command can be instructed to do so:

Git git = Git.cloneRepository()
  .setCloneSubmodules( true )
  .setURI( "https://github.com/eclipse/jgit.git" )
  .setDirectory( new File( "/path/to/repo" ) )
  .call();

The above example advises the clone command to also clone any submodule that is found.

If setCloneSubmodules( true ) wasn’t specified while cloning the repository, you can catch up on the missing submodules later. For more details see the article How to manage Git Submodules with JGit.

Cloning with Authentication

Of course, JGit also allows you to access repositories that require authentication. Common protocols like SSH and HTTP(S) and their authentication methods are supported. A detailed explanation of how to use the authentication support can be found in the JGit Authentication Explained article.

Concluding How to Clone Git Repositories with JGit

For almost all features of the native Git clone command there is an equivalent in JGit. Even a progress monitor, which may be useful when JGit is embedded in interactive applications, exists. And for the missing mirror option a workaround apparently exists. Only the often requested shallow clones (e.g. git clone --depth 2) aren’t yet supported by JGit.

The snippets shown throughout this article are excerpts from a learning test that illustrates the common use cases of the CloneCommand. The full version can be found here:

If you still have difficulties or questions, feel free to leave a comment or ask the friendly and helpful JGit community for assistance.

The post How to Clone Git Repositories with JGit appeared first on Code Affine.

by Rüdiger Herrmann at November 30, 2015 08:00 AM

CodeRASPIDe to go bright with Sirius

by Its_Me_Malai (noreply@blogger.com) at November 30, 2015 06:05 AM

Having released a Stable GPIO Version of CodeRASPIDe, our plans for the next release were falling in place.
1. Graphical Editor to design the Hardware Wiring of RaspberryPI
2. Support for Python and C Code Generation

Graphical Editor to Design -> Immediate thought goes to Sirius, as we have been hearing about Sirius, and when starting CodeRASPIDe we also came across the Arduino Designer by Obeo, built with Sirius. So we decided yesterday to get our hands dirty with Sirius.

We ran through the 2 Tutorials on the Eclipse Wiki.
1. Basic Sirius Tutorial
2. Advanced Sirius Tutorial

And this is what we got for CodeRASPIDe. I was amazed with what Sirius could do on the first go. Definitely a framework that a lot of Managers would love [I love it for CodeRASPIDe]. Yet to see what the Kick on the Developers' side is [Will post soon as I move further in using Sirius].

Looking forward to playing with Sirius a little bit more to conclude on its use-case limitations. 
But on go one, "SIRIUS is AWESOME" is the WORD.
We will also soon have our SIRIUS Tutorials out to support this great piece of work by OBEO.

by Its_Me_Malai (noreply@blogger.com) at November 30, 2015 06:05 AM

Combine vert.x and mongo to build a giant

by cescoffier at November 30, 2015 12:00 AM

This blog post is part of the introduction to vert.x series. Last time, we saw how we can use the vertx-jdbc-client to connect to a database using a JDBC driver. In this post, we are going to replace this JDBC client with the vertx-mongo-client, and thus connect to a Mongo database.

If you don’t understand the title, check the MongoDB website.

But before going further, let’s recap.

Previously in ‘introduction to vert.x’

  1. The first post has described how to build a vert.x application with Maven and execute unit tests.
  2. The second post has described how this application can become configurable.
  3. The third post has introduced vertx-web, and a small collection management application has been developed. This application offers a REST API used by an HTML/JavaScript frontend.
  4. The fourth post has presented how you can run integration tests to ensure the behavior of your application.
  5. The last post has presented how you can interact with a JDBC database using the vertx-jdbc-client.

This post shows another client that lets you use MongoDB in a vert.x application. This client provides a vert.x API to access the Mongo database asynchronously. We won’t debate whether or not JDBC is superior to Mongo; they both have pros and cons, and you should use the one that meets your requirements. Vert.x lets you choose, that’s the point.

The vertx-mongo-client documentation is available here.

The code developed in this blog post is available in the branch post-6. Our starting point is the code from the post-5 branch.

Asynchronous data access

One of the vert.x characteristics is being asynchronous. With an asynchronous API, you don’t wait for a result, but you are notified when this result is ready. Thanks to vert.x, this notification happens in the same thread (i.e. the event loop) as the initial request:

Asynchronous data access

Your code (on the left) is going to invoke the mongo client and pass a callback that will be invoked when the result is available. The invocation to the mongo client is non-blocking and returns immediately. The client deals with the mongo database, and when the result has been computed / retrieved, it invokes the callback in the same event loop as the request.

This model is particularly powerful as it avoids the synchronization pitfalls. Indeed, your code is only called by a single thread, no need to synchronize anything.

As with every Maven project….

… we need to update the pom.xml file first.

In the pom.xml file, replace the vertx-jdbc-client by the vertx-mongo-client:


Unlike JDBC, where we were instantiating a database on the fly, here we need to explicitly start a MongoDB server. In order to launch a Mongo server in our tests, we are going to add another dependency:


This dependency will be used in our unit tests, as it lets us start a mongo server programmatically. For our integration tests, we are going to use a Maven plugin starting and stopping the mongo server before and after our integration tests. Add this plugin to the plugins section of your pom.xml file.


Notice the port we use here (37017); we will use this port later.

Enough XML for today

Now that we have updated our pom.xml file, it’s time to change our verticle. The first thing to do is to replace the JDBC client with the mongo client:

mongo = MongoClient.createShared(vertx, config());

This client is configured with the configuration given to the verticle (more on this below).

Once done, we need to change how we start the application. With the mongo client, there is no need to acquire a connection, as it handles this internally. So our startup sequence is a bit simpler:

createSomeData(
    (nothing) -> startWebApp(
        (http) -> completeStartup(http, fut)
    ), fut);

As in the previous post, we need to insert some predefined data if the database is empty:

private void createSomeData(Handler<AsyncResult<Void>> next, Future<Void> fut) {
    Whisky bowmore = new Whisky("Bowmore 15 Years Laimrig", "Scotland, Islay");
    Whisky talisker = new Whisky("Talisker 57° North", "Scotland, Island");
    // Do we have data in the collection ?
    mongo.count(COLLECTION, new JsonObject(), count -> {
      if (count.succeeded()) {
        if (count.result() == 0) {
          // no whiskies, insert data
          mongo.insert(COLLECTION, bowmore.toJson(), ar -> {
            if (ar.failed()) {
              fut.fail(ar.cause());
            } else {
              mongo.insert(COLLECTION, talisker.toJson(), ar2 -> {
                if (ar2.failed()) {
                  fut.fail(ar2.cause());
                } else {
                  next.handle(Future.<Void>succeededFuture());
                }
              });
            }
          });
        } else {
          next.handle(Future.<Void>succeededFuture());
        }
      } else {
        // report the error
        fut.fail(count.cause());
      }
    });
  }
To detect whether or not the database already contains some data, we retrieve the number of documents from the whiskies collection. This is done with: mongo.count(COLLECTION, new JsonObject(), count -> {}). The second parameter is the query. In our case, we want to count all documents. This is done using new JsonObject(), which creates a query accepting all documents from the collection (it’s equivalent to a SELECT * FROM ...).

Also notice the insert calls. Documents are passed as JSON objects, so to insert an object, just serialize it to JSON and use mongo.insert(COLLECTION, json, completionHandler).

Mongo-ize the REST handlers

Now that the application boot sequence has been migrated to mongo, it’s time to update the code handling the REST requests.

Let’s start with the getAll method that returns all stored products. To implement this, we use the find method. As we saw for the count method, we pass an empty json object to describe a query accepting all documents:

private void getAll(RoutingContext routingContext) {
    mongo.find(COLLECTION, new JsonObject(), results -> {
      List<JsonObject> objects = results.result();
      List<Whisky> whiskies = objects.stream().map(Whisky::new).collect(Collectors.toList());
      routingContext.response()
          .putHeader("content-type", "application/json; charset=utf-8")
          .end(Json.encodePrettily(whiskies));
    });
  }

The query results are passed as a list of JSON objects. From this list we can create our product instances, and fill the HTTP response with this set.

To delete a specific document we need to select the document using its id:

private void deleteOne(RoutingContext routingContext) {
    String id = routingContext.request().getParam("id");
    if (id == null) {
      routingContext.response().setStatusCode(400).end();
    } else {
      mongo.removeOne(COLLECTION, new JsonObject().put("_id", id),
          ar -> routingContext.response().setStatusCode(204).end());
    }
  }

The new JsonObject().put("_id", id) describes a query selecting a single document (selected by its unique id, so it’s the equivalent of SELECT * WHERE id=...). Notice the _id, which is the mongo way to select a document by id.
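
For completeness, here is a rough sketch of how a handler that reads a single document by its id could look, using the client’s findOne method. The getOne name, the 404 handling and the assumption that Whisky offers a JsonObject constructor (as suggested by the getAll method above) are illustrative, not taken from the original code:

private void getOne(RoutingContext routingContext) {
    String id = routingContext.request().getParam("id");
    mongo.findOne(COLLECTION, new JsonObject().put("_id", id), null, lookup -> {
      if (lookup.succeeded() && lookup.result() != null) {
        routingContext.response()
            .putHeader("content-type", "application/json; charset=utf-8")
            .end(Json.encodePrettily(new Whisky(lookup.result())));
      } else {
        // document not found (or lookup failed)
        routingContext.response().setStatusCode(404).end();
      }
    });
  }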

Updating a document is less trivial:

private void updateOne(RoutingContext routingContext) {
    final String id = routingContext.request().getParam("id");
    JsonObject json = routingContext.getBodyAsJson();
    if (id == null || json == null) {
      routingContext.response().setStatusCode(400).end();
    } else {
      mongo.update(COLLECTION,
          new JsonObject().put("_id", id), // Select a unique document
          // The update syntax: {$set, the json object containing the fields to update}
          new JsonObject()
              .put("$set", json),
          v -> {
            if (v.failed()) {
              routingContext.response().setStatusCode(404).end();
            } else {
              routingContext.response()
                  .putHeader("content-type", "application/json; charset=utf-8")
                  .end(Json.encodePrettily(
                      new Whisky(id, json.getString("name"), json.getString("origin"))));
            }
          });
    }
  }

As we can see, the update method takes two JSON objects as parameters:

  1. The first one denotes the query (here we select a single document using its id).
  2. The second object expresses the change to apply to the selected document. It uses a mongo syntax. In our case, we update the document using the $set operator.

Replace document
In this code we update the document and replace only a set of fields. You can also replace the whole document using mongo.replace(...).

I definitely recommend having a look at the MongoDB documentation, especially:

Time for configuration

Well, the code is migrated, but we still need to update the configuration. With JDBC we passed the JDBC url and the driver class in the configuration. With mongo, we need to configure the connection_string - the mongodb:// url to which the application connects - and db_name - a name for the data source.
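
To make these two keys concrete, a minimal sketch of a configuration object passed to the mongo client could look like this (the values are illustrative defaults, not taken from the project’s configuration files):

JsonObject mongoConfig = new JsonObject()
    .put("connection_string", "mongodb://localhost:27017")
    .put("db_name", "whiskies");
MongoClient mongo = MongoClient.createShared(vertx, mongoConfig);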

Let’s start by the unit test. Edit the MyFirstVerticleTest file and add the following code:

private static MongodProcess MONGO;
    private static int MONGO_PORT = 12345;

    @BeforeClass
    public static void initialize() throws IOException {
      MongodStarter starter = MongodStarter.getDefaultInstance();
      IMongodConfig mongodConfig = new MongodConfigBuilder()
          .version(Version.Main.PRODUCTION)
          .net(new Net(MONGO_PORT, Network.localhostIsIPv6()))
          .build();
      MongodExecutable mongodExecutable = starter.prepare(mongodConfig);
      MONGO = mongodExecutable.start();
    }

    @AfterClass
    public static void shutdown() {  MONGO.stop(); }

Before our tests, we start (programmatically) a mongo database on port 12345. When all our tests have been executed, we shut down the database.

So now that the mongo server is managed, we need to give the right configuration to our verticle. Update the DeploymentOptions instance with:

DeploymentOptions options = new DeploymentOptions()
        .setConfig(new JsonObject()
            .put("http.port", port)
            .put("db_name", "whiskies-test")
            .put("connection_string",
                "mongodb://localhost:" + MONGO_PORT)
        );

That’s all for the unit tests.

For the integration-test, we are using an externalized json file. Edit the src/test/resources/my-it-config.json with the following content:

      "http.port": ${http.port},
      "db_name": "whiskies-it",
      "connection_string": "mongodb://localhost:37017"

Notice the port we are using for the mongo server. This port was configured in the pom.xml file.

Last but not least, we still have a configuration file to edit: the configuration you use to launch the application in production:

      "http.port": 8082,
      "db_name": "whiskies",
      "connection_string": "mongodb://localhost:27017"

Here you would need to edit the localhost:27017 with the right url for your mongo server.

Some changes in the integration tests
Because mongo document ids are Strings and not integers, we have to slightly change the document selection in the integration test.

Time for a run

It’s time to package and run the application and check that everything works as expected. Let’s package the application using:

mvn clean verify

Then, to launch it, start your mongo server and run:

java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar \
  -conf src/main/conf/my-application-conf.json

If you are, like me, using docker / docker-machine for almost everything, edit the configuration file to refer to the right host (localhost for docker, the docker-machine ip if you use docker-machine) and then launch:

docker run -d -p 27017:27017 mongo
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar \
  -conf src/main/conf/my-application-conf.json
# or
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar \
  -conf src/main/conf/my-application-conf-docker-machine.json

The application live and running

That’s all folks !

We are reaching the end of this post. We saw how you can use the vertx-mongo-client to asynchronously access data stored in a mongo database, as well as insert and update this data. Now you have the choice between JDBC or Mongo. In addition, vert.x provides a client for Redis.

Next time, we will see how the verticle class can be split into two verticles in order to better organize your code. The interaction between the two verticles will use services.

Stay tuned & Happy coding !

by cescoffier at November 30, 2015 12:00 AM

Enhancements and Fixes on CodeRASPIDe

by Its_Me_Malai (noreply@blogger.com) at November 28, 2015 04:02 AM

Further fixes are happening on CodeRASPIDe. We have stabilised our earlier release, added a few enhancements and fixed most of the bugs to make it stable. We also have improved Web Documentation talking about Installation, Usage and Contact Details in case of bugs or interest in the project.

Our Website for CodeRASPIDe : http://www.ancitconsulting.com/coderaspide/

All the functionalities are available through our UpdateSite or Marketplace
UpdateSite : http://www.ancitconsulting.com/coderaspide/updatesite/
Marketplace : http://marketplace.eclipse.org/content/code-raspide-ide-raspberry-pi

Functionalities Added
1. Improved Project Creation Wizard

2. Support for Multiple RaspberryPI Boards [40 Pins and 26 Pins]

3. GPIO Provisioning of Pins : Input Configuration, Output Configuration and MultiPIN Configuration

Bugs Fixed:
1. Option to Delete
2. Generate Code Action on FormPage Header
3. Few NPEs and Other Bugs Fixed.

by Its_Me_Malai (noreply@blogger.com) at November 28, 2015 04:02 AM

Oomph changes the way you handle multiple eclipse installations

November 27, 2015 07:53 AM

I work on many Eclipse projects and projects based on Eclipse technologies. I have of course the project I work on for our customer (daily job), but I also try to contribute to different open source projects and I also like trying new stuff and playing around with new technologies.

All these different tasks require different Eclipse installations. When I experiment with Xtend I use “Eclipse IDE for Java and DSL Developers”. Sometimes I need an IDE with Maven support, but not always. After EclipseCon I just installed an Eclipse supporting Gradle and Groovy development.

So far, I have never tried to install everything in the same Eclipse IDE. I feared the RAM consumption, overloaded menu trees and an IDE containing too many plugins. So I ended up with a folder containing different eclipse installations (one for each domain):

Such a folder structure consumes disk space, and maintaining each installation is time consuming and repetitive. Take the Mars.1 update release as an example. I have so many bad reasons for not updating my IDEs: it is boring, it takes time, it takes disk space, ….

At the last EclipseCon Europe I finally found the time to get into Oomph, and the great news is: with Eclipse Oomph (a.k.a. Eclipse Installer), I am convinced that I can do much better than in the past. When you look at a classic Eclipse installation, it looks like this:

I wasn’t really aware of it, but this folder contains the result of 3 different mechanisms: an Eclipse installation containing the executable and some configuration, a p2 bundle pool where the plugins are effectively stored and a p2 agent that keeps everything together by storing the installation details into profiles. Oomph takes advantage of the flexibility offered by P2 and does not create a standalone installation.

The main advantage of this approach is that the Bundle Pool can be shared across your Eclipse installations. This drastically reduces the disk space for your eclipse installations. It also reduces the time it takes to install and to update each of your eclipse installations.

When you install something in one IDE, files are stored in the bundle pool. Installing the same plugin again in another IDE is straightforward. Oomph just updates the profile and reuses the files already present in your bundle pool.

In addition, Oomph is all about setting up your Eclipse environment. Automation is the key word here. You can easily define setup tasks that are shared across all your Eclipse installations. This way you no longer lose time modifying the default Eclipse settings to obtain the installation you want.

The Oomph approach is really great. In my opinion you can now have one Eclipse IDE setup for each project you work on. On this blog I will continue to explain the advantages of Oomph and what it changes when you work with it. The Scout team will of course also contribute a setup task to facilitate contributions on our framework.

Feedback: please use this forum thread.

Project Home, Forum, Wiki, Twitter, Google+

November 27, 2015 07:53 AM

Vert.x ES6 back to the future

by pmlopes at November 25, 2015 12:00 AM

On October 21st, 2015 we all rejoiced with the return from the past of Marty McFly with his flying car and so on; however, in the Vert.x world we were quite sad that the JavaScript support we have was still using a technology released in December 2009. The support for ES5 is not something that we, the Vert.x team, control but something that is inherited from running on top of Nashorn.

With all these nostalgic thoughts on my mind I’ve decided to bring us back to the future, and by future I mean: let’s start using a modern JavaScript, or more correctly, let’s start using ECMAScript 6.

It turned out to be quite simple to achieve this, so I’ll pick the hello world example and write it in ES6 just to show how you can port your code to ES6 and still use the current Vert.x APIs. Note that the Vert.x internals are still ES5 and have not been touched or modified to support any ES6 features.


Traditionally your main.js file would reside in the root of your module (this is where NPM will look for it by default); however as we are going to transpile to ES5 you’ll want to put your index file in /src/main.js.

However, because we are transpiling to ES5, your package.json‘s main block should point to the transpiled index.js file in the /lib directory.

  "name": "vertx-es6",
  "version": "0.0.1",
  "private": true,

  "main": "lib/main.js",

  "scripts": {
    "build": "rm -Rf lib && ./node_modules/.bin/babel --out-dir lib src",
    "start": "./node_modules/.bin/vertx run lib/main.js"

  "dependencies": {
    "vertx3-full": "3.1.0",
    "babel-cli": "6.2.0",
    "babel-preset-es2015": "6.1.18"

As you can see, the main idea is to invoke the transpiler (Babel) when we are building our project, and run it using the generated files. This is somewhat equivalent to the compilation process you would have with a compiled language.


If you’re planning to deploy your package to npm, either local or private, you should be aware that npm will exclude anything listed in your .gitignore. Since we should ignore the generated code in git, we need to inform npm to ignore that rule and keep the lib directory. The .gitignore should be something like:


And the .npmignore:


Hello fat arrows and let keywords

So all the heavy work has been done; in order to create our hello world we just need to code some ES6 in our src/main.js file:

var Router = require("vertx-web-js/router");
var server = vertx.createHttpServer();

var router = Router.router(vertx);

router.get("/").handler((ctx) => {

    let response = ctx.response();
    response.putHeader("content-type", "text/plain");

    response.end("Hello ES6 World!");
});

As you can see, we’re using fat arrows instead of writing a function closure, and scoped variables using the let keyword. If you now compile your project:

npm run build

And then start it:

npm start

You have your first back to the future ES6 verticle!

by pmlopes at November 25, 2015 12:00 AM

Eclipse Key Binding: Select Enclosing Element

by waynebeaton at November 24, 2015 04:03 PM

Here’s an Eclipse command that’s pretty handy: Select Enclosing Element (key binding: Shift+Alt+Up on Linux).


Every time you hit the key combination, it expands the selection to the enclosing element. In this example, it starts with the method selector, and expands to include the parameters, the receiver, the statement, the block, the method, … all the way up to the entire compilation unit.

To go the other way (back towards the original selection), use the Restore Last Selection command (key binding Shift+Alt+Down on Linux).

It is, of course, up to you to decide what to do with the selection once you have it just right: maybe use Alt+Up or Alt+Down to move the selection around in the file…

You can find this and more commands by hitting Ctrl+3 and just typing a bit of what you’re looking for, or hit Shift+Ctrl+L to open a pop-up with the full list of key bindings.

by waynebeaton at November 24, 2015 04:03 PM

Andmore 0.5-M3 available.

by kingargyle at November 24, 2015 03:33 PM


The third stable milestone is ready for you, the user community, to kick the tires and use for your development. This is primarily a bug and stability milestone. The one big addition is that Andmore now supports multi-dexing of APK files. Also, starting with the next Neon milestone, an Android Developers EPP package will be available. This effort is being led by the newest committer, Kaloyan Raev.

This release of Andmore can also be installed on older versions of Eclipse other than Mars. There is now an Eclipse Marketplace entry for the project as well, to make installing the tooling even easier.



The latest version can always be obtained from the following p2 url:


by kingargyle at November 24, 2015 03:33 PM

Exploring TypeScript Support in Eclipse

by aaronlonin at November 24, 2015 02:34 PM

TypeScript is an open source superset of JavaScript that adds class-based objects and compiles to plain JavaScript code. Currently there are two main options to support TypeScript in Eclipse. I’m going to discuss their features and the pros and cons of each. Palantir’s TypeScript (Version 1.6.0.v20151006): This plugin offers minimal support for TypeScript and uses the official TypeScript […]

The post Exploring TypeScript Support in Eclipse appeared first on Genuitec.

by aaronlonin at November 24, 2015 02:34 PM

AnyEdit 2.6.0 for beta testers

by Andrey Loskutov (noreply@blogger.com) at November 22, 2015 09:09 PM

I did some bigger changes in the AnyEdit plugin related to the "Compare To/Replace With" menu entries.

They are now contributed differently to avoid duplicated menu entries and to better support workspace external files. Together with the latest nightly build of EGit, one can see these menus for the first time in the Git Repositories view:

This beta offline update site contains the not yet released 2.6.0 version of AnyEdit.

It would be really nice if you could test and report possible regressions. If no one complains, I will release this in a week or so.

P.S.: be aware that I've dropped support for Eclipse 3.7 in AnyEdit. While technically it should still work, I do not plan to support Eclipse 3.7 anymore. Eclipse 3.8 is now the minimal platform version for AnyEdit.

by Andrey Loskutov (noreply@blogger.com) at November 22, 2015 09:09 PM

IncQuery and VIATRA at EclipseCon Europe 2015

by István Ráth at November 21, 2015 06:53 PM

This year, István Ráth and Ákos Horváth have attended EclipseCon Europe and represented our team. IncQuery and VIATRA have been featured in two talks:

  • "IoT Supercharged: Complex Event Processing for MQTT with Eclipse Technologies" (session, slides) - featuring a live demo squeezed into a 10 minute lightning talk - luckily everything worked like a charm :-) This talk is about VIATRA-CEP, a new, high-level framework for complex event processing over runtime models. CEP is a hot topic in IoT for obvious reasons: it is one of the key technologies you want to use to process data/event streams - preferably as close to the data source as possible. In an IoT context, this would typically mean your gateway - which is now possible via new technologies such as Eclipse Kura. VIATRA-CEP is part of our Eclipse.org model transformation framework VIATRA, completely open source and licensed under the EPL.
  • "IncQuery gets Sirius: Faster and Better Diagrams" (session, slides, video). This talk is about a new innovation coming from the IncQuery team, namely the integration between EMF-IncQuery and Sirius. Once this is publicly released, you will be able to use IncQuery patterns in Odesign diagram definitions (next to the traditional options and the new AQL), and enjoy the performance benefits. Additionally, we also provide an integration with IncQuery Viewers, which allows you to define "live diagrams" that are synchronized to the semantic model automatically by incremental transformations.

Both talks were well received; we were asked quite a few interesting questions and even received suggestions on future development ideas. The entire conference schedule was very strong this year, with hot topics such as the "IoT Day", "Project Quality Day", and the "LocationTech Day" that all provided high quality talks on interesting topics. Here are my favorite picks:

  • The first day keynote by Stefan Ferber of Bosch Software Innovations was one of the best keynotes I have seen at EclipseCons. It is good to see such a powerful entity joining the Eclipse ecosystem.
  • I really liked two Xtext talks: Business DSLs in Web Applications and The Future of Xtext. As both IncQuery and VIATRA rely on Xtext, it is good to see this framework moving ahead at such a high pace and quality.
  • GEF4 - Sightseeing Mars was one of the highlights of the Conference to me. I was very pleased to see such an important piece of technology gaining new momentum and fresh ideas. The demos were impressive too!
  • Processing GeoSpatial Data at Scale was very useful to anyone interested in this topic, as it provided a thorough overview of the domain, including challenges and technologies. Stay tuned for some new innovation coming from the IncQuery team in this area in the near future.
  • Finally, I'm very happy to see GS Collections joining the Eclipse family under the name Eclipse Collections framework. The talk was one of the best at the conference. In fact, we are planning to evaluate this technology for use in IncQuery, to optimize the memory footprint.

We would like to thank the Budapest University of Technology and Economics, the MONDO and CONCERTO EU FP7 Projects, and IncQuery Labs Ltd. for supporting our talks at the EclipseCon Europe 2015 conference.

by István Ráth at November 21, 2015 06:53 PM

EMF Forms goes AngularJS

by Maximilian Koegel and Jonas Helming at November 20, 2015 01:13 PM

Over three years ago, we started the explicit development of EMF Forms as a sub component of the EMF Client Platform. The goal was to ease the development of data-centric form-based UIs based on a given EMF data model. Rather than manually coding UIs (e.g. in SWT), the approach is to describe them declaratively in a simple model language, which is focussed on the specification of forms – the View Model. A view model instance is then interpreted by a flexible and extensible rendering component to create a working UI at the end.
The approach has been a major success and shows significant advantages over manual UI programming or the usage of WYSIWYG editors. Besides the lower development effort and the higher quality of the resulting form-based UI, another advantage is the technological flexibility of the approach. The view model, which specifies the UI, does not depend on a certain UI toolkit (e.g. SWT, GWT or JavaFX). Implementing new renderers allows you to switch the UI technology without respecifying the concrete UI itself. With renderers for the Remote Application Platform and Vaadin, EMF Forms is already used for web applications.
EMF Forms has grown to a very active, frequently used project. This success motivates us to continuously drive the technology forward, extend its use cases, and, in doing so, attract new users. An obvious new field for applying the concepts of EMF Forms is the implementation of data-centric web applications. More and more complex business applications are developed as single-page web applications using JavaScript and frameworks such as AngularJS and EmberJS. These clients are then connected to backends using Restful Services. This JavaScript based area opens the gate for a large group of new users and use cases. Therefore, it is a consequent step to implement an AngularJS based renderer for EMF Forms.
The former eponym “EMF” is rather unknown in the web area. Additionally, it loses its central role for defining the data schema and the UI schema. Therefore, a new name was required: JSON Forms. However, it is not difficult to guess which technology takes over the role of EMF.

What happened before?

Before we talk about the world of JavaScript, we would like to take a brief look back at the main concepts of EMF Forms. While with JSON Forms we switch to a new technology stack, the basic idea and the basic concepts remain the same as in EMF Forms. If you are already familiar with EMF Forms, you can probably skip this section.

Both frameworks are based on the fact that typical UI toolkits are not focussed on the implementation of form-based UIs. Therefore, they make things unnecessarily complicated and require too much effort. For a typical input field displaying a String attribute, you need to implement a label as well as a text field, possibly validate the input, and bind it to the underlying data entity. This has to be repeated for all required fields in a form.

In a declarative language, like the one provided by EMF/JSON Forms, you only need to specify the existence of a control, which references an attribute of the data schema. The whole creation of a fully functional UI, including validation and binding is then done by a rendering component.

Analogously, layouts are specified, in which controls can be embedded. As for controls, the declarative approach focuses on concepts, which are typically required in form-based UIs. Instead of complex layouts, such as GridLayout, the declarative language provides simple concepts such as groups or columns. This abstraction makes the specification of UIs much simpler and more efficient. Additionally, the central rendering component, which replaces a great deal of the manually written UI code, improves the adaptability and maintenance. Any desired change to the UI must be applied only once on the renderer. Further information on EMF Forms can be found here.

Last but not least, the declarative description of the UI is technology independent. Only the renderer component is bound to a specific toolkit. Therefore, it is possible to migrate existing UIs to new technologies. JSON Forms provides a complete new technology stack, based on HTML, CSS, JavaScript, JSON, JSON Schema and AngularJS. However, our goal is to keep the declarative description compatible, so it is also possible to reuse existing “view models” from EMF Forms on the new stack.

A new technology stack

As mentioned before, the general concept of EMF Forms has been maintained. The declarative way of describing UIs and the central rendering approach provide the same advantages and can be efficiently implemented in a client-server oriented browser application. To display a form, JSON Forms still needs three artefacts: the definition of the displayed entity (Data Model), the declarative description of the UI (View Model) and finally the data entity to be displayed (Data Model Instance). In EMF Forms, all those artefacts are modelled in EMF. As the name already implies, we use JSON instead of EMF for JSON Forms. More precisely, we use JSON Schema for the data model, and JSON objects for the view model and the data entity. Therefore, in JSON Forms, we use the terms “data schema” and “ui schema” instead of “data model” and “View Model”.
The following example shows a very simple data schema, which defines a data object with only two attributes. The specified entity shall be displayed in a form-based UI later on. That means there are controls to enter both attributes, “Name” and “Gender”. The schema defines the possible values for “Gender” as an enumeration. The “Name” field is specified as mandatory. Both constraints are considered by JSON Forms.

  "type": "object",
  "properties": {
    "name": {
      "type": "string"
    "gender": {
      "type": "string",
      "enum": [ "Male", "Female" ]

The schema only describes the data to be displayed, it does not specify how the data should be rendered in the form. This is specified in a second JSON-based artifact, the “UI Schema” (see listing below). The ui schema references the data schema, more precisely the attributes defined in the data schema. The ui schema element “Control” specifies that a certain attribute shall be rendered at a specific location in the UI. Therefore, it references the specific attribute from the data schema (see the JSON attribute “scope” in the following example). The renderer is now responsible for displaying the attribute; accordingly, it also selects a UI element, e.g. a text field for a string attribute. Therefore, you do not need to specify that there should be a drop-down for the “Gender” attribute or which values it should contain. Also, you do not need to specify additional labels if the labels should just show the name of the attribute. If you want to show another label, you can optionally specify it in the ui schema. As you can see, JSON Forms derives a lot of information directly from the data schema and therefore reduces duplicate specification.

The ui schema also allows structuring controls with container elements. This allows you to specify a logical layout. A simple example is a HorizontalLayout, which displays all children elements next to each other. As in EMF Forms, there are more complex layouts in practice, for example groups, stacks, or multi-page forms. However, the basic concept remains the same: the ui schema specifies, in a simple way, how the form-based UI is structured, and the rendering component takes care of creating a concrete UI. The following example ui schema shows two controls structured in a horizontal layout:

  "type": "HorizontalLayout",
  "elements": [
      "type": "Control",
      "scope": {
        "$ref": "#/properties/name"
      "type": "Control",
      "scope": {
        "$ref": "#/properties/gender"

This simple and concise specification of the data schema and the ui schema is already enough to render a fully functional form-based UI. This is done by the JSON Forms rendering component described in the following section.


The two artifacts, the data schema and the ui schema, are now rendered to a UI. That is done by the rendering component. In JSON Forms, it uses HTML for defining the visible elements of the UI and JavaScript for the implementation of any behavior, such as data binding and validation. To ease the implementation and also offer state-of-the-art features such as bi-directional data binding, we additionally use the framework AngularJS. It has had a growing user base since its publication in 2012, which is already a long period in the volatile area of web frameworks.

The rendering component consists of several registered renderers. Every renderer is responsible for rendering a specific element of the ui schema. This allows the modular development of new renderers. Additionally, specific renderers can be replaced.


A frequently raised question is why AngularJS alone is not enough to efficiently develop form-based Web UIs. AngularJS definitely provides a lot of support for the development of web UIs; however, implementing the desired features manually is still required. To implement the example above, you have to complete several manual steps in AngularJS. First you have to create an HTML template defining all UI elements such as the text box, the labels and the drop-down element. This can lead to complex HTML documents, especially for larger forms. Now, you have to manually set the AngularJS directives on those elements to bind the data and to add validations. Finally, to achieve a homogenous look & feel, you need to lay out and align all created elements, typically with CSS. That means, even for defining a very simple form, you have to deal with three different languages: JavaScript, HTML and CSS. In contrast, when using JSON Forms, you just have to define the data schema and ui schema, both done in a simple JSON format. The rendered form just needs to be embedded into an existing web page using a custom directive.

The declarative approach of JSON Forms especially pays off, in case the underlying data schema is extended or changed. In this case, you just need to slightly adapt the ui schema, e.g. by adding a new control. Another advantage is that there is one implementation per UI element, enabled by the renderer registry. This allows you to adapt the form-based UI at a central place in a homogenous way. As an example, if you want to add a specific behavior to text fields, you just need to adapt the renderer for string attributes.

Therefore, JSON Forms allows you to adapt and replace existing renderers. Renderers can be registered for the complete application, e.g. for all text fields, or alternatively for specific attributes, e.g. only for the string attribute “name”. As an example, if you want to show the Enumeration “Gender” from before as radio buttons instead of a drop-down box, you could adapt the renderer for Enumerations. Alternatively, the registration can even depend on a certain condition, e.g. you can adapt the renderer for all Enumerations with only two possible values.


After the renderer has created a ready-to-use UI, the question remains how this can be used in an existing application. In the following section, we describe how to embed JSON Forms into any web application.


To use the rendered form at the end, it is typically embedded into an existing application. As for EMF Forms, our focus is on a non-invasive integration. That means it is possible to easily integrate the framework into an existing application. Additionally, it should integrate well with existing frameworks, such as Bootstrap.

For embedding JSON Forms into an HTML page, we use a specific directive (as often done in AngularJS). The JSON Forms directive specifies the data schema, the data entity as well as the ui schema to be shown (see the following code example). The values of the attributes “schema”, “ui-schema” and “input” must be in the scope of the current controller. Therefore, they can be retrieved from any kind of source, which allows full flexibility in connecting a backend. In a typical application, the data schema and the ui schema would be static content, while the data entity is retrieved from a REST service.


<jsonforms schema="mySchema" ui-schema="myUiSchema" input="myDataObject"/>


In contrast to EMF Forms, JSON Forms uses JSON to represent the ui schema. The default JSON serialization is much easier to read than the EMF one. Further, JSON is a de-facto standard in the JavaScript world. A first ui schema can easily be created using any kind of text editor. However, good tooling for creating and modifying View Models (ui schemata in JSON Forms) has been an important success factor for EMF Forms. As an example, the EMF Forms tooling allows you to generate a complete default view model based on a given data schema. This default model can then be iteratively adapted. This feature reduces the initial effort for creating a form-based UI even more.

Of course we also want to support the user as much as possible when specifying ui schemata in JSON Forms. The first good news is that you can directly export existing view models from EMF Forms to JSON Forms. That allows you to reuse any existing view model as well as use the existing and well-known tooling for the creation of new ui schemata. Besides the ui schema, we also provide an export from an existing EMF model to a JSON data schema. Therefore, you can reuse the existing features of the EMF Forms and EMF/Ecore tooling.

To export view models and Ecore models to JSON Forms, EMF Forms provides a new option in the right-click menu on both artefacts. This has been introduced in the 1.8.x development stream. Further documentation on how the ui schema can then be used with JSON Forms can be found on the JSON Forms homepage and in the EMF Forms / JSON Forms integration guide.

In the medium term, we of course also want to address user groups outside of the Eclipse ecosystem with JSON Forms. Therefore, we are working on a ui schema editor as a pure web application. As for EMF Forms, the editor is of course based on JSON Forms itself. Therefore, the framework is bootstrapping itself.


With JSON Forms, EMF Forms enters the JavaScript world and the area of typical single-page web applications. The well-proven concepts of EMF Forms are transferred seamlessly. The implementation is adapted to the new context and the requirements of typical web applications. Technically, JSON Forms is based on HTML, CSS, JavaScript, JSON, JSON Schema and AngularJS. JSON Forms can be used independently of Eclipse. However, you can reuse the data models and view models from EMF Forms. Therefore, it should be easy for existing EMF Forms users to get started with JSON Forms. The most important concepts of EMF Forms are already supported in JSON Forms, and we are actively working on completing the framework.

The development of JSON Forms does not mean we will lose our focus on EMF Forms. We plan on continuously driving it forward and extending its vital user base.

One advantage of the declarative approach is especially relevant in the web area, i.e. for JSON Forms: the independence from a UI toolkit. If you manually develop web UIs, you bind the application to a specific JavaScript framework, e.g. AngularJS or Ember.js. This is a technical risk, especially in the area of web frameworks. Angular has been pretty popular for the past 3 years, which is already a long time for a JavaScript framework. However, at the end of 2015, the next major version will be published, which is not fully backwards compatible. With JSON Forms, you partially mitigate this risk. The actual specification of your custom forms is done in a declarative and UI-technology-independent JSON format. By adding new renderers, you can reuse this specification with new JavaScript frameworks or new versions of them.

If you want to try JSON Forms, please visit our website; it provides tutorials and a running live example, where you can specify a ui schema online. Further information about EMF Forms and first steps with JSON Forms can also be found on the EMF Forms website. Finally, we will soon start a blog series which provides detailed tutorials on how to implement complex forms with JSON Forms, based on an example application. If you want to learn about the ongoing development of JSON Forms, please follow us on Twitter.



by Maximilian Koegel and Jonas Helming at November 20, 2015 01:13 PM

New Releases of Eclipse IoT Projects Advance IoT Open Source Technology

November 19, 2015 02:00 PM

These projects and the Eclipse IoT ecosystem provide open source IoT technology for developers to build IoT solutions.

November 19, 2015 02:00 PM

Service Update for the Clean Sheet Eclipse IDE Look and Feel

by Frank Appel at November 18, 2015 09:20 AM

Written by Frank Appel

Roughly two weeks ago, I published an introduction to the Clean Sheet Eclipse IDE look and feel, an ergonomic theme extension for Windows 10. Happily, the feature got a surprisingly good reception considering its early development stage.

Thanks to the participation of dedicated early bird users, we were able to spot and resolve some nuisances with respect to the tiny scrollbar overlay feature of tables and trees. Please refer to the issues #43, #39, #38, #37, #33, #32, #31, #30, and #29 for more details.

These achievements offer a good opportunity to release a service update (version 0.1.2), containing all fixes and some minor enhancements like displaying the feature in the about dialog. The new version can be installed/updated by dragging and dropping the ‘Install’ icon onto a running Eclipse workbench.

Drag to your running Eclipse installation to install Clean Sheet


Select Help > Install New Software…/Check for Updates.
P2 repository software site @ http://fappel.github.io/xiliary/
Feature: Code Affine Theme

So, don’t be shy and give it a try 😉

Of course, it is interesting to hear suggestions or find out about further potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

Clean Sheet Eclipse IDE Look and Feel

In case you’ve missed out on the topic and you are wondering what I’m talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge).

Eclipse IDE Look and Feel: Clean Sheet Screenshot

For more information please refer to the features landing page at http://fappel.github.io/xiliary/clean-sheet.html or read the introductory Clean Sheet feature description blog post.

The post Service Update for the Clean Sheet Eclipse IDE Look and Feel appeared first on Code Affine.

by Frank Appel at November 18, 2015 09:20 AM

Benchmarking JavaScript parsers for Eclipse JSDT

by gorkem (noreply@blogger.com) at November 17, 2015 06:36 PM

The JavaScript parser for the Eclipse JSDT project is outdated. It lacks support for the latest EcmaScript 2015 (ES6) standard and has quality issues. Moreover, the parser in JSDT is derived from JDT’s Java parser, hence it is not adopted by the JavaScript community at large, leaving the JSDT committers as its sole maintainers. Luckily, there are good quality JavaScript parsers that already have a large number of tools built around them. However, these parsers, like most JavaScript tools, are developed in JavaScript and require additional effort to integrate with Eclipse JSDT, which runs on a Java VM. In the last few weeks, I have been experimenting with alternatives that enable such integration.


Before I go into the details of integration let me quickly introduce the parsers that I have tried.


Acorn is a tiny parser written in JavaScript that supports the latest ES6 standard. It is one of the most adopted parsers and used by several popular JavaScript tools. It parses JavaScript to ESTree (SpiderMonkey) AST format and is extensible to support additional languages such as JSX, QML etc.


Esprima is also a fast, tiny parser written in JavaScript, and it likewise supports the latest ES6. Its development has recently moved to the jQuery Foundation, and it has been in use on Eclipse Orion for a while. Just like Acorn, it uses the ESTree AST format.


Shift (Java) is the only Java-based parser on my list. It is a relatively new parser. It uses the Shift AST as its model, which is different from the widely adopted ESTree.

Why does the AST model matter?

The AST model is what tools actually operate on. For instance, a JavaScript linter first uses a parser to generate an AST model and then operates on that model to find possible problems. As one can imagine, an IDE that uses a widely adopted AST model can utilize the ecosystem of JavaScript tools more efficiently.

Eclipse JSDT already comes with its own AST model that is used internally and is very hard to replace. Therefore, regardless of the AST model generated by the parser, it will be converted to JSDT’s own model before being used, which renders discussions around AST models moot in JSDT’s context.


The parsers other than Shift, which already runs on the Java VM, need a mechanism to play nicely with the Java VM. I have experimented with three mechanisms for running Acorn and Esprima for JSDT so far.


node.js

This mechanism utilizes node.js to run the parser code. node.js runs as an external process that receives the content to be parsed and returns the results. I have chosen to use console I/O to communicate between node.js and the Java VM; there are also other techniques, such as running an HTTP or socket based server for communication. In order to avoid the startup time of node.js, which affects the performance significantly, the node.js process is kept running.
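As a rough illustration of this approach (not the actual JSDT integration code), the sketch below starts a single node.js process and exchanges one request and one response per parse over stdin/stdout. The parse-server.js script is a hypothetical wrapper around Acorn or Esprima that reads one line of source and answers with one line of JSON.

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

public class NodeParserClient {
  public static void main(String[] args) throws Exception {
    // Start node once and keep the process alive to avoid paying its startup cost per parse.
    // "parse-server.js" is a hypothetical helper script, not part of Acorn or Esprima.
    Process node = new ProcessBuilder("node", "parse-server.js")
        .redirectErrorStream(true)
        .start();
    BufferedWriter toNode = new BufferedWriter(
        new OutputStreamWriter(node.getOutputStream(), StandardCharsets.UTF_8));
    BufferedReader fromNode = new BufferedReader(
        new InputStreamReader(node.getInputStream(), StandardCharsets.UTF_8));

    toNode.write("var answer = 42;"); // request: the source to parse (real sources would need newline escaping)
    toNode.newLine();
    toNode.flush();

    String astAsJson = fromNode.readLine(); // response: the ESTree AST serialized as JSON
    System.out.println(astAsJson);
  }
}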


J2V8

J2V8 is a JNI-based wrapper that bundles the V8 JavaScript VM. It provides a low-level Java API to execute JavaScript on the bare V8 engine. Although it uses V8, it does not provide the full functionality of node.js and can only be used to execute selected scripts; fortunately, the Acorn and Esprima parsers can be run with J2V8.
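A minimal sketch of what running Esprima on J2V8 could look like, assuming the J2V8 library is on the classpath and a browser/UMD build of Esprima is available as esprima.js on disk; the method names follow the J2V8 API as I understand it, so treat the details as an assumption rather than a reference.

import java.nio.file.Files;
import java.nio.file.Paths;
import com.eclipsesource.v8.V8;

public class J2V8ParserExample {
  public static void main(String[] args) throws Exception {
    V8 runtime = V8.createV8Runtime();
    try {
      // Load the parser itself into the V8 runtime.
      String parserSource = new String(Files.readAllBytes(Paths.get("esprima.js")), "UTF-8");
      runtime.executeVoidScript(parserSource);

      // Parse a snippet and bring the AST back to Java as a single JSON string,
      // so only one value has to cross the JNI boundary.
      String astAsJson = runtime.executeStringScript(
          "JSON.stringify(esprima.parse('var answer = 42;'))");
      System.out.println(astAsJson);
    } finally {
      runtime.release(); // J2V8 runtimes must be released explicitly
    }
  }
}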


Nashorn

Nashorn is the JavaScript engine that is built into Java 8. It provides a simple, high-level API to run JavaScript.
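A similar sketch for Nashorn, using the standard javax.script API; it assumes a browser build of Acorn (acorn.js) on disk that defines a global acorn object.

import java.io.FileReader;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class NashornParserExample {
  public static void main(String[] args) throws Exception {
    ScriptEngine nashorn = new ScriptEngineManager().getEngineByName("nashorn");

    // Load the parser into the engine.
    nashorn.eval(new FileReader("acorn.js"));

    // Expose the source as a global variable and parse it, returning the AST as JSON.
    nashorn.put("src", "var answer = 42;");
    Object astAsJson = nashorn.eval("JSON.stringify(acorn.parse(src))");
    System.out.println(astAsJson);
  }
}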

Performance Benchmarks

The criteria for choosing a parser may vary, from the feature set to the AST model used to even the community size. However, performance is the one criterion that makes all the others relevant. So, in order to compare the different alternatives, I have developed a number of benchmark tests that cover both the parsers and the integration mechanisms.

All benchmark tests produce a result with an AST model, either in JSON form or as a Java object model. The tests exclude the startup time of their environments; for instance, the startup time of the node.js process would affect the results significantly, so it is discarded by the tests. The current test sets use AngularJS 1.2.5 and jQuery Mobile 1.4.2 (JQM) as the JavaScript code to be parsed.
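The exact benchmark code is not shown here, but a measurement along these lines could look like the following sketch: warm-up runs are discarded, and only the parse calls themselves are timed and averaged.

import java.util.function.Supplier;

public class ParserBenchmark {
  // Rough sketch of the measurement loop: 'parse' stands for any of the mechanisms
  // above (node.js round-trip, J2V8, Nashorn, Shift) wrapped in a Supplier.
  static double averageMillis(Supplier<?> parse, int warmupRuns, int measuredRuns) {
    for (int i = 0; i < warmupRuns; i++) {
      parse.get(); // discarded: covers JIT warm-up and environment startup effects
    }
    long totalNanos = 0;
    for (int i = 0; i < measuredRuns; i++) {
      long start = System.nanoTime();
      parse.get();
      totalNanos += System.nanoTime() - start;
    }
    return totalNanos / 1_000_000.0 / measuredRuns;
  }
}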

Table 1. Average time for each benchmark (mean ± deviation)

  Acorn (AngularJS)      118.229 ms   ± 1.453
  Acorn (JQM)            150.250 ms   ± 4.579
  Acorn (AngularJS)      181.617 ms   ± 6.421
  Acorn (JQM)            177.265 ms   ± 9.074
  Acorn (AngularJS)       59.115 ms   ± 0.698
  Acorn (JQM)             34.670 ms   ± 0.250
  Esprima (AngularJS)     98.399 ms   ± 0.77
  Esprima (JQM)          114.753 ms   ± 1.007
  Esprima (AngularJS)     73.542 ms   ± 0.450
  Esprima (JQM)           73.848 ms   ± 0.885
  Shift (AngularJS)       16.369 ms   ± 1.019
  Shift (JQM)             15.900 ms   ± 0.325
As expected, the Shift parser, which runs directly on top of the Java VM, is the quickest solution. To be fair, the Shift parser is missing several features, such as source locations, tolerant parsing and comments, that may affect the parsing performance. However, even after these features are added it may well remain the quickest. I feel that the performance of J2V8 could also improve with more creative use of the low-level APIs, but there is so much memory copying from the Java heap through JNI to the V8 heap and back that I am not sure the improvement would be significant.

The surprise for me is Esprima’s performance with Nashorn. It is unexpected in two ways: it is actually the third-quickest option, yet Acorn does not deliver the same level of performance on Nashorn.

by gorkem (noreply@blogger.com) at November 17, 2015 06:36 PM

EMF Client Platform 1.7.0 Feature: EMF Change Broker

by Maximilian Koegel and Jonas Helming at November 17, 2015 08:49 AM

With Mars.1, we released EMF Client Platform and EMF Forms 1.7.0. EMF Forms is a framework focused on the creation of form-based UIs. If you are not yet familiar with EMF Forms, please refer to this tutorial for an introduction. EMF Forms is a subcomponent of the EMF Client Platform, which is designed for general support of the development of applications based on an EMF data model. While we focused our development activities largely on EMF Forms during the last releases, the EMF Client Platform is still under active development, too. In this post, we would like to introduce a new feature of the EMF Client Platform: the EMF Change Broker.

One of the core and most valuable features of EMF is change notification. By default, it is possible to register change listeners on EObjects and thereby get notified about changes to attributes and references. The EContentAdapter even makes it possible to listen to complete trees of objects, so you can register a listener for all changes in a model instance. However, when implementing an EContentAdapter, you typically need to filter the notifications the adapter receives in order to only react to changes of interest. As an example, you might only be interested in changes on a certain EClass or a certain EFeature of your model. Of course, you could add conditions to every listener, but for this kind of generic filter a central broker architecture is typically more efficient. Additionally, one wants to avoid registering several EContentAdapters, as this usually slows down performance. Therefore, a central broker for change notifications of an EMF model instance is very helpful. Such a broker is provided by the EMF Client Platform ChangeBroker. The following diagram shows a conceptual overview.


The ChangeBroker receives notifications from model instances and dispatches them to registered observers. There are specific types of observers; for example, you can register an observer only for changes on a specific EClass or a specific EFeature.

The change broker is a stand-alone component that can be used in combination with any EMF model. There is a ready-to-use integration with the persistence layer of the EMF Client Platform, but the Change Broker can also be used stand-alone without other components of the EMF Client Platform. Please see this tutorial for a detailed introduction to the change broker.
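To illustrate the pattern (this is not the actual ECP API, which is described in the tutorial linked above), here is a minimal conceptual sketch: a single EContentAdapter attached to the model root dispatches notifications to observers registered for a specific EClass. The class name and observer shape are made up for the example; only the EMF types are real.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;
import org.eclipse.emf.common.notify.Notification;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.util.EContentAdapter;

// Conceptual sketch only: one adapter for the whole content tree, with per-EClass observers.
public class SimpleChangeBroker extends EContentAdapter {

  private final Map<EClass, List<Consumer<Notification>>> observers = new ConcurrentHashMap<>();

  public void subscribe(EClass eClass, Consumer<Notification> observer) {
    observers.computeIfAbsent(eClass, key -> new CopyOnWriteArrayList<>()).add(observer);
  }

  @Override
  public void notifyChanged(Notification notification) {
    super.notifyChanged(notification); // keeps the adapter attached to newly added children
    Object notifier = notification.getNotifier();
    if (notifier instanceof EObject) {
      List<Consumer<Notification>> registered = observers.get(((EObject) notifier).eClass());
      if (registered != null) {
        registered.forEach(observer -> observer.accept(notification));
      }
    }
  }
}

Attaching it once, for example with rootObject.eAdapters().add(broker), covers the whole containment tree, and each observer only sees the notifications matching its EClass.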



by Maximilian Koegel and Jonas Helming at November 17, 2015 08:49 AM

Why I love Open Source

by Gunnar Wagenknecht at November 17, 2015 08:22 AM

Here is another example of why I love working with open source software and its communities. Last Thursday, one of my team members reached out and asked for advice regarding a build issue he was observing. At Tasktop, we use Maven and Tycho for building almost all of our products. One of Tycho’s unique features is automated stable versioning based on source code history and dependencies.

It turns out that there was a shortcoming in the implementation with regard to building an aggregated version based on component dependencies. After finding that out, I opened a defect report, prepared a patch, and posted to the mailing list on Friday for awareness and feedback. This took me less than an hour of my time. By Monday, only 72 hours later, the patch was accepted and merged into the main code base, unblocking my colleague. In short, the process was:

  1. Discover the root cause.
  2. File a defect report.
  3. Prepare a patch.
  4. Communicate about the change and collect feedback.
  5. Cross fingers – or – hold thumbs.

In my opinion, the key criterion for such a change being successful is the patch size. Yes, I could have added configurability to the change, but that would have increased the patch size. The larger a change is, the more time committers need to set aside for understanding and reviewing it. It’s about the minimal viable solution and keeping things simple. Don’t overwhelm committers with your first change.

Well, and it pays off going to conferences and having a beer or two with project committers.

by Gunnar Wagenknecht at November 17, 2015 08:22 AM

EclipseCon NA 2016 - Early-bird talks accepted

November 17, 2015 07:53 AM

Five talks have been accepted early for EclipseCon NA. The final deadline for proposals is November 23.

November 17, 2015 07:53 AM

Accessing PreferenceStore - JFace Preferences or Eclipse Preferences

by Its_Me_Malai (noreply@blogger.com) at November 17, 2015 05:28 AM

Normally, when we need to access the PreferenceStore, we access it via the Activator class of the plug-in:

IPreferenceStore prefStore = Activator.getDefault().getPreferenceStore();

But sometimes there are use cases where you may not have access to the Activator of the plug-in whose PreferenceStore you need to access. In that case you can use the Eclipse IPreferencesService:

Platform.getPreferencesService().getInt(Activator.PLUGIN_ID, preferenceKey, defaultValue, null);
For more details on preferences, please read the AWESOME document “How Eclipse runtime preferences and defaults work”.
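For completeness, here is a small sketch that shows both approaches side by side. Activator, PLUGIN_ID and the "timeout" key stand in for a hypothetical contributing plug-in, as in the snippets above.

import org.eclipse.core.runtime.Platform;
import org.eclipse.core.runtime.preferences.IPreferencesService;
import org.eclipse.jface.preference.IPreferenceStore;

public class TimeoutPreferenceReader {

  private static final String PREFERENCE_KEY = "timeout"; // hypothetical preference key

  // Inside the contributing plug-in: go through its Activator (an AbstractUIPlugin).
  int readViaPreferenceStore() {
    IPreferenceStore prefStore = Activator.getDefault().getPreferenceStore();
    return prefStore.getInt(PREFERENCE_KEY);
  }

  // From any plug-in: look the value up by qualifier via the preferences service.
  int readViaPreferencesService() {
    IPreferencesService service = Platform.getPreferencesService();
    return service.getInt(Activator.PLUGIN_ID, PREFERENCE_KEY, 30 /* default */, null);
  }
}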

by Its_Me_Malai (noreply@blogger.com) at November 17, 2015 05:28 AM