Multimap — How it works

by Nikhil Nanivadekar at June 23, 2018 06:48 PM

In my previous blogs I explained how Eclipse Collections UnifiedMap and UnifiedSet work. In this blog, we will take a look at the Multimap, or multi-valued Map, in Eclipse Collections.

According to the Javadoc of Map, a Map is an object that maps keys to values. A map cannot contain duplicate keys; each key can map to at most one value. However, we come across scenarios wherein more than one value has to be mapped to a key. In such scenarios, we end up creating a Map with a single key but a collection of values. It is important that the semantics of the value collection are maintained, namely:

  1. A List of values behaves as a List: it allows duplicates and maintains order.
  2. A Set of values behaves as a Set: a hashed data structure containing unique elements.
  3. A Bag of values behaves as a Bag: a hashed data structure that allows duplicates.

Eclipse Collections provides Multimaps for all three types of value collections: ListMultimap, SetMultimap (Sorted, Unsorted) and BagMultimap (Sorted, Unsorted). Mutable, Immutable, Synchronized and MultiReader variants of all these Multimaps are available in Eclipse Collections.

Let us consider an Item object and an Item data set as below. The Item data set consists of three fruits, two vegetables and one meat.
Item data set for tests.

Let us see how we can group the list of items:

  1. In the JDK we can use the streams API with Collectors.groupingBy() to get a Map<String, List<Item>> in this case.
  2. Eclipse Collections provides the groupBy() API which returns an Eclipse Collections Multimap. Since we are calling groupBy() on a MutableList, we will get a ListMultimap<String, Item>.
JDK Map<String, List<Item>>
Eclipse Collections ListMultimap<String, Item>
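The JDK approach can be sketched with standard-library code only; the Item type and sample data below are simplified stand-ins for the ones in the post:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByDemo {
    // Simplified stand-in for the post's Item type.
    record Item(String name, String category) {}

    public static void main(String[] args) {
        // Three fruits, two vegetables, one meat, as in the post's data set.
        List<Item> items = List.of(
                new Item("apple", "fruit"),
                new Item("banana", "fruit"),
                new Item("cherry", "fruit"),
                new Item("carrot", "vegetable"),
                new Item("potato", "vegetable"),
                new Item("beef", "meat"));

        // Collectors.groupingBy yields a Map<String, List<Item>>.
        Map<String, List<Item>> byCategory = items.stream()
                .collect(Collectors.groupingBy(Item::category));

        System.out.println(byCategory.get("fruit").size());     // 3
        System.out.println(byCategory.get("vegetable").size()); // 2
    }
}
```

With Eclipse Collections, the equivalent call on a MutableList would be groupBy(Item::category), returning a ListMultimap<String, Item>; that requires the Eclipse Collections dependency and is therefore not shown runnable here.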

To get a different type of Multimap, we need to use the overloaded methods which accept a target value collection, because both the JDK and Eclipse Collections have covariant overrides. The covariant override contract ensures that a groupBy() operation:

  1. On a List returns a ListMultimap
  2. On a Set returns a SetMultimap
  3. On a Bag returns a BagMultimap.

Let us see them side by side:

JDK vs Eclipse Collections Multimap construction from top to bottom: ListMultimap, SetMultimap, BagMultimap.
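On the JDK side, the target value collection is chosen via the downstream-collector overload of Collectors.groupingBy(); for example, a Set of values instead of a List (Item is again a simplified stand-in):

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class GroupByTargetDemo {
    record Item(String name, String category) {}

    public static void main(String[] args) {
        List<Item> items = List.of(
                new Item("apple", "fruit"),
                new Item("apple", "fruit"),   // duplicate on purpose
                new Item("carrot", "vegetable"));

        // The downstream collector selects the value-collection type: a Set
        // here, so the duplicate "apple" collapses into a single element.
        Map<String, Set<Item>> byCategory = items.stream()
                .collect(Collectors.groupingBy(Item::category, Collectors.toSet()));

        System.out.println(byCategory.get("fruit").size()); // 1
    }
}
```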

Eclipse Collections Multimaps Architecture:

Multimaps are backed by UnifiedMap, the memory-efficient Map implementation included in Eclipse Collections. The overall architecture for a Multimap without collisions can be seen below; the strategy for handling collisions is the same as that of UnifiedMap.

Eclipse Collections Multimap Architecture Schematic Diagram

Adding and Removing elements from Eclipse Collections MutableMultimap:

Eclipse Collections MutableMultimap has mutating operations like put(), putAll(), remove() and removeAll(). There are a few interesting aspects of these mutating methods; let us look at each one:

  1. put(), putAll(): These methods are interesting when called for a key which does not exist in the Multimap. The Eclipse Collections implementation handles these cases by creating a new collection and then adding the key and value. In the example below, there is no element with key=beverage. When we add the key-value pair beverage-milk, Eclipse Collections internally creates an empty List and then adds it to the MutableMultimap. On any further addition of values to key=beverage, the new values are added to that list. With the JDK implementation of Map<K, List<V>>, we have to handle the empty List creation ourselves.
MutableMultimap.put() operation in Eclipse Collections.
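With a plain JDK Map<K, List<V>>, the empty-list bookkeeping described above typically looks like this (a sketch; the key and value types are arbitrary):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultimapPutDemo {
    public static void main(String[] args) {
        Map<String, List<String>> multimap = new HashMap<>();

        // computeIfAbsent creates the empty List the first time a key is
        // seen, which is what MutableMultimap.put() does for you internally.
        multimap.computeIfAbsent("beverage", k -> new ArrayList<>()).add("milk");
        multimap.computeIfAbsent("beverage", k -> new ArrayList<>()).add("tea");

        System.out.println(multimap.get("beverage")); // [milk, tea]
    }
}
```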

2. remove(), removeAll() : These methods are interesting when the removal would leave an empty collection. The Eclipse Collections implementation ensures that a key never exists with an empty value collection: when the last value is removed for a particular key, the key itself is removed as well. This ensures that the Multimap contains only keys with non-empty value collections.

MutableMultimap.remove() operation in Eclipse Collections.
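With a plain JDK Map<K, List<V>>, this eviction behavior has to be mimicked by hand, for example (a sketch):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultimapRemoveDemo {
    public static void main(String[] args) {
        Map<String, List<String>> multimap = new HashMap<>();
        multimap.computeIfAbsent("beverage", k -> new ArrayList<>()).add("milk");

        // computeIfPresent removes the value; returning null once the list is
        // empty evicts the key itself, matching MutableMultimap.remove().
        multimap.computeIfPresent("beverage", (k, values) -> {
            values.remove("milk");
            return values.isEmpty() ? null : values;
        });

        System.out.println(multimap.containsKey("beverage")); // false
    }
}
```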

The Eclipse Collections Multimap has a rich and intuitive API specifically designed to help with iteration patterns pertaining to Multimaps, like keyBag(), keySet(), forEachKey(), forEachValue(), forEachKeyValue(), forEachKeyMultiValues(), selectKeysValues(), rejectKeysValues(), selectKeysMultiValues(), rejectKeysMultiValues(), collectKeysValues() and collectValues(), to name a few.

Memory Footprint (the lower the number, the better)

Below are a few memory footprint comparisons between JDK 1.8 HashMap and Eclipse Collections Multimap. These show the total memory footprint, including the constituents of the data structures.

Memory Comparison: EC ListMultimap<Integer, Integer> and JDK HashMap<Integer, List<Integer>>
Memory Comparison: EC ListMultimap<String, String> and JDK HashMap<String, List<String>>
Memory Comparison: EC SetMultimap<Integer, Integer> and JDK HashMap<Integer, Set<Integer>>
Memory Comparison: EC SetMultimap<String, String> and JDK HashMap<String, Set<String>>
Memory Comparison: EC BagMultimap<Integer, Integer> and JDK HashMap<Integer, Map<Integer, Long>>
Memory Comparison: EC BagMultimap<String, String> and JDK HashMap<String, Map<String, Long>>


To summarize:

  1. Eclipse Collections provides Multimap implementations with List, Set and Bag as backing collections.
  2. Eclipse Collections provides an intuitive API to create a Multimap.
  3. Eclipse Collections Multimap has a Multimap-specific API which handles initialization and eviction of the backing collections for you.
  4. The Eclipse Collections Multimap API is intuitive to use and is kept similar to the API provided by Maps.
  5. Eclipse Collections Multimaps consistently have a smaller memory footprint compared to the equivalent JDK implementations; the Eclipse Collections SetMultimap memory footprint is ~55% of that of the JDK equivalent.

Show your support: star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set and Map. It also has additional data structures like Multimap, Bag and an entire primitive collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

Multimap — How it works was originally published in Oracle Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.


Get the Scoop, a Homebrew for Windows

by Doug Schaefer at June 23, 2018 12:00 AM


When I came back to QNX over six years ago (wow, it’s been that long?), they offered a choice of one of the three main environments. I was excited to see what all the hype was about and picked Mac. I really enjoyed it. It provided a great blend of user experience with the power of Unix and the shell underneath. The trackpad on the MBP is amazing; it’s underrated how much of a productivity enhancer that is.

But eventually that machine got old and beat up and it was time for a new one. I had my fun with the Mac, but it’s not what’s used by many of the users of the tools I build. In the embedded space you see a pretty solid mix of Windows and Linux. And I was interested to see how much better Windows 10 was and whether it could overcome its really crappy command line environment. So that’s where I went.

I knew Eclipse works well there. It was invented there and still looks like a big MFC app. But selecting a shell environment revealed a lot of choice.

  • MSYS. This was the environment I used the last time I was on Windows. It came as the shell for the MinGW toolchain. But it seems to have died.
  • CYGWIN. This environment bugs me a lot as it maps Windows paths to Unix paths and confuses the hell out of native tools, like Eclipse CDT.
  • MSYS2. This is a pretty rich environment that seems to be an evolution of the old MSYS. It comes with a package manager but the choice of Arch Linux’s pacman is a tough one. It makes it your responsibility to figure out what 32/64-bit host and target combinations of the toolchains and libraries you want to install. But it does have everything, even Qt.
  • Scoop. Scoop is a more general package manager and is pretty easy to use. It’s very active and has a good community keeping the tools up to date. And while it has all my favorite tools for building C/C++ apps, it doesn’t have any libraries. But that’s fine, you should be building those yourself anyway to make sure you’re using the same toolchain settings.
  • Chocolatey. This is another package manager but at a higher level than Scoop. It’s less focused on being a shell and assumes you’re a Powershell user, which I’m not and barely understand.
  • Windows Subsystem for Linux. Also known as Bash for Windows, it provides a Linux emulation layer and gives you access to real Linux distributions. You can use it to access your files outside of its Linux emulated file system, but it’s pretty weird and not everything works.

For now, I’ve gone with Scoop. I really like it because I also use it to install other tools I use like docker and kubectl for my test cluster, and maven and python and svn, pretty much any command line tool I need. And it manages the PATH for you so all the tools are available in native apps without any magic, including the Windows command line if you’re stuck for anything else. But most of the time, I use busybox which gives me all the Unix tools I need including a not quite bash compatible shell, but that’s fine. I do wish it had C/C++ libraries like SDL or Qt, but it is open source and extensible so I could just do this myself.

The good news is that I’m finding myself very productive on Windows. I have a great shell environment with Scoop, good editors with emacs and Visual Studio Code, and Eclipse which still works best on Windows, and all the Windows apps I need. I don’t miss my Mac at all.


Eclipse Kura on Steroids with UPM and Eclipse OpenJ9

by Benjamin Cabé at June 21, 2018 10:28 AM

So it’s been a while since the last time I blogged about a cool IoT demo… Sorry about that! On the bright side, this post covers a couple projects that are really, really, neat so hopefully, this will help you forgive me for the wait! 🙃

UP Squared Grove IoT Development Kit

At the end of last year, a new high-performance IoT developer kit was announced. Built on top of the UP Squared board, it features an Intel Apollo Lake x86-64 processor, plenty of GPIOs, two Ethernet interfaces, USB 3.0 ports, an Altera MAX 10 FPGA, and more. You can get the kit from Seeed Studio for USD 249.

The UP Squared Grove IoT Development Kit

Of course, it wouldn’t be a Grove kit without the Grove shield that can be attached on top of the board to simplify the connection to a wide variety of sensors and actuators (and there’s actually a few of them in the kit).

Running Eclipse Kura on the UP Squared board

Enough with the hardware! With all this horsepower, it is of course very tempting to run Eclipse Kura on this. The UP Squared being based on an Intel x86-64 processor, it is incredibly easy to start by replacing the default OpenJDK JVM with Eclipse OpenJ9. Here’s your two-step tutorial to get Eclipse OpenJ9 and Eclipse Kura running on your board:

In case you are wondering how much faster OpenJ9 is compared to OpenJDK or Oracle’s JVMs, here’s a quick comparison of the startup time of Eclipse Kura on the UP Squared:

Eclipse Kura start-up time on Intel UP Squared Grove kit


UPM logo

UPM is a set of libraries for interacting with sensors and actuators in a cross-platform, cross-OS, language-agnostic, way.

There are over 400 sensors & actuators supported in UPM. Virtually all the “DIY” sensors you can get from SeeedStudio, Adafruit, etc. are supported, but beyond that, UPM also provides support for a wide variety of industrial sensors.

Thanks to Eclipse Kura Wires and the underlying concept of “Drivers” and “Assets”, Kura provides a way to access physical assets in a generic way.

In the next section, we will see a proof-of-concept of UPM libraries being wrapped as Kura “drivers” in order to make it really simple to interact with the 400+ kinds of sensors/actuators supported by UPM.

Integrating UPM in Kura Wires

UPM drivers are small native C/C++ libraries that expose bindings in several programming languages, including Java, and therefore calling UPM drivers from Kura is pretty simple.

The only thing you need is a few JARs for UPM itself (and for MRAA, the framework that is supporting it), the JARs for the driver(s) of the particular sensor(s) you want to use, and the associated native libraries (.so files) for the above. As you may know, OSGi makes it pretty easy to package native libraries that may go alongside Java/JNI libraries, so there is really no difficulty there.

In order for the UPM drivers to be accessible from Kura Wires, and to expose “channels” corresponding to the methods available on them, they need to be bundled as Kura Drivers. This is also a pretty straightforward task, and while I created the driver for only a few sensor types out of the 400+ supported in UPM, I am pretty confident that Kura drivers can be automatically generated from UPM drivers.

You can find the final result on my GitHub:

See it in action!

So what do we end up getting, and why should you care? Just check out the video below and see for yourself!

The article Eclipse Kura on Steroids with UPM and Eclipse OpenJ9 first appeared on Benjamin Cabé.


ECF Photon supports OSGi R7 Async Services - part 2

by Scott Lewis at June 20, 2018 06:30 PM

In a previous post, I described a usage of OSGi R7's Async Remote Services. This specification makes it easy to define, implement and use non-blocking remote services. ECF's implementation allows the use of pluggable transports, known as distribution providers.
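The non-blocking idea can be sketched with plain JDK types: a service contract whose methods return CompletableFuture lets the consumer compose on the result instead of blocking, regardless of the transport underneath. The interface and names here are hypothetical (the OSGi spec also supports org.osgi.util.promise.Promise return types):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncServiceDemo {
    // Hypothetical async remote service contract: returning CompletableFuture
    // (or an OSGi Promise) lets the distribution provider complete the call
    // asynchronously without blocking the caller.
    interface TimeService {
        CompletableFuture<Long> currentTime();
    }

    public static void main(String[] args) {
        // A local stub standing in for a remote implementation.
        TimeService service = () -> CompletableFuture.supplyAsync(System::currentTimeMillis);

        // The consumer composes on the future instead of blocking on a result.
        service.currentTime()
               .thenAccept(t -> System.out.println("time = " + t))
               .join();
    }
}
```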

Here's a partial list of distribution providers:

  • ECF generic
  • Jax-RS Jersey
  • Python.Java (supports async remote services between Java and Python with protocol buffers serialization)

It's also straightforward to create your own distribution provider, using a private or legacy transport and/or serialization. This can be done by extending one of the distribution providers above or creating a new one.

Most of these distribution providers have updated examples and/or tutorials, and many of them now have templates included in the Bndtools (4.0+) support added for Photon.

Separating the remote service contract from the underlying distribution provider via OSGi remote services allows implementers and consumers to create, debug, and test remote services without being bound to a single transport, while still allowing consistent (specified) runtime behavior.

For more info and links, please see the New and Noteworthy.


4+1 = CAFEBABE: Java Bytecode in Eclipse

by Arne Deutsch at June 20, 2018 04:00 PM

What might itemis staff do on their project-free 4 + 1 day? They continue their education, often by working on their own ideas; I would like to introduce just such a project today. The goal was to build familiarity with Xtext and Xtend. The result is a Java Byte Code (JBC) Editor based on these technologies.


Xtext is above all an Eclipse-based framework for creating textual DSLs and generating tools for them, while Java bytecode stored in .class files is binary. What appears out-of-scope at first is easy to manage with one or two tricks.

Without going into too much detail here, the most important part is replacing the IDocumentProvider of the editor. This converts the binary data to text when reading, and back to binary when writing. In a future article I'll go into more detail about the technique; today I will just stick to the use and functionality of the Java bytecode editor.


Why might you be interested in the contents of .class files and using an editor on them? In most projects the Java code itself is interesting, but not what the compiler makes of it. But there are cases in which a viewer and possibly an editor for the binary data could be helpful.

For example, anyone who writes their own compiler and designs a language for the Java Virtual Machine that compiles to bytecode will probably benefit if they can see the output of their tool. The same use case might arise with tool manufacturers who instrument bytecodes to allow, for example, tracing or profiling.

For anyone working on frameworks that dynamically create bytecode it might also be helpful to look at the code of existing classes, or to manually modify their code to observe the effects in real time before casting them into code.

Last but not least, an eager student who might want to look a little deeper into the technique could find it interesting and educational to see how the JVM gets its code. 


You can get an idea of the possibilities most easily by installing and trying out our Java bytecode editor. Just go to the following update site in Eclipse under Help -> Install New Software…

Install the JBC feature, confirm the license (EPL) and restart.


Open the editor

To open the .class file associated with a .java file, an Open JBC entry is provided in the Package and Project Explorer context menu. Executing this opens a new editor that displays the bytecode as a DSL.


You can of course open a .class file directly with the editor. This is done as usual in Eclipse via the context menu, using the command Open With -> JBC Editor.



Editing the bytecode

The editor displays each byte as a hexadecimal number. The only exception is UTF-8 strings, which are represented as they are in Java; this is to provide enhanced editability. Changing a string value is as easy as it is in Java code. The data is enriched with keywords and grouping brackets to highlight its meaning. The presentation corresponds directly to the ClassFile format defined in the Java Spec.

You can generally navigate within the code using F3 / Ctrl + Click. This allows you to follow references easily, which are represented by two bytes in most cases. If occurrence marking is activated the target under the cursor will be highlighted.


In the bytecode, the length of a table is displayed before each table. This length must be adjusted if entries are deleted or added using the editor. The editor therefore offers a validation that compares the actual lengths of tables with their specified lengths. If they do not match, they are highlighted as errors and can be adjusted via Quick Fix (Ctrl + 1).
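The structure the editor visualizes is easy to peek at yourself. This sketch reads the magic number (0xCAFEBABE), the class file version, and the constant pool count (the first of those table lengths) straight from a .class file available on the classpath:

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ClassFileHeaderDemo {
    public static void main(String[] args) throws IOException {
        // Read java.lang.Object's class file from the running JVM itself;
        // .class resources are exempt from module encapsulation.
        try (InputStream in = Object.class.getResourceAsStream("/java/lang/Object.class");
             DataInputStream data = new DataInputStream(in)) {
            int magic = data.readInt();             // always 0xCAFEBABE
            int minor = data.readUnsignedShort();
            int major = data.readUnsignedShort();
            int cpCount = data.readUnsignedShort(); // constant_pool_count table length

            System.out.printf("magic=%08X major=%d constant_pool_count=%d%n",
                    magic, major, cpCount);
        }
    }
}
```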


Information displayed by hovering, together with the outline, can provide further understanding of the bytecode. The elements from the editor are also shown here, but with resolved references and interpreted values. For example, any access modifiers of a class are displayed textually rather than as a hexadecimal number, as in the editor.



To get an initial feel for the editor, try the following:

  1. Create a ‘Hello World’ program
  2. Run it and observe its output
  3. Open the JBC editor
  4. Change the string constant that contains the output text
  5. Save in the JBC editor
  6. Run the program again and observe the output

The source code

This project was implemented quickly thanks to Xtext. It could be extended with many more features that you might find useful: extended validations, better auto-completion, templates, refactorings, a formatter and more. All these could be added with minimal effort, or at least significantly less effort than would be involved in writing an Eclipse plug-in with plain Java. If your appetite is whetted you can also look at the source code on GitHub.


Welcome (at Eclipse), Theia!

by Jonas Helming and Maximilian Koegel at June 20, 2018 10:35 AM

Although there hasn’t yet been a big official announcement (except Sven's blog last week), one of the most interesting Eclipse project proposals in recent years was approved a couple of weeks ago. So what is this all about? In a nutshell, it is a new platform to build modern IDEs and tools based on web technologies. It was open source and available before it even became an Eclipse project. However, the move to the Eclipse ecosystem is a good opportunity to take a closer look at this very promising project.


According to the hypothesis, “Theia” collided with Earth 4.5 billion years ago

What is Theia?

Theia is a platform based on modern web technologies (TypeScript, HTML and CSS) to build IDEs and tools. Those tools can run as desktop applications or in the browser. Besides that, the scope is pretty much like the classic Eclipse Platform and the Eclipse Tools platform. Such a tool platform aims at two generic goals:

The first goal is to provide common features which can be reused to implement a custom tool as efficiently as possible. Good examples of such features are support for Git or a source code editor with syntax highlighting. These reusable features significantly lower the required effort for the implementation of a custom tool or IDE.

The second goal is providing mechanisms to integrate existing and new modules to a custom product, e.g. “an IDE for TypeScript developers”. This integration must be supported on both a technical level as well as from a UI perspective. This allows you to create a custom product by combining existing modules with custom extensions.

Theia in particular adds a third goal, which could be called the “unique selling point”: A platform for building tools as desktop-apps and web-apps at the same time sharing the same code.

To achieve the first two goals, Theia provides three main features:

  1. A basic workbench frame including menus, a status bar, a view concept, part layouting, a workspace abstraction, etc. This basic workbench can be extended with custom UI extensions, such as menu items, custom views, and custom editors.
  2. A modular extension mechanism, allowing you to implement features in a modular, reusable and combinable way. Those extensions can target the frontend, the backend, or both. This extension mechanism also allows the user to install new features.
  3. Common tool features to be reused, e.g. an integrated source code editor supporting LSP, support for the most important Git features, a terminal, and many more.

To target the third goal – the ability to run a tool on the desktop and as a web application – Theia consists of two parts, a client (the UI) and a server. In a local, desktop scenario, the server part is deployed locally. Theia is implemented in TypeScript, CSS, and HTML. When running a Theia-based tool on the desktop, Electron is used as a local replacement for the browser.


The same tool based on Theia, running as a desktop application and in the browser.

Yet another Web IDE?

There are several similar approaches out there already, some commercial and some open source. So why is there yet another Web IDE? The combination of several interesting features make Theia unique in the market. To pick the most relevant ones from our point of view:

  • Web AND desktop-based: Theia as a platform supports the use case to run tools in the cloud (access via a browser), but also, at the same time, on the desktop (via Electron). This is a very unique and interesting feature, as it provides a lot of flexibility when it comes to use cases for tools created based on Theia.
  • Do not reinvent the wheel: Theia reuses other frameworks, standards and technologies wherever it makes sense to do so. Otherwise, the project would not have evolved so quickly. As an example, Theia reuses the Monaco code editor of VS Code and makes strong use of the language server protocol (LSP).
  • For IDEs (not only code editors): Theia does not aim at being a simple code editor, but rather being a platform to create comprehensive tool-suites and integrated development environments (IDEs)
  • Extension first: Theia is meant as a platform, not as a tool itself. This leads to a consistent “extension first” approach, which basically means: Everything is an extension, even the core features, which are provided by the project itself. As a result, you can customize almost everything within Theia and even replace core features, if you like. This is an important “lesson learned” from the Eclipse rich-client platform.
  • Promising community: The project currently has a lot of traction and many of the key players in the Eclipse ecosystem have started to adopt, contribute to, or evaluate Theia (see the next section). Eclipse itself could never have been as successful as it is without its excellent ecosystem. Therefore, building a strong ecosystem around a technology such as Theia is the key to success.

Where does it come from?

One very interesting aspect about Theia is that from the beginning it has not been a single-vendor driven project. The initial idea and scope were discussed with member companies such as Ericsson, Codenvy, Intel, Obeo, RedHat, and Eclipse. If you look at the list of interested parties in the project proposal and the list of 40 contributors, you will immediately see that Theia is not a one-man show. In any case, credit goes to TypeFox and Ericsson for getting the ball rolling on this endeavour a year ago. The broad variety of interested parties also differentiates Theia from the other approaches mentioned in the section before.

What is the current state?

Theia is far from being as feature-rich as the Eclipse desktop ecosystem. This is kind of obvious, as the project was only incepted a year ago. However, Theia reuses existing components wherever this makes sense, rather than reinventing them from scratch. A good example of this is seen in the Monaco Code editor, which has its origin in VS Code and is also embedded into Theia. Another example is the focus on LSP, which allows you to reuse existing language servers. Therefore, Theia is already quite powerful for its age as a platform.

Although Theia is currently in the “early adopter” stage, which often means that something is not ready to be used, Theia has already been successfully used as a basis for products such as Yangster and we have already successfully adopted Theia for a couple of customer projects. It has been open source and available before even becoming an Eclipse project. Whatever is currently in Theia works pretty well in our experience. However, not “everything” is available from desktop Eclipse.

How is it different from classic Eclipse?

The most obvious difference is the technology stack. While Eclipse is implemented based on Java and SWT and uses OSGi as a module system, Theia is implemented in TypeScript, uses HTML and CSS for the UI, and provides its own extension mechanism based on npm. Therefore, if your UI is implemented in SWT, you cannot simply migrate it to Theia. However, there are several strategies to reuse existing tool components in Theia. One example is to embed existing components, e.g. a compiler, into the server part of Theia and surface their results in the UI. The most prominent example is using a Language Server (LSP), which operates exactly like this. Another technique is to avoid manually written SWT code and use declarative approaches such as EMF Forms and JSON Forms.

However, the more interesting question is whether you would like to migrate to Theia. First of all, you should evaluate whether you would benefit from a web-based solution; see here for more details. Second, there is no rush if you have an existing tool. If you look at how long tools took to migrate to Eclipse 4, it is clear that not everything is re-implemented overnight. This is especially true for tools, which typically have a long life cycle. Many projects could benefit from a more modern UI stack, but not enough to justify the efforts of an intermediate migration.

However, it makes sense to develop a strategy for the future, as pointed out here.

Which platform to use for new projects also depends on the use case you would like to implement. Although we like Theia a lot, it does not yet have a successful 17 year track record like desktop Eclipse does and obviously, Eclipse has had quite a head start in terms of supported features.


So, Theia is not the next version of Eclipse and not a replacement, but hopefully it can be as successful. It fills an important gap in the Eclipse ecosystem, which previously lacked a platform to build web-based tools that still run on the desktop. After several suggestions, e.g. “Eclipse Two” by Doug Schaefer, such a platform has now become a reality and it is moving forward very quickly. The broad set of interested parties looks very promising for creating a powerful piece of technology.

Therefore, we look forward to contributing to Theia as well as using it in projects. Two particular features we are working on are enhanced support for graphical editors and support for creating form-based, data-centric editors in Theia.

In any case, Theia is a very interesting and promising project. The next months and years will be important to show whether Theia can become as successful as Eclipse has been (and still is), as a technology, a community, and an ecosystem. Welcome to Eclipse, Theia!


JBoss Tools 4.6.0.AM3 for Eclipse Photon.0.RC3

by jeffmaury at June 20, 2018 06:26 AM

Happy to announce 4.6.0.AM3 (Developer Milestone 3) build for Eclipse Photon.0.RC3.

Downloads available at JBoss Tools 4.6.0 AM3.

What is New?

Full info is at this page. Some highlights are below.


Eclipse Photon

JBoss Tools is now targeting Eclipse Photon RC3.

Fuse Tooling

Camel URI completion with XML DSL

As announced here, it was already possible to have Camel URI completion with XML DSL in the source tab of the Camel Route editor by installing the Language Support for Apache Camel in your IDE.

This feature is now installed by default with Fuse Tooling!

Camel URI completion in source tab of Camel Editor

Now you have the choice to use the properties view with UI help to configure Camel components or to use the source editor and benefit from completion features. It all depends on your development preferences!

Webservices Tooling

JAX-RS 2.1 Support

JAX-RS 2.1 is part of JavaEE8 and JBoss Tools now provides you with support for this update of the specification.

Server side events

JAX-RS 2.1 brought support for server side events. The Sse and SseEventSink resources can now be injected into method arguments thanks to the @Context annotation.


Jeff Maury


New Papyrus-based tool!

by tevirselrahc at June 19, 2018 01:30 PM

Good news! The Papyrus Industry Consortium’s steering committee has approved the creation of a “Papyrus Light” addition to the product line!

My insiders have been telling me that work is ongoing on the requirements for this new tool.

Would you like to have a voice? Well you can do so through the Papyrus IC public Tuleap repo’s product management forum!  (You may remember my previous post about Tuleap).


Web-based vs. desktop-based Tools

by Jonas Helming and Maximilian Koegel at June 19, 2018 12:25 PM

It is clear that there is an ongoing excitement surrounding web-based IDEs and tools, e.g. Eclipse Che, Eclipse Theia, Visual Studio Code, Atom or Eclipse Orion. If you attended recent presentations or read recent announcements, you may get the feeling that desktop IDEs have already been deprecated. But is this really true? If you ask developers about the tools they use in their daily work, you will rarely find someone already using web-based development tools in production.

At EclipseSource we develop IDE-based solutions, development tools, tools for engineers and modeling tools on a daily basis in various customer projects. We are dealing with this particular design decision regularly:

Do we go for a desktop tool, a web-based, or cloud-based solution?


Therefore, we want to share our experience on this topic. This is the first of three articles. It describes the most important drivers behind any design decision: the requirements. In the second article, we will describe challenges, technical patterns, solutions, and frameworks for matching the requirements while remaining as flexible as possible. In the third article, we will provide an example scenario to substantiate those best practices.

So first things first: As for so many design decisions, the most important thing is to know the requirements. Software engineers love to talk about implementation, and we also like to use new, fancy, or just our favorite technology. But in the end, we need to solve a given problem as efficiently as possible. Therefore, we should think about the problem definition first, even if that leads to a design decision that doesn’t bet on what is trendy right now.

For the impatient reader, here is the possibly unsatisfying conclusion: Whether to go for a desktop or a web-based solution is a complex decision. If you want to make an optimal choice, you will need to consider your custom requirements in several dimensions. For some projects, it will be a rather simple and straightforward choice, e.g. if you are required to integrate with a given existing tool chain. However, for most projects you will need to consider the overall picture and even try to predict the future as accurately as possible.

In our experience, it is worth the effort. In the end, you will hopefully develop a good strategy. This strategy does not have to be limited to strictly choosing one option. It is also a perfectly valid strategy to choose one primary option, but also to decide on being compatible with another option. This allows for a future migration path forward. Further, it is possible to mix both worlds on a per use case basis. We will detail these particular strategies in a follow-up post.

So let’s look at the most common non-functional requirements, which play a role in the design choice between desktop and web. To be more precise, the following areas are typically treated as advantages of a web-based solution.

  • Installability (also referred to as deployment or “set-up effort”): How easy and fast can you set up the required tooling and the runtime for executing a system? This usually refers mainly to the developer tooling and its runtime(s), since the set-up needs to be repeated for every developer. For simplicity, let us assume this also includes the effort to keep the system up-to-date.
  • Portability: How difficult is it to port a tool to another platform/hardware. In the case of IDEs this requirement is also sometimes referred to as “accessibility”. The typical use case is to access all development resources from any platform, e.g. also on your mobile device.
  • Performance and responsiveness: How responsive is the tool or how responsive does it feel. How long do crucial operations take to run, e.g. a full rebuild.
  • Usability: Let us use this wonderful definition from Wikipedia: “In software engineering, usability is the degree to which a software can be used by specified consumers to achieve quantified objectives with effectiveness, efficiency, and satisfaction in a quantified context of use”
  • Cost: The cost of implementing an IDE, tooling, extension, or the necessary development runtime. For most projects this is probably one of the most important criteria to consider.

Besides those non-functional requirements, tools also need to fulfill functional requirements. As those are specific to a certain tool, we will only consider the cost requirement since typical projects are aimed at fulfilling their functional requirements at the lowest possible cost.

As a first requirement, we will look at installability, because it is the most obvious distinction between a desktop-based and a cloud-based solution. For this requirement, we will also introduce some example scenarios and dimensions that recur for other requirements later on, so the next section is the most comprehensive.

Installability (a.k.a. Deployment, “set-up effort” and updatability)


Installability is probably the most prominent advertised advantage of web-based solutions. The vision is that you can simply log into a system via your browser and directly start to code without installing anything: neither the IDE or tools nor any runtime environment. Further, you no longer need to install any updates, as they are applied centrally.

So let us look at this obvious advantage in more detail. The first interesting dimension is how much time you can save by improving installability. This is connected to the number of developers you need to on-board to use the tooling and the number of people who use those tools only occasionally. Further, it matters how long a developer uses the tool after installation: the shorter the usage, the more significant the set-up time becomes.

Let’s look at three example scenarios.

First let’s consider a tutorial/training scenario, where participants complete a set of exercises. A tutorial or training usually takes a couple of hours or days, so the set-up time is a crucial part here. Further, trainings are typically conducted for a larger number of developers. Anybody who has ever had to prepare a setup guide for a tutorial will agree that a browser-based solution immediately pays off here. Even the simplest and best-prepared desktop-based setup will take some time to install. So this scenario is a win for a web-based solution if, and only if, you can rely on a robust high-speed internet connection. Consequently, you can observe that a lot of online tutorials already embed web-based code editors and tooling.

The second scenario is an open source project. Many open source projects have a mix of different developer roles. Some developers (committers) contribute regularly to a project. A second group of contributors typically uses the technology (“adopters”), but occasionally wants to contribute small enhancements and fixes. While for the regular committers set-up time is insignificant compared to their time working on the project, occasional contributors are often discouraged from contributing by a complex set-up. Therefore, in this scenario, there is a mismatch between the requirements of two user roles that you need to balance. So is it worth switching existing committers to a cloud-based solution to ease the life of contributors?

Ed Merks described this issue very well in his blog post in 2014. His conclusion was the creation of a tool called Oomph, which automates the set-up of a desktop IDE for projects. For the source code, Oomph even goes partly a little further than most cloud-based solutions, as it allows you to automatically check out the sources. Please see here for a tutorial regarding Oomph. While Oomph greatly improves the set-up process, it does not solve the issue completely. It will still take some time to materialize all project resources (i.e. download time). So for a very small contribution, it might still be too much of a burden. Further, it does not fully automate the creation of an appropriate runtime. If a project requires a lot of external tools (e.g. databases, application servers, etc.), those have to be installed separately. In turn, it does not affect the regular committers, as they just continue using their existing and well-proven solutions. In scenarios with such different roles, it is always a good idea to let all developers use the same set-up and tooling. Otherwise, there might be slight differences in their output, and the committers will usually only maintain their own solution well.

The third scenario is differentiated not by the type of project, but by the use case: code review. In a scenario where a developer works on implementation within a project every day, she might not care much about installability; at least other requirements might be more important. In turn, if you only review a code change and do not implement something regularly, installing/updating all the required tooling plays a significant role. As a consequence, most reviews are probably already done in web interfaces (e.g. Gerrit or pull requests). Also, the use case is focussed on reading and understanding code rather than on changing or creating new code. Therefore, other requirements are less important for a code review compared to good installability.

Like the three scenarios described here, you can categorize any arbitrary project and the according use cases based on the importance of a good installability. The result of this will be very specific to a custom project set-up. Those considerations are naturally already reflected in today’s toolchains, where parts of the tool chain more focussed on reading or browsing code are often web-based.

There are some more considerations related to installability. One is updatability. While an update to the tooling is hopefully not the same as installing it from scratch, most considerations for installability apply to the update case as well. This especially includes how often updates to the tool are applied.

Another obvious dimension is the complexity of the project set-up. The more difficult it is, the bigger the advantage of simplifying it via a cloud solution. For this, we of course need to differentiate between the IDE tooling itself and the necessary runtime environment. The environment is often much more complicated, e.g. if you need to set up several services, databases and so on. If only the runtime set-up is very complicated, a cloud-based IDE might not be the only valuable solution. There are several ways to ease those setups even without a cloud-based IDE, e.g. with Docker containers or workspace servers like Eclipse Che.

Installability is one of the major advantages of a web-based solution. However, it will only provide a significant advantage if the use case fits. Therefore, it is worth spending some time defining the core users and use cases of your system and the importance of installability for them. In a nutshell, installability pays off most in environments where a lot of people need to (and can) be on-boarded very fast, and where they do not work continuously on a project. Unfortunately, when onboarding developers it is usually much more time-consuming to transfer the required knowledge than to set up the IDE.


Portability

Portability is the second very obvious advantage of a cloud- and browser-based solution over a desktop-based tool. The ultimate goal is that you can access the tooling and runtime with any device that has a browser. As a consequence, you can ideally fulfil your development use case at any location, even from a mobile device. In some discussions, this is currently referred to as “accessibility”. While that strictly speaking means something different, we consider portability as the ability to access the tool from anywhere on any device.

A lot of the considerations we described for installability also apply when thinking about the advantage of portability. Different project roles and different tooling use cases would benefit to different degrees. Doing a code review or browsing a diagram on a tablet makes more sense than writing a lot of code. So, again, the detailed roles and use cases will need to be evaluated. We will not repeat that in detail, but focus on new dimensions which are specific to portability.

One additional scenario connected to portability is the ability to share a certain runtime set-up or project state. That means developers always have exactly the same environment. This obviously simplifies life, as the typical phrase “I cannot reproduce this on my machine” would no longer occur. However, this only fully applies to the tooling. For the runtime, it matters whether the runtime platform for the system is unified, too. If the system under development runs natively on different operating systems, you still need to test different runtime environments. As a consequence, cloud-based tooling currently seems to be adopted first in the area of cloud-based development.

And as mentioned for installability, there are other ways to achieve a uniform setup, although none as unified as a cloud solution.

A disadvantage of pure cloud-based solutions is that they often rely on a constant internet connection. While this issue becomes less and less relevant, it must at least be considered. Some cloud solutions already provide a good work-around, e.g. the offline mode of Google Mail.

A final word about the dream to be able to contribute to a project from anywhere on any device: While this sounds appealing for certain use cases, do we really want to be called by a client and subsequently feel obligated to fix a bug on our smart phone while we are sitting in a ski lift?


Performance

Performance is a very interesting requirement to consider. In contrast to installability and portability, there is no clear winner between desktop-based and cloud-based tooling. You will find valid arguments for both to be “more performant”, e.g. this article by Tom Radcliffe claims that desktop IDEs are the clear winner.

The major reason for this tie is again that we have to consider the specific use case when talking about performance. While writing and browsing code, we want fast navigation, key bindings and coding features such as auto-completion. While web IDEs have caught up a lot in recent years, you can still claim that a desktop tool is typically more performant for those “local” use cases (as also claimed by Tom Radcliffe in the article referenced above).

Things change when looking at other use cases, e.g. compiling. A powerful cloud instance can certainly compile a project faster than a typical laptop. Further, it is comparably cheaper and more scalable to provision more resources centrally. However, when going for a cloud-based solution, scalability must be taken into account. Any advantage is obsolete if developers have to wait 15 minutes to get something compiled because other build jobs are running on the central instance.

So for performance, it is important to consider which development use cases are crucial and will benefit performance-wise from either solution. A follow-up decision for a cloud-based solution would be to strip down the hardware of participating developers to save costs (the Chromebook scenario). While this sounds like a rational thing to do, not everybody will like the idea of giving away his or her powerful device.


Usability

Usability also doesn’t have a clear winner in the comparison between desktop-based and web-based IDEs. While advocates of both platforms would claim a clear advantage, this is really a matter of personal taste. Web technologies have become incredibly powerful when it comes to styling, look, and feel. Therefore, you can achieve almost any visualization you would like.

Further, there is much more innovation going on in the web area in comparison to, e.g. SWT.

On the desktop, depending on the UI toolkit you use, there might be more restrictions. JavaFX and Swing are powerful when it comes to styling, but Swing is more or less deprecated. SWT has limitations when it comes to styling. However, it probably provides the most existing framework support when it comes to tooling (see also “Cost vs. Features”).

Besides styling, native desktop applications still have some usability advantages, e.g. support for native features such as application menus, tray icons, and key bindings. However, it is to be expected that these advantages will shrink over time, as browsers keep evolving very fast. In any case, usability is not equal to the ability to support a dark theme, as some browser advocates may try to make us believe.

It is worth mentioning that there are platforms combining advantages of both worlds. Platforms such as Visual Studio Code, Atom or Eclipse Theia embed web technologies (HTML and TypeScript) into a desktop application (using Electron).

Cost vs. Features


At the end of the day, for many projects a very important constraint is cost, meaning the required effort to satisfy your list of functional requirements. A comparison of the required effort on either platform is driven by several parameters.

First, the effort would be influenced by the general efficiency of a certain platform. It would go beyond the scope of this article to compare, for example, the efficiency of Java development with TypeScript or JavaScript development, so let us assume the general development efficiency to be equal for now.

Second, cloud-based solutions usually add some complexity to the architecture due to the required encapsulation of client and server. Of course, you want to have encapsulation of components in a desktop-based tool, too. However, you typically do not have to deal with the fact that components are deployed in a distributed environment.

Third, a very central aspect for implementing tooling is the availability of existing frameworks, tools and platforms to reuse. Ecosystems like Eclipse have grown over 17 years now and already provide a tool, framework or plugin for almost everything. Many requirements can be covered with very little effort, just by extending existing tools or by using the right framework.

While there are frameworks for cloud-tools, too (such as Orion, Eclipse Che or Theia), they are arguably not as powerful as Eclipse/IntelliJ/NetBeans, yet. This might of course change over time.

One trend worth mentioning is the reuse of existing “desktop” components in web tooling. As an example, Eclipse JDT provides an API that can be consumed via the Language Server Protocol (LSP). This allows you to reuse features such as auto-completion and syntax highlighting from another UI, typically a web-based IDE. While LSP does not yet cover the full feature set of the Eclipse desktop IDE, it is a great way of reusing existing efforts.

Finally, powerful frameworks usually also carry the burden of complexity. The advantage of reusing existing components typically (or hopefully) justifies the effort of learning the framework. In turn, if you use only a tiny bit of a platform, you might be better off using something slimmer that covers your use cases just as well. As an example, if you need a plain code editor without any context, e.g. for a tutorial use case, a platform such as Eclipse might be overkill.

So it is useful to evaluate the desired feature set and how well it is supported by existing frameworks on the respective technology stack (web vs. desktop). It is also worth investigating whether existing features on one stack can be adapted for reuse on the other (e.g. using LSP). This applies in both directions: not only can you call existing frameworks in the background from a cloud-based solution, it is also possible to embed web-based UI components into desktop applications (see also the conclusion).

Finally, what makes the dimension of cost vs. features especially difficult is that you typically cannot know exactly what kind of features you need to provide with a tool in the mid-term future.

And more…


There are obviously many other parameters to consider when comparing the costs of cloud- and desktop-based tools. It would go beyond the scope of this article to spend a section on each of these, but let us at least mention some more important topics.

One meta criterion in the decision for a platform, framework, or technology stack, is the long term availability as well as how actively it is maintained. Again, there is no clear winner, when comparing desktop and web stacks. While you can argue that there is more development going on in the web area, it is in turn extremely volatile. Platforms such as Eclipse have been successfully maintained for 17 years now. There are existing workflows for long-term support (LTS) and suppliers like us providing this as a service. Plus, the platform is very stable in terms of APIs.

In turn, web frameworks provide major version updates with improvements almost every year. While this brings in innovation, it also often requires refactoring of an adopting project. As an example, there is currently a lot of variety emerging when it comes to web-based IDEs (e.g. Visual Studio Code, Theia, Eclipse Che, Orion, etc.). We will cover strategies to deal with this risk in a follow-up article.

Another meta criterion is the availability of skilled resources. At the moment, you can probably find more Angular developers on the market than SWT experts. However, this may change very quickly – once Angular is not “hip” anymore.

Another frequently discussed topic when it comes to cloud-based solutions is of course the security and tracing aspect. While it is certainly worth considering, it is probably not the key decision factor for most professional environments, but it does require special attention.


Conclusion

In this article, we have tried to cover the most important considerations when deciding between a cloud or web-based solution and a desktop tool. There are for sure many more things to consider.

However, what all dimensions have in common is that it is most important to think about the users, their use cases and the frequency of those. Based on these criteria, you can evaluate the benefits of supporting them in the web or on the desktop. This is especially true, if you already have an existing tool and are considering a migration. In this case, there must be clear advantages justifying the cost.

While this is already complex, it is even worth making this decision on a per-use-case basis. This is already happening naturally in our tool landscape, e.g. code reviews are very often conducted using web interfaces. Identifying the use cases in your tool which would benefit most from being available online reduces the effort and risk of migrating everything at once. So it is often a good idea to pick the low-hanging fruit first.

Ultimately, it will almost never be possible to make a perfect decision. This is especially true, as important criteria, use cases, and technologies change over time and no one can perfectly predict the future. Therefore, the most important thing is to keep some flexibility. That means, even if you decide for a desktop solution, or vice versa, it should be as easy as possible to switch to the other option later on.

Even mixtures of both technology stacks on a per-use-case basis often make sense. While this sounds ambitious, there are some simple patterns that make it work. We will highlight those strategies in a follow-up article. This strategy also allows an iterative migration, which is often the only viable way to tackle the complexity and effort of migrating existing tool projects. Some frameworks even proactively support this strategy by supplying implementations based on web and desktop technology at the same time, e.g. EMF Forms and JSON Forms.

Let us close this article with a general, non-statistical overview of what most projects currently do. This is of course biased, as the input is derived from our customer projects or projects we know about. However, looking at those:

Some projects directly aim at a pure web-based solution, typically, if they benefit a lot from the advantages, if they implement something from scratch and if they have a pretty self-contained feature set (e.g. training).

Few projects do not consider web-based tooling at all, mostly if they have a defined set of continuous users and a lot of existing investments in their desktop tools.

Most projects plan to maintain their desktop solutions in the near future, but will migrate certain use cases to web technology. Therefore, those projects implement certain design patterns allowing this partial migration. We will highlight those patterns in a follow-up article. Follow us on Twitter to get notified about our blog posts. Stay tuned!

Finally, if you are dealing with this design decision in your project and want support, if you want an evaluation of a web-based version of your tools, or if you want to make your current tools ready for the upcoming challenges and opportunities, please do not hesitate to contact us.

by Jonas Helming and Maximilian Koegel at June 19, 2018 12:25 PM

Visualizing Eclipse Collections

by Donald Raab at June 16, 2018 09:36 PM

A visual overview of the APIs, Interfaces, Factories, Static Utility and Adapters in Eclipse Collections using mind maps.

A picture is worth a thousand words

I’m not sure how many words a mind map is worth, but they are useful for information chunking. Eclipse Collections is a very feature rich library. The mind maps help organize and group concepts and they help convey a sense of the symmetry in Eclipse Collections.

Symmetric Sympathy

A High-level view of the Eclipse Collections Library

RichIterable API

The RichIterable API is the set of common APIs shared by all of the container classes in Eclipse Collections. Some methods have overloaded forms which take additional parameters. In the picture below I have grouped the unique set of methods by the kind of functionality they provide.

RichIterable API

API by Example

Below are links to several blogs covering various APIs available on RichIterable.

  1. Filtering (Partitioning)
  2. Transforming (Collect / FlatCollect)
  3. Short-circuiting
  4. Counting
  5. Filter / Map / Reduce
  6. Eclipse Collections API compared to Stream API

RichIterable Interface Hierarchy

RichIterable is the base type for most container types in Eclipse Collections. Even object-valued Map types extend RichIterable in Eclipse Collections. A Map with key type K and value type V is an extension of RichIterable of V (value). This provides a rich set of behaviors to Map types for their values. You can still iterate over keys, or over keys and values together; there are separate methods for this purpose.
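For example (a sketch assuming Eclipse Collections is on the classpath; the data is illustrative), the RichIterable methods of a map operate on its values, while dedicated methods cover keys and key/value pairs:

```java
import org.eclipse.collections.api.map.MutableMap;
import org.eclipse.collections.impl.factory.Maps;

public class MapAsRichIterableExample {
    public static void main(String[] args) {
        MutableMap<String, Integer> ages =
                Maps.mutable.of("Alice", 34, "Bob", 28);

        // RichIterable methods operate on the values...
        int over30 = ages.count(age -> age > 30); // 1

        // ...while keys and key/value pairs have dedicated methods.
        ages.forEachKey(key -> System.out.println("key: " + key));
        ages.forEachKeyValue((name, age) ->
                System.out.println(name + " is " + age));

        System.out.println(over30);
    }
}
```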

RichIterable Interface Hierarchy — Green Star=Mutable, Red Circle=Immutable

PrimitiveIterable Interface Hierarchy

Eclipse Collections provides container support for all eight Java primitives. There is a base interface with common behavior named PrimitiveIterable.

PrimitiveIterable Interface Hierarchy

The following diagram shows the IntIterable branch from the diagram above. There are seven other similar branches.

IntIterable Interface Hierarchy — Green Star=Mutable, Red Circle=Immutable

The interface hierarchy for each primitive type is pretty much the same as IntIterable.
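As a sketch of what the primitive hierarchy buys you (assuming Eclipse Collections is on the classpath), an IntIterable-based list supports the familiar RichIterable-style API without boxing ints into Integers:

```java
import org.eclipse.collections.api.list.primitive.MutableIntList;
import org.eclipse.collections.impl.factory.primitive.IntLists;

public class IntIterableExample {
    public static void main(String[] args) {
        // A primitive int list: no boxing to Integer
        MutableIntList ints = IntLists.mutable.of(1, 2, 3, 4, 5);

        long sum = ints.sum();                                 // 15
        MutableIntList evens = ints.select(i -> i % 2 == 0);   // [2, 4]

        System.out.println(sum + " " + evens.makeString());
    }
}
```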


Factory Classes

If you want to create a collection in Eclipse Collections, you have a few options available. One option is to use a constructor or static factory method on the concrete mutable type that you want to create. This requires you to know the names of the concrete mutable types (e.g. FastList, UnifiedSet or UnifiedMap). This option does not exist for immutable types, however. The most convenient, consistent and symmetric option, if you are going to create both mutable and immutable containers, is to use one of the provided factory classes. A factory class follows the pattern of using the type name plus an “s” to make it plural. So if you want a mutable or immutable List, you would use the Lists class, and then specify whether you want the mutable or immutable factory for that class.

Factory Classes available in Eclipse Collections for Object Containers

There are separate factory classes for primitive containers. Prefix the primitive type in front of the container type to find the right primitive factory class.

Mutable Factory Examples

MutableList<T> list = Lists.mutable.empty();
MutableSet<T> set = Sets.mutable.empty();
MutableSortedSet<T> sortedSet = SortedSets.mutable.empty();
MutableMap<K, V> map = Maps.mutable.empty();
MutableSortedMap<K, V> sortedMap = SortedMaps.mutable.empty();
MutableStack<T> stack = Stacks.mutable.empty();
MutableBag<T> bag = Bags.mutable.empty();
MutableSortedBag<T> sortedBag = SortedBags.mutable.empty();
MutableBiMap<K, V> biMap = BiMaps.mutable.empty();

Immutable Factory Examples

ImmutableList<T> list = Lists.immutable.empty();
ImmutableSet<T> set = Sets.immutable.empty();
ImmutableSortedSet<T> sortedSet = SortedSets.immutable.empty();
ImmutableMap<K, V> map = Maps.immutable.empty();
ImmutableSortedMap<K, V> sortedMap = SortedMaps.immutable.empty();
ImmutableStack<T> stack = Stacks.immutable.empty();
ImmutableBag<T> bag = Bags.immutable.empty();
ImmutableSortedBag<T> sortedBag = SortedBags.immutable.empty();
ImmutableBiMap<K, V> biMap = BiMaps.immutable.empty();
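The immutable containers expose no mutating methods; instead, methods like newWith() return a new immutable instance and leave the original untouched. A small sketch (assuming Eclipse Collections is on the classpath):

```java
import org.eclipse.collections.api.list.ImmutableList;
import org.eclipse.collections.impl.factory.Lists;

public class ImmutableExample {
    public static void main(String[] args) {
        ImmutableList<String> original = Lists.immutable.of("a", "b");

        // "Adding" returns a new instance; the original is unchanged
        ImmutableList<String> extended = original.newWith("c");

        System.out.println(original.size());  // 2
        System.out.println(extended.size());  // 3
    }
}
```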

Static Utility Classes

In the beginning of Eclipse Collections development, everything was accomplished through static utility classes. We added our own interface types later on. Over time Eclipse Collections has accumulated quite a few static utility classes that serve various purposes. Static utility classes are useful when you want to use Eclipse Collections APIs with types that extend the JDK Collection interfaces like Iterable, Collection, List, RandomAccess and Map.

A collection of useful static utility classes

Static Utility Examples

Assert.assertTrue(
    MapIterate.anySatisfy(
        Collections.singletonMap(1, "1"), "1"::equals));

String[] strings = {"1", "2", "3"};
Assert.assertTrue(ArrayIterate.anySatisfy(strings, "1"::equals));
Assert.assertTrue(ArrayIterate.contains(strings, "1"));


Adapters

There are adapters that provide the Eclipse Collections APIs to JDK types.

Adapters for JDK types

Creating an adapter

MutableList<String> list = Lists.adapt(new ArrayList<>());
MutableSet<String> set = Sets.adapt(new HashSet<>());
MutableMap<String, String> map = Maps.adapt(new HashMap<>());
MutableList<String> array = ArrayAdapter.adapt("1", "2", "3");
CharAdapter chars = Strings.asChars("Hello Chars!");
CodePointAdapter codePoints = Strings.asCodePoints("Hello CodePoints!");
LazyIterable<String> lazy = LazyIterate.adapt(new CopyOnWriteArrayList<>());
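An adapter wraps the underlying JDK collection rather than copying it, so changes made through the adapter are visible in the original collection and vice versa. A minimal sketch (assuming Eclipse Collections is on the classpath):

```java
import java.util.ArrayList;
import java.util.List;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

public class AdapterExample {
    public static void main(String[] args) {
        List<String> jdkList = new ArrayList<>();
        MutableList<String> adapted = Lists.adapt(jdkList);

        // The adapter wraps the original list, so changes are visible in both
        adapted.add("one");
        System.out.println(jdkList.size()); // 1

        // ...and the full Eclipse Collections API becomes available
        System.out.println(adapted.anySatisfy("one"::equals)); // true
    }
}
```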

Additional Types

There are more types in Eclipse Collections, like Multimaps. These will be covered in a separate blog. Multimap, along with ParallelIterable, is one of the few types today that does not extend RichIterable directly.
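To sketch how a Multimap arises in practice (assuming Eclipse Collections is on the classpath; the data is illustrative), groupBy() on a MutableList returns a ListMultimap, in which a single key can map to several values:

```java
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.api.multimap.list.MutableListMultimap;
import org.eclipse.collections.impl.factory.Lists;

public class MultimapExample {
    public static void main(String[] args) {
        MutableList<String> words =
                Lists.mutable.of("apple", "avocado", "banana");

        // groupBy on a MutableList yields a ListMultimap:
        // one key can map to several values
        MutableListMultimap<Character, String> byFirstLetter =
                words.groupBy(word -> word.charAt(0));

        System.out.println(byFirstLetter.get('a')); // [apple, avocado]
    }
}
```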


  1. Eclipse Collections Reference Guide
  2. Eclipse Collections Katas
  3. API Design of Eclipse Collections
  4. Refactoring to Eclipse Collections
  5. UnifiedMap, UnifiedSet and Bag Explained

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.

by Donald Raab at June 16, 2018 09:36 PM

Pro Tip: Implementing JUnit Test Cases in Xtend

by Tamas Miklossy at June 15, 2018 02:18 PM


What makes a clean test? Three things. Readability, readability, and readability. Readability is perhaps even more important in unit tests than it is in production code. What makes tests readable? The same thing that makes all code readable: clarity, simplicity, and density of expression.
[Robert C. Martin: Clean Code - A Handbook of Agile Software Craftsmanship (page 124)]


Recently, the Eclipse GEF DOT Editor has been extended by the Rename Refactoring functionality. Following the Behaviour-Driven Development approach, its acceptance criteria have been specified first:

Feature: Rename Refactoring

  Scenario Outline:
    Given is the <dslFile>
    When renaming the <targetElement> to <newName>
    Then the dsl file has the content <newContent>.

      |  dslFile  |   targetElement   | newName | newContent |
      |  graph {  |                   |         |  graph {   |
      |    1      |     firstNode     |    2    |    2       |
      |  }        |                   |         |  }         |
      |           |                   |         |            |
      | digraph { |                   |         | digraph {  |
      |   1       |     firstNode     |    3    |   3        |
      |   1->2    |                   |         |   3->2     |
      | }         |                   |         | }          |
      |           |                   |         |            |
      | digraph { |                   |         | digraph {  |
      |   1       |    source node    |    3    |   3        |
      |   1->2    | of the first edge |         |   3->2     |
      | }         |                   |         | }          |
      |           |                   |         |            |

Thereafter, the test specification has been implemented in JUnit test cases:

class DotRenameRefactoringTests extends AbstractEditorTest {

	// ...

	@Test def testRenameRefactoring01() {
		'''
			graph {
				1
			}
		'''.testRenameRefactoring([firstNode], "2", '''
			graph {
				2
			}
		''')
	}

	@Test def testRenameRefactoring02() {
		'''
			digraph {
				1
				1->2
			}
		'''.testRenameRefactoring([firstNode], "3", '''
			digraph {
				3
				3->2
			}
		''')
	}

	@Test def testRenameRefactoring03() {
		'''
			digraph {
				1
				1->2
			}
		'''.testRenameRefactoring([sourceNodeOfFirstEdge], "3", '''
			digraph {
				3
				3->2
			}
		''')
	}

	// ...

	private def testRenameRefactoring(CharSequence it, (DotAst)=>NodeId element,
		String newName, CharSequence newContent) {
		// given
		dslFile.
		// when
		rename(target(element), newName).
		// then
		dslFileHasContent(newContent)
	}

	// ...
}
Thanks to the Xtend programming language, the entire DotRenameRefactoringTests test suite became readable and clean, and it scales very well.

How did I do this? I did not simply write this program from beginning to end in its current form. To write clean code, you must first write dirty code and then clean it.

[Robert C. Martin: Clean Code - A Handbook of Agile Software Craftsmanship (page 200)]

Would you like to learn more about Clean Code, Behaviour-Driven and Test-Driven Development? Take a look at the (german) blog posts of my former colleague Christian Fischer, a very passionate software craftsman and agile coach.


Announcing Ditto Milestone 0.3.0-M2

June 15, 2018 04:00 AM

Today we, the Eclipse Ditto team, are happy to announce our next milestone 0.3.0-M2.

The main changes are

  • improvement of Ditto’s cluster performance with many managed Things
  • improved cluster bootstrapping based on DNS with the potential to easy plugin other mechanism (e.g. for Kubernetes)

Have a look at the Milestone 0.3.0-M2 release notes for a detailed description of what changed.


The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Docker images have been pushed to Docker Hub:


The Eclipse Ditto team


Hello Planet Eclipse!

by Jonas Helming and Maximilian Koegel at June 14, 2018 09:23 AM

This is a test blog to check the aggregation on Planet Eclipse.


ECF Photon supports OSGi Async Remote Services

by Scott Lewis at June 12, 2018 05:29 PM

In a previous post, I indicated that ECF Photon/3.14.0 will support the recently-approved OSGi R7 specification.   What does this support provide for  developers?

Support for the osgi.async remote service intent

The OSGi R7 Remote Services specification has been enhanced with remote service intents.  Remote Service Intents allow service authors to specify requirements on the underlying distribution system in a standardized way.   Standardization of service behavior guarantees the same runtime behavior across distribution providers and implementations.

The osgi.async intent allows the service interface to use return types such as Java 8's CompletableFuture or OSGi's Promise. With a supporting distribution provider, the proxy will automatically implement the asynchronous/non-blocking behavior for the service consumer.

For example, consider a service interface:
public interface Hello {
    CompletableFuture<String> hello(String greetingMessage);
}
When an implementation of this service is registered and exported as a remote service with the osgi.async intent:
@Component(property = { "service.exported.interfaces=*", "service.intents=osgi.async" })
public class HelloImpl implements Hello {
    public CompletableFuture<String> hello(String greetingMessage) {
        CompletableFuture<String> future = new CompletableFuture<String>();
        future.complete("Hi. This is a response to the greeting: " + greetingMessage);
        return future;
    }
}
Then, when a Hello service consumer (in the same or another process) discovers and imports the remote service and has it injected by DS:
@Component
public class HelloConsumer {

    @Reference
    private Hello helloService;

    @Activate
    void activate() throws Exception {
        // Call the hello remote service without blocking
        helloService.hello("hi there").whenComplete((result, exception) -> {
            if (exception == null)
                System.out.println("hello service responds: " + result);
            else
                exception.printStackTrace();
        });
    }
}
The injected helloService instance (a distribution-provider-constructed proxy) will automatically implement the asynchronous remote call. Since the proxy is constructed by the distribution provider, there is no need for the consumer to implement anything other than calling the 'hello' method and handling the response via the Java 8-provided whenComplete method. Java 8's CompletionStage and Future, as well as OSGi's Promise, are also supported return types. (Only the return type is used to identify asynchronous remote methods; any method name can be used.) For example, the following signature is also supported as an async remote service:
public interface Hello {
    org.osgi.util.promise.Promise<String> hello(String greetingMessage);
}

Further, OSGi R7 Remote Services supports a timeout property:
@Component(property = { "service.exported.interfaces=*", "service.intents=osgi.async", "osgi.basic.timeout=20000" })
public class HelloImpl implements Hello {
    public CompletableFuture<String> hello(String greetingMessage) {
        CompletableFuture<String> future = new CompletableFuture<String>();
        future.complete("Hi. This is a response to the greeting: " + greetingMessage);
        return future;
    }
}
With ECF's RSA implementation and distribution providers, this timeout will be honored by the underlying distribution system. That is, if the remote implementation does not return within 20000ms, then the returned CompletableFuture will complete with a TimeoutException.
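Independent of ECF, the consumer-side effect of such a timeout can be pictured with a plain CompletableFuture that is completed exceptionally. TimeoutDemo and its message text are illustrative only, not ECF API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    public static void main(String[] args) {
        CompletableFuture<String> future = new CompletableFuture<>();
        // Simulate the distribution provider timing out the remote call after 20000ms
        future.completeExceptionally(new TimeoutException("no response within 20000ms"));
        // The consumer's whenComplete callback sees the exception instead of a result
        future.whenComplete((result, exception) -> {
            if (exception != null)
                System.out.println("remote call failed: " + exception.getMessage());
            else
                System.out.println("hello service responds: " + result);
        });
    }
}
```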

Async Remote Services make it very easy for service developers to define, implement, and consume loosely-coupled and dynamic asynchronous remote services. They also make asynchronous remote service contracts transport-independent, allowing the swapping of distribution providers or the creation and use of custom providers without changes to the service contract.
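The proxy mechanics can be pictured with a plain JDK dynamic proxy that returns immediately and completes the future on another thread. This is a conceptual sketch only, not ECF's distribution provider:

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.CompletableFuture;

public class AsyncProxySketch {
    interface Hello {
        CompletableFuture<String> hello(String greetingMessage);
    }

    public static void main(String[] args) throws Exception {
        // A dynamic proxy standing in for the distribution provider's generated proxy:
        // every invocation returns a future at once; the result arrives asynchronously.
        Hello proxy = (Hello) Proxy.newProxyInstance(
                Hello.class.getClassLoader(),
                new Class<?>[] { Hello.class },
                (p, method, methodArgs) -> CompletableFuture.supplyAsync(
                        // A real provider would perform the remote call here
                        () -> "Hi. This is a response to the greeting: " + methodArgs[0]));

        String result = proxy.hello("hi there").get(); // block only for the demo
        System.out.println(result);
    }
}
```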

For the documented example code, see here


Eclipse Vert.x 3.5.2

by vietj at June 08, 2018 12:00 AM

We have just released Vert.x 3.5.2, a bug fix release of Vert.x 3.5.x.

Since the release of Vert.x 3.5.1, quite a few bugs have been reported. We would like to thank you all for reporting these issues.

Vert.x 3.5.2 release notes:

The event bus client for the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding !


Siemens partnering with Obeo on Model Based Systems Engineering solution - a major recognition for OSS Modeling Techs

by Cédric Brun at June 08, 2018 12:00 AM

You might have already heard the news: earlier this week, during Siemens PLM Connection Americas 2018, Joe Bohman announced that Siemens PLM is partnering with Obeo.

Here is the complete press release for more detail, but in short: we are working with Siemens on standard modeling languages such as Capella and SysML, as well as on tools supporting custom process methodologies, in order to contribute to the true integration of MBSE (Model-Based Systems Engineering) within the entire product lifecycle.

This is significant in several ways.

First it’s another strong data point demonstrating that MBSE is a key enabler in a strategy aiming at enabling multi-domain engineering.

Second, it’s a public endorsement from one of the top high-tech multinational company that the OpenSource technologies built through the Eclipse Foundation and the Polarsys Working Group, in this case Acceleo, Sirius and Capella are innovation enablers. Our contribution is fundamental to those and as such this clearly strengthen these projects but also our vision and strategy!

Even more importantly adopters of those technologies will benefit from new integration points and means to leverage their models during the entire product lifecycle, and that’s what modeling is all about: using the model, iterating over it, refining it; as a living artifact, one that is shared and not as something gathering dust in a corner.

These are pretty exciting prospects ahead; no doubt this will be a central subject during EclipseCon France next week. Note that we'll hold a Capella workshop during the Unconference, and there's still time to register!

See you next week!

Siemens partnering with Obeo on Model Based Systems Engineering solution - a major recognition for OSS Modeling Techs was originally published by Cédric Brun at CEO @ Obeo on June 08, 2018.


Edit an OpenAPI specification in Eclipse IDE

June 07, 2018 10:00 PM

I am working a lot on the OpenAPI Generator project these days, which means I frequently need to edit OpenAPI Specification files. A specification file is a *.yaml file that describes a REST API.

In Eclipse IDE I have installed the KaiZen OpenAPI Editor plugin. This is an Xtext-based editor that provides everything you need to be efficient with your OpenAPI specification: outline, code completion, jump-to-reference, renaming support…​

KaiZen OpenAPI Editor for Eclipse IDE

It can be installed from the Eclipse Marketplace.

If you use the Eclipse Installer (also called Oomph), you can add this xml snippet to your installation.setup file:

Listing 1. Oomph snippet to install the KaiZen OpenAPI Editor
<?xml version="1.0" encoding="UTF-8"?>

It is free and open-source (EPL). Enjoy.


Visualizing npm Package Dependencies with Sprotty

by Miro Spönemann at June 07, 2018 07:24 AM

Sprotty is an open-source diagramming framework that is based on web technologies. I’m excited to announce that it will soon be moved to the Eclipse Foundation. This step will enable existing visualizations built on the Eclipse Platform to be migrated to cloud IDEs such as Eclipse Theia. But Sprotty is not limited to IDE integrations; it can be embedded in any web page simply by consuming its npm package.

In this post I present an application that I implemented with Sprotty: visualizing the dependencies of npm packages as a graph. Of course there are already several solutions for this, but I was not satisfied with their graph layout quality and their filtering capabilities. These are areas where Sprotty can excel.

Standalone Web Page

The application is available online; its source code is on GitHub.

Dependency graph of the sprotty package

The web page offers a search box for npm packages, with package name proposals provided through search requests to the npm registry. After selecting a package, its metadata is resolved through that same registry and the direct dependencies are shown in the diagram. Further dependencies are loaded by clicking on one of the yet unresolved packages (shown in light blue).

If you want to see the whole dependency graph at once, click the “Resolve All” button. For projects with many transitive dependencies, this can take quite some time because the application needs to load the metadata of every package in the dependency graph from the npm registry. The resulting graph can be intimidatingly large, as seen below for lerna.

The full dependency graph of lerna

This is where filtering becomes indispensable. Let's say we're only interested in the package meow and how lerna depends on it. Enter meow in the filter box and you'll see this result:

Dependency paths from lerna to meow

The filtered graph shows the packages that contain the filter string plus all packages that have these as direct or indirect dependencies. Thus we obtain a compact visualization of all dependency paths from lerna to meow.
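The filtering step described above can be sketched as a reverse reachability search over the dependency graph: start from the packages matching the filter and repeatedly add every package that depends on one already kept. This is a plain-Java sketch of the idea; the names are illustrative, not Sprotty's actual code:

```java
import java.util.*;

public class DependencyFilter {
    // deps maps each package to the packages it depends on
    static Set<String> filter(Map<String, List<String>> deps, String query) {
        // Start with every package whose name contains the filter string
        Set<String> kept = new HashSet<>();
        for (String pkg : deps.keySet())
            if (pkg.contains(query))
                kept.add(pkg);
        // Repeatedly add packages that depend on an already-kept package,
        // until a fixed point is reached
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Map.Entry<String, List<String>> e : deps.entrySet())
                if (!kept.contains(e.getKey())
                        && e.getValue().stream().anyMatch(kept::contains))
                    changed = kept.add(e.getKey());
        }
        return kept;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new HashMap<>();
        deps.put("lerna", List.of("meow", "semver"));
        deps.put("meow", List.of("minimist"));
        deps.put("minimist", List.of());
        deps.put("semver", List.of());
        // Keeps meow plus everything that reaches it
        System.out.println(new TreeSet<>(filter(deps, "meow"))); // [lerna, meow]
    }
}
```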

Hint: If the filter text starts with a space, only packages that have the following text as prefix are selected. If it ends with a space, packages must have the text as suffix. Thus, if the text starts and ends with a space, only exact matches are accepted.
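The space convention can be expressed as a small matching function. This is a sketch of the rule as described in the hint, not Sprotty's implementation:

```java
public class FilterMatch {
    // Leading space => prefix match, trailing space => suffix match,
    // both => exact match, neither => substring match
    static boolean matches(String filter, String name) {
        boolean prefix = filter.startsWith(" ");
        boolean suffix = filter.endsWith(" ");
        String text = filter.trim();
        if (prefix && suffix) return name.equals(text);
        if (prefix) return name.startsWith(text);
        if (suffix) return name.endsWith(text);
        return name.contains(text);
    }

    public static void main(String[] args) {
        System.out.println("substring: " + matches("meow", "meow-utils"));
        System.out.println("prefix:    " + matches(" meow", "meow-utils"));
        System.out.println("suffix:    " + matches("meow ", "meow-utils"));
        System.out.println("exact:     " + matches(" meow ", "meow"));
    }
}
```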

How It Works

The basic configuration of the diagram is quite simple and follows the concepts described in the documentation of Sprotty. Some additional code is necessary to resolve package metadata from the npm registry and to analyze the graph to apply the selected filter. A subclass of LocalModelSource serves as the main API to interact with the graph.

Automatic layout is provided by elkjs, a JavaScript version of the Eclipse Layout Kernel. Here it is configured such that dependency edges point upwards using the Layered algorithm. It tries to minimize the number of crossings, though only through a heuristic because that goal cannot be satisfied efficiently (it’s an NP-hard problem).

Integration in Theia

The depgraph-navigator package can be used in a standalone scenario as described above, but it also works as an extension for the Theia IDE. Once installed in the Theia frontend, you can use this extension by right-clicking the package.json file of an npm package you are working on and selecting Open With → Dependency Graph.

The dependency graph view embedded in Theia

If you have already installed the dependencies of your project via npm install or yarn, all package metadata are available locally, so they are read from the file system instead of querying the npm registry. The registry is used only as a fallback in case a package is not installed in the local project. This means that resolving further packages is much faster compared to the standalone web page. You can get a full graph view of all dependencies by typing ctrl + shift + A (cmd + shift + A on Mac). Again, if the number of dependencies is too large, you probably want to filter the graph; simply start typing a package name to set up the same kind of filter described above for the standalone application (press esc to remove the filter).

Try It!

If you haven’t already done it while reading, try the dependency graph application. You are welcome to get in touch with me if you have any questions about Sprotty and how it can help you to build web-based diagrams and visualizations.

By the way, don’t miss the talk on Sprotty at EclipseCon France that I will do together with Jan next week!


Download the conference app

by Anonymous at June 05, 2018 11:56 AM

Explore the program by speaker, tracks (categories) or days. Read the session descriptions and speaker bios and choose your favourites. Download the Android or iOS versions. Thank you @EclipseSource!


Meet the research community

by Anonymous at June 05, 2018 10:39 AM

Tap into the research community at EclipseCon France!
