July 31, 2014
July 30, 2014

Henshin 1.0 (Update)

8 years after the first PoC and 4 years after its official launch, we are proud to announce the release of Henshin 1.0 -- a model transformation language and framework for EMF. Henshin 1.0 is ready for Eclipse Luna and Java 8 (including its new JavaScript engine, Nashorn). The primary domains of Henshin are model-driven development and formal verification methods. With its new multi-threaded pattern matching engine and its code generator for Apache Giraph, big data processing is now a third main domain of Henshin.

The new version of Henshin comes with an extended transformation language with support for specifying Java imports and annotations, and a couple of new utility APIs. Besides the changes under the hood, the 1.0 release also comes with a number of visible new features, including a reworked interpreter wizard and new preferences for the diagram editor. As eye candy, we also added a new color mode for the diagram editor which goes well together with Luna's Dark theme.

You can obtain Henshin 1.0 from the release update site or clone our Git repo to try it out!

Update from 30-07-2014

And of course we've built a critical bug into the release! Anyway, the fixed version is available at the release update site.

Code Recommenders 2.1.2: Improved Proxy Support, Snippet Creation & Code Completion

We are pleased to announce the release of Code Recommenders 2.1.2 which comes with a number of improvements and features that will make Code Recommenders even better. We have made improvements to three general areas, which I would like to highlight in this post: improved proxy support, easier creation of snippets, and tweaks to the way code completion works.

Eclipse Marketplace Passes 10 million Successful Installs

This week Eclipse Marketplace hit a very significant milestone: 10 MILLION successful installs. What a great example of the vibrancy in the Eclipse ecosystem. Over the last 4 years, Eclipse users have successfully installed a plugin into Eclipse over 10 million times.

Eclipse Marketplace Client was introduced as part of the Helios release in June 2010. It took close to 11 months for Marketplace to reach 500,000 successful installs and another 5 months to hit 1 million in early November 2011. Now, just over 4 years since its launch, we average over 400K installs each month, close to 5 million installs/year. Pretty impressive growth.

It is also great to see the diversity of plugins on Marketplace. In the last 30 days, over 70 plugins have been installed over 1000 times and over 300 plugins have been installed at least 100 times. In total we have close to 1800 plugins listed on Marketplace.

Thanks to everyone who has made Marketplace a success. If you are not using Eclipse Marketplace Client to discover and install new Eclipse plugins then this is a perfect time to start!




July 29, 2014

JSON (Desktop) Editors

There are many online JSON editors, but I was looking for a local editor with which I could edit my local JSON files. Below are some of the ones I found:
  1. http://eclipsejsonedit.sourceforge.net/ - Eclipse JSON Editor plugin - You can either download the plugin from this URL or search the Marketplace and install the plugin directly. For working inside Eclipse, this is good.
  2. http://marketplace.eclipse.org/content/json-tools - Eclipse JSON Tools plugin - This has the Eclipse JSON editor bundled and offers additional functionality as well: syntax highlighting, collapsible sections, JSON formatting and an outline tree view.
  3. http://tomeko.net/software/JSONedit/ - Right now they are at version 0.9.8, but it is under active development and it looks good. It offers a tree view and a text view; the tree view supports editing as well. Validation of the JSON text is done when switching to the tree view, and the text view also offers collapsing of sections.
  4. http://jsonviewer.codeplex.com/ - This one doesn't have syntax highlighting or collapsible sections in text mode, but offers immediate validation (which the others don't) and clicking on an error takes you to (almost) the correct location, which makes it easy to fix errors. It also has a tree view, which makes editing easier.
  5. http://sourceforge.net/projects/nppjsonviewer/ - If you use Notepad++, then this plugin might be worth exploring.
  6. http://www.thomasfrank.se/json_editor.html - They have both an online and a local version, but it is not that impressive.
  7. http://www.altova.com/xmlspy/json-editor.html - XMLSpy - This is not free, but seems to have good functionality. 
I was looking at something that is free and would easily integrate into my development environment. Since I use Eclipse, I installed JSON Tools Eclipse plugin and it satisfies my needs so far.

Leading Automotive Companies to Collaborate at Eclipse: Introducing openMDM

Last week, AUDI, BMW and Daimler announced they are joining forces to form the Eclipse openMDM Working Group to create a new open source community to develop and distribute tools for managing automotive test data. These leading automotive OEMs will be joined in the group by Canoo Engineering AG, GIGATRONIK GmbH, HighQSoft GmbH, Peak Solution GmbH, and science + computing ag.

OpenMDM will address the challenge of managing the generated test measurement data that is becoming critical to the automotive industry. Automotive and other industries are driven by continuous product development processes where multiple partners collaborate across the lifecycle. Almost every development phase includes the testing of components, subsystems, or final products. Usually testing is done with computer assistance via automated measurement systems. The amount of test data created and collected is tremendous in size, and is constantly growing due to an increasing variance of products, a rising number of functions, and advancements in measurement techniques. The management of measured test data is a significant challenge for industry. OpenMDM will focus on developing and distributing tools, certification tests, and test data that conform to ASAM ODS (Open Data Services), a standard of ASAM, the Association for Standardisation of Automation and Measuring Systems. MDM@WEB, the first Eclipse project for the group, has already been proposed.

For obvious reasons, this is a big deal for Eclipse. When three of the most innovative and successful automotive companies decide to create an open source community at Eclipse, it is a strong statement that open source in general, and Eclipse in particular, are offering a compelling solution for companies interested in promoting open collaboration and innovation.

I also see this announcement as confirmation of some key trends in the technology industry.

  1. Open Innovation for All: Software companies long ago understood the importance of open source to drive technology adoption and innovation. We are now seeing other industries, such as automotive and aerospace, realize that the best way to innovate and drive technology adoption is through open source communities. The model we have developed with the Eclipse Working Groups is providing the right level of governance, process, infrastructure, and community development to make these collaborations successful.
  2. Open Standards and Open Source Make a Good Match: Over and over again we see great things happen when an open standard is matched with a vendor-neutral open source implementation. Lots of industries implement standards to drive interoperability and reduce vendor lock-in costs. The implementation of a standard typically does not provide any competitive advantage for any individual company. Therefore, collaborating on an open source implementation is the best way forward. OpenMDM is focused on implementing tools and tests for an automotive standard called ASAM ODS. AUDI, BMW, and Daimler all realize that creating their own proprietary tool for ASAM ODS is not likely to build a better car or increase shareholder value. In fact, working together to encourage a community to innovate on tools will create a better set of tools than an individual company can create by itself, at a significantly reduced cost.
  3. Vendor Independence Is Key: Many companies start their journey with open source by creating their own community. But they ultimately hit a wall in growing that community, especially in getting the involvement of the other companies in the same industry. Eclipse experienced this in its early days when it was not an independent, vendor-neutral entity. At the time, competitors of IBM were reluctant to make strategic commitments if IBM had ultimate control over the destiny of the Eclipse community. After the Eclipse Foundation was created, companies like Oracle, SAP, BEA, and Borland became more involved. The story of openMDM is strikingly similar, as it originates with Audi and is now becoming a truly independent and vendor-neutral open source community.

One of the key reasons we created the Eclipse Working Groups was so other industries and communities could create their own vendor-neutral “foundations” at Eclipse. We like to think of working groups as being “foundations in a box”. Instead of going through the costly and time-consuming process of setting up a new foundation, an Eclipse Working Group can get set up very quickly and provide all the benefits of a vendor-neutral entity. In addition, the operational costs of the group are significantly reduced by using the professional resources of the Eclipse Foundation. OpenMDM is a great example of a vendor-neutral community that allows competing automotive companies to collaborate on equal terms.

Congratulations and thank you to AUDI, BMW and Daimler, as well as Canoo Engineering AG, GIGATRONIK GmbH, HighQSoft GmbH, Peak Solution GmbH, and science + computing ag, for creating openMDM at Eclipse. We see a bright future for this collaboration and open innovation at Eclipse. This is another great example of how industry can and will use open source and Eclipse Working Groups to drive forward open collaboration and innovation.

Filed under: Foundation, Open Source

Orion 6.0 – Git Page Changes

In an attempt to streamline and improve Git workflows, we have started a redesign of the Git pages in Orion 6.0.

From Many to One

One of our main design goals was to reorganize all of the git functionality from multiple pages down to one. We used to have individual pages or views on pages for:

  • git status
  • git log
  • git repository
  • all git repositories
  • branches
  • tags
  • git configuration options

With the exception of the all git repositories page (which we are in the process of redesigning), all of the other pages have been merged into the current Git page which looks like this (with sections collapsed):


Let’s take a look at what’s in each of these sections. Starting from the very top:

Repo Header

At the very top of the page, we have the name of the repo along with buttons that perform repo-level actions:

  • Apply Patch: brings up a dialog that lets a user select the URL or file that contains the patch
  • Pull: Performs a Pull on the repo (fetch + rebase)
  • Delete: Deletes this repo

Changed Files

This section replaces the old git status page – it will show you the current state of your working directory. With no files changed, the section looks like this:


The top of the section contains the 2 main section buttons:

  • Discard: discard any changes made to selected files and revert the files to their previous committed state
  • Commit: commit selected files with the message entered in the message box

Next comes the message box area, which is used to enter a commit message – notice the 2 checkboxes. Amend Previous Commit can be toggled to amend the previous commit (the previous message will be fetched and displayed in the message box). Prepare for Gerrit will add a Change-Id to the commit message.
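For reference, both checkboxes map onto plain git concepts: Prepare for Gerrit adds the Change-Id trailer Gerrit uses to track a review (normally generated by Gerrit's commit-msg hook), and Amend Previous Commit is git's commit --amend. A minimal sketch of the latter in a throwaway repository (file names are illustrative):

```shell
# Demo: amending the previous commit, in a throwaway repository.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"

echo "v1" > file.txt
git add file.txt
git commit -q -m "Initial commit"

# Amend Previous Commit: fold newly staged changes into the last
# commit instead of creating a new one, keeping its message.
echo "v2" > file.txt
git add file.txt
git commit -q --amend --no-edit

git rev-list --count HEAD   # prints 1: history still has a single commit
```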

When files are changed, they appear in a table under the message box:


The files can be expanded to reveal the changes made:


Note that you can also choose to view the diffs side by side or open a compare editor to view the diffs.

You can select the files to include in the commit by using the checkboxes (or the select all checkbox if you want to include all files). As you select files, a number counter at the top of the table changes to match your selected file count.

Once you have selected your files (and typed in a message) you can hit Commit to commit them (or Discard to discard the changes made).

When you have files selected, a Show Patch link appears on the same line with the Commit and Discard buttons. This will create a patch for you out of the files that you have selected.



Commits

The commits section has been redesigned with some new sub-sections.


  • Outgoing: this section lists all of the commits that have not been pushed yet to the remote repo
  • Incoming: this section lists all of the commits that have not been merged into the local repo
  • History: this section lists all of the commits that the local branch has in common with the remote branch.

Let’s see what happens when we commit the change from the previous section:


The commit shows up in the Outgoing section. Note that there is a new Undo button that will let you undo the commit and restore the changed files back into your working directory (a soft reset).
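The soft reset behind the Undo button can be reproduced on the command line. A minimal sketch in a throwaway repository (file names are illustrative):

```shell
# Demo: "Undo" is a soft reset of the last commit.
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"

echo "base" > file.txt
git add file.txt
git commit -q -m "Base commit"

echo "change" >> file.txt
git add file.txt
git commit -q -m "Commit to undo"

# Undo: move the branch pointer back one commit, but keep the
# change staged in the index (exactly what a soft reset does)
git reset -q --soft HEAD~1

git diff --cached --name-only   # file.txt is staged again
```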

Before pushing, you want to be sure that you are caught up to all changes, so you can hit Fetch in the Incoming section to fetch the latest changes. All of the changes that haven’t been merged will show up and you can accept them by hitting Rebase or Merge.


The accepted changes now show up in the History section. You can scroll down to view the history, or hit “More commits for <Branch>” to load more entries.


Once you are all up to date with the remote branch, you can Push your changes.

There is also a new Sync button, which is a combination of Fetch/Rebase/Push.
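Under the hood that is a plain git sequence. A self-contained sketch using throwaway local repositories (paths and file names are illustrative):

```shell
# Demo of the Sync combination: Fetch + Rebase + Push.
work=$(mktemp -d)
git init -q --bare "$work/origin.git"

# our working clone
git clone -q "$work/origin.git" "$work/mine" 2>/dev/null
git -C "$work/mine" config user.email me@example.com
git -C "$work/mine" config user.name "Me"
( cd "$work/mine" && echo base > base.txt && git add base.txt && git commit -q -m base )
branch=$(git -C "$work/mine" symbolic-ref --short HEAD)
git -C "$work/mine" push -q origin "$branch"

# a teammate pushes a change we don't have yet
git clone -q "$work/origin.git" "$work/theirs"
git -C "$work/theirs" config user.email other@example.com
git -C "$work/theirs" config user.name "Other"
( cd "$work/theirs" && echo theirs > theirs.txt && git add theirs.txt && git commit -q -m theirs )
git -C "$work/theirs" push -q origin "$branch"

# meanwhile we commit locally, so local and remote have diverged
( cd "$work/mine" && echo ours > ours.txt && git add ours.txt && git commit -q -m ours )

# Sync = Fetch + Rebase + Push
cd "$work/mine"
git fetch -q origin
git rebase -q "origin/$branch"
git push -q origin "$branch"
```

After this, the local branch contains both commits (the teammate's change first, ours rebased on top) and the remote is up to date.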

Changes in a Commit

Expanding any commit in the history will reveal the changed files (this is true for any place a commit is shown on the page):


Branches and Tags

The branches and tags sections have been merged into one section as well. There are top level items for local branches, remotes and tags. The 2 buttons in this section are:

  • New Remote: brings up a dialog to add a new remote
  • New Branch: brings up a dialog to create a new branch


If you expand the local item, you will see all of the local branches you have for the current repo – you can check a branch out from here.


If you expand a branch, you will see the log for that branch along with all allowable actions next to each commit.


Each remote can be expanded to reveal all of its remote branches along with applicable actions for each entry.


You can see the log for a remote branch by expanding it.

The tags section follows the same pattern – expand the top level item and you are presented with the list of tags along with all available actions for that tag.



Configuration

The configuration section is now rendered as a table.


More to Come

Although the Git page is in a reasonable state right now, we have some future improvements planned that will truly allow for a one page design. Stay tuned for more!

July 28, 2014

Reorder and Drag and Drop with JavaFX TabPane

Upon public request I’ve extracted my draggable TabPane from my code and made it available in a bundle that holds only controls. If you need a TabPane with drag support, it is as easy as grabbing “org.eclipse.fx.ui.controls_1.0.0.$TIMESTAMP.jar” from our build server (I’ll work on publishing the stuff to Maven Central).

Once you have added the jar to your project, adding the drag support involves only:

HBox h = new HBox();
Pane pane1 = DndTabPaneFactory.createDefaultDnDPane(
    FeedbackType.MARKER, this::setupTb1);
HBox.setHgrow(pane1, Priority.ALWAYS);
Pane pane2 = DndTabPaneFactory.createDefaultDnDPane(
    FeedbackType.MARKER, this::setupTb2);
HBox.setHgrow(pane2, Priority.ALWAYS);
h.getChildren().addAll(pane1, pane2);

and the 2 methods setupTb1 and setupTb2 look like this:

private void setupTb1(TabPane tb) {
    Tab tab1 = new Tab("Tab 1.1");
    tab1.setContent(new BorderPane(new Rectangle(100, 100)));
    Tab tab2 = new Tab("Tab 1.2");
    tab2.setContent(new BorderPane(new Rectangle(100, 100)));
    tb.getTabs().addAll(tab1, tab2);
}

// ....

2014 USENIX Release Engineering Summit CFP now open

The CFP for the 2014 Release Engineering Summit (Western edition) is now open. The deadline for submissions is September 5, 2014, and speakers will be notified by September 19, 2014. The program will be announced in late September. This one-day summit on all things release engineering will be held in concert with LISA, in Seattle on November 10, 2014.

Seattle skyline © Howard Ignatius, https://flic.kr/p/6tQ3H Creative Commons by-nc-sa 2.0

From the CFP

"Suggestions for topics include (but are not limited to):
  • Best practices for release engineering
  • Practical information on specific aspects of release engineering (e.g., source code management, dependency management, packaging, unit tests, deployment)
  • Future challenges and opportunities in release engineering
  • Solutions for scalable end-to-end release processes
  • Scaling infrastructure and tools for high-volume continuous integration farms
  • War and horror stories
  • Metrics
  • Specific problems and solutions for specific markets (mobile, financial, cloud)
URES '14 West is looking for relevant and engaging speakers and workshop facilitators for our event on November 10, 2014, in Seattle, WA. URES brings together people from all areas of release engineering—release engineers, developers, managers, site reliability engineers, and others—to identify and help propose solutions for the most difficult problems in release engineering today."

War and horror stories. I like to see that in a CFP. Stories describing how you overcame problems with infrastructure and tooling to ship software are the best kind. They make people laugh. Maybe cry, as they realize they are currently living in that situation. Good times. Also, I think talks around scaling high-volume continuous integration farms will be interesting. Scaling issues are a lot of fun and expose many issues you don't see when you're only running a few builds a day.

If you have any questions about the CFP, I'm happy to help as I'm on the program committee. (My IRC nick is kmoir (#releng), as is my email id at mozilla.com.)

Fun stats about WildFly and Luna

In our recent milestone releases of JBoss Tools 4.2 we’ve started gathering additional data from those who have agreed to send back anonymous usage data to us (Thank you!).

Reminder: As always, this is my personal interpretation of the data, and again it is early days for the data collection. These numbers are just for the last month of beta testers, so take these absolute numbers with a grain of salt!

In any case I find the numbers interesting and thought I would share since there are some lessons to be learned.

JBoss Server Usage

One of the data points is which JBoss servers users create.

Mind you, we don’t collect the exact version of the server installed, just which server adapter users are using - i.e. EAP 6.1 also covers EAP 6.2 and 6.3, WildFly 8 covers 8.0 and 8.1, etc.

server creation stats

The numbers above show the last two weeks of server creation by our beta users. Not surprisingly, the majority of users are using the community version of the latest JBoss servers (AS 7.1 and WildFly 8), and it’s great to see that the third most used server is the free-for-development, enterprise-supported EAP 6.1/6.2/6.3.

Oldie but goodie

What I find funny is that there are still users who use the latest/greatest development tools but run JBoss AS 3.2 - this was last released back in 2006! Talk about dedication :)

Importance of Multiple runtime support

What the list shows me is the importance of development tools supporting multiple runtime versions, because even though most users are on the latest/greatest runtime, there is still a great bunch of users on older versions. Many developers tend to forget or blissfully ignore this.

I’m convinced that as users move to our release that gathers these data, we will start to see even higher numbers of "older" runtime usage.

Deploy Only Server

What is a bit disconcerting is how few seem to know about our Deploy Only server (in this list noted as systemCopyServer). This server allows you to use Eclipse’s support for incremental deployments to any directory, locally or remotely available. Really useful for deploying to a non-JBoss server, a remote PHP app or just a plain HTML app.

You should try it out!

File ▸ New ▸ Server ▸ Basic ▸ Deploy Only

Combine this with our LiveReload support and you get a great and fast workflow!

Uptake of Eclipse versions

Another data item we have insight to is the uptake of Eclipse versions.

Uptake of Eclipse versions

The graph above shows our recorded startups of Eclipse Java EE installs per week since January. Be aware the versions listed are the EPP versions, not Eclipse release train versions - I’ll do the mapping for you below.

Two things to note: the "Drop" at the end is just the effect of the numbers ending in the middle of the month. The "Dip" in mid-April I’m not sure about, but we see it across all our Google Analytics data, thus I expect it was a Google Analytics anomaly. The numbers have since stabilized. That said… let's interpret!

This graph shows how Eclipse Kepler SR1 (2.0.1) usage is dropping as users upgrade to Kepler SR2 (2.0.2) - this is most likely the effect of users using Eclipse's built-in update mechanism to upgrade.

What can also be seen is that uptake of the latest stable release (4.4.0) is gaining faster than total Eclipse version usage (the faint lines are even older Eclipse versions), meaning total usage of JBoss Tools is up/stable. Eclipse isn’t dead yet :)

I wish Google Analytics had a way to show this graph cumulatively instead of per line… anyone up for a data extraction and visualization project? I’ll give you access to the data to play with.

Uptake of Eclipse Luna

Finally, my personal main interest was to see what the uptake of Eclipse Luna is.

You can see what effect a GA release of Eclipse has. The red line is Eclipse Luna, going from a couple of hundred starts to now 7,000 starts per week since its release - but do notice that there is no corresponding drop (yet) in Kepler. It looks like most are installing Luna next to their Kepler installs (my theory at least ;)

This mimics previous years' uptake patterns, and once everyone gets back from vacation and the Luna SR1 release comes out, it should be close to the level of Eclipse Kepler installs. Good to see users continue picking up the latest and greatest features and bugfixes!

I’ll go look at the numbers again in a few months to see if the trend continues.

If there is additional data you are interested in, or you have questions about the above, let me know in the comments and I’ll try to include/answer it!

Have fun!

Max Rydahl Andersen

Development of Hybrid Mobile Tools has moved to Eclipse Foundation

Even back when the first line of code was written for Hybrid Mobile tooling, making the tools part of the Eclipse Foundation was a goal. When starting, we looked at the available tools for developing Cordova applications. We found that there were no open source solutions that we could contribute to and use as part of our tools. Furthermore, interoperability among the very little that existed was poor. Of course, our main goal is creating good tools for Apache Cordova development, but while doing that we always keep an eye on interoperability and extensibility.

It is only natural that we are moving the development of our tools for Cordova-based application development and forming the Eclipse THyM project. We hope that, as a vendor-neutral non-profit organization, the Eclipse Foundation will encourage contributions and be the base for interoperable Cordova tooling.

What is contributed

Everything related to Cordova-based development, including project management, plugin discovery, and support for iOS and Android, is contributed to Eclipse.org, with the exception of the Cordova simulator. We have excluded CordovaSim for now because of its complex set of dependencies.

What is changing

The development will continue to happen on GitHub, but on a repository owned by the Eclipse Foundation. The contributed code has already been renamed, cleaned up and moved to the new repository. If you are a contributor, or want to be one, please use https://github.com/eclipse/thym

We will use Bugzilla and the thym-dev mailing list from now on, as provided by the Eclipse Foundation. As expected, project documentation is on the wiki. The builds will be running on an eclipse.org build server instance.

What is NOT changing

JBoss Tools will continue to have support for Cordova development. We will consume the THyM project and extend it with more capabilities, and integrate it with other parts of our tools and technologies coming from projects such as AeroGear.

And of course our wish to create good tools for Apache Cordova development continues with a hope for better collaboration with other individuals and companies.

m2e 1.5.0 improvements

The Maven Integration for Eclipse plugin, a.k.a. m2e, released version 1.5.0 a few weeks ago as part of the annual Eclipse release train, this year known as Luna. 77 bugs were fixed as part of that release, which is compatible with both Eclipse Kepler and Luna. I believe it’s a pretty solid one, with numerous interesting fixes and usability improvements that deserve a blog post. So here goes, in no particular order:

Improved project import workflow

Selecting Maven projects to import used to take an inordinate amount of time, due to a suboptimal - I love that word :-) - Maven Lifecycle Mapping Analysis (LMA). LMA is used to determine whether the projects would require further configuration to operate properly in Eclipse. That LMA is now only run after projects are imported, making selection of projects to import much, much faster (a couple of seconds vs. 1-2 minutes for the WildFly 8.0 codebase and its 130 projects, for instance).

After import, lifecycle mapping error markers are collected on imported projects and the discovery service is invoked to find proposals to fix those errors.

Another improvement to this workflow is the ability to easily import multi-module projects to an Eclipse Working Set. The default name is inferred from the root project but can be overridden manually:


More performance improvements to the import process itself are expected in m2e 1.6.0.

See bugs 409732, 408042 and 417466.

Improved memory consumption

The Maven project instance caching strategy has been revisited to reduce memory consumption. For a workspace with 300+ projects, for instance, heap memory used went from 2.5GB down to well under 1GB without any noticeable side effects.

Nexus index download disabled by default

Before m2e 1.5, by default, Nexus indexes were downloaded on new workspace startup, then subsequently once a week. Depending on your internet connection, that whole process could take 15 minutes or more, heavily pegging the CPU. Once the indexes were updated, the size of the workspace would increase by approximately 500 MB. Even though disk space is relatively cheap these days, for those with many workspaces (e.g., for testing) or large workspaces, this extra disk usage can add up quickly.

m2e 1.5.0 now has this feature disabled by default. You can still enable it in Preferences ▸ Maven ▸ Download repository index updates on startup. One major downside of having this feature disabled by default, though, is that Archetype and Artifact/Plugin searches are now much less efficient, as they rely on this indexed content.

See bug 404417

New Maven Profile management UI

The JBoss Tools team contributed its Maven Profile management interface to m2e 1.5.0. This new interface eases switching between profiles.

Rather than right-clicking on a project, going to the Properties ▸ Maven page, then manually (mis)typing a list of active or disabled profiles, you can now just use Ctrl+Alt+P to open the new Maven Profile selection interface.


The new interface is also accessible from the Maven context menu: Right-click project ▸ Maven ▸ Select Maven Profiles…

The list of available profiles is inferred from profiles defined in:

  • the project pom.xml

  • the project’s parent hierarchy

  • user and global maven settings.xml

When several projects are selected, only the common available profiles are displayed for selection. Common profiles are profiles defined in settings.xml or profiles having the same id in different pom.xml.
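For illustration, the profiles this dialog lists are ordinary Maven profiles identified by their id. A minimal sketch of what such definitions look like in a pom.xml (the profile ids and properties below are made up):

```xml
<!-- pom.xml fragment (illustrative): two profiles the selection
     dialog would offer for this project -->
<profiles>
  <profile>
    <id>dev</id>
    <properties>
      <env>development</env>
    </properties>
  </profile>
  <profile>
    <id>release</id>
    <properties>
      <env>production</env>
    </properties>
  </profile>
</profiles>
```

Profiles declared like this in settings.xml would be common to all selected projects, since every build sees the same settings.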

You can learn more about that feature from the original JBoss Tools blog post.

See bug 428094

Easily update outdated projects

The Update Maven Project dialog (launched via Right-click project ▸ Maven ▸ Update Project… or via Alt+F5) now shows a dirty overlay on projects which need updating.

Additionally, an "Add out-of-date" button adds all out-of-date (OOD) projects to the current selection. If an OOD project has not been selected, a warning is shown underneath the selection table with a link equivalent to "Add out-of-date". Warning text and "Add out-of-date" button tooltip show a count of unselected OOD projects.


See bug 422667

No more Unsupported IClasspathEntry kind=4

There’s a very popular question on StackOverflow about an m2e bug that plagued many users of the maven-eclipse-plugin: m2e would throw Unsupported IClasspathEntry kind=4 exceptions on classpath entries generated by the maven-eclipse-plugin (one of the reasons why you should never mix maven-eclipse-plugin and m2e).

m2e 1.5.0 no longer complains about these unsupported classpath entries, but unexpected classpath issues may still arise, should you mix duplicate jars from m2e and those added by the maven-dependency-plugin.

New checksum settings

Ever connected to a network with limited Internet access, or stayed at a hotel where you needed to get past a for-pay firewall, resulting in HTML pages being downloaded instead of jars? There’s nothing like it to pollute your local Maven repository. Maven CLI builds can use these flags:

  • -C - fail build if checksums do not match

  • -c - warn if checksums do not match

m2e now has a global Checksum Policy, available in Preferences ▸ Maven, that will help you keep your sanity and your local repository clean:


While m2e actually won’t create any Warning markers on projects when "Warn" is selected, it will override existing checksum policies set on repositories.
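For comparison, plain Maven also lets you pin a checksum policy per repository, via the checksumPolicy element (valid values: ignore, warn, fail). A sketch of a repository definition, as it would appear inside a profile's repositories section of settings.xml or in a pom.xml:

```xml
<!-- settings.xml/pom.xml fragment (illustrative): fail the build on a
     bad checksum for artifacts coming from this repository -->
<repository>
  <id>central</id>
  <url>https://repo.maven.apache.org/maven2</url>
  <releases>
    <checksumPolicy>fail</checksumPolicy>
  </releases>
</repository>
```

This per-repository setting is exactly what the m2e preference overrides when "Warn" or "Fail" is selected globally.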

Improved settings for Errors/Warnings preferences

m2e has been known for generating specific errors that have puzzled more than one user in the past:

  • Project Configuration is not up-to-date - a change in pom.xml might require a full project configuration update.

  • Plugin execution not covered by lifecycle - m2e does not know if it is safe to execute a Maven plugin as part of the Eclipse build

With the new Preferences ▸ Errors/Warnings page, users can now decide according to their own needs whether these errors should be downgraded to warnings, or even be ignored entirely.


See bugs 433776, 434053

Maven runtime changes

A few changes have been made with regards to the Maven runtime(s):

  • The embedded Maven runtime has been updated to maven 3.2.1.

  • The Netty/AsyncHttpClient transport layer has been replaced with OkHttp 1.5.4. OkHttp is now the default HTTP client on the Android platform. It brings HTTP 2.0 and SPDY support to artifact downloads. Please note, though, that NTLM authentication is not supported.

  • Maven runtime installations can now be customized with a name, and additional libraries can be added. Maven Launch configurations now reference the Maven runtime by name, instead of using a hard-coded location so the configuration is more portable.

See bugs 427932, 418263, 432436

Accept contributions from Gerrit

In order to lower the contribution barrier and increase contributor diversity, the m2e project now accepts changes contributed via the Gerrit review system. Head over to the wiki page that explains how to use it. Does it work? Hell yeah! After several significant contributions, Anton Tanasenko has joined the m2e team as a committer!

Welcome Anton!

See bug 374665


With new blood on the m2e team, numerous fixed bugs, and some big new features and improvements, m2e 1.5.0 is a pretty exciting release. We hope you enjoy this year’s release, with an even better version to come next time.

So if you haven’t installed m2e 1.5.0 yet, head over to https://www.eclipse.org/m2e/download/ and have at it.

We’d love to hear your feedback on the mailing list, and via bug reports or enhancement requests.

Fred Bricon

July 26, 2014

Building an Eclipse target platform for the Raspberry Pi

It’s now been a bit over a year since I wrote about running Eclipse RCP projects on a Raspberry Pi. Since then something really exciting has taken place. Tim Webb and Jed Anderson of Genuitec have published PiPlug; a runtime and deployment tool for hosting SWT applications on the Raspberry Pi. As you might have guessed, PiPlug uses a small subset of the Eclipse platform, about 5.2 MiB. This includes the Equinox launcher compiled for ARM. Incidentally, that is the same binary I created when writing my blog post. Anyway, the reason I’m excited is that this is a really neat way of running Java UI code on the Raspberry Pi. It is very simple: start the PiPlug agent on the target, write your code in Eclipse and use the deployment view to push your stuff. It does not get easier than that!

However, not everything works straight out of the box. I have a tiny resistive touch LCD screen that I want to use, and I want PiPlug to adapt to its small resolution (320×240 pixels). Currently it is mostly suited to much larger resolutions. So I need to amend the PiPlug code to support this screen, then rebuild it. In order to do that I must have an Eclipse target platform with all the bits in place so that I can build my forked version of PiPlug. This gave me a reason to revisit my previous work. The instructions I wrote last year no longer work due to changes in the base repositories and the build process. So I’ve done a recap and simplified everything. The source code has been updated to the latest from Eclipse 4.4 (aka Luna), and the changes I’ve made have been squashed into exactly one commit for each repository, making them easier to follow. I’ve also created patches for these repositories. So when you run the script it will clone Eclipse from the source proper, then apply the patches before building – very straightforward. Note that the script is written for Bash.

Running the script will, as before, create a target platform with the ARMv6 little-endian binaries to use for PiPlug or any other Eclipse RCP application on the Raspberry Pi. The git repo with all the code is at https://github.com/turesheim/eclipse-rpi. But to save you the time it takes to build, I’ve already created the repository for you to use. You’ll find it in the release section of the above-mentioned project page. Simply download it and set it as the target platform.

The SWT and Equinox binary bundles you want all contain the segment “armv6l”, so you should add these to the feature or product definition. Note that the export product command in Eclipse PDE is a bit limited and will not work. You should probably set up a Tycho build for your project in order to create a Raspberry Pi distribution of it.

That’s it. Now I’m off to play around with PiPlug and the Sensiron SHT15 I have attached to my RPi :-)

The post Building an Eclipse target platform for the Raspberry Pi appeared first on Yet Another Coder's Blog.

July 25, 2014

Yakindu Statechart Tools 2.3.0 is ready for Luna!

Today, the Luna release train arrived at Yakindu Statechart Tools Station! Apart from Luna compatibility, the new release version 2.3.0 provides some great new features. If you are new to statecharts and Yakindu Statechart Tools you should take a look at our Getting Started Tutorial.



You can download Yakindu Statechart Tools 2.3.0 for Luna either from our download page or install it into your existing Eclipse installation via our update site:


New and Noteworthy

Version 2.3.0 ships with a new C++ Code Generator. Of course, all advanced statechart features like history states and parallel regions are supported. Furthermore, we implemented some features requested via our user group, for example a name wrangler for C function names, and fixed a bunch of bugs. All the other cool new Luna features like the split editor work with the new release – even the dark theme does. ;-)
Another interesting project worth mentioning is the Statechart Tools Arduino Integration. Marco wrote a great article on how to run the generated statechart tools code on an Arduino board.

What's next? 


Based on the new GMF Compare feature (thanks for this, great job!) we have started to create a diff/merge viewer for statechart diagrams. Since our viewer's merge functionality is not stable yet, we decided to publish it with the next release. However, if you want to test it, you can check out the plugin from our svn repository. Do not forget to send us some feedback!

Preview of the new SCT Diff/Merge viewer

We hope you like the new Statechart Tools release. If you have any questions, do not hesitate to contact us via our user group!

Committer and Contributor Hangout -- Eclipse Project Infrastructure and Best Practices

Thanh Ha from the Eclipse Foundation will be talking about our infrastructure, some best practices, and using Hudson, Git and Gerrit. Links below:

  • IT Infrastructure: https://wiki.eclipse.org/IT_Infrastructure_Doc
  • Common Build Infrastructure: https://wiki.eclipse.org/CBI
  • Hudson: https://wiki.eclipse.org/Hudson
  • Social Coding (GitHub): https://wiki.eclipse.org/Social_Coding/Hosting_a_Project_at_GitHub
  • Git Code Contributions: https://wiki.eclipse.org/Development_Resources/Handling_Git_Contributions
  • Gerrit: https://wiki.eclipse.org/Gerrit
  • Sonar: https://wiki.eclipse.org/Sonar

July 24, 2014

JFace Viewers in a Java8 world – Part 1 ListViewer

Most people who write RCP applications today know about JFace Viewers, and in my opinion the Viewer framework is one of the best programming patterns that came out of Eclipse – easy to understand, easy to apply, but still powerful.

Still, working with viewers in a Java8 world feels really bad because of two things:

  • JFace Viewers don’t have generics and use Object[] in the API
  • The JFace APIs do not use SAM types

There’s a Google Summer of Code project that tries to add generics to the JFace Viewer API, but that does not change the second, IMHO equally important, part – the API feels alien in a Java8 world of lambda expressions and method references.

A few weeks back a discussion came up on the GEF mailing list about adopting the JFace API to set up graph viewers in GEF4, which brought up a third problem – the JFace Viewer API does not completely hide the SWT API, yet GEF4 is designed to be widget-toolkit agnostic.

So the requirements for a revised JFace API are:

  1. Use generics to provide type safety
  2. Make use of SAM types
  3. Do not depend on any toolkit technology

Today I took a break from my day-to-day work and thought about what a Viewer 2.0 would look like, and this is what I came up with:


/**
 * Base interface of all viewers
 *
 * @param <O>
 *            the domain object representing a row
 * @param <I>
 *            the input to the viewer
 * @param <C>
 *            the content provider responsible to translate the input into the
 *            internal structure
 */
public interface Viewer<O, I, C extends ContentProvider<O, I>> {
  public void setContentProvider(
    @NonNull Supplier<@NonNull C> contentProvider);

  public void setInput(@NonNull Supplier<@NonNull I> input);
}

You’ll notice that the content provider and input are not set directly but through suppliers. The reason is that the input is most of the time not created next to the viewer but through a method call into the business layer, and for the content provider one often uses a factory to reuse content provider implementations.
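As a plain-JDK sketch of that idea (the resolveInput helper and the sample data here are made up for illustration and are not part of the proposed API), a Supplier lets the input come from a business-layer method reference instead of being created next to the viewer:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Supplier;

public class SupplierDemo {

  // Stand-in for what Viewer#setInput(Supplier<I>) would do internally:
  // the viewer, not the caller, decides when to actually fetch the input.
  static <I> I resolveInput(Supplier<I> input) {
    return input.get();
  }

  // A "business layer" method producing the input
  static List<String> loadPersons() {
    return Arrays.asList("Tom", "Maria");
  }

  public static void main(String[] args) {
    // Pass a method reference; no input object is created at the call site
    List<String> persons = resolveInput(SupplierDemo::loadPersons);
    System.out.println(persons); // prints [Tom, Maria]
  }
}
```

The same shape works for the content provider: a static factory method is itself usable as a Supplier via a method reference.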

So the 2nd and 3rd step of setting up Viewers 2.0 would look like this:

public class Demo {

  private void setup(ListViewer<Person, List<Person>, ContentProvider<Person, List<Person>>> viewer) {
    // ... (setup of labels, ...)
    viewer.setContentProvider(ContentProviderFactory::createListContentProvider);
    viewer.setInput(this::listInput);
  }

  // ....
}

and the helper methods could look like this:

private List<Person> listInput() {
  try {
    return Arrays.asList(
      new Person(false, "Tom", "Schindl", format.parse("01.05.1979")),
      new Person(true, "Maria", "Musterfrau", format.parse("01.05.1970")));
  } catch (ParseException e) {
    // TODO Auto-generated catch block
  }
  return Collections.emptyList();
}

public class ContentProviderFactory {
  public static <O> ContentProvider<O, List<O>> createListContentProvider() {
    return new ContentProvider<O, List<O>>() {
      public List<O> getRootElements(List<O> input) {
        return input;
      }
    };
  }

  // ... more factory methods
}

Let’s go on with the translation of the domain element into information the viewer can present, so we need to take a look at the ListViewer interface:

public interface ListViewer<O, I, C extends ContentProvider<O, I>> extends Viewer<O, I, C> {
  /**
   * Translate the domain object into a string
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> textProvider(
    Function<@NonNull O, @Nullable String> converter);

  /**
   * Translate the domain object into style information to style the cell
   * and its contents, e.g. background color
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> styleProvider(
    Function<@NonNull O, @Nullable String> converter);

  /**
   * Translate the domain object into style ranges
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> textStyleRangeProvider(
    Function<@NonNull O, @NonNull List<@NonNull StyleRange>> converter);

  /**
   * Translate the domain object into an image definition
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> graphicProvider(
    Function<@NonNull O, @Nullable String> converter);
}

This makes our complete setup look like this:

public class Demo {

  private void setup(ListViewer<Person, List<Person>, ContentProvider<Person, List<Person>>> viewer) {
    viewer.setContentProvider(ContentProviderFactory::createListContentProvider);
    viewer.setInput(this::listInput);
    viewer.textProvider(this::personFullText);
    viewer.graphicProvider(this::genderImage);
  }

  // ....
}

where the textProvider and graphicProvider functions look like this:

private String personFullText(Person p) {
  return p.getFirstname() + "," + p.getLastname()
    + " (" + format.format(p.getBirthdate()) + ")";
}

private String genderImage(Person p) {
  return p.isFemale() ? "female.png" : "male.png";
}

That’s it for today – in the next post I’ll show you what a revised TableViewer API could look like.

Announcing Jnario 1.0

After lots of improvements behind the scenes I finally decided to release Jnario 1.0. If you are new to Jnario, here is a blog post introducing the main features of Jnario. You can install the new release via update site or maven. Special thanks go to Stefan Oehme, Boris Brodski and Sebastian Poetzsch for their help and contributions!

Xtend 2.6 Compatibility

Jnario is based on Xtend, and this release provides compatibility with Xtend 2.6! This means you benefit from all the improvements made in Xtend 2.6 when writing tests with Jnario. One of the most prominent new features is that you can define anonymous classes in your Jnario specs. This is a great way to quickly create test stubs in your specs. Furthermore, the IDE performance in Eclipse has also been greatly improved.

Fun with Tables

Tables have always been one of my favorite features of Jnario specs. Thanks to Sebastian Poetzsch, using tables has become even more fun in this release! Sebastian contributed an automated formatter for specs, which also works for tables. No more fiddling with spaces when formatting your tables! Just press (CTRL|CMD)+SHIFT+F and there you go:

Improved Maven Support

The standalone Jnario compiler has been completely rewritten. This lays the foundation for future support for other build environments such as Gradle. Right now, it means faster compile times when compiling Jnario specs with Maven. In general, the Maven support has been improved. For example, in combination with m2e, Eclipse will automatically synchronize your Maven and project configuration.

Better Docs

There have been lots of fixes for generating HTML reports from your Jnario specifications. If you are curious, here is a great example of what executable documentation created with Jnario can look like.

Want to know more?

Check out the documentation or join the mailing list. You can find the full release notes here.


Happy testing!

Warning: AngularJS Modules are not Modular

In most frameworks and languages, a module's exports are only visible to the other modules that directly import it. As a simple example, the following node.js program prints undefined:




// parent.js
var child = require('./child');
exports.myVal = 7;

// child.js
var parent = require('./parent');
console.log(parent.myVal);

> node parent.js
undefined

Ignoring the fact that circular dependencies are evil, a novice node user would realize why the printed value is undefined. The reason is one of the defining characteristics of the node module system (actually, it is a characteristic of most module systems ever created). Modules must explicitly declare the modules that they use. Referencing values from modules that are not explicitly required will result in undefined values or errors.

This is not so with AngularJS.

The Angular module system provides some nice syntax to describe required modules. For example, this says that the parent module uses the sub1 and sub2 modules:

angular.module('parent', ['sub1', 'sub2']);

Now, let's assume that sub1 and sub2 each declare no dependencies:

angular.module('sub1', []);
angular.module('sub2', []);

Using just about any other module system as a guide, you would expect that code from parent can reference (i.e., inject) values from either sub1 or sub2, but neither of the sub-modules can reference values from each other or parent.

But, no. That is not the case. Consider that this is added to the app:

angular.module('sub1').run(function(sub2Value) {
  console.log(sub2Value);
});

angular.module('sub2').value('sub2Value', 'I should be an error!!!');

Referencing sub2Value from module sub1 does not cause an error even though its module is not directly referenced. Rather, the program prints "I should be an error!!!" to the console. Try it out yourself.

What's going on here?

Each Angular application has a singleton instance of an injector. This injector is not namespaced or partitioned. All instances provided by all modules in the application exist in the injector. And the injector can serve any of its instances to anything that requests it.
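To make that concrete, here is a toy model in plain Java (FlatInjector is entirely hypothetical, not Angular's code): every module registers its values into one shared map, so the module a value came from plays no role at lookup time.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a flat, non-namespaced injector: all modules register
// into one shared map, so declared module dependencies never restrict lookups.
public class FlatInjector {
  private final Map<String, Object> instances = new HashMap<>();

  void register(String module, String name, Object value) {
    // The module name is recorded nowhere -- the bucket is global.
    instances.put(name, value);
  }

  Object inject(String name) {
    return instances.get(name); // any consumer can see any instance
  }

  public static void main(String[] args) {
    FlatInjector injector = new FlatInjector();
    injector.register("sub2", "sub2Value", "I should be an error!!!");
    // "sub1" never declared a dependency on "sub2", yet the lookup succeeds:
    System.out.println(injector.inject("sub2Value")); // prints I should be an error!!!
  }
}
```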

You can see similar behaviour in Guice:

import com.google.inject.*;

class ModuleMain {
  static class Service1 { }

  static class Module1 implements Module {
    @Override
    public void configure(Binder binder) {
      binder.bind(Service1.class).in(Singleton.class);
    }
  }

  static class Service2 {
    @Inject
    Service1 service1;
  }

  static class Module2 implements Module {
    @Override
    public void configure(Binder binder) {
      binder.bind(Service2.class).in(Singleton.class);
    }
  }

  public static void main(String[] args) {
    Injector injector = Guice.createInjector(new Module1(), new Module2());

    Service1 service1 = injector.getInstance(Service1.class);
    Service2 service2 = injector.getInstance(Service2.class);

    // Will print true since instances from unrelated modules can reference each other
    System.out.println(service2.service1 == service1);
  }
}

In this snippet, you can see that instances can be shared across unrelated modules. In Guice, like in Angular, the injector is a giant bucket where you can put stuff in and take stuff out with no restrictions.

But there is a difference: in Angular, modules explicitly declare which modules they depend on; in Guice they do not. In Angular, the syntax creates the expectation that only instances from directly required modules can be injected into another module. I am sure that there is a reason for this non-intuitive design, and I would like to learn it.

Consider this post as an explanation of what is going on, but not why. Also, consider this a warning to not rely on the Angular framework to enforce module boundaries. Perhaps in a future version, the Angular team can create truly hierarchical modules. Angular modules would be safer to compose, and name clashes between third-party modules would be prevented.
