LaunchBar and User Experience

by Doug Schaefer at February 10, 2016 08:03 PM


In an effort to make the LaunchBar more “Eclipse standard”, I am trying to make the icons for the build, launch, and stop buttons 24 pixels square. They were 32, which I do admit made the entire tool bar a little too fat. At 24 pixels high it is much more streamlined, and you barely notice that it’s still a couple of pixels taller than without the launch bar. I think users can live with that. And I’ll find out soon enough, as I always do.

Now, the icons I have there at the moment are ugly. My claim to fame is writing parsers and build systems, not graphic design. I just whipped these together to show what they could look like and to get a feel for how the new size helps the overall visual. We’ll get someone (and Tony McCrary has volunteered) to make them look more professional.

But I still get the argument from a few people that the UX is bad because the Launch Bar doesn’t look like the rest of the toolbar. Well, no, it doesn’t. And it wasn’t meant to. I think I need to tell the story of how the Launch Bar came about to help explain why.

It started when we were working on BlackBerry Momentics, the IDE for BB10. Our manager hooked us up with a team in our Sweden office: designers from the former TAT (The Astonishing Tribe), who were responsible for the beautiful user experience that became the Cascades framework for BB10. They sent over a handful of “developer experience” designers to a workshop in Ottawa, and we brainstormed about how we could make Eclipse beautiful and, more importantly, a great experience, especially for developers who were new to BB10 development.

It was an eye-opening experience. They were very cautious about our feelings toward the Eclipse UX but were very candid about what they thought. And we didn’t really argue. They took special aim at the tool bar and the “ridiculous” 16-bit icons that were supposed to somehow be meaningful to a new user. The result is an overwhelming feeling that only intimidates these poor people, when we really want to make sure they become successful using our tools. The first recommendation they gave us was to turn off all of the tool bar buttons.

Then we took aim at the launch experience. I have to admit we took some inspiration from popular IDEs that we all had experience with. But in the end, isn’t launching the thing you’re coding the most important action you take in an IDE, aside from typing in the code itself? We felt it deserved a place front and center. So the Ottawa gang put forward the general layout of the Launch Bar, and the Swedes provided the icons and the spacing around the whole widget. They made it big on purpose and made the buttons soft, so that the user interface wasn’t intimidating and was easy to understand.

The feedback we got was tremendous. I’ve told this story before, but when our product manager presented the new look at a developers conference, one of the attendees went up to him after and gave him a hug for making his life so much better. That kind of feedback for a tools developer is hard to beat and something we should all strive for.

As we move forward, and as we focus on the general QNX developer and bring them more and more of what’s available in the Eclipse ecosystem, we felt it important to push the Launch Bar upstream and work to enable it for more and more use cases. Of course, when you try to address a larger audience, not all of them are going to appreciate its look and feel. It is different from most things on the tool bar. But by design, it was supposed to dominate the tool bar. Remember, there wasn’t supposed to be anything else there. It’s actually the old tool bar icons that don’t fit. When I set up my environment now, I turn off everything I can, and the result is a very clean look.

I appreciate that not everyone is going to like the Launch Bar or find it useful. We are striving to make it an optional feature. But as we work to support different types of targets you can launch on, we’re finding it hard to do without the Launch Bar. So we will get lots of haters, and if you work at all in open source, or in tools in general, that’s just part of the game and something you get used to. You can’t make everyone happy. But on the other hand, you also need to make sure you don’t make everyone sad. And those UX guys we worked with were very sad about the Eclipse UX, and I’m just trying to keep their effort to fix things alive.


Publish an Eclipse p2 composite repository on Bintray

by Lorenzo Bettini at February 10, 2016 03:46 PM

In a previous post I showed how to manage an Eclipse composite p2 repository and how to publish an Eclipse p2 composite repository on Sourceforge. In this post I’ll show a similar procedure to publish an Eclipse p2 composite repository on Bintray. The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted).

The complete example at

First of all, this procedure is quite different from the ones shown in other blogs (e.g., this one, this one and this one): in those approaches the p2 metadata (i.e., artifacts.jar and content.jar) are uploaded independently of any version, always into the same directory, thus overwriting the existing metadata. As a consequence, only the latest version of the published features and bundles will be available to the end user. This goes against the idea that old versions should still be available; in general, all versions should remain available to end users, especially if a new version has some breaking change and the user is not willing to update (see p2’s do’s and don’ts). For this reason, I always publish p2 composite repositories.

Quoting from

The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must be available, each one with their own p2 metadata that should never be overwritten.

On the contrary, the metadata that we will overwrite will be the one for the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.

In this example, all the binary artifacts can be found here:

Directory Structure

What I aim at is to have the following remote paths on Bintray:

  • releases: in this directory all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20160129-1616/ etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory the composite metadata will be uploaded. This is the URL your Eclipse users should use to install the features in their Eclipse, or for target platform resolution (depending on the kind of projects you’re developing). All versions will be available from this composite update site; I call this the main composite. Moreover, you can provide the URL of a child composite update site that includes all versions of a given major.minor stream, e.g., updates/1.0, updates/1.1, etc. I call each of these a child composite.
  • zipped: in this directory we will upload the zipped p2 repository for each version.

Summarizing, we’ll end up with a remote directory structure like the following:

|-- releases
|   |-- 1.0.0.v2016...
|   |   |-- artifacts.jar
|   |   |-- content.jar
|   |   |-- features
|   |   |   |-- your feature.jar
|   |   |-- plugins
|   |   |   |-- your bundle.jar
|   |-- 1.1.0.v2016...
|   |-- 1.1.1.v2016...
|   |-- 2.0.0.v2016...
|   ...
|-- updates
|   |-- compositeContent.xml
|   |-- compositeArtifacts.xml
|   |-- 1.0
|   |   |-- compositeContent.xml
|   |   |-- compositeArtifacts.xml
|   |-- 1.1
|   |   |-- compositeContent.xml
|   |   |-- compositeArtifacts.xml
|   |-- 2.0 ...
|   ...
|-- zipped
    |-- your site
    |-- your site

Uploading using REST API

In the posts I mentioned above, the typical line to upload contents with the REST API is of the shape

curl -X PUT -T $f \

for metadata, and

curl -X PUT -T $f \

for features and plugins.

But this has the drawback I was mentioning above.

Thanks to the Bintray Support, I managed to use a different scheme that allows me to store p2 metadata for a single p2 repository in the same directory of the p2 repository itself and to keep those metadata separate for each single release.

To achieve this, we need to use another URL scheme for uploading, using matrix params options or header options.

This means that we’ll upload everything with this URL

curl -XPUT -T $f \

On the contrary, for uploading p2 composite metadata, we’ll use the schema of the other approaches, i.e., we will not associate it to any specific version; we just need to specify the desired remote path where we’ll upload the main and the child composite metadata.

Building Steps

During the build, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build, which relies on some Ant scripts, can be summarized as follows:

  • Retrieve the remote composite metadata compositeContent/Artifacts.xml, both for the main composite and the child composite. If these metadata cannot be found remotely, we fail gracefully: it means that this is the first time we release or, if only the child composite cannot be found, that we’re releasing a new major.minor version. The metadata are downloaded into the directories target/main-composite and target/child-composite respectively; these directories are created in any case.
  • Preprocess the possibly downloaded composite metadata: if the property
    <property name='p2.atomic.composite.loading' value='true'/>

    is present, we must temporarily set it to false, otherwise we will not be able to add additional elements to the composite site with the p2 Ant tasks.
  • Update the composite metadata with the version information passed from the Maven/Tycho build, using the p2 Ant tasks for composite repositories
  • Postprocess the composite metadata (i.e., set the property p2.atomic.composite.loading above back to true, see for further details about this property)
  • Upload everything to Bintray: the new p2 repository, its zipped version, and all the composite metadata.
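The steps above can be outlined in plain shell (illustrative only: the real build drives them through the Maven/Tycho profile and the Ant files shown below, and the version values here are hypothetical examples):

```shell
#!/bin/sh
# Outline of the release steps; version values are examples only.
set -e
VERSION=1.0.0.v20160129-1616                 # unqualifiedVersion.buildQualifier
STREAM=$(echo "$VERSION" | cut -d. -f1-2)    # major.minor stream, e.g. 1.0

# local working directories for the downloaded composite metadata
mkdir -p target/main-composite target/child-composite

echo "1. fetch remote compositeContent/Artifacts.xml (main + $STREAM child), ignoring missing files"
echo "2. preprocess: set p2.atomic.composite.loading=false in the downloaded files"
echo "3. p2 Ant tasks: add child '$STREAM' to the main composite and '../../releases/$VERSION' to the child"
echo "4. postprocess: set p2.atomic.composite.loading=true again"
echo "5. upload the p2 repository, its zip, and both composites to Bintray"
```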

IMPORTANT: the pre- and postprocessing of composite metadata that we’ll implement assumes that such metadata are not compressed. In any case, I always prefer not to compress the composite metadata, since that makes it easier to manually change or review them later.

Technical Details

You can find the complete example at Here I’ll sketch the main parts. First of all, all the mechanisms for updating the composite metadata and pushing to Bintray (i.e., the steps detailed above) are in the project, which is a Maven/Tycho project with eclipse-repository packaging.

The pom.xml has some properties that you should adapt to your project, and some other properties that can be left as they are if you’re OK with the defaults:

	<!-- The name of your own Bintray repository -->
	<!-- The name of your own Bintray repository's package for releases -->
	<!-- The label for the Composite sites -->
	<site.label>Composite Site Example</site.label>

	<!-- If the Bintray repository is owned by someone different from your
		user, then specify the bintray.owner explicitly -->
	<!-- Define bintray.user and bintray.apikey in some secret place,
		like .m2/settings.xml -->

	<!-- Default values for remote directories -->
	<!-- note that the following must be consistent with the path schema
		used to publish child composite repositories and actual released p2 repositories -->

If you change the default remote paths it is crucial that you update the child.repository.path.prefix consistently. In fact, this is used to update the composite metadata for the composite children. For example, with the default properties the composite metadata will look like the following (here we show only compositeContent.xml):

<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='Composite Site Example 1.0' type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository' version='1.0.0'>
  <properties size='2'>
    <property name='p2.timestamp' value='1454086165279'/>
    <property name='p2.atomic.composite.loading' value='true'/>
  </properties>
  <children size='3'>
    <child location='../../releases/1.0.0.v20160129-1625'/>
    <child location='../../releases/1.0.0.v20160129-1630'/>
    <child location='../../releases/1.0.0.v20160129-1649'/>
  </children>
</repository>

You can also see that two crucial properties, bintray.user and, in particular, bintray.apikey, should not be made public. You should keep these hidden; for example, you can put them in your local .m2/settings.xml file, associated with the Maven profile that you use for releasing (as illustrated in the following). This is an example of such a settings.xml:

<settings xmlns=""
	<profiles>
		<profile>
			<id>release-composite</id>
			<properties>
				<bintray.user>YOUR BINTRAY USER HERE</bintray.user>
				<bintray.apikey>YOUR BINTRAY APIKEY HERE</bintray.apikey>
			</properties>
		</profile>
	</profiles>
</settings>

In the pom.xml of this project there is a Maven profile, release-composite, that should be activated when you want to perform the release steps described above.

We also make sure that the generated zipped p2 repository has a name with the fully qualified version:

<!-- make sure that zipped p2 repositories have the fully qualified version -->
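One way to achieve this (an assumption on my part; check the example project for the exact configuration) is to set the finalName of the tycho-p2-repository-plugin so that it includes the qualified version:

```xml
<plugin>
	<groupId>org.eclipse.tycho</groupId>
	<artifactId>tycho-p2-repository-plugin</artifactId>
	<configuration>
		<!-- name of the generated zipped p2 repository (assumed configuration) -->
		<finalName>${project.artifactId}-${unqualifiedVersion}.${buildQualifier}</finalName>
	</configuration>
</plugin>
```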

In the release-composite Maven profile, we use the maven-antrun-plugin to execute some Ant targets (note that the Maven properties are automatically passed to the Ant tasks): one to retrieve the remote composite metadata, if they exist, and another, as the final step, to deploy the p2 repository, its zipped version, and the composite metadata to Bintray:

			<!-- Retrieve possibly existing remote composite metadata -->
					<ant antfile="${basedir}/bintray.ant" target="get-composite-metadata">
			<!-- Deploy p2 repository, p2 composite updated metadata and zipped p2 repository -->
					<ant antfile="${basedir}/bintray.ant" target="push-to-bintray">

The Ant tasks are defined in the file bintray.ant. Please refer to the example for the complete file. Here we sketch the main parts.

This Ant file relies on some properties with default values, and other properties that are expected to be passed in when running these tasks, i.e., from the pom.xml:

<!-- These must be set from outside -->
<property name="bintray.user" value="" />
<property name="bintray.apikey" value="" />
<property name="bintray.repo" value="" />
<property name="bintray.package" value="" />
<property name="bintray.releases.path" value="" />
<property name="bintray.composite.path" value="" />
<property name="" value="" />

<property name="bintray.url" value="${bintray.owner}/${bintray.repo}" />
<property name="bintray.package.version" value="${unqualifiedVersion}.${buildQualifier}" />
<property name="" value="${bintray.releases.path}/${bintray.package.version}" />

<property name="main.composite.url" value="${bintray.url}/${bintray.composite.path}" />
<property name="target" value="target" />
<property name="" value="composite-child" />
<property name="" value="composite-main" />

<property name="compositeArtifacts" value="compositeArtifacts.xml" />
<property name="compositeContent" value="compositeContent.xml" />

<property name="local.p2.repository" value="target/repository" />

To retrieve the existing remote composite metadata we execute the following, using the standard Ant get task. Note that if there is no composite metadata (e.g., it’s the first release that we execute, or we are releasing a new major.minor version, so there’s no child composite for that version), we ignore the error; however, we still create the local directories for the composite metadata:

<!-- Take from the remote URL the possible existing metadata -->
<target name="get-composite-metadata" depends="getMajorMinorVersion">
	<get-metadata url="${main.composite.url}" dest="${target}/${}" />
	<get-metadata url="${main.composite.url}/${majorMinorVersion}" dest="${target}/${}" />
	<antcall target="preprocess-metadata" />
</target>

<macrodef name="get-metadata" description="Retrieve the p2 composite metadata">
	<attribute name="url" />
	<attribute name="dest" />
	<sequential>
		<echo message="Creating directory @{dest}..." />
		<mkdir dir="@{dest}" />
		<get-file file="${compositeArtifacts}" url="@{url}" dest="@{dest}" />
		<get-file file="${compositeContent}" url="@{url}" dest="@{dest}" />
	</sequential>
</macrodef>

<macrodef name="get-file" description="Use the Ant Get task to retrieve a remote file">
	<attribute name="file" />
	<attribute name="url" />
	<attribute name="dest" />
	<sequential>
		<!-- If the remote file does not exist then fail gracefully -->
		<echo message="Getting @{file} from @{url} into @{dest}..." />
		<get dest="@{dest}" ignoreerrors="true">
			<url url="@{url}/@{file}" />
		</get>
	</sequential>
</macrodef>

For preprocessing/postprocessing composite metadata (in order to deal with the property p2.atomic.composite.loading as explained in the previous section) we have

<!-- p2.atomic.composite.loading must be set to false otherwise we won't be able
	to add a child to the composite repository without having all the children available -->
<target name="preprocess-metadata" description="Preprocess p2 composite metadata">
	<replaceregexp byline="true">
		<regexp pattern="property name='p2.atomic.composite.loading' value='true'" />
		<substitution expression="property name='p2.atomic.composite.loading' value='false'" />
		<fileset dir="${target}">
			<include name="${}/*.xml" />
			<include name="${}/*.xml" />
		</fileset>
	</replaceregexp>
</target>

<!-- p2.atomic.composite.loading must be set back to true
	see -->
<target name="postprocess-metadata" description="Postprocess p2 composite metadata">
	<replaceregexp byline="true">
		<regexp pattern="property name='p2.atomic.composite.loading' value='false'" />
		<substitution expression="property name='p2.atomic.composite.loading' value='true'" />
		<fileset dir="${target}">
			<include name="${}/*.xml" />
			<include name="${}/*.xml" />
		</fileset>
	</replaceregexp>
</target>

Finally, to push everything to Bintray, we execute curl with appropriate URLs, as described in the previous section about the REST API. The individual push tasks are similar, so we only show the one for uploading the p2 repository associated with a specific version and the one for uploading the p2 composite metadata. As detailed at the beginning of the post, we use different URL shapes.

<target name="push-to-bintray">
	<antcall target="postprocess-metadata" />
	<antcall target="push-p2-repo-to-bintray" />
	<antcall target="push-p2-repo-zipped-to-bintray" />
	<antcall target="push-composite-to-bintray" />
	<antcall target="push-main-composite-to-bintray" />
</target>

<target name="push-p2-repo-to-bintray">
	<apply executable="curl" parallel="false" relative="true" addsourcefile="false">
		<arg value="-XPUT" />
		<targetfile />

		<fileset dir="${local.p2.repository}" />

		<compositemapper>
			<mergemapper to="-T" />
			<globmapper from="*" to="${local.p2.repository}/*" />
			<mergemapper to="-u${bintray.user}:${bintray.apikey}" />
			<globmapper from="*" to="${bintray.owner}/${bintray.repo}/${}/*;bt_package=${bintray.package};bt_version=${bintray.package.version};publish=1" />
		</compositemapper>
	</apply>
</target>

<target name="push-composite-to-bintray" depends="getMajorMinorVersion">
	<apply executable="curl" parallel="false" relative="true" addsourcefile="false">
		<arg value="-XPUT" />
		<targetfile />

		<fileset dir="${target}/${}" />

		<compositemapper>
			<mergemapper to="-T" />
			<globmapper from="*" to="${target}/${}/*" />
			<mergemapper to="-u${bintray.user}:${bintray.apikey}" />
			<globmapper from="*" to="${bintray.owner}/${bintray.repo}/${bintray.composite.path}/${majorMinorVersion}/*;publish=1" />
		</compositemapper>
	</apply>
</target>

To update the composite metadata we execute an Ant task using the tycho-eclipserun-plugin. This way we can execute the Eclipse application org.eclipse.ant.core.antRunner, which lets us run the p2 Ant tasks for managing composite repositories.

ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be exactly one (long) line.

		<!-- Update p2 composite metadata or create it -->
		<!-- IMPORTANT: DO NOT split the arg line -->
		<appArgLine>-application org.eclipse.ant.core.antRunner 
-buildfile packaging-p2composite.ant p2.composite.add 
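For context, here is a hedged sketch of how such an appArgLine could be embedded in the tycho-eclipserun-plugin configuration. The plugin coordinates and the -D property names are assumptions on my part, based on the properties documented inside the Ant file; check the example project for the real configuration:

```xml
<plugin>
	<groupId>org.eclipse.tycho.extras</groupId>
	<artifactId>tycho-eclipserun-plugin</artifactId>
	<configuration>
		<!-- IMPORTANT: in a real pom.xml the appArgLine must be a single line -->
		<appArgLine>-application org.eclipse.ant.core.antRunner
			-buildfile packaging-p2composite.ant p2.composite.add
			-Dsite.label="${site.label}"
			-DunqualifiedVersion=${unqualifiedVersion}
			-DbuildQualifier=${buildQualifier}
			-Dchild.repository.path.prefix="../../releases/"</appArgLine>
	</configuration>
</plugin>
```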

The file packaging-p2composite.ant is similar to the one I showed in a previous post. We use the p2 Ant tasks for adding a child to a composite p2 repository (recall that if there is no existing composite repository, the task for adding a child also creates a new compositeContent.xml/Artifacts.xml; if a child with the same name already exists, the Ant task will not add anything new).

<?xml version="1.0"?>
<project name="project">

	<target name="getMajorMinorVersion">
		<script language="javascript">
			// compute major.minor from the unqualified version
			buildnumber = project.getProperty("unqualifiedVersion");
			index = buildnumber.lastIndexOf(".");
			counter = buildnumber.substring(0, index);
			project.setProperty("majorMinorVersion", counter);
		</script>
	</target>

	<target name="test_getMajorMinor" depends="getMajorMinorVersion">
		<echo message="majorMinorVersion: ${majorMinorVersion}" />
	</target>

	<!--
		site.label						The name/title/label of the created composite site
		unqualifiedVersion				The version without any qualifier replacement
		buildQualifier					The build qualifier
		child.repository.path.prefix	The path prefix to access the actual p2 repo from the
										child repo, e.g., if child repo is in /updates/1.0 and
										the p2 repo is in /releases/1.0.0.something then this property
										should be "../../releases/"
	-->
	<target name="" depends="getMajorMinorVersion">
		<property name="full.version" value="${unqualifiedVersion}.${buildQualifier}" />

		<property name="" value="${site.label} ${majorMinorVersion}" />
		<property name="" value="${site.label} All Versions" />

		<!-- composite.base.dir	The base directory for the local composite metadata,
			e.g., from Maven, ${} -->
		<property name="composite.base.dir" value="target"/>

		<property name="" location="${composite.base.dir}/composite-main" />
		<property name="" location="${composite.base.dir}/composite-child" />

		<property name="child.repository" value="${child.repository.path.prefix}${full.version}" />
	</target>

	<target name="p2.composite.add" depends="">
		<add.composite.repository.internal composite.repository.location="${}""${}" composite.repository.child="${majorMinorVersion}" />
		<add.composite.repository.internal composite.repository.location="${}""${}" composite.repository.child="${child.repository}" />
	</target>

	<!-- = = = = = = = = = = = = = = = = =
		macrodef: add.composite.repository.internal
		= = = = = = = = = = = = = = = = = -->
	<macrodef name="add.composite.repository.internal">
		<attribute name="composite.repository.location" />
		<attribute name="" />
		<attribute name="composite.repository.child" />
		<sequential>
			<echo message=" " />
			<echo message="Composite repository       : @{composite.repository.location}" />
			<echo message="Composite name             : @{}" />
			<echo message="Adding child repository    : @{composite.repository.child}" />

			<p2.composite.repository>
				<repository compressed="false" location="@{composite.repository.location}" name="@{}" />
				<add>
					<repository location="@{composite.repository.child}" />
				</add>
			</p2.composite.repository>

			<echo file="@{composite.repository.location}/p2.index">version=1
metadata.repository.factory.order=compositeContent.xml,\!
artifact.repository.factory.order=compositeArtifacts.xml,\!
</echo>
		</sequential>
	</macrodef>

</project>

Removing Released Artifacts

In case you want to remove an existing released version: since we upload the p2 repository and the zipped version as part of a package’s version, we just need to delete that version using the Bintray Web UI. However, this procedure will never remove the metadata, i.e., artifacts.jar and content.jar. The same holds if you want to remove the composite metadata. For these metadata files you need to use the REST API, e.g., with curl. I put a shell script in the example to quickly remove all the metadata files from a given remote Bintray directory.
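Such a script could look roughly like this (a dry-run sketch with hypothetical owner, repository, and remote path; it only echoes the curl commands, and the DELETE content endpoint of the Bintray REST API is assumed):

```shell
#!/bin/sh
# Dry-run sketch: print the curl commands that would delete the p2 metadata
# files from one remote Bintray directory (hypothetical owner/repo/path).
delete_metadata_cmds() {
	owner=$1; repo=$2; remote_dir=$3
	for f in artifacts.jar content.jar compositeArtifacts.xml compositeContent.xml; do
		echo "curl -XDELETE -u \$BINTRAY_USER:\$BINTRAY_APIKEY$owner/$repo/$remote_dir/$f"
	done
}

delete_metadata_cmds example-owner example-repo releases/1.0.0.v20160129-1616
```

To actually perform the deletions, you would pipe each printed command to a shell (or drop the echo) after double-checking the paths.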

Performing a Release

For performing a release you just need to run

mvn clean verify -Prelease-composite

on the p2composite.example.tycho project.

Concluding Remarks

As I said, the procedure shown in this example is meant to be easily reusable in your own projects. The Ant files can simply be copied as they are. The same holds for the Maven profile. You only need to specify the Maven properties with the values for your specific project, and adjust your settings.xml with sensitive data like the Bintray API key.

Happy Releasing! :)



CFP: MesosCon 2016

by Chris Aniszczyk at February 10, 2016 03:34 PM

MesosCon is happening again and I’m happy to be involved with the Program Committee. MesosCon 2016 will be in Denver on June 1st-2nd:

The CFP is open until March 9th and the schedule will be announced on April 4th!



TypeFox - The Xtext Company

by Sven Efftinge ( at February 10, 2016 11:14 AM

As many of you have already noticed, we had to find a company name without 'Xtext' in it. Long story short, we finally decided on TypeFox (it still has an 'x', ey? ;-)). We are still all about Xtext and Xtend, of course.

The website is online now and reveals some additional details about what we do. We also have a blog there, which will be updated regularly with useful content around Xtext and Xtend and, more generally, language engineering, code generators, and so on. If you want to be notified about new content, there is a monthly newsletter. It will contain information about the latest blog posts, upcoming Xtext and Xtend releases, and upcoming events. The sign-up form is on the blog page.

Jan also joined this month as a co-founder, and five more friends (Xtext committers) will be joining TypeFox in the coming weeks.

Finally, I want to say thank you for all the good wishes and for the trust of the partners who already do business with us. It has all started very well, and I am very grateful for that.

How do you like our logo?


Announcing Extras for Eclipse

by Rüdiger Herrmann at February 10, 2016 08:00 AM


Over the last months I wrote some extensions for the Eclipse IDE that I found were missing and could be implemented with reasonable effort.

The outcome is Extras for Eclipse, a collection of small extensions for the Eclipse IDE which includes a launch dialog, a JUnit status bar, a launch configuration housekeeper, and little helpers to accomplish recurring tasks with keyboard shortcuts.

I have developed them over the last months and they have proven useful in my daily work, so I thought they might be useful for others, too. In this post I will walk through each of the features briefly.

The most noteworthy Extras for Eclipse at a glance:

  • A JUnit progress meter in the main status bar and a key binding to open the JUnit View
  • A dialog to quickly start or edit arbitrary launch configurations, think of Open Resource for launch configurations
  • An option to remove generated launch configurations when they are no longer needed
  • A key binding for the Open With… menu to choose the editor for the selected file
  • And yet another key binding to delete the currently edited file

The listed components can be installed separately so that you are free to choose whatever fits your needs.


Extras for Eclipse is available from the Eclipse Marketplace. If you want to give it a try, just drag the icon to your running Eclipse:

Drag to your running Eclipse workspace to install Extras for Eclipse

If you prefer, you can also install Extras for Eclipse directly from this software repository:

In the Eclipse main menu, select Help > Install New Software…, then enter the URL above and select Extras for the Eclipse IDE. Expand the item to select only certain features for installation.

Please note that a JRE 8 or later and Eclipse Luna (4.4) or later are required to run this software.

Extras for JUnit

If you are using the JUnit View like Frank does and have it minimized in a fast view, the only progress indicator is the animated view icon. While it basically provides the desired information – progress and failure/success – I found it a little too subtle. But having the JUnit View always open is quite a waste of space.

That brought me to the JUnit Status Bar: a progress meter in the main status bar that mirrors the bar of the JUnit View but saves the space of the entire view that is only useful when diagnosing test failure.

Extras for Eclipse: JUnit Status Bar

Together with a key binding (Alt+Shift+Q U) to open the JUnit View when needed this made running tests even a little more convenient.

If you would like to hide the status bar, go to the Java > JUnit > JUnit Status Bar preference page. Note that due to a bug in Eclipse Mars (4.5) and later you need to resize the workbench window afterwards to make the change appear.

Launching with Keys

When working on Eclipse plug-ins I usually run tests with Ctrl+R or Alt+Shift+D T/P, but from time to time I also launch the application for a reality check. And then Ctrl+F11/F11 to relaunch the previously launched application often isn’t the right choice. Nor does Launch the selected resource always pick the right one.

Hence, leave the keyboard and grab the mouse, go to the main toolbar, find the Run/Debug tool item and select the appropriate launch configuration, if it is still among the most recently used ones. Otherwise open the launch dialog, …

Therefore, I was looking for quicker access and came up with the Start Launch Configuration dialog. It works much the same as Open Type or Open Resource: A filtered list shows all available launch configurations. Favorites and recently used launch configurations are listed first. With Return the selected launch configuration(s) can be debugged. A different launch mode (i.e. run or profile) can be chosen from the drop-down menu next to the filter text.

And most important, there is a key binding: Alt+F11. Or if you prefer Ctrl+3, the command is named Open Launch Dialog.

The screenshot below shows the Start Launch Configuration dialog in action:

Extras for Eclipse: Start Launch Configuration Dialog

If there are launch configurations that have currently running instances, their image is decorated with a running (Extras for Eclipse: Running Launch Configuration Decorator) symbol. And if you need to modify a launch configuration prior to running it, the Edit button gives you quick access to its properties.

Launch Configuration Housekeeping

With launch configurations there is another annoyance: each test that is run generates a launch configuration, and if you develop test-driven, you will end up with many launch configurations – so many that they obscure the two or so manually created master test suites that actually matter.

That brought me to the idea of removing generated launch configurations when they are no longer needed. “No longer needed” currently means that another launch configuration is run. This still gives you the ability to re-run an application with Ctrl+F11/F11 but limits the number of launch configurations to those that are relevant.

The term generated applies to all launch configurations that aren’t explicitly created in the launch configuration dialog, such as those created through the Run As > JUnit Test or Debug As > Java Application commands.

With the Run/Debug > Launching > Clean Up preference page, you can specify which launch configuration types should be considered when cleaning up.

Extras for Eclipse: Clean up Launch Configurations Preference Page

Open With… Key Binding

Extras for Eclipse: Open With... Key Binding

Sometimes I had the need to open a file with a different editor than the default one.

To work around the broken PDE target definition editor, for example, I used to open target definition files with a text editor. While this particular editor has improved since the Mars release, I still have occasional use for Open With….

As an extension of the F3 or Return key that opens the selected file in the respective default editor, there is now Shift+F3 that shows the Open With… menu to choose an alternative editor.

Delete the Currently Edited File

For a while now I have noticed that I use the Package Explorer less and less. At times it even gets in the way and might as well be a good candidate for a fast view.

I find the Package Explorer, or navigation views more generally, a useful tool for getting to know the structure of a software project. But once you are familiar with the project, the view adds less and less value while occupying considerable screen real estate.

Extras for Eclipse: Delete File in Editor

To navigate the sources I mostly use Open Type (Ctrl+Shift+T), Open Declaration (F3), the Quick Type Hierarchy (Ctrl+T), or the editor's breadcrumb bar.

But to delete a file I have to go back to a navigation view, select the resource in question and hit the Del key.

This detour can be spared with yet another key binding, Alt+Del, which invokes the regular delete operation so that the behavior is the same as if the edited file were deleted from one of the navigation views.

Concluding Extras for Eclipse

This article describes the features that I found most noteworthy. For a complete list, please visit the project page.

For some of the extensions introduced here I have opened enhancement requests at Eclipse (see this Bugzilla query). If there is enough interest and support, I will eventually contribute them to the respective Eclipse projects.

I prefer an IDE that is as slim as possible, consisting only of those plug-ins that are actually necessary for the task at hand (mostly Java Tools, Plug-in Development Tools, MoreUnit, Maven integration, Git integration, EclEmma, and of course Extras for Eclipse), and in this environment the components have proven stable.

Therefore I would be grateful for hints if an Extras for Eclipse feature collides with plug-ins in a different setup. If you find a bug or would like to propose an enhancement, please file an issue here:

Or if you even want to contribute to Extras for Eclipse, please read the Contributing Guidelines, which list the few rules for contributing and explain how to set up the development environment.

The post Announcing Extras for Eclipse appeared first on Code Affine.

by Rüdiger Herrmann at February 10, 2016 08:00 AM

Using the TypeScript LanguageService to build a JS/TypeScript IDE in Java

by Tom Schindl at February 09, 2016 06:34 PM

Yesterday I blogged about my endeavors in loading and using the TypeScript Language Service (on V8 and Nashorn) and calling it from Java to get things like an outline, auto-completions, and so on.

Today I connected these headless pieces to my JavaFX-Editor-Framework. The result can be seen in the video below.

To make the TypeScript LanguageService feel responsible not only for TypeScript files but also for JavaScript, I used the 1.8 beta.

As you'll notice, the JS support is not yet at a stage where it can replace e.g. Tern, but I guess things are going to improve in the future.

by Tom Schindl at February 09, 2016 06:34 PM

JSDT Project Structure

by psuzzi at February 09, 2016 11:07 AM

This post explains the JSDT project structure; it is the result of my direct experience.

This page also serves as a basis for discussion of JSDT development. Kudos to all who comment and leave constructive feedback, here and on the JSDT Bugzilla. [486037477020]

By reading this article you will understand where the JSDT projects are located, which git repositories they belong to, and how to get the source code to work with one of those projects, i.e. JSDT Editor, Nodejs, Bower, JSON Editor, Gulp, Grunt, HTML>js, JSP>js, etc.

JSDT Repositories

The image below represents the current structure of JSDT repositories.

Almost all of the links to the above source code repositories are accessible via the page.


  • eclipse.platform.runtime : [gerrit, browse repo] source repo required for quieting some IDE validation at compile time.
  • webtools : [browse repo] contains the website for all the webtools projects. It's big, but needed to update the project page.
  • webtools.jsdt : [gerrit, browse repo, github] source repo containing the most up-to-date code for JSDT.
  • webtools.jsdt.[core, debug, tests] : old source repos containing outdated code (last commit: 2013).
  • webtools.sourceediting : [gerrit, browse repo] source repo for JSDT Web and JSON.

Note: the Gerrit [Review With Gerrit] icons are linking to the repos accepting Gerrit contributions, so anybody can easily contribute.

Early Project Structure

According to older documentation, JSDT was split into four areas: Core, Debug, Tests and Web. The source of the first three was directly accessible under project source control, while the latter, because of its wider extent, was part of the parent project.

Dissecting the old jsdt_2010.psf, we see the original project structure.


Current Project Structure

The current project structure is based on the old one, but with additional projects. To simplify, I split the projects into four sets:

  • JSDT Core, Debug, Docs (& Tests): under the webtools.jsdt source repository; contains content similar to the old project.
  • JSDT.js : also under the webtools.jsdt source repo, but contains the Node.js tooling.
  • wst.json : under webtools.sourceediting; contains the projects needed to parse and edit JSON.
  • wst.jsdt.web : also under the webtools.sourceediting repo; contains the projects to include JSDT in web editors.

The image below represents simultaneously all the above project sets, as visible in my workspace.


A complete Project Set

Here you can find the complete project set, containing the four project sets above, plus the Platform dependencies and the webtools project.


After importing, you should see the project sets below.

The full list of projects in my workspace is visible in the image below.


JSDT Development

At this point, to start with JSDT development, you will need to:

  1. Clone the needed repositories to your local machine
  2. Set up the development environment, as explained in my previous article
  3. Import the referenced project set
  4. Launch the inner Eclipse with the source plug-ins you want


Your comments and suggestions are very welcome. Thanks for your feedback!


by psuzzi at February 09, 2016 11:07 AM

OSGi – bundles / fragments / dependencies

by Dirk Fauth at February 09, 2016 08:02 AM

In the last weeks I needed to look at several issues regarding OSGi dependencies in different products. A lot of these issues were IMHO related to wrong usage of OSGi bundle fragments. As I needed to search for various solutions, I will publish my results and my opinion on the usage of fragments in this post. Partly also for myself to remind me about it in the future.

What is a fragment?

As explained in the OSGi Wiki, a fragment is a bundle that makes its contents available to another bundle. And most importantly, a fragment and its host bundle share the same classloader.

Looking at this from a more abstract point of view, a fragment is an extension to an existing bundle. This may be a simplified statement, but keeping it in mind helped me solve several issues.
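As a minimal illustration of this relationship, a fragment declares the bundle it extends via the Fragment-Host manifest header. The symbolic names below are made up for the example:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.log4j.config
Bundle-Version: 1.0.0
Fragment-Host: org.apache.log4j
```

At resolve time the framework attaches the fragment to its host; from then on the host's classloader serves the fragment's content as if it were part of the host itself.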

What are fragments used for?

I have seen a lot of different usage scenarios for fragments. Considering the above statement, some of them were wrong by design. But before explaining when not to use fragments, let's look at when they are the tool of choice. Basically, fragments need to be used whenever a resource needs to be accessible by the classloader of the host bundle. There are several use cases for that, most of which rely on technologies and patterns based on standard Java. For example:

  • Add configuration files to a third-party-plugin
    e.g. provide the logging configuration (log4j.xml for the org.apache.log4j bundle)
  • Add new language files for a resource bundle
    e.g. a properties file for locale fr_FR that needs to be located next to the other properties files by specification
  • Add classes that need to be dynamically loaded by a framework
    e.g. provide a custom logging appender
  • Provide native code
    This can be done in several ways, but more on that shortly.

In short: fragments are used to customize a bundle

When are fragments the wrong tool of choice?

To explain this we will look at the different ways to provide native code as an example.

One way is to use the Bundle-NativeCode manifest header. This way the native code for all environments is packaged in the same bundle. So no fragments here, but it is sometimes not easy to set up; at least I struggled with this approach some years ago.
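For illustration, such a header might look like this (library paths and names invented; the osname/processor attributes are from the OSGi specification):

```
Bundle-NativeCode: lib/linux/libexample.so;
 osname=Linux; processor=x86-64,
 lib/win32/example.dll;
 osname=Win32; processor=x86-64
```

The framework selects the clause matching the running platform, which is also why getting the clauses right for every supported environment can be fiddly.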

A more common approach is to use fragments. For every supported platform there is a corresponding fragment that contains the platform-specific native library. The host bundle, on the other side, typically contains the Java code that loads the native library and provides the interface to access it (e.g. via JNI). This scenario is IMHO a good example of using fragments to provide native code: the fragments only extend the host bundle without exposing anything publicly.

Another approach is the SWT approach. The difference from the above scenario is that the host bundle org.eclipse.swt is an almost empty bundle that only contains the OSGi meta-information in its MANIFEST.MF. The native libraries as well as the corresponding Java code are supplied via platform-dependent fragments. Although SWT is often referred to as a reference for dealing with native libraries in OSGi, I think that approach is wrong.

To elaborate why I think the approach org.eclipse.swt is using is wrong, we will have a look at a small example.

  1. Create a host bundle in Eclipse via File -> New -> Plug-in Project and name it. Ensure that you do not create an Activator or anything else.
  2. Create a fragment for that host bundle via File -> New -> Other -> Plug-in Development -> Fragment Project and name it. Specify the host bundle on the second wizard page.
  3. Create a package in the fragment project.
  4. Create the following simple class (yes, it has nothing to do with native code in fragments, but it demonstrates the same issues):
    public class MyHelper {
    	public static void doSomething() {
    		System.out.println("do something");
    	}
    }
So far, so good. Now let’s consume the helper class.

  1. Create a new bundle via File -> New -> Plug-in Project and name it org.fipro.consumer. This time let the wizard create an Activator.
  2. In Activator#start(BundleContext) try to call MyHelper#doSomething()

Now the fun begins. Of course, MyHelper cannot be resolved at this point; we first need to make the package consumable in OSGi. This can be done in the fragment or in the host bundle. I personally tend to configure Export-Package in the bundle/fragment where the package is located, so we add the Export-Package manifest header to the fragment. To do this, open the fragment's MANIFEST.MF file, switch to the Runtime tab and click Add… to add the package.

Note: As a fragment is an extension to a bundle, you can also specify the Export-Package header for the fragment's packages in the host bundle; org.eclipse.swt is configured this way. But notice that the fragment packages are not automatically resolved by the PDE Manifest Editor, so you need to add the manifest header manually.

After that the package can be consumed by other bundles. Open the file org.fipro.consumer/META-INF/MANIFEST.MF and switch to the Dependencies tab. At this point it doesn't matter whether you use Required Plug-ins or Imported Packages, although Import-Package should always be the preferred way, as we will see shortly.

Although the manifest headers are configured correctly, the MyHelper class cannot be resolved. The reason for this is the PDE tooling: it needs additional information to construct proper class paths for building. This information is provided by adding the following line to the manifest file of the host bundle:

Eclipse-ExtensibleAPI: true

After this additional header is added, the compilation errors are gone.

Note: This additional manifest header is not necessary and not used at runtime. At runtime a fragment is always allowed to add additional packages, classes and resources to the API of the host.

After the compilation errors are gone in our workspace and the application runs fine, let's try to build it with Maven Tycho. I don't want to walk through the whole process of setting up a Tycho build, so let's simply assume you have a running Tycho build and include the three projects in that build. Using POM-less Tycho this simply means adding the three projects to the modules section of the build.

You can find further information on Tycho here:
Eclipse Tycho for building Eclipse Plug-ins and RCP applications
POM-less Tycho builds for structured environments

Running the build will fail with a compilation failure: the Activator class does not compile because the import cannot be resolved. Similar to PDE, Tycho is not aware of the build dependency on the fragment. This can be solved by adding an extra. entry to the build.properties of the org.fipro.consumer project:

extra.. = platform:/fragment/

See the Plug-in Development Environment Guide for further information about build configuration.

After that entry is added to the build.properties of the consumer bundle, the Tycho build succeeds as well.

What is wrong with the above?

At first sight it is quite obvious what is wrong with the above solution: you need to configure the tooling in several places to make compilation and the build work. These workarounds even introduce dependencies where there shouldn't be any. In the above example this might not be a big issue, but think about platform-dependent fragments: do you really want to configure a build dependency on a win32.win32.x86 fragment on the consumer side?

The above scenario even introduces issues for installations with p2. Using the empty host with implementations in the fragments forces you to ensure that at least (or exactly) one fragment is installed together with the host, which is another workaround in my opinion (see Bug 361901 for further information).

OSGi purists will say that the main issue is located in PDE tooling and Tycho, because the build dependencies should be kept as close as possible to the runtime dependencies (see for example here), and that using tools like Bndtools you don't need these workarounds. At first I agree with that. But unfortunately it is not possible (or only hard to achieve) to use Bndtools for Eclipse application development, mainly because plain OSGi knows nothing about Eclipse features, applications and products. Therefore the feature-based update mechanism of p2 is not usable either. But I don't want to start the PDE vs. Bndtools discussion; that is worth another post (or series of posts).

In my opinion, the real issue in the above scenario, and therefore also in org.eclipse.swt, is the wrong usage of fragments. Why is there a host bundle that contains only the OSGi meta-information? After thinking about this for a while, I realized that the only reason can be laziness: users want to use Require-Bundle instead of configuring the several needed Import-Package entries. IMHO this is the only reason the org.eclipse.swt bundle with its multiple platform-dependent fragments exists.

Let’s try to think about possible changes. Make every platform dependent fragment a bundle and configure the Export-Package manifest header for every bundle. That’s it on the provider side. If you wonder about the Eclipse-PlatformFilter manifest header, that works for bundles aswell as for fragments. So we don’t loose anything here. On the consumer side we need to ensure that Import-Package is used instead of Require-Bundle. This way we declare dependencies on the functionality, not the bundle where the functionality originated. That’s all! Using this approach, the workarounds mentioned above can be removed. PDE and Tycho are working as intended, as they can simply resolve bundle dependencies. I have to admit that I’m not sure about p2 regarding the platform dependent bundles. Would need to check this separately.


Having a look at the two initial statements about fragments

  • a fragment is an extension to an existing bundle
  • fragments are used to customize a bundle

it is IMHO wrong to make API public available from a fragment. These statements could even be modified to become the following:

  • a fragment is an optional extension to an existing bundle

Having that statement in mind, things get even clearer when thinking about fragments. Here is another example to strengthen my statement. Suppose you have a host bundle that already exports a package, and a fragment that adds an additional public class to that package, and that class is used in a consumer bundle. Using Bndtools, or the workarounds for PDE and Tycho shown above, this compiles and builds fine. But what if the fragment is not deployed or started at runtime? Since there is no constraint on the consumer bundle that would identify the missing fragment, the consumer bundle will start, and you will get a ClassNotFoundException at runtime.
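The failure mode can be simulated in plain Java with a reflective lookup; the class name below is hypothetical and stands in for a class that only the (absent) fragment would provide:

```java
public class MissingFragmentDemo {
    public static void main(String[] args) {
        try {
            // The consumer compiled against this class, but without the
            // fragment installed it is simply not visible to the host's
            // classloader at runtime.
            Class.forName("org.example.fragment.MyHelper");
            System.out.println("helper class found");
        } catch (ClassNotFoundException e) {
            System.out.println("ClassNotFoundException at runtime");
        }
    }
}
```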

Personally, I think that every time a direct dependency to a fragment is introduced, something is wrong.

There might be exceptions to that rule. One could be a custom logging appender that needs to be accessible in other places, e.g. for programmatic configuration. As the logging appender needs to be in the same classloader as the logging framework (e.g. org.apache.log4j), it needs to be provided via a fragment. And to access it programmatically, a direct dependency to the fragment seems necessary. But honestly, even in such a case a direct dependency to the fragment can be avoided with a good module design. Such a design could, for example, make the appender an OSGi service: the service interface would be defined in a separate API bundle and the programmatic access would be implemented against the service interface. Therefore no direct dependency to the fragment would be necessary.
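A plain-Java sketch of that design (all names invented): the consumer is coded only against the service interface from the API bundle, so no direct dependency on the fragment arises. In OSGi the concrete appender would be injected from the service registry rather than constructed directly as in main below:

```java
// Would live in a separate API bundle.
interface LogAppender {
    void append(String message);
}

// Would live in the fragment attached to the logging framework and be
// registered as an OSGi service (e.g. via Declarative Services).
class CustomAppender implements LogAppender {
    public void append(String message) {
        System.out.println("appender received: " + message);
    }
}

public class AppenderConfigurator {
    // Only the interface is known here; the implementation's origin
    // (bundle or fragment) is invisible to this class.
    private final LogAppender appender;

    AppenderConfigurator(LogAppender appender) {
        this.appender = appender;
    }

    void configure() {
        appender.append("programmatic configuration");
    }

    public static void main(String[] args) {
        new AppenderConfigurator(new CustomAppender()).configure();
    }
}
```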

As I struggled for several days searching for solutions to fragment dependency issues, I hope this post can help others solve such issues. Basically, my solution is to get rid of all fragments that export API and either make them separate bundles or let them provide their API via services.

If someone with deeper knowledge of OSGi ever comes by this post and has comments or remarks about my statements, please let me know. I'm always happy to learn something new or gain new insights.

by Dirk Fauth at February 09, 2016 08:02 AM

The buzz around Eclipse Che

by Ian Skerrett at February 08, 2016 10:56 PM

Just over two weeks ago the Eclipse Che project released a beta version of their Che 4.0 release. We published an article introducing Eclipse Che in our Eclipse Newsletter so readers can learn more about the highlights of Che.

The feedback in the community has been pretty exciting to watch. On twitter, people are certainly creating a buzz about the future of the IDE.


InfoWorld is calling Eclipse Che the launch of the cloud IDE revolution.

The Eclipse Che GitHub repo has 1500 stars and 200 forks.

There have been over 100,000 downloads of the Che beta so people are trying it out.

The buzz around Eclipse Che is certainly growing. At EclipseCon in March you will be able to experience Eclipse Che first hand, including Tyler Jewell's keynote address on the Evolution and Future of the IDE. If you are interested in the future of cloud IDEs, plan to attend EclipseCon.




by Ian Skerrett at February 08, 2016 10:56 PM

5 open source IoT projects to watch in 2016

by Benjamin Cabé at February 08, 2016 09:57 PM

The IoT industry is slowly but steadily moving from a world of siloed, proprietary solutions, to embracing more and more open standards and open source technologies.
What’s more, the open source projects for IoT are becoming more and more integrated, and you can now find one-stop-shop open source solutions for things like programming your IoT micro controller, or deploying a scalable IoT broker in a cloud environment.

Here are the Top 5 Open Source IoT projects that you should really be watching this year.

  • #1 – The Things Network

LP-WAN technologies are going to be a hot topic in 2016. It's unclear who will win, but the availability of an open-source ecosystem around them is going to be key. The Things Network is a crowdsourced, worldwide community for bringing LoRaWAN to the masses. Most of their backend is open source and on GitHub.


What about you? What are the projects you think are going to make a difference in the months to come?

In case you missed it, the upcoming IoT Summit, co-located with EclipseCon North America, is a great opportunity for you to learn about some of the projects mentioned above, so make sure to check it out!

by Benjamin Cabé at February 08, 2016 09:57 PM

JavaScript Performance V8 vs Nashorn (for Typescript Language Service)

by Tom Schindl at February 08, 2016 02:18 PM

On the weekend I’ve worked on my API to interface with the Typescript language service from my Java code.

While the initial version I developed some months ago used the "tsserver" to communicate with the LanguageService, I decided to rewrite that and interface with the service directly (in memory or through an extra process).

For the in-memory version I implemented two possible ways to load the JavaScript sources and call them:

  • Nashorn
  • V8(with the help of j2v8)

I already expected Nashorn to be slower than V8, but after implementing a small (non-scientific) performance sample, the numbers show that Nashorn is between 2 and 4 times slower than V8 (there is only one call that is faster in Nashorn).

The sample code looks like this:

public static void main(String[] args) {
  try {
    executeTests(timeit("Boostrap", () -> new V8Dispatcher()));
    executeTests(timeit("Nashorn", () -> new NashornDispatcher()));
  } catch (Throwable e) {
    e.printStackTrace();
  }
}

private static void executeTests(Dispatcher dispatcher) throws Exception {
  timeit("Project", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "createProject", String.class, "MyProject").get());

  timeit("File", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "addFile", String.class, "p_0", DispatcherPerformance.class.getResource("sample.ts")).get());

  timeit("File", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "addFile", String.class, "p_0", DispatcherPerformance.class.getResource("sample2.ts")).get());

  timeit("Outline", () -> dispatcher.sendMultiValueRequest(
    "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class, "p_0", "f_0").get());

  timeit("Outline", () -> dispatcher.sendMultiValueRequest(
    "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class, "p_0", "f_1").get());
}
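The timeit helper is not shown in the post; a minimal sketch of what such a helper might look like (my own assumption, not the author's code) is:

```java
import java.util.function.Supplier;

public class Timing {
    // Runs the supplier, prints the elapsed wall-clock time in
    // milliseconds under the given label, and returns the result.
    static <T> T timeit(String label, Supplier<T> task) {
        long start = System.nanoTime();
        T result = task.get();
        System.out.println(label + " : " + (System.nanoTime() - start) / 1_000_000);
        return result;
    }

    public static void main(String[] args) {
        // Example usage with a cheap computation standing in for a dispatcher call.
        int sum = timeit("Sum", () -> {
            int s = 0;
            for (int i = 1; i <= 1000; i++) s += i;
            return s;
        });
        System.out.println("result = " + sum);
    }
}
```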

Running this sample produces the following numbers:

Boostrap : 386
Project : 72
File : 1
File : 0
Outline : 40
Outline : 10

Nashorn : 4061
Project : 45
File : 29
File : 2
Outline : 824
Outline : 39

The important numbers to compare are:

  • Bootstrap: ~400ms vs ~4000ms
  • 2nd Outline: ~10ms vs ~40ms

So performance indicates that the service should go with j2v8, but requiring that as a hard dependency has the following disadvantages:

  • you need to ship different native binaries for each OS you want to run on
  • you need to ship v8 which might/or might not be a problem

So the internal strategy is: if j2v8 is available we use V8, and if not we fall back to the slower Nashorn. This is a strategy I would probably recommend for your own projects as well.
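The availability check itself can be a simple classpath probe. A sketch of how such a fallback decision might be implemented (my assumption, using the real j2v8 entry-point class com.eclipsesource.v8.V8):

```java
public class EngineChooser {
    // Returns "v8" when j2v8 is on the classpath, "nashorn" otherwise.
    static String chooseEngine() {
        try {
            Class.forName("com.eclipsesource.v8.V8");
            return "v8";
        } catch (ClassNotFoundException e) {
            return "nashorn";
        }
    }

    public static void main(String[] args) {
        System.out.println("engine: " + chooseEngine());
    }
}
```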

If there are any Nashorn experts around, feel free to help me fix my implementation.

by Tom Schindl at February 08, 2016 02:18 PM

Branch by Abstraction and OSGi

by David Bosschaert at February 08, 2016 10:02 AM

Inspired by my friend Philipp Suter, who pointed me at this Wired article relating to Martin Fowler's Branch by Abstraction, I was thinking: how would this work in an OSGi context?

Leaving aside the remote nature of the problem for the moment, let's focus on the pure API aspect here. Whether remote or not is really orthogonal... I'll work through this with example code that can be found here:

Let's say you have an implementation to compute prime numbers:
public class PrimeNumbers {
  public int nextPrime(int n) {
    // computes next prime after n - see details
    return p;
  }
}
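The post elides the body of nextPrime. One straightforward trial-division sketch (my own; note the post's actual implementation contains a bug, discussed below, that skips 2):

```java
public class Primes {
    // Returns the smallest prime strictly greater than n.
    static int nextPrime(int n) {
        int candidate = Math.max(n + 1, 2);
        while (!isPrime(candidate)) {
            candidate++;
        }
        return candidate;
    }

    static boolean isPrime(int p) {
        if (p < 2) return false;
        for (int d = 2; (long) d * d <= p; d++) {
            if (p % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("First 10 primes: ");
        for (int i = 0, p = 1; i < 10; i++) {
            if (i > 0) sb.append(", ");
            p = nextPrime(p);
            sb.append(p);
        }
        System.out.println(sb);
    }
}
```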
And a client program that regularly uses the prime number generator. I have chosen a client that runs in a loop to reflect a long-running program, similar to a long-running process communicating with a microservice:
public class PrimeClient {
  private PrimeNumbers primeGenerator = new PrimeNumbers();
  private void start() {
    new Thread(() -> {
      while (true) {
        System.out.print("First 10 primes: ");
        for (int i=0, p=1; i<10; i++) {
          if (i > 0) System.out.print(", ");
          p = primeGenerator.nextPrime(p);
          System.out.print(p);
        }
        System.out.println();
        try { Thread.sleep(1000); } catch (InterruptedException ie) {}
      }
    }).start();
  }

  public static void main(String[] args) {
    new PrimeClient().start();
  }
}
If you have the source code cloned or forked using git, you can run this example easily by checking out the stage1 branch and using Maven:
.../primes> git checkout stage1
.../primes> mvn clean install
... maven output
[INFO] ------------------------------------------------
[INFO] ------------------------------------------------
Then run it from the client submodule:
.../primes/client> mvn exec:java -Dexec.mainClass=\
... maven output
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
... and so on ...
Ok, so our system works. It keeps printing out prime numbers, but as you can see there is a bug in the output: 2 is missing. We also want to replace the implementation in the future with another one. This is what the Branch by Abstraction pattern is about.

In this post I will look at how to do this with OSGi Services. OSGi Services are just POJOs registered in the OSGi Service Registry. OSGi Services are dynamic: they can come and go, and OSGi service consumers react to these changes dynamically, as we'll see. In the following steps we will change the implementation into an OSGi Service. Then we'll update the service at runtime to fix the bug above, without even stopping the service consumer. Finally we'll replace the service implementation with a completely different implementation, again without stopping the client.

Turn the application into OSGi bundles

We'll start by turning the program into an OSGi program that contains 2 bundles: the client bundle and the impl bundle. We'll use the Apache Felix OSGi Framework and also use OSGi Declarative Services which provides a nice dependency injection model to work with OSGi Services.

You can see all this on the git branch called stage2:
.../primes> git checkout stage2
.../primes> mvn clean install
The client code is quite similar to the original client, except that it now contains some annotations to instruct DS to start and stop it. Also, the PrimeNumbers instance is now injected via the @Reference annotation instead of being constructed directly. The greedy policyOption instructs the injector to re-inject if a better match becomes available:
@Component
public class PrimeClient {
  @Reference(policyOption = ReferencePolicyOption.GREEDY)
  private PrimeNumbers primeGenerator;
  private volatile boolean keepRunning = false;

  @Activate
  private void start() {
    keepRunning = true;
    new Thread(() -> {
      while (keepRunning) {
        System.out.print("First 10 primes: ");
        for (int i=0, p=1; i<10; i++) {
          if (i > 0) System.out.print(", ");
          p = primeGenerator.nextPrime(p);
          System.out.print(p);
        }
        System.out.println();
        try { Thread.sleep(1000); } catch (InterruptedException ie) {}
      }
    }).start();
  }

  @Deactivate
  private void stop() {
    keepRunning = false;
  }
}
The prime generator implementation code is the same except for an added annotation. We register the implementation class in the Service Registry so that it can be injected into the client:
@Component(service = PrimeNumbers.class)
public class PrimeNumbers {
  public int nextPrime(int n) {
    // computes next prime after n
    return p;
  }
}
As it's now an OSGi application, we run it in an OSGi framework. I'm using the Apache Felix Framework version 5.4.0, but any other OSGi R6 compliant framework will do.
> java -jar bin/felix.jar
g! start
g! start file:/.../clones/primes/impl/target/impl-0.1.0-SNAPSHOT.jar
g! install file:/.../clones/primes/client/target/client-0.1.0-SNAPSHOT.jar
Now you should have everything installed that you need:
g! lb
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (5.4.0)|5.4.0
    1|Active     |    1|Apache Felix Bundle Repository (2.0.6)|2.0.6
    2|Active     |    1|Apache Felix Gogo Command (0.16.0)|0.16.0
    3|Active     |    1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
    4|Active     |    1|Apache Felix Gogo Shell (0.10.0)|0.10.0
    5|Active     |    1|Apache Felix Declarative Services (2.0.2)|2.0.2
    6|Active     |    1|impl (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT
    7|Installed  |    1|client (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT
We can start the client bundle:
g! start 7
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
... and so on ..
You can now also stop the client:
g! stop 7
Great - our OSGi bundles work :)
Now we'll do what Martin Fowler calls creating the abstraction layer.

Introduce the Abstraction Layer: the OSGi Service

Go to the branch stage3 for the code:
.../primes> git checkout stage3
.../primes> mvn clean install
The abstraction layer for the Branch by Abstraction pattern is provided by an interface that we'll use as a service interface. This interface is in a new maven module that creates the service OSGi bundle.
public interface PrimeNumberService {
    int nextPrime(int n);
}
We'll turn our prime number generator into an OSGi Service. The only difference here is that our PrimeNumbers implementation now implements the PrimeNumberService interface. Also, the @Component annotation does not need to declare the service in this case: as the component implements an interface, it will automatically be registered as a service under that interface:
@Component
public class PrimeNumbers implements PrimeNumberService {
    public int nextPrime(int n) {
      // computes next prime after n
      return p;
    }
}
Run everything in the OSGi framework. The result is still the same but now the client is using the OSGi Service:
g! lb
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (5.4.0)|5.4.0
    1|Active     |    1|Apache Felix Bundle Repository (2.0.6)|2.0.6
    2|Active     |    1|Apache Felix Gogo Command (0.16.0)|0.16.0
    3|Active     |    1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
    4|Active     |    1|Apache Felix Gogo Shell (0.10.0)|0.10.0
    5|Active     |    1|Apache Felix Declarative Services (2.0.2)|2.0.2
    6|Active     |    1|service (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
    7|Active     |    1|impl (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
    8|Resolved  |    1|client (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
g! start 8
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
You can introspect your bundles too and see that the client is indeed wired to the service provided by the service implementation:
g! inspect cap * 7
org.coderthoughts.primes.impl [7] provides:
service; org.coderthoughts.primes.service.PrimeNumberService with properties:
   component.id = 0
   component.name = org.coderthoughts.primes.impl.PrimeNumbers
   service.bundleid = 7
   service.id = 22
   service.scope = bundle
   Used by:
      org.coderthoughts.primes.client [8]
Great - now we can finally fix that annoying bug in the service implementation: that it missed 2 as a prime! While we're doing this we'll just keep the bundles in the framework running...

Fix the bug in the implementation without stopping the client

The prime number generator is fixed in the code in stage4:
.../primes> git checkout stage4
.../primes> mvn clean install
It's a small change to the impl bundle. The service interface and the client remain unchanged. Let's update our running application with the fixed bundle:
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
g! update 7 file:/.../clones/primes/impl/target/impl-1.0.1-SNAPSHOT.jar
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
Great - our service is finally fixed! And notice that the client did not need to be restarted! The DS injection, via the @Reference annotation, handles all of the dynamics for us; the client code simply uses the service as a POJO.
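As a rough sketch (the class and member names here are my own, not taken from the post), the client side of such a DS component boils down to a field that DS injects and plain method calls on it:

```java
// The service contract from earlier in the post.
interface PrimeNumberService {
    int nextPrime(int n);
}

// Sketch of the client. In the real bundle this class would carry the
// @Component annotation and the field the @Reference annotation, so the
// Service Component Runtime injects (and, on service changes, re-injects)
// the service. The code itself just uses the field as a POJO.
class PrimeClient {
    PrimeNumberService primes; // injected via @Reference by DS

    String firstTenPrimes() {
        StringBuilder sb = new StringBuilder("First 10 primes:");
        int p = 1;
        for (int i = 0; i < 10; i++) {
            p = primes.nextPrime(p);
            sb.append(i == 0 ? " " : ", ").append(p);
        }
        return sb.toString();
    }
}
```

Because the client never touches OSGi APIs directly, swapping the backing service requires no change to this code at all.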

The branch: change to an entirely different service implementation without client restart

Being able to fix a service without even restarting its users is already immensely useful, but we can go even further. I can write an entirely new and different service implementation and migrate the client to use that without restarting the client, using the same mechanism. 

This code is on the branch stage5 and contains a new bundle impl2 that provides an implementation of the PrimeNumberService that always returns 1. 
.../primes> git checkout stage5
.../primes> mvn clean install
While the impl2 implementation obviously does not produce correct prime numbers, it does show how you can completely change the implementation. In the real world, a totally different implementation could work with a different back-end, use a new algorithm, be a service migrated from a different department, etc.

Alternatively, you could write a façade service implementation that round-robins across a number of back-end services, or that selects a backing service based on the features the client should be getting.
In the end, the solution is always an alternative service in the service registry that the client can dynamically switch to.
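To make the round-robin idea concrete, here is a hypothetical façade (all names are mine, not from the post). In OSGi it would itself be registered as a PrimeNumberService, for example with a higher service.ranking, so the client binds to it without any change:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// The service contract from earlier in the post.
interface PrimeNumberService {
    int nextPrime(int n);
}

// Hypothetical façade that spreads calls across several backing services.
class RoundRobinPrimeService implements PrimeNumberService {
    private final List<PrimeNumberService> backends;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinPrimeService(List<PrimeNumberService> backends) {
        this.backends = backends;
    }

    @Override
    public int nextPrime(int n) {
        // Pick the next backend, wrapping around; floorMod keeps the
        // index non-negative even after the counter overflows.
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i).nextPrime(n);
    }
}
```

In a real deployment the list of backends would itself be populated dynamically, for instance via a multiple-cardinality @Reference.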

So let's start that new service implementation:
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
g! start file:/.../clones/primes/impl2/target/impl2-1.0.0-SNAPSHOT.jar
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
g! stop 7
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
Above you can see that when you install and start the new bundle, initially nothing happens. At this point both services are registered at the same time. The client is still bound to the original service, as it's still there and there is no reason to rebind; the new service is no better a match than the original. But when the bundle that provides the initial service (bundle 7) is stopped, the client switches over to the implementation that always returns 1. This switchover can happen at any point, even halfway through the production of the list, so you might even be lucky enough to see something like:
First 10 primes: 2, 3, 5, 7, 11, 13, 1, 1, 1, 1
I hope I have shown that OSGi services provide an excellent mechanism for implementing the Branch by Abstraction pattern, and even make it possible to switch between suppliers without stopping the client!

In the next post I'll show how we can add aspects to our services, still without modifying or even restarting the client. These can be useful for debugging, tracking or measuring how a service is used.

PS - Oh, and on the remote thing: this will work just as well locally or remotely. Use OSGi Remote Services to turn your local service into a remote one... For available Remote Services implementations see

With thanks to Carsten Ziegeler for reviewing and providing additional ideas.

by David Bosschaert at February 08, 2016 10:02 AM

Bug 75981 is fixed!

February 05, 2016 11:00 PM

Like many of my Eclipse stories, it starts during a coffee break.

  • Have you seen the new TODO template I have configured for our project?

  • Yes. It is nice…​


  • But I hate having to set the date manually.

  • I know but it is not possible with Eclipse.

  • …​

A quick search on Google pointed me to Bug 75981. I was not the only one looking for a solution to this issue:

By analyzing the Bugzilla history, I noticed that two contributors had already started to work on this (a long time ago) and that the feedback on the latest patch never got any answer. I reworked the last proposal… and…

I am happy to tell you that you can now do the following:


Short description of the possibilities:

  • As before you can use the date variable with no argument. Example: ${date}

  • You can use the variable with additional arguments. In this case you will need to name the variable (since you are not reusing the date somewhere else, the name of the variable doesn’t matter). Example: ${mydate:date}

    • The first parameter is the date format. Example: ${d:date('yyyy-MM-dd')}

    • The second parameter is the locale. Example: ${maDate:date('EEEE dd MMMM yyyy HH:mm:ss Z', 'fr')}

Back to our use case, it now works as expected:
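For example, the TODO template from the conversation above could now be defined like this (the exact template text is illustrative, not taken from the post):

```
// TODO ${user} ${d:date('yyyy-MM-dd')}: ${cursor}
```

Expanding the template inserts the current user name and today's date in ISO format, and leaves the cursor after the colon.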


Do not hesitate to try the feature and report any issue you find. The fix is included in the M5 milestone of Eclipse Neon. You can download this version now here:

This experiment was also a great opportunity for me to measure how the development process at Eclipse has been improved:

  • With Eclipse Oomph (a.k.a. the Eclipse Installer) it is possible to set up the workspace to work on "Platform Text" very quickly

  • With Gerrit it is much easier for me (a simple contributor) to work with the committers of the project (propose a patch, discuss each line, push a new version, rebase on top of HEAD…)

  • With the Maven build, the build is reproducible (I never tried to build the platform with the old PDE Build, but I believe that it was not possible for somebody like me)

Where I spent most of my time:

  • Analyzing the proposed patches and existing feedback in Bugzilla

  • Figuring out how I could add some unit tests (for the existing behaviour and for the new use cases).

This was a great experience for me and I am really happy to have contributed this fix.

February 05, 2016 11:00 PM

Jubula 8.2.2 has been released

February 05, 2016 10:37 AM

Our first official Jubula standalone release of the year is 8.2.2 - and it's got a lot of exciting new features!

From "beta" to "official"

Just before Christmas, we released a Jubula beta version that had some pretty awesome stuff in it (I'll get to what it is in a moment). I was so excited about the new features that we decided to add a couple more that were in progress, then release it as an official version. That version is 8.2.2, and it can now be downloaded from the testing portal.

The highlights

The short version is that everything you've seen in beta releases since the end of October 2015 is now in the release. The longer version is much more exciting.

Copy and paste

I actually never thought I'd write these lines, but we have indeed added copy and paste support to the Jubula ITE. You can now copy Test Cases, Test Suites, Test Steps, and Event Handlers between editors. Why now? Well, I have been listening to the people who have requested this over the years, and we have a new team member who needed a nice starter topic to work on. I still personally think it's evil ;-) - you all know by now that we'd much prefer you to structure tests to be reusable and readable. Nevertheless, we hope you enjoy the new feature :-).

Time reduction when saving

We've moved our completeness checks to a background job, so saving things doesn't block your continuing work as it had done previously.

Set Object Mapping Profile for individual technical names

Our standard object mapping profile is pretty amazing - it's heuristic, so even unnamed components can be located in an application. Sometimes, though, you end up having to remap individual items frequently, so you ask the developers to name them. It is now possible to specify, for individual technical names, that component recognition for that name should be based only on the name. That way, you don't have to name everything, but can use the "Given Names" profile for technical names you know are set. This function is also available in the Client API.

New Test Steps for executing Java methods in the application context

Sometimes you just want to directly call a method you know is available in your application, or for a specific component. The new invoke method actions let you do just that. You can specify the class name and method name, as well as parameters - and you can execute the action either on the application in general or on specific components.

Multi-line comments in editors

There is a new option to add a comment node in the Test Case Editor and Test Suite Editor. The comments are shown directly in the editor, and you can use them to comment following nodes. This is in contrast to the descriptions, which are only shown for a selected node.

New dbtool options

The dbtool, for executing actions directly on the database, has two new options. You can now delete all test result summaries (including details) for a specific time frame or project, and you can delete just the details of test result summaries for a time frame or project.

Oomph setup

In case you missed it, there is also an Oomph setup for Jubula.

As you can see, it's been a busy few months. Development continues, and our next beta release will contain updates to the JaCoCo support and HTML support, amongst other things.

Happy testing!

February 05, 2016 10:37 AM

Vert.x 3.2.1 is released !

by cescoffier at February 05, 2016 12:00 AM

We are pleased to announce the release of Vert.x 3.2.1!

The release contains many bug fixes and a ton of small improvements, such as future composition, improved Ceylon support, Stomp virtual host support, performance improvements… Full release notes can be found here:

Breaking changes are here:

The event bus client using the SockJS bridge is available from NPM, Bower, and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan.

Many thanks to all the committers and community whose contributions made this possible.

Next stop is Vert.x 3.3.0 which we hope to have out in May 2016.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding !

by cescoffier at February 05, 2016 12:00 AM

Presentation: Developing Cloud-native Applications with the Spring Tool Suite

by Kris De Volder, Martin Lippert at February 04, 2016 10:30 PM

Kris De Volder and Martin Lippert show how to work effectively with Spring projects in Eclipse and the Spring Tool Suite (STS). They demo all the latest enhancements in the tools including features like much smarter property file editing, as well as new features in the Eclipse 4.5 (Mars) platform.

By Kris De Volder, Martin Lippert

by Kris De Volder, Martin Lippert at February 04, 2016 10:30 PM

New in the RAP Incubator: Charts with d3 and nvd3

by Ralf Sternberg at February 04, 2016 11:57 AM

Some time ago we created a draft for a Chart widget for RAP, based on the famous D3.js. Together with one of our partners, we decided to carry this work forward and make it available in the RAP Incubator.

Charts Examples

D3 is a very flexible framework to turn data into dynamic charts. Looking at their examples, it’s amazing how many different types of charts there are. Whatever diagram you can think of, it can probably be done with d3.

Such great freedom comes at the price of some complexity. When all you need is a simple bar chart with two axes, you may not want to first dive into the theory of scales, domains, layouts, and selections. D3 offers a lot of tools, but no ready-to-use chart types. Happily, there are charting libraries built on top of d3. In fact, there are dozens of them.

We’ve decided to implement some basic chart widgets for the most common chart types based on nvd3, a library that provides good-looking charts for most common needs. Currently, there is a PieChart, a BarChart, and a LineChart widget with a basic set of properties that will be extended over time. But we also kept the base classes, Chart and NvChart, extensible to allow you to implement your own chart widgets for other d3 or nvd3 chart types with very little effort.

On the application side, creating a simple bar chart is straightforward:

BarChart barChart = new BarChart( parent, SWT.NONE );
barChart.setItems(
  new BarItem( 759.3, "Chrome", blue ),
  new BarItem( 633.5, "Firefox", orange ),
  new BarItem( 384.6, "Edge", green )
);

To keep things lightweight, the items are just data objects, not widgets. Colors can be specified as RGB objects. Since it’s in incubation, the API may still change slightly while we (and you) gather more insight.

The widget is now available in the RAP Incubator; it works with RAP 3.0 and 3.1. We hope you like it and we’re happy to hear what you think.


Leave a Comment. Tagged with eclipse, incubator, new and noteworthy, rap

by Ralf Sternberg at February 04, 2016 11:57 AM

Eclipse Community Awards | Vote for a Deserving Project or Individual

February 03, 2016 09:32 AM

The Eclipse Community Awards voting deadline is Monday, February 8. Vote now!

February 03, 2016 09:32 AM

Beta2 for Eclipse Mars.2

by akazakov at February 02, 2016 04:51 PM

The second Beta of JBoss Tools 4.3.1 and JBoss Developer Studio 9.1.0 for our maintenance Mars release is available.

jbosstools jbdevstudio blog header
Remember that since JBoss Tools 4.3.0, we require Java 8 for installing and using JBoss Tools. We still support developing and running applications using older Java runtimes. See more in the Beta1 blog.

What is New?

Full info is at this page. Some highlights are below.

Eclipse Mars.2

JBoss Tools and JBoss Developer Studio now target the latest Eclipse Mars.2 as the running platform, with many issues fixed compared to the previous Mars.1 release.

OpenShift 3

More than 60 issues targeting OpenShift 3 support have been fixed in this release. The OpenShift 3 integration was introduced as a technology preview feature in JBDS 9.0.0.GA but will graduate to a supported feature in the upcoming JBDS 9.1.0.GA release.

Incremental publishing

The OpenShift 3 server adapter now respects the auto-publish settings as declared in the server editor, giving the user the option to automatically publish on workspace changes, build events, or only when the user requests it. The server adapter is also able to incrementally deploy the server’s associated project with a quick call to rsync, ensuring minimal over-the-wire transfers and a fast turnaround for testing your project.

Support for Java EE projects

Experimental support for Java EE projects (Web and EAR) is now available. When the workspace project associated with the OpenShift 3 server is a Dynamic Web or Enterprise Application project, the server adapter builds an exploded version of the archive in a temporary local directory and replaces the version deployed on the remote OpenShift pod. The Pod Deployment Path is now inferred automatically from the image stream tags on the remote pod. A .dodeploy marker file is created for the remote server to redeploy the module if necessary (for EAP/WildFly servers that support it).

Support for LiveReload

The new tooling includes LiveReload support for OpenShift 3 server adapters. This is accessible from the Show In > Web Browser via LiveReload Server menu. When a file is published to the server adapter, the browser connected to the LiveReload server instance will automatically refresh.


This is particularly effective in conjunction with the Auto Publish mode for the OpenShift 3 server adapters, as all it takes to reload a web resource is saving the file being edited (Ctrl+S, or Cmd+S on Mac).

Simplified OpenShift Explorer view

Previously, the OpenShift 3 resource representation exposed a large amount of unnecessary information about OpenShift. The Explorer view has been simplified and made much more robust, and now focuses on an application-centric view.


Everything that is no longer displayed directly under the OpenShift Explorer is accessible in the Properties view.

Red Hat Container Development Kit server adapter

The Red Hat Container Development Kit (CDK) server adapter now provides menus to quickly access the Docker Explorer and the OpenShift Explorer. Right-click on a running CDK server adapter and select an option in the Show In menu:


Forge Tools

Forge Runtime updated to 3.0.0.Beta3

The included Forge runtime is now 3.0.0.Beta3. Read the official announcement here.

Stack support

Forge now supports choosing a technology stack when creating a project:


In addition to setting up your project, choosing a stack automatically hides some input fields in the existing wizards, such as the JPA Version in the JPA: Setup wizard:

What is Next

We are approaching the final release for our first maintenance update for Eclipse Mars.2. It’s time to polish things up and prepare a release candidate.


Alexey Kazakov

by akazakov at February 02, 2016 04:51 PM

EclipseCon France 2016 | Call for Papers

February 02, 2016 07:15 AM

Submit your talk for EclipseCon France taking place in Toulouse on June 7-9, 2016.

February 02, 2016 07:15 AM