July 29, 2014

Orion 6.0 – Git Page Changes

In an attempt to streamline and improve Git workflows, we have started a redesign of the Git pages for Orion 6.0.

From Many to One

One of our main design goals was to reorganize all of the git functionality from multiple pages down to one. We used to have individual pages or views on pages for:

  • git status
  • git log
  • git repository
  • all git repositories
  • branches
  • tags
  • git configuration options

With the exception of the all git repositories page (which we are in the process of redesigning), all of the other pages have been merged into the current Git page which looks like this (with sections collapsed):


Let’s take a look at what’s in each of these sections. Starting from the very top:

Repo Header

At the very top of the page, we have the name of the repo along with buttons that perform repo-level actions:

  • Apply Patch: brings up a dialog that lets a user select the URL or file that contains the patch
  • Pull: performs a pull on the repo (fetch + rebase)
  • Delete: deletes this repo

Changed Files

This section replaces the old git status page – it will show you the current state of your working directory. With no files changed, the section looks like this:


The top of the section contains the 2 main section buttons:

  • Discard: discard any changes made to selected files and revert the files to their previous committed state
  • Commit: commit selected files with the message entered in the message box

Next comes the message box area, which is used to enter a commit message – notice the 2 checkboxes. Amend Previous Commit can be toggled to amend the previous commit (the previous message will be fetched and displayed in the message box). Prepare for Gerrit will add a Change-Id to the commit message.
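On the command line, these two checkboxes correspond roughly to `git commit --amend` and to adding a Change-Id footer to the message. A minimal sketch in a throwaway repo (the file names, messages, and the Change-Id value below are made-up examples; Orion/Gerrit generate the real Change-Id for you):

```shell
set -e
# Create a throwaway repo to demonstrate (paths/names are examples).
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email demo@example.com
git config user.name demo
echo a > a.txt && git add a.txt
git commit -q -m "first message"
# "Amend Previous Commit" + "Prepare for Gerrit" roughly equals:
git commit -q --amend -m "better message

Change-Id: I0123456789abcdef0123456789abcdef01234567"
git rev-list --count HEAD          # still a single commit
git log -1 --format=%B | head -1   # the amended subject line
```

Amending rewrites the previous commit rather than adding a second one, which is why the commit count stays at one.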

When files are changed, they appear in a table under the message box:


The files can be expanded to reveal the changes made:


Note that you can also choose to view the diffs side by side or open a compare editor to view the diffs.

You can select the files to include in the commit by using the checkboxes (or the select all checkbox if you want to include all files). As you select files, a number counter at the top of the table changes to match your selected file count.

Once you have selected your files (and typed in a message) you can hit Commit to commit them (or Discard to discard the changes made).

When you have files selected, a Show Patch link appears on the same line with the Commit and Discard buttons. This will create a patch for you out of the files that you have selected.
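Selecting files and hitting Show Patch corresponds roughly to staging the chosen files and generating a patch from the staged diff. A sketch in a throwaway repo (file names and contents are examples):

```shell
set -e
# Create a throwaway repo with one committed file (names are examples).
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email demo@example.com
git config user.name demo
echo base > a.txt && git add a.txt && git commit -q -m "base"
echo edit >> a.txt
git add a.txt                       # "select" the changed file
git diff --cached > selected.patch  # "Show Patch": patch of selected files
grep '^+edit' selected.patch        # the patch contains our change
```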



Commits

The commits section has been redesigned with some new sub-sections.


  • Outgoing: this section lists all of the commits that have not been pushed yet to the remote repo
  • Incoming: this section lists all of the commits that have not yet been merged into the local repo
  • History: this section lists all of the commits that the local branch has in common with the remote branch.

Let’s see what happens when we commit the change from the previous section:


The commit shows up in the Outgoing section. Note that there is a new Undo button that will let you undo the commit and restore the changed files back into your working directory (a soft reset).
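The soft reset behind the Undo button can be sketched with plain git commands; it removes the last commit while keeping its changes staged (file names and messages below are examples):

```shell
set -e
# Throwaway repo with two commits (names are examples).
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email demo@example.com
git config user.name demo
echo hello > file.txt
git add file.txt && git commit -q -m "first"
echo change >> file.txt
git add file.txt && git commit -q -m "second"
git reset --soft HEAD~1      # "Undo" the last commit
git rev-list --count HEAD    # back to one commit
git status --porcelain       # file.txt is modified and staged again
```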

Before pushing, you want to be sure that you are caught up to all changes, so you can hit Fetch in the Incoming section to fetch the latest changes. All of the changes that haven’t been merged will show up and you can accept them by hitting Rebase or Merge.


The accepted changes now show up in the History section. You can scroll down to view the history, or hit the “More commits for <Branch>” link to load more entries.


Once you are all up to date with the remote branch, you can Push your changes.

There is also a new Sync button, which is a combination of Fetch/Rebase/Push.
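The Sync sequence can be sketched with plain git commands. The block below runs it against a throwaway local "remote" (all paths and names are examples; the rebase is a no-op here because the remote starts out empty):

```shell
set -e
# Set up a bare "remote" and a working clone (paths are examples).
remote=$(mktemp -d); git init -q --bare "$remote"
work=$(mktemp -d);   git clone -q "$remote" "$work" 2>/dev/null; cd "$work"
git config user.email demo@example.com
git config user.name demo
echo a > a.txt && git add a.txt && git commit -q -m "local work"
branch=$(git symbolic-ref --short HEAD)
git fetch -q origin                                 # 1. fetch remote changes
git rebase -q "origin/$branch" 2>/dev/null || true  # 2. rebase (no-op: remote empty)
git push -q origin "$branch"                        # 3. push the local commits
git ls-remote --heads origin | wc -l                # the remote now has our branch
```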

Changes in a Commit

Expanding any commit in the history will reveal the changed files (this is true for any place a commit is shown on the page):


Branches and Tags

The branches and tags sections have been merged into one section as well. There are top level items for local branches, remotes and tags. The 2 buttons in this section are:

  • New Remote: brings up a dialog to add a new remote
  • New Branch: brings up a dialog to create a new branch


If you expand the local item, you will see all of the local branches you have for the current repo – you can check a branch out from here.
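Checking a branch out from the Local section corresponds to a plain `git checkout`. A sketch in a throwaway repo (branch names are examples):

```shell
set -e
# Throwaway repo with one commit and a second branch (names are examples).
repo=$(mktemp -d); cd "$repo"; git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "initial"
git branch feature
git checkout -q feature          # what the checkout action does
git symbolic-ref --short HEAD    # now on "feature"
```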


If you expand a branch, you will see the log for that branch along with all allowable actions next to each commit.


Each remote can be expanded to reveal all of its remote branches along with applicable actions for each entry.


You can see the log for a remote branch by expanding it.

The tags section follows the same pattern – expand the top level item and you are presented with the list of tags along with all available actions for that tag.



Configuration

The configuration section is now rendered as a table.


More to Come

Although the Git page is in a reasonable state right now, we have some future improvements planned that will truly allow for a one page design. Stay tuned for more!

July 28, 2014

Reorder and Drag and Drop with JavaFX TabPane

Upon public request I’ve extracted my draggable TabPane from my code and made it available in a bundle that holds only controls. If you need a TabPane with drag support, it is as easy as grabbing “org.eclipse.fx.ui.controls_1.0.0.$TIMESTAMP.jar” from our build server (I’ll work on publishing it to Maven Central).

Once you have the jar added to your project, adding drag support involves only:

HBox h = new HBox();
Pane pane1 = DndTabPaneFactory.createDefaultDnDPane(
    FeedbackType.MARKER, this::setupTb1);
HBox.setHgrow(pane1, Priority.ALWAYS);
Pane pane2 = DndTabPaneFactory.createDefaultDnDPane(
    FeedbackType.MARKER, this::setupTb2);
HBox.setHgrow(pane2, Priority.ALWAYS);
h.getChildren().addAll(pane1, pane2);

and the 2 methods setupTb1 and setupTb2 look like this:

private void setupTb1(TabPane tb) {
    Tab tab1 = new Tab("T 1.1");
    tab1.setContent(new BorderPane(new Rectangle(100, 100)));
    Tab tab2 = new Tab("Tab 1.2");
    tab2.setContent(new BorderPane(new Rectangle(100, 100)));
    tb.getTabs().addAll(tab1, tab2);
}

// ....

2014 USENIX Release Engineering Summit CFP now open

The CFP for the 2014 Release Engineering Summit (Western edition) is now open. The deadline for submissions is September 5, 2014, and speakers will be notified by September 19, 2014. The program will be announced in late September. This one-day summit on all things release engineering will be held in concert with LISA, in Seattle, on November 10, 2014.

Seattle skyline © Howard Ignatius, https://flic.kr/p/6tQ3H Creative Commons by-nc-sa 2.0

From the CFP

"Suggestions for topics include (but are not limited to):
  • Best practices for release engineering
  • Practical information on specific aspects of release engineering (e.g., source code management, dependency management, packaging, unit tests, deployment)
  • Future challenges and opportunities in release engineering
  • Solutions for scalable end-to-end release processes
  • Scaling infrastructure and tools for high-volume continuous integration farms
  • War and horror stories
  • Metrics
  • Specific problems and solutions for specific markets (mobile, financial, cloud)
URES '14 West is looking for relevant and engaging speakers and workshop facilitators for our event on November 10, 2014, in Seattle, WA. URES brings together people from all areas of release engineering—release engineers, developers, managers, site reliability engineers, and others—to identify and help propose solutions for the most difficult problems in release engineering today."

War and horror stories. I like to see that in a CFP. Stories describing how you overcame problems with infrastructure and tooling to ship software are the best kind. They make people laugh. Maybe cry, as they realize they are currently living in that situation. Good times. Also, I think talks about scaling high-volume continuous integration farms will be interesting. Scaling issues are a lot of fun and expose many problems you don't see when you're only running a few builds a day.

If you have any questions about the CFP, I'm happy to help, as I'm on the program committee. (My IRC nick is kmoir (#releng), as is my email id at mozilla.com.)

Henshin 1.0

8 years after the first PoC and 4 years after its official launch, we are proud to announce the release of Henshin 1.0 -- a model transformation language and framework for EMF. Henshin 1.0 is ready for Eclipse Luna and Java 8 (including its new JavaScript engine Nashorn). The primary domain of Henshin is model-driven development and formal verification methods. With its new multi-threaded pattern matching engine and its code generator for Apache Giraph, big data is a new third main domain of Henshin.

The new version of Henshin comes with an extended transformation language with support for specifying Java imports and annotations, and a couple of new utility APIs. Besides the changes under the hood, the 1.0 release also comes with a number of visible new features, including a reworked interpreter wizard and new preferences for the diagram editor. As eye candy, we also added a new color mode for the diagram editor which goes well together with Luna's Dark theme.

You can obtain Henshin 1.0 from the release update site or clone our Git repo to try it out!

Fun stats about WildFly and Luna

In our recent milestone releases of JBoss Tools 4.2 we’ve started gathering additional data from those who have agreed to send back anonymous usage data to us (thank you!).

Reminder: As always, this is my personal interpretation of the data and again it is early days for the data collection. These numbers are just for the last month of beta testers thus do take these absolute numbers with a grain of salt!

In any case I find the numbers interesting and thought I would share since there are some lessons to be learned.

JBoss Server Usage

One of the data points are which JBoss servers users create.

Mind you, we don’t collect the exact version of the server installed, just which server adapter users are using - i.e. EAP 6.1 also covers EAP 6.2 and 6.3, WildFly 8 covers 8.0 and 8.1, etc.

server creation stats

The numbers above show the last two weeks of server creation by our beta users. Not surprisingly, the majority of users are using the community version of the latest JBoss servers (AS 7.1 and WildFly 8), and it’s great to see that the third most used server is the free-for-development, enterprise-supported EAP 6.1/6.2/6.3.

Oldie but goodie

What I find funny is that there are still users who use the latest and greatest development tools but run JBoss AS 3.2 - this was last released back in 2006! Talk about dedication :)

Importance of Multiple runtime support

What the list shows me is the importance of development tools supporting multiple runtime versions: even though most users are on the latest and greatest runtime, there is still a large group of users who will be using older runtimes. Many developers tend to forget or blissfully ignore this.

I’m convinced that as users move to our release that gathers these data, we will start to see even higher numbers of "older" runtime usage.

Deploy Only Server

What is a bit disconcerting is how few seem to know about our Deploy Only server (in this list noted as systemCopyServer). This server allows you to use Eclipse’s support for incremental deployments to any directory, locally or remotely available. Really useful for deploying to a non-JBoss server, a remote PHP app, or just a plain HTML app.

You should try it out!

File ▸ New ▸ Server ▸ Basic ▸ Deploy Only

Combine this with our LiveReload support and you get a great and fast workflow!

Uptake of Eclipse versions

Another data item we have insight to is the uptake of Eclipse versions.

Uptake of Eclipse versions

The graph above shows our recorded startups of Eclipse Java EE installs per week since January. Be aware that the versions listed are the EPP versions, not Eclipse release train versions - I’ll do the mapping for you below.

Two things to note: the "drop" at the end is just the effect of the numbers ending in the middle of the month; the "dip" in mid-April I’m not sure about, but we see it across all our Google Analytics data, so I expect it was a Google Analytics anomaly. The numbers have since stabilized. That said… let’s interpret!

This graph shows how Eclipse Kepler SR1 (2.0.1) usage is dropping as users upgrade to Kepler SR2 (2.0.2) - this is most likely the effect of users using Eclipse’s built-in update mechanism to upgrade.

What can also be seen is that the latest stable release (4.4.0) is gaining uptake faster than total Eclipse version usage (the faint lines are even older Eclipse versions), meaning total usage of JBoss Tools is up or stable. Eclipse isn’t dead yet :)

I wish Google Analytics had a way to show this graph cumulatively instead of per line… anyone up for a data extraction and visualization project? I’ll give you access to the data to play with.

Uptake of Eclipse Luna

Finally, my personal main interest was to see what the uptake of Eclipse Luna is.

You can see what effect a GA release of Eclipse has. The red line is Eclipse Luna, going from a couple of hundred starts to now 7,000 starts per week since its release - but do notice that there is no corresponding drop (yet) in Kepler. It looks like most are installing Luna next to their Kepler installs (my theory, at least ;)

This mimics previous years’ uptake patterns, and once everyone gets back from vacation and the Luna SR1 release comes out, it should be close to the level of Eclipse Kepler installs. Good to see users continuing to pick up the latest and greatest features and bugfixes!

I’ll go look at the numbers again in a few months to see if the trend continues.

If there is some additional data you are interested in, or questions about the above, let me know in the comments and I’ll try to include/answer it!

Have fun!

Max Rydahl Andersen

Development of Hybrid Mobile Tools has moved to Eclipse Foundation

Even back when the first line of code was written for the Hybrid Mobile tooling, making the tools part of the Eclipse Foundation was a goal. When starting, we looked at the available tools for developing Cordova applications. We found that there were no open source solutions that we could contribute to and use as part of our tools. Furthermore, interoperability among the very little that existed was poor. Of course, our main goal is creating good tools for Apache Cordova development, but while doing that we always keep an eye on interoperability and extensibility.

It is only natural that we are moving the development of our tools for Cordova-based application development and forming the Eclipse THyM project. We hope that, as a vendor-neutral non-profit organization, the Eclipse Foundation will encourage contributions and be the base for interoperable Cordova tooling.

What is contributed

Everything related to Cordova-based development, including project management, plugin discovery, and support for iOS and Android, is contributed to Eclipse.org, excluding the Cordova simulator. We have excluded CordovaSim for now because of its complex set of dependencies.

What is changing

The development will continue to happen on GitHub, but on a repository owned by the Eclipse Foundation. The contributed code is already renamed, cleaned up, and on the new repository. If you are a contributor, or want to be one, please use https://github.com/eclipse/thym

We will use Bugzilla and the thym-dev mailing list from now on, as provided by the Eclipse Foundation. As expected, project documentation is on the wiki. The builds will be running on an eclipse.org build server instance.

What is NOT changing

JBoss Tools will continue to have support for Cordova development. We will consume the THyM project and extend it with more capabilities, integrating with other parts of the tools and technologies coming from projects such as AeroGear.

And of course our wish to create good tools for Apache Cordova development continues with a hope for better collaboration with other individuals and companies.

m2e 1.5.0 improvements

The Maven Integration for Eclipse plugin, a.k.a. m2e, released version 1.5.0 a few weeks ago as part of the annual Eclipse release train, this year known as Luna. 77 bugs were fixed as part of that release, which is compatible with both Eclipse Kepler and Luna. I believe it’s a pretty solid one, with numerous interesting fixes and usability improvements that deserve a blog post. So here goes, in no particular order:

Improved project import workflow

Selecting Maven projects to import used to take an inordinate amount of time, due to a suboptimal - I love that word :-) - Maven Lifecycle Mapping Analysis (LMA). LMA is used to determine whether the projects would require further configuration to operate properly in Eclipse. That LMA is now only run after projects are imported, making selection of projects to import much, much faster (under a couple of seconds vs. 1-2 minutes for the WildFly 8.0 codebase and its 130 projects, for instance).

After import, lifecycle mapping error markers are collected on imported projects and the discovery service is invoked to find proposals to fix those errors.

Another improvement to this workflow is the ability to easily import multi-module projects to an Eclipse Working Set. The default name is inferred from the root project but can be overridden manually:


More performance improvements to the import process itself are expected in m2e 1.6.0.

See bugs 409732, 408042 and 417466.

Improved memory consumption

The Maven project instance caching strategy has been revisited to reduce memory consumption. For a workspace with 300+ projects, for instance, heap memory used went from 2.5 GB down to well under 1 GB, without any noticeable side effects.

Nexus index download disabled by default

Before m2e 1.5, by default, Nexus indexes were downloaded on new workspace startup, then subsequently once a week. Depending on your internet connection, that whole process could take 15 minutes or more, heavily pegging the CPU. Once the indexes were updated, the size of the workspace would increase by approximately 500 MB. Even though disk space is relatively cheap these days, for those with many workspaces (e.g., for testing) or large workspaces, this extra disk usage can add up quickly.

m2e 1.5.0 now has this feature disabled by default. You can still enable it in Preferences ▸ Maven ▸ Download repository index updates on startup. One major downside of having this feature disabled by default, though, is that Archetype and Artifact/Plugin searches are now much less efficient, as they rely on this indexed content.

See bug 404417

New Maven Profile management UI

The JBoss Tools team contributed its Maven Profile management interface to m2e 1.5.0. This new interface eases switching between profiles.

Rather than right-clicking on a project, going to the Properties ▸ Maven page, then manually (mis)typing a list of active or disabled profiles, you can now just use Ctrl+Alt+P to open the new Maven Profile selection interface.


The new interface is also accessible from the Maven context menu: right-click a project, then Maven ▸ Select Maven Profiles…

The list of available profiles is inferred from profiles defined in:

  • the project pom.xml

  • the project’s parent hierarchy

  • user and global maven settings.xml

When several projects are selected, only the common available profiles are displayed for selection. Common profiles are profiles defined in settings.xml, or profiles having the same id in different pom.xml files.

You can learn more about that feature from the original JBoss Tools blog

See bug 428094

Easily update outdated projects

The Update Maven Project dialog (launched by right-clicking a project, then Maven ▸ Update Project…, or via Alt+F5) now shows a dirty overlay on projects which need updating.

Additionally, an "Add out-of-date" button adds all out-of-date (OOD) projects to the current selection. If an OOD project has not been selected, a warning is shown underneath the selection table with a link equivalent to "Add out-of-date". Warning text and "Add out-of-date" button tooltip show a count of unselected OOD projects.


See bug 422667

No more Unsupported IClasspathEntry kind=4

There’s a very popular question on StackOverflow about an m2e bug that plagued many users of the maven-eclipse-plugin: m2e would throw Unsupported IClasspathEntry kind=4 exceptions on classpath entries generated by the maven-eclipse-plugin (one of the reasons why you should never mix maven-eclipse-plugin and m2e).

m2e 1.5.0 no longer complains about these unsupported classpath entries, but unexpected classpath issues may still arise, should you mix duplicate jars from m2e and those added by the maven-dependency-plugin.

New checksum settings

Ever connected to a network with limited Internet access, or simply stayed at a hotel where you needed to get past a for-pay firewall, resulting in HTML pages being downloaded instead of jars? There’s nothing better to pollute your local Maven repository. Maven CLI builds can use these flags:

  • -C - fail build if checksums do not match

  • -c - warn if checksums do not match

m2e now has a global Checksum Policy, available in Preferences ▸ Maven, that will help you keep your sanity and your local repository clean:


While m2e won’t actually create any Warning markers on projects when "Warn" is selected, it will override existing checksum policies set on repositories.

Improved settings for Errors/Warnings preferences

m2e has been known for generating specific errors that have puzzled more than one user in the past:

  • Project Configuration is not up-to-date - a change in pom.xml might require a full project configuration update.

  • Plugin execution not covered by lifecycle - m2e does not know whether it is safe to execute a Maven plugin as part of the Eclipse build.

With the new Preferences ▸ Errors/Warnings page, users can now decide according to their own needs whether these errors should be downgraded to warnings, or even ignored entirely.


See bugs 433776, 434053

Maven runtime changes

A few changes have been made with regards to the Maven runtime(s):

  • The embedded Maven runtime has been updated to maven 3.2.1.

  • The Netty/AsyncHttpClient transport layer has been replaced with OkHttp 1.5.4. OkHttp is now the default HTTP client on the Android platform. It brings HTTP 2.0 and SPDY support to artifact downloads. Please note, though, that NTLM authentication is not supported.

  • Maven runtime installations can now be customized with a name, and additional libraries can be added. Maven Launch configurations now reference the Maven runtime by name, instead of using a hard-coded location so the configuration is more portable.

See bugs 427932, 418263, 432436

Accept contributions from Gerrit

In order to lower the contribution barrier and increase contributor diversity, the m2e project now accepts changes contributed via the Gerrit review system. Head over to the wiki that explains how to use it. Does it work? Hell yeah! After several significant contributions, Anton Tanasenko has joined the m2e team as a committer!

Welcome Anton!

See bug 374665


With new blood on the m2e team, numerous fixed bugs, and some big new features and improvements, m2e 1.5.0 is a pretty exciting release. Hope you appreciate this year’s release, before an even better version next time.

So if you haven’t installed m2e 1.5.0 yet, head over to https://www.eclipse.org/m2e/download/ and have at it.

We’d love to hear your feedback on the mailing list, or via bug reports and enhancement requests.

Fred Bricon

July 26, 2014

Building an Eclipse target platform for the Raspberry Pi

It’s now been a bit over a year since I wrote about running Eclipse RCP projects on a Raspberry Pi. Since then something really exciting has taken place. Tim Webb and Jed Anderson of Genuitec have published PiPlug, a runtime and deployment tool for hosting SWT applications on the Raspberry Pi. As you might have guessed, PiPlug uses a small subset of the Eclipse platform, about 5.2 MiB. This includes the Equinox launcher compiled for ARM. Incidentally, that is the same binary I created when writing my blog post. Anyway, the reason I’m excited is that this is a really neat way of running Java UI code on the Raspberry Pi. It is very simple: start the PiPlug agent on the target, write your code in Eclipse, and use the deployment view to push your stuff. It does not get easier than that!

However, not everything works straight out of the box. I have a tiny resistive touch LCD screen that I want to use, and I want PiPlug to adapt to its small resolution (320×240 pixels). Currently it is mostly suited to much larger resolutions. So I need to amend the PiPlug code to support this screen, then rebuild it. In order to do that I must have an Eclipse target platform with all the bits in place so that I can build my forked version of PiPlug. This gave me a reason to revisit my previous work. The instructions I wrote last year no longer work due to changes in the base repositories and the build process. So I’ve done a recap and simplified everything. The source code has been updated to the latest from Eclipse 4.4 (aka Luna), and the changes I’ve made have been squashed to exactly one commit for each repository, making them easier to follow. I’ve also created patches for these repositories. So when you run the script it will clone Eclipse from the source proper, then apply the patches before building – very straightforward. Note that the script is written for Bash.

Running the script will, as before, create a target platform with the ARMv6 little-endian binaries to use for PiPlug or another Eclipse RCP application on the Raspberry Pi. The git repo with all the code is at https://github.com/turesheim/eclipse-rpi. But to save you the time it takes to build, I’ve already created the repository, which you can use. You’ll find it in the release section of the above-mentioned project page. Simply download it and set it as your target platform.

The SWT and Equinox binary bundles you want all contain the segment “armv6l”, so you should add these to your feature or product definition. Note that the export product command in Eclipse PDE is a bit limited and will not work. You should probably set up a Tycho build for your project in order to create a Raspberry Pi distribution of it.

That’s it. Now I’m off to play around with PiPlug and the Sensiron SHT15 I have attached to my RPi :-)

The post Building an Eclipse target platform for the Raspberry Pi appeared first on Yet Another Coder's Blog.

July 25, 2014

Yakindu Statechart Tools 2.3.0 is ready for Luna!

Today, the Luna release train arrived at Yakindu Statechart Tools Station! Apart from Luna compatibility, the new release, version 2.3.0, provides some great new features. If you are new to statecharts and Yakindu Statechart Tools, you should take a look at our Getting Started Tutorial.



You can download Yakindu Statechart Tools 2.3.0 for Luna either from our download page or install it into your existing Eclipse installation via our update site:


New and Noteworthy

Version 2.3.0 ships with a new C++ code generator. Of course, all advanced statechart features like history states and parallel regions are supported. Furthermore, we implemented some features requested via our user group, for example a name wrangler for C function names, and fixed a bunch of bugs. All the other cool new Luna features like the split editor work with the new release – even the dark theme does. ;-)

Another interesting project worth mentioning is the Statechart Tools Arduino integration. Marco wrote a great article on how to run the generated statechart tools code on an Arduino board.

What's next? 


Based on the new GMF Compare feature (thanks for this, great job!) we started to create a diff/merge viewer for statechart diagrams. Since our viewer’s merge functionality is not stable yet, we decided to publish it with the next release. However, if you want to test it, you can check out the plugin from our SVN repository. Do not forget to send us some feedback!

Preview of the new SCT Diff/Merge viewer

We hope you like the new Statechart Tools release. If you have any questions, do not hesitate to contact us via our user group!

Committer and Contributor Hangout -- Eclipse Project Infrastructure and Best Practices

Thanh Ha from the Eclipse Foundation will be talking about our infrastructure, some best practices, and using Hudson, Git and Gerrit. Links below:

  • IT Infrastructure: https://wiki.eclipse.org/IT_Infrastructure_Doc
  • Common Build Infrastructure: https://wiki.eclipse.org/CBI
  • Hudson: https://wiki.eclipse.org/Hudson
  • Social Coding (GitHub): https://wiki.eclipse.org/Social_Coding/Hosting_a_Project_at_GitHub
  • Git Code Contributions: https://wiki.eclipse.org/Development_Resources/Handling_Git_Contributions
  • Gerrit: https://wiki.eclipse.org/Gerrit
  • Sonar: https://wiki.eclipse.org/Sonar

July 24, 2014

JFace Viewers in a Java8 world – Part 1 ListViewer

Most people who write RCP applications today know about JFace Viewers and in my opinion the Viewer framework is one of the best programming patterns that came out of Eclipse – easy to understand, easy to apply but still powerful.

Still, working with viewers in a Java8 world feels really bad because of 2 things:

  • JFace Viewers don’t have generics and use Object[] in the API
  • The JFace APIs do not use SAM types

There’s a Google Summer of Code project that tries to add generics to the JFace Viewer API, but this does not change the 2nd – IMHO just as important – part: the API feels alien in a Java8 world of lambda expressions and method references.

A few weeks back a discussion came up on the GEF mailing list, in conjunction with GEF4, about adopting the JFace API to set up graph viewers. That brought up a 3rd problem – the JFace Viewer API does not completely hide SWT, and GEF4 is designed to be widget-toolkit agnostic.

So the requirements for a revised JFace API are:

  1. Use generics to provide type safety
  2. Make use of SAM types
  3. Do not depend on any toolkit technology

Today I took a break from my day work and thought about what a Viewer 2.0 could look like, and this is what I came up with:


/**
 * Base interface of all viewers
 *
 * @param <O>
 *            the domain object representing a row
 * @param <I>
 *            the input to the viewer
 * @param <C>
 *            the content provider responsible to translate the input into the
 *            internal structure
 */
public interface Viewer<O, I, C extends ContentProvider<O, I>> {
  public void setContentProvider(
    @NonNull Supplier<@NonNull C> contentProvider);

  public void setInput(@NonNull Supplier<@NonNull I> input);
}

You’ll notice that the content provider and input are not set directly but through suppliers. The reason is that the input, most of the time, is not created next to the viewer but through a method call into the business layer, and for the content provider one often uses a factory to reuse content provider implementations.

So the 2nd and 3rd steps of setting up Viewers 2.0 would look like this:

public class Demo {

  private void setup(ListViewer<Person, List<Person>, ContentProvider<Person, List<Person>>> viewer) {
    viewer.setContentProvider(ContentProviderFactory::createListContentProvider);
    viewer.setInput(this::listInput);
    // ... (setup of labels, ...)
  }

  // ....
}

and the helper methods could look like this:

private List<Person> listInput() {
  try {
    return Arrays.asList(
      new Person(false, "Tom", "Schindl", format.parse("01.05.1979")),
      new Person(true, "Maria", "Musterfrau", format.parse("01.05.1970")));
  } catch (ParseException e) {
    e.printStackTrace();
  }
  return Collections.emptyList();
}

public class ContentProviderFactory {
  public static <O> ContentProvider<O, List<O>> createListContentProvider() {
    return new ContentProvider<O, List<O>>() {
      public List<O> getRootElements(List<O> input) {
        return input;
      }
    };
  }

  // ... more factory methods
}

Let’s go on with translating the domain element into information the viewer can present. For that we need to take a look at the ListViewer interface:

public interface ListViewer<O, I, C extends ContentProvider<O, I>> extends Viewer<O, I, C> {
  /**
   * Translate the domain object into a string
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> textProvider(
    Function<@NonNull O, @Nullable String> converter);

  /**
   * Translate the domain object into style information to style the cell
   * and its contents, e.g. background color
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> styleProvider(
    Function<@NonNull O, @Nullable String> converter);

  /**
   * Translate the domain object into style ranges
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> textStyleRangeProvider(
    Function<@NonNull O, @NonNull List<@NonNull StyleRange>> converter);

  /**
   * Translate the domain object into an image definition
   *
   * @param converter
   *            the converter
   * @return the list viewer
   */
  public ListViewer<O, I, C> graphicProvider(
    Function<@NonNull O, @Nullable String> converter);
}

This makes our complete setup look like this:

public class Demo {

  private void setup(ListViewer<Person, List<Person>, ContentProvider<Person, List<Person>>> viewer) {
    viewer.setContentProvider(ContentProviderFactory::createListContentProvider);
    viewer.setInput(this::listInput);
    viewer.textProvider(this::personFullText)
          .graphicProvider(this::genderImage);
  }

  // ....
}

where the textProvider and graphicProvider functions look like this:

private String personFullText(Person p) {
  return p.getFirstname() + "," + p.getLastname()
    + "(" + format.format(p.getBirthdate()) + ")";
}

private String genderImage(Person p) {
  return p.isFemale() ? "female.png" : "male.png";
}
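To see how the fluent provider methods compose, here is a toy, toolkit-free sketch of the same idea (ToyListViewer is my own stand-in, not the proposed API): each *Provider call stores its converter function and returns the viewer for chaining.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class FluentViewerSketch {
    // Toy stand-in for the proposed fluent ListViewer: each *Provider method
    // stores a converter function and returns the viewer for chaining.
    static class ToyListViewer<O> {
        private Function<O, String> textProvider = Object::toString;
        private Function<O, String> graphicProvider = o -> null;

        ToyListViewer<O> textProvider(Function<O, String> converter) {
            this.textProvider = converter;
            return this; // fluent: allows chained setup
        }

        ToyListViewer<O> graphicProvider(Function<O, String> converter) {
            this.graphicProvider = converter;
            return this;
        }

        // A real viewer would hand these values to the widget toolkit;
        // here we just collect "text [graphic]" lines for each row.
        List<String> renderRows(List<O> rows) {
            List<String> out = new ArrayList<>();
            for (O row : rows) {
                out.add(textProvider.apply(row) + " [" + graphicProvider.apply(row) + "]");
            }
            return out;
        }
    }

    public static List<String> demo() {
        ToyListViewer<String> viewer = new ToyListViewer<String>()
            .textProvider(String::toUpperCase)
            .graphicProvider(s -> s + ".png");
        return viewer.renderRows(Arrays.asList("tom", "maria"));
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because the providers are plain `Function`s, the viewer itself never touches any widget-toolkit type, which is exactly requirement 3 from above.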

That’s it for today – in the next post I’ll show you what a revised TableViewer API could look like.

Announcing Jnario 1.0

After lots of improvements behind the scenes I finally decided to release Jnario 1.0. If you are new to Jnario, here is a blog post introducing the main features of Jnario. You can install the new release via update site or maven. Special thanks go to Stefan Oehme, Boris Brodski and Sebastian Poetzsch for their help and contributions!

Xtend 2.6 Compatibility

Jnario is based on Xtend, and this release provides compatibility with Xtend 2.6! This means you benefit from all the improvements made in Xtend 2.6 when writing tests with Jnario. One of the most prominent new features is that you can define anonymous classes in your Jnario specs. This is a great way to quickly create test stubs in your specs. Furthermore, the IDE performance in Eclipse has also been greatly improved.

Fun with Tables

Tables have always been one of my favorite features of Jnario specs. Thanks to Sebastian Poetzsch, using tables has become even more fun in this release! Sebastian contributed an automated formatter for specs, which also works for tables. No more fiddling with spaces to format your tables! Just press (CTRL|CMD)+SHIFT+F and there you go:

Improved Maven Support

The standalone Jnario compiler has been completely rewritten. This lays the foundation for future support for other build environments such as Gradle. Right now, this means faster compile times when compiling Jnario specs with Maven. In general, the Maven support has been improved. For example, in combination with m2e, Eclipse will automatically synchronize your Maven and project configuration.

Better Docs

There has been lots of fixes for generating html reports from your Jnario specifications. If you are curious, here is a great example for what executable documentations created with Jnario can look like.

Want to know more?

Check out the documentation or join the mailing list. You can find the full release notes here.


Happy testing!

Warning: AngularJS Modules are not Modular

In most frameworks and languages, a module's exports are only visible to the other modules that directly import it. As a simple example, the following node.js program prints undefined:

// parent.js
require('./child');
exports.myVal = 7;

// child.js
var parent = require('./parent');
console.log(parent.myVal);

> node parent.js
undefined

Ignoring the fact that circular dependencies are evil, even a novice node user can work out why the printed value is undefined. The reason lies in one of the defining characteristics of the node module system (actually, it is a characteristic of most module systems ever created): modules must explicitly declare the modules that they use, and referencing values from modules that are not explicitly required results in undefined values or errors.

This is not so with AngularJS.

The Angular module system provides some nice syntax to describe required modules. For example, this says that the parent module uses the sub1 and sub2 modules:

angular.module('parent', ['sub1', 'sub2']);

Now, let's assume that sub1, and sub2 each declare no dependencies:

angular.module('sub1', []);
angular.module('sub2', []);

Using just about any other module system as a guide, you would expect that code from parent can reference (i.e., inject) values from either sub1 or sub2, but neither of the sub-modules can reference values from each other or parent.

But, no. That is not the case. Consider that this is added to the app:

angular.module('sub1').run(function(sub2Value) {
  console.log(sub2Value);
});

angular.module('sub2').value('sub2Value', 'I should be an error!!!');

Referencing sub2Value from module sub1 does not cause an error even though its module is not directly referenced. Rather, the program prints "I should be an error!!!" to the console. Try it out yourself.

What's going on here?

Each Angular application has a singleton instance of an injector. This injector is not namespaced or partitioned. All instances provided by all modules in the application exist in the injector. And the injector can serve any of its instances to anything that requests it.
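This flat-bucket behaviour can be boiled down to a few lines of plain Java (my own sketch, not Angular code): two "modules" register values into one shared, unpartitioned map, so any consumer can look up any value regardless of which module provided it.

```java
import java.util.HashMap;
import java.util.Map;

public class FlatInjectorSketch {
    // One application-wide registry, like Angular's singleton injector:
    // it is not namespaced or partitioned by module.
    static final Map<String, Object> injector = new HashMap<>();

    static void registerSub1() {
        injector.put("sub1Value", "from sub1");
    }

    static void registerSub2() {
        injector.put("sub2Value", "I should be an error!!!");
    }

    // Any consumer can pull any instance out of the shared bucket, even if
    // its own "module" never declared a dependency on the provider.
    public static Object inject(String name) {
        return injector.get(name);
    }

    public static Object demo() {
        registerSub1();
        registerSub2();
        // Code "belonging" to sub1 happily receives sub2's value:
        return inject("sub2Value");
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Nothing in the lookup path ever consults the module dependency graph, which is the crux of the problem described here.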

You can see similar behaviour in Guice:

import com.google.inject.Binder;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.Module;
import com.google.inject.Singleton;

class ModuleMain {
  static class Service1 { }

  static class Module1 implements Module {
    @Override
    public void configure(Binder binder) {
      // Service1 is bound as a singleton so the comparison below holds
      binder.bind(Service1.class).in(Singleton.class);
    }
  }

  static class Service2 {
    @Inject
    Service1 service1;
  }

  static class Module2 implements Module {
    @Override
    public void configure(Binder binder) {
      binder.bind(Service2.class).in(Singleton.class);
    }
  }

  public static void main(String[] args) {
    Injector injector = Guice.createInjector(new Module1(), new Module2());

    Service1 service1 = injector.getInstance(Service1.class);
    Service2 service2 = injector.getInstance(Service2.class);

    // Will print true since instances from unrelated modules can reference each other
    System.out.println(service2.service1 == service1);
  }
}

In this snippet, you can see that instances can be shared across unrelated modules. In Guice, like in Angular, the injector is a giant bucket where you can put stuff in and take stuff out with no restrictions.

But there is a difference: Angular modules explicitly declare which modules they depend on, while Guice modules do not. In Angular, the syntax creates the expectation that only instances from directly required modules can be injected into another module. I am sure that there is a reason for this non-intuitive design, and I would like to learn it.

Consider this post as an explanation of what is going on, but not why. Also, consider this a warning to not rely on the Angular framework to enforce module boundaries. Perhaps in a future version, the Angular team can create truly hierarchical modules. Angular modules would be safer to compose, and name clashes between third-party modules would be prevented.
July 22, 2014

Snipmatch: a solution for configuring scout properties from the java editor?

Scout comes with powerful tools in the Scout perspective. You have a logical representation of your application in the “Scout Explorer” view and a “Scout Object Properties” view where you can set or change a property for the selected object. A modification in the property view updates the java code and a code modification is reflected in the property view. This allows the developers to work as they want.

Scout Property View

The problem is that power users are not using this property view very often. The explanation is simple: using the view breaks the flow when you are typing code in the java code editor. You need to switch to the mouse, find your current object in the scout explorer and set the properties in the other view.

When you know the name of the getConfigured method you want to set, you are quicker staying in the java editor. The problem is that what the JDT offers by default is not really what you need in this case. Let's see how Snipmatch can help.

With the JDT

If you are in a column’s inner-class you might want to define it as editable. You need to override the getConfiguredEditable() method and return true instead of the default false.

The JDT can help: enter "getConfiguredEdit" and press Ctrl+Space:

Jdt CTRL Space

This works well, but the result is not as efficient as it could be:

Jdt result

My problems are:
* I really do not need the "(non-Javadoc)" comment block.
* The cursor is not at the correct position; it would be preferable to have it on the line of the return statement.
* In this specific case I do not need to call the super implementation at all.
* The TODO comment makes sense in a lot of cases, but not here.

With Snipmatch

Snipmatch can help too. I have defined snippets to handle those cases; they are shared in this git repository. In my column, pressing Ctrl+Alt+Space with the search filter set to "edit", I can find the snippet I am looking for:

Snipmatch CTRL+Alt+Space

Now when I press enter, there is a big difference with what the JDT override assist function provides:

Snipmatch result

The cursor is where it should be. In the snippet's definition, I have defined as first choice the value that is most likely to be used at this point (in this case true, because the default in AbstractColumn is false). The result is really straightforward.

Just to be clear, I am not complaining about the JDT doing a bad job. We are comparing a generic feature that comes out of the box in the JDT with an approach where someone has to configure the possible snippets. It is normal that a tool tailored to a specific use case provides a better user experience.

What is missing in snipmatch?

The snippets do not have any metadata to define a context. As a consequence, the search engine is based only on the keyword entered by the user in the search box. It would be nice to also be able to influence the search result depending on the cursor position. I have commented on Bug 435632. Let's hope that the Code Recommenders team will propose something in this direction to improve the user experience.

Scout Links

Project Home, Forum, Wiki, Twitter

July 21, 2014

(Yet) Another Charting Widget for RAP

Maybe you’re familiar with Ralf Sternberg’s d3 widget for RAP (it’s part of the RAP Examples Demo). Like the name implies, it’s a charting widget based on the d3 library, which uses SVG to render its elements.

I was interested in doing the same with the other popular method used to draw freely in a modern browser, the HTML5 canvas element. The most promising project I found for that is Chart.js, a pleasantly simple and lightweight charting library that’s currently in its 1.0 beta phase.

The original idea was to keep it as simple as possible while also making it work on IE7/8 (unlike the d3 widget), but that didn’t work out to my satisfaction. After letting it rest for a while I decided to publish the project without IE8 support, a move made easier by our plan to drop IE8 for RAP 3.0 as well.


Chart.js supports 6 basic chart types featuring optional tooltips, appearance animations and limited customization. The Java API is fairly close to the original JS API and pretty self-explanatory: just create an instance of the Chart class and call one of the 6 draw***Chart methods. The first argument is the data object (an instance of either ChartRowData or ChartPointData), the optional second a configuration object (an instance of ChartOptions). You can call these methods as often as you want; the previous chart will simply be replaced. There are currently no transitions between two charts. I experimented with this, but it made everything insanely complicated, which is exactly what I didn't want. Perhaps in a later version.

Speaking of later versions, from my point of view this is just an experiment, but if there is some good feedback I might polish it up a bit and contribute it to the RAP Incubator. Be sure to try the d3 widget as well.


Tagged with custom widget, incubator, javascript, Luna, rap

July 18, 2014

Mozilla pushes - June 2014

Here's June 2014's analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.

This was another record-breaking month, with a total of 12534 pushes. As a note of interest, this is over double the number of pushes we had in June 2013. So big kudos to everyone who helped us scale our infrastructure and tooling. (Actually, we had 6,433 pushes in April 2013, which would make this less than double, because June 2013 was a bit of a dip. But still impressive :-)

  • 12534 pushes
    • new record
  • 418 pushes/day (average)
    • new record
  • Highest number of pushes/day: 662 pushes on June 5, 2014
    • new record
  • Highest pushes/hour (average): 23.17
    • new record

General Remarks
The introduction of Gaia-try in April has been very popular; it comprised around 30% of pushes in June, compared to 29% last month.
The Try branch itself accounted for around 38% of pushes.
The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) accounted for around 21% of all pushes, compared to 22% in the previous month.
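As a back-of-the-envelope check (my own sketch, assuming June's 30 days), the reported daily average follows directly from the monthly total:

```java
public class PushStats {
    // Average pushes per day, rounded to the nearest whole push.
    public static long pushesPerDay(long totalPushes, int days) {
        return Math.round((double) totalPushes / days);
    }

    public static void main(String[] args) {
        // 12534 pushes over June's 30 days ≈ 418 pushes/day, as reported.
        System.out.println(pushesPerDay(12534, 30));
    }
}
```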

June 2014 was the month with the most pushes (12534 pushes)
June 2014 has the highest pushes/day average with 418 pushes/day
June 2014 has the highest "pushes-per-hour" average with 23.17 pushes/hour
June 4th, 2014 had the highest number of pushes in one day with 662 pushes

July 17, 2014

Announcing FOSS4G North America 2015 Program Chair

Next March we'll be hosting FOSS4G North America 2015, co-hosted with EclipseCon, from March 9th to 12th. This conference is a wonderful collaboration between multiple communities. With participants from Eclipse, OSGeo, LocationTech, IoT, Science, and other open source communities, we anticipate some weird and wonderful cross-pollination of ideas. Attendees can attend any talk they wish at no added cost.

One of the biggest contributing factors to the value and success of the conference is the quality of the program. The program committee, led by the program chair, is responsible for selecting which presentations are part of the program at FOSS4G North America.

It is my pleasure to announce that Rob Emanuele has agreed to be program chair for FOSS4G North America 2015. Rob is best known for his work on the GeoTrellis high performance geoprocessing framework. His combination of breadth and depth of technology skills, along with his ability to connect with people will serve the program committee well. Thank you Rob! And I look forward to working with you to make this an outstanding conference in 2015.

July 16, 2014
