April 16, 2014

How to manage Git Submodules with JGit

For a larger project with Git you may find yourself wanting to share code among multiple repositories, whether it is a shared library between projects or perhaps templates and such used among multiple different products. The Git built-in answer to this problem is submodules. They allow putting a clone of another repository as a subdirectory within a parent repository (sometimes also referred to as the superproject). A submodule is a repository in its own right. You can commit, branch, rebase, etc. from inside it, just as with any other repository.

JGit offers an API that implements most of the Git submodule commands, and it is this API that I would like to introduce you to.

The Setup

The code snippets used throughout this article are written as learning tests1. Simple tests can help you understand how third-party code works and make it easier to adopt new APIs. They can be viewed as controlled experiments that allow you to discover exactly how the third-party code behaves.

A helpful side effect is that, if you keep the tests, they can help you to verify new releases of the third-party code. If your tests cover how you use the library, then incompatible changes in the third-party code will show themselves early on.

Back to the topic at hand: all tests share the same setup. See the full source code for details. There is an empty repository called parent. Next to it there is a library repository. The tests will add this as a submodule to the parent. The library repository has an initial commit with a file named readme.txt in it. A setUp method creates both repositories like so:

Git git = Git.init().setDirectory( "/tmp/path/to/repo" ).call();

The repositories are represented by the fields parent and library of type Git. This class wraps a repository and gives access to all commands available in JGit. As I explained here earlier, each command class corresponds to a native Git porcelain command. To invoke a command, the builder pattern is used. For example, the result of the Git.commit() method is actually a CommitCommand. After providing any necessary arguments you can invoke its call() method.

Add a Submodule

The first and obvious step is to add a submodule to an existing repository. Using the setup outlined above, the library repository should be added as a submodule in the modules/library directory of the parent repository.

public void testAddSubmodule() throws Exception {
  String uri 
    = library.getRepository().getDirectory().getCanonicalPath();
  SubmoduleAddCommand addCommand = parent.submoduleAdd();
  addCommand.setURI( uri );
  addCommand.setPath( "modules/library" );
  Repository repository = addCommand.call();
  repository.close();
  File workDir = parent.getRepository().getWorkTree();
  File readme = new File( workDir, "modules/library/readme.txt" );
  File gitmodules = new File( workDir, ".gitmodules" );
  assertTrue( readme.isFile() );
  assertTrue( gitmodules.isFile() );
}

The two things the SubmoduleAddCommand needs to know are where the submodule should be cloned from and where it should be stored. The URI (shouldn’t it be called URL?) attribute denotes the location of the repository to clone from, as it would be given to the clone command. And the path attribute specifies in which directory – relative to the root of the parent repository’s work directory – the submodule should be placed. After the command was run, the work directory of the parent repository looks like this:

The library repository is placed in the modules/library directory and its work tree is checked out. call() returns a Repository object that you can use like a regular repository. This also means that you have to explicitly close the returned repository to avoid leaking file handles.

Apart from cloning the submodule, the SubmoduleAddCommand did one more thing. It created a .gitmodules file in the root of the parent repository work directory and added it to the index.

[submodule "modules/library"]
path = modules/library
url = git@example.com:path/to/lib.git

If you have ever looked into a Git config file, you will recognize the syntax. The file lists all the submodules that are referenced from this repository. For each submodule it stores the mapping between the repository’s URL and the local directory it was pulled into. Once this file is committed and pushed, everyone who clones the repository knows where to get the submodules from (more on that later).


Once we have added a submodule, we may want to know whether it is actually known by the parent repository. The first test did a naive check in that it verified that certain files and directories exist. But there is also an API to list the submodules of a repository. This is what the code below does:

public void testListSubmodules() throws Exception {
  Map<String,SubmoduleStatus> submodules 
    = parent.submoduleStatus().call();
  assertEquals( 1, submodules.size() );
  SubmoduleStatus status = submodules.get( "modules/library" );
  assertEquals( INITIALIZED, status.getType() );
}

The SubmoduleStatusCommand returns a map of all the submodules in the repository, where the key is the path to the submodule and the value is a SubmoduleStatus. With the above code we can verify that the just-added submodule is actually there and INITIALIZED. The command also allows adding one or more paths to limit the status reporting to.
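Limiting the report to a single path could look like the following sketch. It assumes the parent field from the test setup above and the submodule path used throughout this article:

```java
// Sketch: limit status reporting to the submodule at modules/library.
// Assumes the parent field (type Git) from the test setup.
Map<String,SubmoduleStatus> submodules = parent.submoduleStatus()
  .addPath( "modules/library" )
  .call();
SubmoduleStatus status = submodules.get( "modules/library" );
```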

Speaking of status, JGit’s StatusCommand isn’t at the same level as native Git. Submodules are always treated as if the command was run with --ignore-submodules=dirty: changes to the work directory of submodules are ignored.

Updating a Submodule

Submodules always point to a specific commit of the repository that they represent. Someone who clones the parent repository sometime in the future will get the exact same submodule state, even though the submodule may have new commits upstream.

In order to change the revision, you must explicitly update a submodule, as outlined here:

public void testUpdateSubmodule() throws Exception {
  // create a new commit in the library repository
  ObjectId newHead = library.commit().setMessage( "msg" ).call();
  // open the submodule within the parent's work directory
  // and pull in the new commit
  File workDir = parent.getRepository().getWorkTree();
  Git libSubmodule = Git.open( new File( workDir, "modules/library" ) );
  libSubmodule.pull().call();
  libSubmodule.getRepository().close();
  // record the new submodule commit in the parent repository
  parent.add().addFilepattern( "modules/library" ).call();
  parent.commit().setMessage( "Update submodule" ).call();
  assertEquals( newHead, getSubmoduleHead( "modules/library" ) );
}

This rather lengthy snippet first commits something to the library repository and then advances the submodule to that commit by opening the submodule repository within the parent’s work directory and pulling in the new commit.

To make the update permanent, the changed submodule state must be committed in the parent repository. The commit stores the updated commit-id of the submodule under its path (modules/library in this example). Finally, you usually want to push the changes to make them available to others.
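Pushing works like for any other repository; a minimal sketch, assuming a remote is already configured for the parent repository:

```java
// Sketch: publish the recorded submodule update.
// Assumes the parent field (type Git) and a configured remote.
parent.push().call();
```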

Updating Changes to Submodules in the Parent Repository

Fetching commits from upstream into the parent repository may also change the submodule configuration. The submodules themselves, however, are not updated automatically.

This is what the SubmoduleUpdateCommand solves. Using the command without further parametrization will update all registered submodules. The command will clone missing submodules and check out the commit specified in the configuration. As with other submodule commands, there is an addPath() method to only update submodules within the given paths.
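Put together, updating all registered submodules could look like this sketch, again assuming the parent field from the setup:

```java
// Sketch: clone missing submodules and check out the configured commits.
// call() returns the paths of the submodules that were updated.
Collection<String> updated = parent.submoduleUpdate().call();
// Or limit the update to certain paths:
// parent.submoduleUpdate().addPath( "modules/library" ).call();
```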

Cloning a Repository with Submodules

You have probably spotted the pattern by now: everything to do with submodules is manual labor. Cloning a repository that has a submodule configuration does not clone the submodules by default. But the CloneCommand has a cloneSubmodules attribute, and setting it to true, well, also clones the configured submodules. Internally, the SubmoduleInitCommand and SubmoduleUpdateCommand are executed recursively after the (parent) repository was cloned and its work directory was checked out.
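Such a clone could be sketched as follows; the URL and target directory are placeholders for this example:

```java
// Sketch: clone a repository including its configured submodules.
// URL and directory are placeholders, not from the article's setup.
Git cloned = Git.cloneRepository()
  .setURI( "git@example.com:path/to/parent.git" )
  .setDirectory( new File( "/tmp/path/to/clone" ) )
  .setCloneSubmodules( true )
  .call();
```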

Removing a Submodule

To remove a submodule you would expect to write something like
git.submoduleRm().setPath( ... ).call();
Unfortunately, neither native Git nor JGit has a built-in command to remove submodules. Hopefully this will be resolved in the future. Until then, we must remove submodules manually. If you scroll down to the removeSubmodule() method, you will see that it is no rocket science.

First, the respective submodule section is removed from the .gitmodules and .git/config files. Then the submodule entry in the index is also removed. Finally, the changes – the .gitmodules file and the removed submodule in the index – are committed, and the submodule content is deleted from the work directory.
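The steps above can be sketched as follows. This is an illustration of the manual procedure using JGit's config classes (StoredConfig, FileBasedConfig, ConfigConstants), not a ready-made API; deleting the submodule content from disk is left out:

```java
// Sketch of manual submodule removal; error handling omitted.
// Assumes the parent field (type Git) from the test setup.
Repository parentRepository = parent.getRepository();
String path = "modules/library";

// 1. Remove the submodule section from .git/config
StoredConfig config = parentRepository.getConfig();
config.unsetSection( ConfigConstants.CONFIG_SUBMODULE_SECTION, path );
config.save();

// 2. Remove the section from .gitmodules and stage the change
File gitmodulesFile = new File( parentRepository.getWorkTree(), ".gitmodules" );
FileBasedConfig gitmodules
  = new FileBasedConfig( gitmodulesFile, parentRepository.getFS() );
gitmodules.load();
gitmodules.unsetSection( ConfigConstants.CONFIG_SUBMODULE_SECTION, path );
gitmodules.save();
parent.add().addFilepattern( ".gitmodules" ).call();

// 3. Remove the submodule entry from the index
parent.rm().setCached( true ).addFilepattern( path ).call();

// 4. Commit the changes; the submodule content in the work directory
//    still needs to be deleted separately.
parent.commit().setMessage( "Remove submodule" ).call();
```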

For-Each Submodule

Native Git offers the git submodule foreach command to execute a shell command for each submodule. While JGit doesn’t exactly support such a command, it offers the SubmoduleWalk. This class can be used to iterate over the submodules in a repository. The following example fetches upstream commits for all submodules.

public void testSubmoduleWalk() throws Exception {
  int submoduleCount = 0;
  Repository parentRepository = parent.getRepository();
  SubmoduleWalk walk = SubmoduleWalk.forIndex( parentRepository );
  while( walk.next() ) {
    Repository submoduleRepository = walk.getRepository();
    Git.wrap( submoduleRepository ).fetch().call();
    submoduleRepository.close();
    submoduleCount++;
  }
  walk.release();
  assertEquals( 1, submoduleCount );
}

With next() the walk can be advanced to the next submodule. The method returns false if there are no more submodules. When done with a SubmoduleWalk, its allocated resources should be freed by calling release(). Again, if you obtain a Repository instance for a submodule do not forget to close it.

The SubmoduleWalk can also be used to gather detailed information about submodules.
Most of its getters relate to properties of the current submodule like path, head, remote URL, etc.

Sync Remote URLs

We have seen before that submodule configurations are stored in the .gitmodules file at the root of the repository work directory. Well, at least the remote URL can be overridden in .git/config. And then there is the config file of the submodule itself, which in turn can have yet another remote URL. The SubmoduleSyncCommand can be used to reset all remote URLs to the settings in .gitmodules.
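Resetting the URLs is a one-liner; a sketch using the parent field from the setup:

```java
// Sketch: reset all submodule remote URLs to the values in .gitmodules.
// call() returns a map of submodule paths to their synchronized URLs.
Map<String,String> synced = parent.submoduleSync().call();
```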

As you can see, the support for submodules in JGit is almost on a par with native Git. Most of its commands are implemented or can be emulated with little effort. And if you find that something is not working or missing, you can always ask the friendly and helpful JGit community for assistance.

  1. The term is taken from the section on ‘Exploring and Learning Boundaries’ in Clean Code by Robert C. Martin
April 15, 2014

XtextCON Update

XtextCON 2014 is still 40 days away, but I have to announce that ...

We are sold out

Initially I planned for 80 attendees. It turned out that was much too small, so we added 30 more tickets, as the event and venue can handle 110 people without problems. Today we have 112 registrations, and since I want to make sure that everyone has an excellent time at this first XtextCON, we closed registration today. I'm really sorry if you haven't booked yet but wanted to do so. We'll likely do an XtextCON next year again. 

New Sessions

We have added some really cool new sessions. All in all, XtextCON will feature 18 speakers and 28 sessions in two tracks. The added sessions are:
 - Xtext + Sirius : <3 (Cedric Brun)
 - Xtext + CDO - Does it blend? (Stefan Winkler)
 - Oomph - Automatically Provision a Project-Specific IDE (Eike Stepper, Ed Merks)
 - 3D Modeling with Xtext (Martin Nilsson, Esa Ryhänen)
 - Handle Based Models for Xtext with Handly (Vladimir Piskarev)

Check out the updated program.

EMF Forms: A Question of Effort

A comparison between view modeling and manual UI programming

In my previous blog post, I introduced EMF Forms, a subcomponent of the EMF Client Platform (ECP), which supports the development of form-based user interfaces based on a view model. The approach allows the efficient development of forms without tedious manual layout coding or manually binding controls to data models.

The technological basis of EMF Forms has been in active use for more than a year in numerous user projects. In October, with release 1.1.0 of ECP, EMF Forms (still without a name) was presented publicly to the community for the first time. Since October, we’ve been able to win over many new users, we’ve received a lot of feedback and, above all, we’ve continued to develop the software. In this post, I would like to take a close look at EMF Forms. In particular, I would like to share our experience and feedback from user projects and compare EMF Forms to manual UI programming. In this context, the first and perhaps most relevant question about a new technology is: does it save effort, and how much does it cost? I start with a short introduction to EMF Forms. For more details, please refer to the website. Next, I compare the effort to set up an interface with and without EMF Forms, the effort to create an initial version of the UI and the effort to execute notable changes in the UI. If you already know about EMF Forms, you might want to continue reading here.

What is EMF Forms?

Many business applications are focused on the input and output of data as well as on subsequent data processing. Examples of such data-centric applications can be found in almost all industries, such as in CRM or ERP systems. Regardless of the specific domain, the corresponding data is often presented in forms-based UIs. These forms show the contents of one or more entities of the application and their attributes.

Displayed below is a screen shot of a simple example of a form-based UI. It shows a possible form for an entity “Person” with four attributes. Each attribute is identified by a label and a corresponding control (input field). In the example, the attributes are displayed in a simple two-column layout.


Figure 1: Simplified example of a form-based UI for the entity “Person” with four attributes

The implementation of this kind of user interface mainly includes the programming of individual controls such as text boxes, the binding of these controls to the data model and the creation of a layout, i.e., the placement of controls, labels and possible additional layout elements. Although the development of individual controls and their binding is well supported by frameworks such as EMF or data binding, creating and customizing layouts is often a largely manual process. This means that all the visible elements, such as labels and input fields, are created manually in the source code, are bound to the data model and are placed in the layout.

EMF Forms is a radically different approach. Instead of describing a user interface in source code, the UI is expressed by a simple model. It specifies which elements, specifically which attributes of the data model, are displayed at which position in the UI. The actual user interface is then rendered based on this model by interpreting the model. First, the renderer translates the controls in the view model into actual implementations.  A string attribute is displayed, for example, as a text field that is bound to that attribute. Next, the renderer translates the defined structure of the user interface in the view model into a specific layout. Figure 2 shows a very simplified example of a view model that would describe the UI used in the previous example (Figure 1).


Figure 2: A view model describes the form-based UI. This view model is interpreted by a renderer.

Details on the use of EMF Forms, the available view model elements and detailed tutorials can be found on the EMF Forms website. In this blog post, I will share our experiences from projects and feedback from users to compare the approach of EMF Forms with the traditional manual way of programming interfaces. Is EMF Forms’ approach really effective? Of course, there are two ways to create interfaces manually. In the first, manual code can be implemented in a particular UI toolkit. In the other, one can use a UI editor such as WindowBuilder. In general, the second version is of course more efficient but also has the limitation of not allowing use of custom controls or the reuse of interface elements. In the end, UI editors generate source code, a core difference to how EMF Forms works.


The first interesting question when using EMF Forms, of course, is whether the approach is actually more efficient than manually developing interfaces, whether it actually supports the development of a forms-based user interface with less effort.

The view model approach has at first an initial disadvantage: developers must invest time to evaluate the approach, to integrate it and to learn how to use it. Whether this effort is justified, of course, depends on the size of the developed UI, on the complexity and on the number of developers working on the forms.

We were able to observe, however, that the manual programming of user interfaces in most projects is indeed perceived as unnecessarily time-consuming and even an annoying activity. The willingness to adopt a new approach is therefore very high for most developers. In projects in which EMF Forms is already being used for form-based user interfaces, the framework is used throughout the project, including very simple UIs such as for setting dialogs or wizards.

EMF Forms and the view model are explicitly focused on the development of form-based interfaces, therefore it offers a significantly lower complexity level than a traditional UI toolkit. The explicit goal of view modeling is to provide better concepts for describing form-based user interfaces. EMF Forms offers, for example, an item “Control”, which allows developers to specify that a particular attribute from the data model (for example, “First Name”) shall be displayed in the user interface. A control is translated by a renderer into a label or a widget (for example, a text field). If such a control were to be implemented manually without EMF Forms, a label and a text box would have to be created manually.

In EMF Forms, it is sufficient to specify that a particular attribute be displayed. This information is specified in the element “Control”. By placing the control within the structure of the view model, the layout is implicitly defined. The renderer is then responsible for the actual implementation of the UI. Therefore, significantly fewer inputs are required for the specification of an interface in EMF Forms, which is much easier than manual coding in both the initial creation as well as in changing forms. The following screenshot compares the two approaches and shows what would be necessary when creating a similar interface in SWT, with a view model on the left and source code on the right. Of course this is just an example and not statistical proof, but describing UIs in a view model is generally much more concise.

To be fair: on the side of the view model, the tree items shown contain additional information which is not shown in the screenshot. In the example, however, the only additional information specified is which attribute of the data model to display in a certain control. This information can be entered efficiently via a selection dialog. Other attributes of the view model, such as whether a label for a control should be shown, are optional. In the example, the default values are used. When manually developing UIs, those kinds of default options must always be implemented. Furthermore, in the sample code shown in the screenshot, the created widgets are not bound to the data model, which would mean additional effort. In the case of EMF Forms, the renderer takes over this task. Controls are not only bound to the data model, they also provide additional functionality such as input validation, which would otherwise have to be implemented manually.


Figure 3: Comparison between a user interface specified using the view model and the manual implementation in SWT

The initial spark

When considering efficiency, an important criterion is the required effort to create an initial interface that displays all attributes of an entity in a simple layout. Such first versions of forms are particularly helpful for newly defined data entities, for example, to check the data model to see if it is complete. For this use case, the exact layout is often not important yet. The final specification of a user interface for the entities is sometimes developed too early, while the data model is still subject to changes.

When manually developing UIs, UI editors or even UI mock-up tools can be helpful and allow faster results than manual programming. However, the created UIs are not functional; they are not bound to the data model. Using the model-based approach of EMF Forms, user interfaces can also be generated from scratch. In this case, the data model is read and the framework creates a view model on the fly that displays all attributes in a list. This approach is used by default for all entities from the data model for which no explicit view model has been defined yet. Figure 4 shows an example of the generation of a view model from the data model entity user. The generation of a default view model can also be customized, e.g., the default could be a two-column layout. The default view models provide a good starting point for adjusting a user interface step by step, which is the typical process in an agile project. In the following section, we describe the experience with EMF Forms when changes or additions are applied to an existing user interface.


Figure 4: EMF Forms allows the initial generation of a default view model from a data model


When it comes to the development costs of software, the initial cost is typically only half the truth. Equally interesting are the costs when changes occur. Particularly in agile development, changes are an accepted and integral part of the development process. Therefore, a crucial criterion for a framework like EMF Forms is how well the approach supports changing existing form-based user interfaces, either because the layout needs to be adjusted or because there are changes in the underlying data model. To apply changes to an existing user interface, whether in a view model or in manual code, it is first necessary to identify the correct location for the corresponding adjustment. Manually written layout code can be quite difficult to read; the structure of the code often differs significantly from the structure of the user interface. In Figure 3, an example of a two-column layout in SWT, the controls are created line-by-line even though the structure of the interface is column-based. The view model, as a specialized concept for form-based UIs, follows the logic of the user interface more closely and is therefore often easier to read and understand.

There are two different ways to make the actual change in an interface, which makes quite a difference for EMF Forms. The first case is a modification that can be done in the view model. An example of such a change would be moving an attribute inside a form, adding a structural element (for example, a new column) or adding new controls.

Adding a new attribute is the same as initially generating the view model; there is a simple wizard. It’s even easier to change the position of an existing element, be it a control, a group of controls or a whole element of the structure. It can be moved simply by drag and drop in the tree view of the view model.

In these two examples, the model-based approach fully shows its strengths – even highly structured manually written UI code is rarely as simple and understandable as a corresponding view model.

More interesting is the second case in which a change isn’t made directly in the view model but affects the renderer. An example of such a change would be increasing the margins of a rendered control. Many of these settings can be specified in a so-called template model in EMF Forms. Specifying these settings would then affect the layout of the entire application and thus result in a homogeneous look-and-feel.

If a general setting is not (yet) supported, the renderer shipped with EMF Forms can be extended or even replaced with a custom renderer. A typical example of such an adaptation would be adding special controls such as an input field for email addresses. For this purpose, manual UI programming is, of course, still necessary. However, each missing concept has to be implemented only once and can be combined with any existing concept. The additional expense thus refers only to proprietary, custom components.  With manual UI programming, these would need to be developed in any case.

There will always be parts of a form-based UI that are difficult to express in a view model without resulting in a similar complexity to manual UI programming. In these cases, EMF Forms pragmatically concedes and allows embedding of so-called custom areas in a form. This way, very specific parts of the UI can be programmed manually just as before. Of course, it is these types of UI element that should be avoided if possible. In practice, they can often be replaced in the medium term by adapting generic concepts.


This blog post compared the efficiency in programming form-based user interfaces of using a model-based approach such as EMF Forms on the one hand with manual UI programming on the other hand. It is not surprising that the first is seen to be better overall. EMF Forms was designed for exactly this purpose – form-based user interfaces – while UI toolkits have to support any type of user interface. Last but not least, I am of course very involved with EMF Forms, so this post cannot be considered an objective comparison. However, we are interested in feedback, even negative. Open source technologies are continually developed and improved only through feedback, especially regarding things that do not work as desired or use cases in which the framework has not been used before. Of course we are happy for positive feedback, too!  For more information on EMF Forms, please visit the EMF project website.

Professional Support

 Open-source software is free of licensing fees. Furthermore, it is easy to adapt and enhance with new features. Nevertheless, using open-source frameworks is not free. Like in closed-source software, no one is an expert on every framework. The total cost of ownership includes training, adoption, enhancement and maintenance of a framework. It might take significantly more time for somebody new to the project to extend a certain feature than for someone who is familiar with the framework. Furthermore, software has to be maintained. Even if this can be done literally by everybody for open-source software, a professional maintenance agreement with fixed response times is often mandatory in an industrial setting to ensure productivity. EclipseSource employs several EMF Forms committers and offers training and professional support. This includes:

  • Evaluation: Let us help to decide whether EMF Forms is the right choice for you. We will evaluate your requirements, assess whether and how they can be matched with EMF Forms and help you estimate the integration effort.

  • Prototyping: Let us provide you with a prototype demonstrating how EMF Forms will work in your domain.

  • Training: Let us teach you how to apply EMF Forms most efficiently in your project, including related technologies such as EMF or ECP.

  • Integration: Let us help you to integrate EMF Forms into your existing application as efficiently as possible.

  • Support: Let us assist your team when solving day-to-day issues such as technical problems or architecture decisions.

  • Sponsored Development and Maintenance: Let us adapt and enhance the framework based on your specific requirements.


2 Comments. Tagged with eclipse, emf, emfforms, rcp

April 14, 2014

RAP CSS Tooling

While I was working on a sample for “Eclipse4 on RAP”, I had to use RAP CSS to get a mobile-like behavior.

The CSS/theming support in RAP is quite powerful, but you have to have a web page open to see all the possible attributes applicable to a certain control, which is not how we are used to working in an IDE-dominated world.

We want content assist, error reporting while typing, … but naturally none of the CSS editors know about the RAP-specific selectors and properties.

Fortunately, e(fx)clipse has an extensible CSS editor which is backed by a generic format definition of what properties are available on which selector. All that has to be done to teach it additional CSS selectors and properties is to create a file like this.

If you now set this up, you should get an editing experience like the screencast below.

April 11, 2014

OSGi DevCon 2014 Schedule Announced

We are pleased to announce that the OSGi DevCon 2014 Schedule is now available. Register before April 19 to save $400 per ticket. Yes, that's just 8 days away! Don't forget there is a group discount available if there are 3 or more of you registering at the same time. Click here for the OSGi DevCon 2014 Schedule. As you will see, there is plenty to keep you busy for the 3-day conference (June

EclipseCon Boston 2013 Recap

EclipseCon, the Eclipse community's big international gathering, has just ended. Couldn't make the trip to Boston? Cédric Brun, Etienne Juliot, Alex Lagarde, Mikaël Barbero, Mélanie Bats and Gaël Blondelle let you relive the conference. Among the big themes this year: Eclipse RCP, Orion and the move to the web, the omnipresence of modeling and DSLs, Git, and ALM (continuous build and deployment).
April 10, 2014

CfP: Workshop on Methodical Development of Modeling Tools

How time flies... only recently I posted about a workshop (held at EDOC 2013), and today I can announce the 2014 version, held at EDOC 2014. It's the

2nd International 
Workshop on Methodical Development of Modeling Tools (ModTools14)
at the 17th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2014)

This year, EDOC takes place in Ulm, Germany. You will find the call for papers and other information at the workshop's homepage: http://www.wi-inf.uni-duisburg-essen.de/ModTools14. Submission deadline is April 1st 2014 (really, no kidding).

Update (10/4/2014): Submission Deadline extended: 2014-04-28 (final extension by the main conference)

Although I'm no longer working at the university, I still think that a workshop like this is quite important because it tries to bridge the gap between pure scientific research and real-world requirements. If you look at scientific conferences, many researchers present tools in order to evaluate their approach. From my own experience I know that you will often find dragons when you try to actually implement these tools. These dragons, once disturbed, may even threaten the whole theoretically nice approach. The workshop tries to give the brave knights fighting these dragons (and since you are reading an Eclipse-related blog, that's probably you!) a place to exchange thoughts, methods, and ideas. And, last but not least, it gives you an opportunity to publish about that kind of work (the workshop proceedings are published together with the conference proceedings at IEEE).

Most popular sessions and speakers at EclipseCon 2014

Thank you to everyone who attended EclipseCon, especially our speakers. The speakers spent a lot of time preparing for the conference and made the event a huge success. Therefore, I’d like to highlight some of the more popular sessions and speakers.

Most popular sessions (based on attendance)

  1. New Features in Java SE 8 - George Saab and Stuart Marks
  2. Making the Eclipse IDE fun again – continued - Martin Lippert, Fred Bricon and Andrew Clement
  3. API Design in Java 8 - John Arthorne
  4. What every Eclipse developer should know about Eclipse 4 (e4) - Jonas Helming and Eugen Neufeld
  5. A guided tour of Eclipse IoT - Benjamin Cabe
  6. Xtreme Eclipse 4: A tutorial on advanced usages of the Eclipse 4 platform - Sopot Cela, Lars Vogel and Paul Webster
  7. The New Profiling Tools in the Oracle JDK! - Klara Ward
  8. The Road to Lambda - Alex Buckley
  9. JDT embraces type annotations - Stephan Herrmann
  10. M2M, IoT, device management: one protocol to rule them all? - Julien Vermillard


Most popular speakers (based on feedback survey*)

  1. JDT embraces lambda expressions - Srikanth Sankaran, Noopur Gupta and Stephan Herrmann
  2. Turning Eclipse into an Arduino programming platform for kids - Melanie Bats
  3. Code Matters – Eclipse Hacker’s Git Guide - Stefan Lay, Christian Grail and Lars Vogel
  4. Writing JavaFX applications using Eclipse as IDE and runtime platform - Thomas Schindl
  5. Servlets are so ‘90s! - Holger Staudacher
  6. Building a full-product installer using P2 - Mark Bozeman and Mike Wrighton
  7. Connecting the Eclipse IDE to the Cloud-Based Era of Developer Tooling - Andrew Clement and Martin Lippert
  8. Advanced Use of Eclipse 4's Dependency Injection Framework - Brian de Alwis
  9. What every Eclipse developer should know about Eclipse 4 (e4) - Jonas Helming and Eugen Neufeld
  10. The Road to Lambda - Alex Buckley


* a session needed feedback from at least 15 attendees to make the list.

A detailed summary of all the sessions is available.

Introducing Eclipse 4 on RAP

Today I am happy to announce that RAP and Eclipse 4 can now be used together. Over the last couple of months we have been working with Tom Schindl, who will be reporting on RAP and Eclipse 4 as a guest author on this blog.

Integrating RAP and e4 has been an important goal for us for quite some time; in fact, we were even involved in the creation of e4. While the RAP team was not able to actively participate in the creation and implementation of Eclipse 4, the multi-user capability that Eclipse 3 lacks remained an important design goal for the platform team. The service orientation of Eclipse 4 and its ability to inject behaviour made it possible to integrate with RAP in a non-intrusive way.

The foundation for our RAP / e4 integration was laid by a proof of concept from Lars Vogel and Ralf Sternberg. They found a few issues, which have in the meantime been addressed by fixing them or working around them.

If you are among those who have been pestering us for getting this done, I don't want to hold you back any further and refer you to Tom's Getting started with Eclipse 4 on RAP.


1 Comment. Tagged with e4, rap

April 08, 2014

Eclipse Day Montreal 2014, call for presentation

The summer is fast approaching, and Eclipse is getting ready for another release.
To celebrate this release and to learn what is happening beyond the IDE, Ericsson and Rapicorp are teaming up to organize Eclipse Day Montreal on June 10th.

At this point, the agenda has not been set and we are looking for speakers, so hurry up and submit a talk before May 15th. The submission process is simple, just edit the wiki page.

Moving towards interoperability for IoT

There is a LOT of hype around the Internet of Things (IoT). Lots of vendors are selling proprietary solutions that have very little to do with the Internet of Things but everything to do with locking customers into a single solution. If we are going to have a truly open Internet of Things, the solutions will need to be interoperable.

The MQTT Interop Test Day was one of the first events to demonstrate interoperability between different proprietary and open IoT solutions. On March 17, 15 different organizations and products spent a day testing their MQTT solutions with each other. Participating were large established software companies like IBM, Software AG and Red Hat JBoss; smaller software companies like 2lemetry, Xively, ClearBlade, Litmus Automation and HiveMQ; hardware companies like Eurotech and Sierra Wireless; and open source projects like Eclipse Kura, Eclipse Paho, Node-RED and others. It was amazing to see MQTT clients and servers that had never been tested together simply work. It wasn’t true all the time, but it certainly showed that MQTT is a specification that will enable interoperability between solutions.

A complete report is now available. The feedback from the participants was very positive, so we are going to do it again in the fall of 2014, just in time for the OASIS TC to finalize the first open MQTT specification.

We are definitely moving towards an open IoT!

April 07, 2014

Installing Jenkins Server on Google Cloud Platform

The goal of this short post is to show how to spin up a Jenkins server on Google Cloud Platform.

Create a new instance

This is a simple one. Browse to your Google Cloud console https://console.developers.google.com and create a new instance. Make sure you:

  1. Select a small (or greater) instance type. 
  2. Select Debian as your OS. 
  3. If you have a network rule that enables TCP 8080, use it. Otherwise, you can add this later to your default rule.
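The console steps above can also be scripted with the gcloud CLI. This is only a sketch: the instance name, zone, machine type and image below are assumptions you should adjust for your project, and the snippet just prints the command so you can review it before running it.

```shell
# Compose the instance-creation command matching the console steps above.
NAME="jenkins-server"    # assumption: any instance name will do
ZONE="us-central1-a"     # assumption: pick your preferred zone
TYPE="n1-standard-1"     # a small (or greater) instance type
IMAGE="debian-7"         # Debian, as selected in step 2
CMD="gcloud compute instances create $NAME --zone $ZONE --machine-type $TYPE --image $IMAGE"
echo "$CMD"              # review first; run the printed command when happy
```

Running the printed command (instead of echoing it) will create the instance in your currently configured project.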

Install Jenkins

To use the Debian package repository of Jenkins, first add the key to your system:
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
Then add the following entry to your /etc/apt/sources.list:
deb http://pkg.jenkins-ci.org/debian binary/
Update your local package index, then finally install Jenkins:
sudo apt-get update
sudo apt-get install jenkins

Configure the default network rule

In the Google Cloud Console, go to your project and select Compute Engine | Networks | Firewall rules. Create a new rule and add tcp:80,8080 to your "Allowed Protocols or Ports".
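The same rule can be created from the command line; here is a hedged sketch using the gcloud CLI, where the rule name allow-jenkins is an assumption (any unused name works) and the snippet only prints the command for review:

```shell
# Assemble the firewall-rule command for the ports listed above.
RULE_NAME="allow-jenkins"    # assumption: choose any unused rule name
PORTS="tcp:80,tcp:8080"      # the "Allowed Protocols or Ports" from above
CMD="gcloud compute firewall-rules create $RULE_NAME --allow $PORTS"
echo "$CMD"                  # drop the echo (or run the printed command) to create the rule
```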

That's it! Browse to your instance's external IP and append :8080.


Efficient Code Coverage with Eclipse

There is a saying that a fool with a tool is still a fool. But how to use a tool most efficiently is not always obvious to me. Because of this I typically spend some time checking out new playgrounds1 that promise to increase my work speed without impairing quality. This way I came across EclEmma, a code coverage tool for the Eclipse IDE, which can be quite useful for achieving comprehensive test cases.


In general, ‘Test coverage is a useful tool for finding untested parts of a codebase’ because ‘Test Driven Development is a very useful, but certainly not sufficient, tool to help you get good tests’, as Martin Fowler puts it2. Given this, the usual way to analyse a codebase for untested parts is either to run an appropriate tool every now and then or to have a report generated automatically, e.g. by a nightly build.

However, the first approach seems a bit non-committal, and the second one involves the danger of focusing on high numbers3 instead of test quality. Not to mention the cost of the context switches involved in expanding coverage on blank spots in code you wrote a couple of days or weeks ago.

Hence Paul Johnson suggests ‘to use it as early as possible in the development process’ and ‘to run tests regularly with code coverage’4. But when exactly is as early as possible? On second thought it occurred to me that the very moment just before finishing the work on a certain unit under test should be ideal. Since at that point in time all the unit’s tests should be written and all its refactorings should be done, a quick coverage check might reveal an overlooked passage. And closing the gap at that time would come at a minimal expense, as no context switch would be involved.

Certainly the most important word in the last paragraph is quick, which means that this approach is only viable if the coverage data can be collected fast and the results are easy to check. Luckily EclEmma integrates seamlessly into Eclipse by providing launch configurations, appropriate shortcuts and editor highlighting to meet exactly these requirements, without burdening the developer with any code instrumentation handling.


In Eclipse there are several ways to execute a test case quickly5. And EclEmma makes it very easy to re-run the latest test launch, e.g. via the shortcut Ctrl+Shift+F11. As Test Driven Development demands that test cases run very fast, the related data collection also runs very fast. This means one can check the coverage of the unit under test in a kind of fly-by mode.

Once data collection has finished, the coverage statistics are shown in a result view. But when running only a single test case or a few of them, the overall numbers will be pretty bad. Much more interesting is the highlighting in the code editor:


The image shows the supposedly pleasant case in which full instruction and branch coverage has been reached. But it cannot be stressed enough that full coverage alone testifies nothing about the quality of the underlying tests!6 The only reasonable conclusion to draw is that there are obviously no uncovered spots, and, if the tests are written thoroughly and thoughtfully, development of the unit might be declared complete.

If we however get a result like the following picture, we are definitely not done:


As you can see, the tests do not cover several branches and miss a statement entirely, which means that there is still work to do. The obvious solution would be to add a few tests to close the gaps. But according to Brian Marick, such gaps may be an indication of a more fundamental problem in your test case, called faults of omission7. So it might be advisable to reconsider the test case completely.

Occasionally you may need other metrics than instruction and branch counters. In this case you can drill down in the report view to the class you are currently working on and select an appropriate metric as shown below:



While much more could be said about coverage in general and how to interpret the reports, I leave this to people more called upon, like those mentioned in the footnotes of this post. Summarizing, one can say that full coverage is a necessary but not sufficient criterion for good tests. But note that full coverage is not always achievable or might be unreasonably expensive to achieve. So be careful not to overdo things, or, to quote Martin Fowler again: ‘I would be suspicious of anything like 100% – it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing’2.

Working with the approach described in this post, the numbers usually end up in the lower 90s of project-wide coverage8, given that your co-workers follow the same pattern and principles, which mine usually do – at least after a while… :D

  1. With regard to software development, such playgrounds may be methodologies, development techniques, frameworks, libraries and of course – the usage of tools
  2. TestCoverage, Martin Fowler, 4/17/2012
  3. See Dashboards promote ignorance, Sriram Narayan, 4/11/2011
  4. Testing and Code Coverage, Paul Johnson 2002
  5. See also Working Efficiently with JUnit in Eclipse
  6. To make this point clear, simply comment out every assert and verification in a test case that produces full coverage as shown above. Doing so should usually not change the coverage report at all, although the test case is now pretty useless
  7. How to Misuse Code Coverage by Brian Marick
  8. Keep in mind that the numbers depend on the metric that was selected. Path coverage numbers are usually smaller than those of branch coverage, and those of branch coverage can be smaller than those of statement coverage
April 06, 2014

Workflow to create a new LeanPub book from blogger posts

After creating a number of Leanpub based books, I've come up with a workflow that works quite well for me.

This workflow is based on Managing LeanPub book's Markdown content using Git and GitHub (synced back to LeanPub via DropBox), which is supported by the following technologies:

  • Markdown: for content creation 
  • Git: for version control and distributed authoring 
  • GitHub: for managing issues and community contributions (via Pull Requests) 
  • DropBox: to sync data with LeanPub (from a local git repo) 
  • LeanPub: used to create eBook versions (pdf, mobi, epub), sell the book via a dedicated web page and provide a management website for authors to manage the entire publishing and sales process.
Here are the step-by-step instructions on how to create a book from a number of my blogger posts (in this case my FluentSharp related blogs).

Step 1) create the LeanPub book

Log in to LeanPub, open the Book's page from the main user's dashboard, and click on the Create Another Leanpub Book Now! button:

In the New Book page, enter the title of your book and the book's URL (I usually use _ in the title so that it looks better in the browser's location bar)

After clicking on the Create Book button, you will have a brand new Leanpub book (pre-populated with some sample content)

Step 2) Sync with DropBox

Once the book is created my first step is to enable DropBox support.

This is done in the Writing Settings page: 

... where we need to click on the Sync with Dropbox checkbox and the Switch to File Mode button

... after a couple of seconds Leanpub will send a Dropbox invitation to the email provided:

... which looks like this in DropBox

Once the invitation is accepted:

... a Dropbox folder will exist with the contents of the current book

Step 3) Change book formatting

Next I open the Book Formatting page:

... and click on the Custom button:

... and make the following changes:

  • enable Page breaks after every section
  • disable Show links as footnotes in PDFs
  • set Font Size to 11pt
  • set Page Size to US Letter

  • use a sans-serif font (see why at Serif vs. Sans: the final battle)
  • use Parts, Chapters and Sections in the Table of contents (with dots)
  • use Number Parts, chapters and sections (for the Section Numbering)

Finally, click on Update Theme to save changes

Step 4) Import posts from blogger

This is the only section that has a bit of scripting required.

That is due to the fact that I don't want to import all of my blogger posts (more than 1000 by now); I only want the posts with a specific label, in this case the FluentSharp posts:

To export just these posts I wrote a Groovy script (see this gist) that I execute using the Eclipse Groovy REPL plugin:

The only value I change is the categoryFilter variable (which in this case is set to FluentSharp). Also note below the value of exportXmlFile, which is where we will find the Leanpub-ready export file (the blog-04-06-2014.xml file was created via the blogger export function, and contains ALL posts)

Another nice feature of this script is how it shows a preview (in an on-the-fly created Eclipse view) of the exported posts (including the title, publish date, categories/labels and content)

Here is the Posts_with_Only_FluentSharp.xml file created by the script

Back in Leanpub, I go to the Import action page:

... scroll down to the Import from a Blogger Export section, use Choose File to select the Posts_with_Only_FluentSharp.xml file:

... and finally click on the Start Import button:

This will take a while, but when the Leanpub import workflow reaches the 7/28 Publish changes to Dropbox... step:

... we can actually see the new files pop into Dropbox's manuscript folder:

... and images folder:

Step 5) Sync with GitHub

The next step is to create a GitHub repository for this book.

In this case I used the https://github.com/DinisCruz/Book_Practical_FluentSharp repo:

... which was created with a default README.md and LICENSE file.

NOTE: The 'git commands' shown below need to be executed on a computer/vm that is synced with Dropbox, and has a local copy of the (Leanpub created) Practical_FluentSharp folder.

a) open a terminal in the local Dropbox folder for the Practical_FluentSharp book:

b) create a local repository, pull the GitHub repo and commit the local files:
  • $ git init 
  • $ git remote add origin git@github.com:DinisCruz/Book_Practical_FluentSharp.git
  • $ git pull origin master
  • $ cp ../Practical_AngularJS/.gitignore .
  • $ git add -A
  • $ git commit -m 'First commit of converted files'  

For reference, here are the contents of the .gitignore file (required so that Leanpub's preview and published files are not committed to the git repository):


c) push the local commit to GitHub:
  • $ git status
  • $ git push origin master

The GitHub repo now looks like this:

Step 6) set a GitHub Webhook to trigger the Leanpub sample creation

A really useful workflow is to trigger the Leanpub sample creation every time a commit is pushed to GitHub.

As Leanpub's preview page nicely explains

We can create a Subset.txt file (in this case edited directly via GitHub) containing a couple of chapters (i.e. filenames), which will be used to create a partial preview of the book.

The way it works is by using the Leanpub API:

... and the GitHub's Webhooks capabilities (available on the settings page of the target repository)

The Webhook payload URL for this book (which triggers Leanpub's book preview generation) looks like this: https://leanpub.com/Practical_FluentSharp/preview/subset.json?api_key={APIKEY} and is used here:
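For debugging, the same preview generation can also be triggered by hand from a terminal. A minimal sketch, assuming only the URL shape shown above; the API key value is a placeholder you must replace with your own:

```shell
# Build the preview-trigger URL from its parts and print it for inspection.
BOOK="Practical_FluentSharp"
API_KEY="YOUR_API_KEY"   # placeholder for your real Leanpub API key
URL="https://leanpub.com/${BOOK}/preview/subset.json?api_key=${API_KEY}"
echo "$URL"
# curl -d "" "$URL"      # uncomment to fire the actual request (a POST)
```

This is exactly what the GitHub webhook does on each push, so it is handy for checking the URL before wiring up the webhook.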

GitHub provides a really nice interface to see these Webhooks in action. For example here is the one we just created:

To see the Webhook in action (i.e. being triggered on push), I executed the following Git commands:
  • git pull origin master

...followed by:
  • $ git status
  • $ git add -A
  • $ git commit -m 'updating some images'
  • $ git push origin master

After the push (with new commit), refreshing the GitHub's Webhooks page shows that there was a new delivery:

... and a quick look at Leanpub's Preview page shows that a Preview workflow process is in place:

After about a minute, the (local and remote Dropbox) preview folder of this book is updated with the new pdf:

Containing the chapters selected in the Subset.txt file
