The IoT Gateway Dream Team: Eclipse Kura & Apache Camel

by Eclipse Foundation at May 06, 2015 03:44 PM

Slides from the presentation are available at: Eclipse Kura is the well-recognized field gateway for Internet of Things applications. Apache Camel is a message routing engine and a library containing a gazillion endpoint connectors. Are you interested in finding out how these two can be joined together to create a rocking IoT solution? Then tune in to this talk by +Henryk Konsek, Engineer at Red Hat.

by Eclipse Foundation at May 06, 2015 03:44 PM

Orion’s new look, and split editor

by Anton McConville at May 06, 2015 02:08 PM

Designing a web based development tool has turned out to be a truly interesting and challenging job. Developers, it turns out, are strongly opinionated about the capabilities and behaviour of their tools, and want familiar controls at their fingertips.

It’s fascinating for me to think back through the evolution of Orion during the three years that I’ve worked on the front end. I’m really pleased with this new version of Orion’s interface that we’ve just completed [ and will commit later this week ]. It comes close to what I’d hoped it would be when I first started making commits.

New Orion screenshot

A simpler and more modern look for Orion.

The new editor can be split so that developers can keep two files open for editing at the same time. This is especially useful, for instance, if you’re working with CSS and HTML or JavaScript, to ease hopping between files that rely on shared definitions.

New Orion editor - split view

New editor showing split view

Introducing split views forced us to relate the file name to the area that is split, and since the filename used to be part of the breadcrumb at the top of the page, we elected to show only the filename, but offer the whole path in a hover.

Relocating the breadcrumb freed more space from the old horizontal area that was devoted to it, so we were able to merge the remaining few controls ( for the login, and for the operations feedback ) into one bar.

The result is a much cleaner look. The navigation and operational controls sit on the perimeter of the creative coding area, in an L-Shape that defines the page.

You can compare it to the previous look here:

Previous version of Orion editor

Previous version of Orion editor

We’ve carried that L-Shape as a pattern into the other pages of Orion. You can see it here for example in the Git page.

New look Orion Git UI

New look Orion Git UI

Since Orion hadn’t had a facelift in a little while, we took this opportunity to shade it with a darker, more contemporary look, drawing on a consistent deep blue anchor colour, and relating it to tones from that blue swatch book. We wanted a darker look, but perhaps a departure from grey, to offer a little more interest.

As part of the design community at IBM, I’m a student of the IBM design language, and so leaned on the swatch books from the open resources offered. I also considered experimenting with some of Google’s Material Design, which I might also try if I can make some time for it.

Orion is open source, so you can colour it in however you like :)

I added a new ( switchable ) dark editor theme called ‘Ceol’ [ the Irish word for music, pronounced like a quick ‘ky-ole’ ].

What we’ve learned is that developers still like to see the familiar desktop approach for developing. With this latest version of Orion, we think we offer that, but in a contemporary and well thought out web context, drawing on modern navigation approaches and web layouts.

When I use Orion these days, I forget that it runs in a browser. It comes into its own when combined with Git deployment – for instance when editing GitHub pages. I can maintain my website entirely from a web browser.

It really starts to open up the power of creativity in the cloud, and now with a new more minimal look.

by Anton McConville at May 06, 2015 02:08 PM

EclipseCon France 2015 - Program and Keynote Announced

May 06, 2015 01:46 PM

We're pleased to announce the program and keynote for EclipseCon France 2015, planned for June 24-25 in Toulouse.

May 06, 2015 01:46 PM

Initializing Git Repositories with JGit

by Rüdiger Herrmann at May 06, 2015 07:00 AM

Written by Rüdiger Herrmann

I was recently asked how to initialize a new Git repository with JGit, i.e. achieve what git init /path/to/repo does.

While creating a repository with JGit isn’t particularly difficult, there are a few details that might be worth noting. And because there are few online resources on the topic, some of them misleading, this article summarizes how to use the JGit API to initialize a new Git repository.

Local Repositories

To initialize a repository with JGit, the InitCommand can be used. The command factory Git has a static method init() to create such a command.

Git git = Git.init().setDirectory( directory ).call();

When the above code is executed, a new Git repository is created. The location given in setDirectory() becomes the work directory that will contain the checked-out files. If the directory does not exist, the command will create it along the way.

JGit Init: Empty Git Repository

A sub-directory named .git is created in the top level of the work directory. It contains the repository’s history, configuration settings, pointers to branches, the index (aka staging area) and so on.

The screenshot to the left shows the internal structure of the .git directory. The refs directory will hold information about branches and tags. The actual contents will be stored in the objects directory. In the logs directory, changes to branches are recorded. For example, a commit or a checkout will create a log entry that can later be viewed with the git reflog command. The Explore Git Internals with the JGit API post I wrote some time ago goes more into the details of how Git manages the content.

To ensure that the command actually succeeded, the StatusCommand can be used to query the status of the repository, much like git status would do. Unfortunately, JGit’s status implementation differs slightly from native Git in that it doesn’t complain if there is no repository. This makes it necessary to check for an existing HEAD ref, which indicates that there actually is a repository.

assertNotNull( git.getRepository().getRef( Constants.HEAD ) );
assertTrue( git.status().call().isClean() );

isClean() returns true for the newly initialized repository as there are no changed or untracked files in the work directory.

The assertion uses the Git instance that was returned by the InitCommand’s call() method. This class serves as a factory and can be used to create Git commands (e.g. add, commit, checkout) to be executed on that repository.

Newly initialized repositories have a peculiarity in that no branch has yet been created. Though there is a HEAD (the pointer to the current branch) that references a branch (named master by default), this very branch does not exist yet.

Usually this is nothing to worry about, as the missing branch will be created with the first commit. However, some operations like git branch create a branch and will fail with the slightly misleading error message ‘Ref HEAD cannot be resolved’ before an initial commit is made. This is also the case with native Git and is not specific to JGit.

To determine if a repository contains a commit yet, examine the HEAD ref like so:

Ref headRef = git.getRepository().getRef( Constants.HEAD );
if( headRef == null || headRef.getObjectId() == null ) {
  // no commit yet
}

Turn a Directory into a Repository

As seen before, the InitCommand creates missing directories if necessary. But the command can also be used on an existing directory, thereby turning it into a Git repository.

The snippet below initializes a repository in an existing directory that contains a file, leaving its content intact.

File file = new File( "/path/to/existing/directory/readme.txt" );
Git git = Git.init().setDirectory( file.getParentFile() ).call();
assertTrue( git.status().call().getUntracked().contains( file.getName() ) );

The status command reports the file as untracked. After adding the file to the index it can be committed to the just created repository.
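Staging and committing could then look like the following minimal sketch (the commit message is illustrative, and the git and file variables are the ones from the snippet above):

```java
// Stage the file that the status command reported as untracked
git.add().addFilepattern( file.getName() ).call();
// Commit it to the just created repository
git.commit().setMessage( "Add readme.txt" ).call();
// Afterwards the work directory is clean again
assertTrue( git.status().call().isClean() );
```

Note that this first commit also creates the hitherto missing master branch, so HEAD can be resolved from now on.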

If the designated directory already holds a Git repository, there is no need to worry. In this case, JGit will do nothing and return a Git instance that points to the already existing repository.

Separating Work and .git Directory

While by default a repository has a work directory and its .git directory is located directly underneath it, this is not required. A repository can have no work directory at all (discussed later) or its work directory at an entirely different location than the .git directory.

The init command below creates such a repository:

Git git = Git.init().setDirectory( workDir ).setGitDir( gitDir ).call();

The resulting repository will be located at gitDir (this is where the history, branches, tags, etc. are stored) but will have its work directory at workDir.

The work directory configuration can also be changed for an existing repository, either by manually editing the configuration file .git/config or through the JGit API.

StoredConfig config = git.getRepository().getConfig();
config.setString( "core", null, "worktree", workDir.getCanonicalPath() );
config.save(); // persist the changed configuration

Needless to say that the work directory content needs to be moved manually from the old to the new location.

Bare Repositories

The repository created above is meant to be worked with locally, and is also called a non-bare repository. Another type of Git repository is the bare repository.

These are intended to be used as central repositories that are shared by other users. No direct commits can be made to bare repositories. A bare repository receives commits in that they are pushed from a user’s local repository. Team members fetch commits made by others from this repository.

The snippet below will create such a bare repository.

Git git = Git.init().setDirectory( directory ).setBare( true ).call();

A bare repository does not have a working directory. Instead, the directory structure that can be found below the .git directory in a non-bare repository will be created directly in the given directory.

Since the status command requires a work directory, it cannot be used to verify that the above code succeeded. Instead, the Repository instance should be bare and point to the desired directory, as verified below.

assertTrue( git.getRepository().isBare() );
assertEquals( directory, git.getRepository().getDirectory() );

Note that querying the Repository for its work directory (i.e. calling getWorkTree()) will throw a NoWorkTreeException for bare repositories.
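To see the bare repository in action, it can be cloned and receive pushes like any other remote. The sketch below assumes the bare repository from above and an illustrative localDir to clone into:

```java
// Clone the bare repository into a (hypothetical) local work directory
Git clone = Git.cloneRepository()
    .setURI( directory.toURI().toString() )
    .setDirectory( localDir )
    .call();
// ... create files in localDir, then add and commit them locally ...
// Publish the local commits to the bare repository
clone.push().call();
```

Other team members can now fetch those commits from the bare repository in the same way.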

Alternative API: Repository.create()

An alternative way to initialize a repository is to use the create method of Repository.

Repository repository = new FileRepositoryBuilder().setGitDir( directory ).build();
repository.create();

assertNotNull( repository.getRef( Constants.HEAD ) );
assertTrue( Git.wrap( repository ).status().call().isClean() );

With the aid of FileRepositoryBuilder a Repository instance is created that represents the not yet existing repository. Calling create() materializes the repository. The result is the same as the InitCommand would yield.

In order to create bare repositories, there is also an overloaded create method: create( boolean bare ).
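Used with the builder shown above, the overloaded method looks like this (a minimal sketch):

```java
Repository repository = new FileRepositoryBuilder().setGitDir( directory ).build();
repository.create( true ); // true requests a bare repository
assertTrue( repository.isBare() );
```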


I hope this article helps to clarify how to create a new repository with JGit. The code that is used here is a collection of learning tests and can be found here in full length:

It illustrates the valid and invalid uses of the InitCommand API and may be used as a starting point for further experiments with JGit.

If you have difficulties or questions, feel free to leave a comment or ask the friendly and helpful JGit community for assistance.

The post Initializing Git Repositories with JGit appeared first on Code Affine.

by Rüdiger Herrmann at May 06, 2015 07:00 AM

Red Hat becomes Strategic Eclipse Developer

by maxandersen at May 05, 2015 02:00 PM

I’m happy to report that a few days ago the final paperwork around Red Hat upgrading their membership to Strategic Developer at Eclipse was completed and now announced at

What does this mean ?

Strategic Members are organizations that view Eclipse as a strategic platform and are investing developer and other resources to further develop Eclipse Foundation technologies. Strategic Developers commit to assigning at least eight developers full time to develop Eclipse technology, to lead Eclipse projects and to contribute annual dues of up to $250,000.

At Red Hat we already have more than eight developers doing development on Eclipse technology, both in and around the base Eclipse distribution.

  • m2e-wtp

  • JavaScript Development Tools (JSDT)

  • vert.x

  • Linux Tools

  • Thym

  • BPMN2

  • BPEL

  • SWTBot

..and contributing to many more.

This work is used in our JBoss Tools project and two products: JBoss Developer Studio (Middleware) and Red Hat Developer Toolset (Linux Platform).

By upgrading to Strategic Developer we are confirming our continued support and commitment of resources to Eclipse, but also increasing our funding to $250,000 annually.

Red Hat has an interest in seeing the Eclipse Foundation continue to thrive, and in seeing its flagship, the Eclipse IDE, and other open source development tools and runtimes continue to evolve and improve.

Stepping down and up from the board

This announcement also means I’ll have to step down from the board as solutions member representative, but I’ll be joining again as Red Hat’s representative for their newly acquired Strategic Developer position.

I’m happy to have served, and I’m looking forward to seeing which other solutions member will join the board and bring the Eclipse Foundation forward.

What next ?

We’ve been contributing, and continue to help make Eclipse Mars a great release, together with the rest of the community. We are especially working on fixing GTK/SWT on Linux, adding Docker support and improving the JavaScript Development Tools. On the latter I gave a presentation at EclipseCon which provides some ideas of what we are working on.

By becoming a Strategic Developer we also plan to be more involved in how the Eclipse IP and development processes work and evolve. These need to become more efficient so that fast-moving projects can feel at home at Eclipse.

On top of that, the Eclipse Foundation has a lot of other areas going on which Red Hat is keeping an eye on, especially web IDEs and the Internet of Things.

If you are interested in hearing more about this or have a suggestion please feel free to contact me by mail or leave a comment below!

Let’s have fun!
Max Rydahl Andersen

by maxandersen at May 05, 2015 02:00 PM

Eclipse Foundation Announces Red Hat as a Strategic Member

May 05, 2015 12:30 PM

Company reaffirms its commitment to Eclipse open source tools and the new Eclipse Internet of Things open source community.

May 05, 2015 12:30 PM

JBoss Tools Alpha2 for Eclipse Mars

by maxandersen at May 05, 2015 08:46 AM

Alpha 2 build for Eclipse Mars M6 is now available at Alpha2 download.


This version of JBoss Tools targets Eclipse Mars 4.5 (M6).

We recommend using the Eclipse 4.5 JEE Bundle, since you then get most of the dependencies preinstalled.

Once you have installed Eclipse, you use our update site directly:

Note: Marketplace entry and Integration Stack tooling will become available from JBoss Central at a later date.

What is new ?

Easy Import/Open of projects

We have included our incubation project at Eclipse that makes importing and opening of projects much easier than default Eclipse. No longer do you need to know or guess which of the many import wizards is the right one. With this you just use File > Import Project from Folder, point it to a folder, and it will auto-detect the type of project, then import and configure it as best it can.

easyimport filemenu

Once started it will recursively scan the selected folder and report which directories it found.

easyimport wizard

We included this incubation feature to get early feedback - please do give it a try and let us know if it works great or if we detected some projects "badly".

OpenShift v3

Our OpenShift integration now allows you to connect to OpenShift 3 in addition to the existing OpenShift 2 support.

connection wizard server type

Once connected you can browse the OpenShift/Kubernetes data for your application/projects.

view explorer v3

Note: OpenShift v3 is not available to try at this point in time. If you want to try it, you can follow the instructions at the OpenShift Origin sample app.

Java EE 7 Batch wizards, content assist, validation and refactoring

In Alpha1 we introduced support for the Java EE 7 Batch specification, and we are now extending this support with a wizard, content assist, linked navigation, searching and refactoring of Batch elements.


WildFly 9

We’ve added native WildFly 9 runtime detection and server support. You no longer need to use the WildFly 8 adapter, and detection now works correctly.

Content assist for AngularJS Expressions

When editing AngularJS single-page HTML (not templates), the HTML editor now communicates with the preview to provide content assist for AngularJS expressions.


Custom HTML Tag validation

There is now a quickfix for marking custom HTML5 elements to be ignored in validation.


Note: this is not specific to JBoss Tools, it is built into Eclipse M6

Next steps

With Alpha2 out we are heading towards a Beta1.

In Beta1 we are targeting including:

  1. OpenShift v3 support for templates

  2. Docker Tooling

  3. Better JavaScript content assist

  4. Making project imports in Eclipse even simpler

  5. And more…​

As always, ask/suggest away and we’ll keep you posted!

Have fun!

Max Rydahl Andersen

by maxandersen at May 05, 2015 08:46 AM

Invitation to Eclipse Democamp Mars, June 23rd 2015

by Maximilian Koegel and Jonas Helming at May 04, 2015 10:12 AM

We cordially invite you to the next Eclipse Democamp München, taking place on June 23rd 2015.

If you want to attend this year’s Democamp 2015 please register soon! We can offer only 110 seats and usually receive around 200 registrations. You can register here. There you’ll also find detailed information on the location, agenda, time and more. Registration is mandatory, and unfortunately we cannot accept attendees the day of the event.

We are looking forward to great demos and seeing you in June!

A big thanks to our sponsors: BSI Business Systems Integration AG,  EclipseSource München GmbH, Eclipse Foundation and  Capgemini Deutschland GmbH


Leave a Comment. Tagged with democamp, eclipse

by Maximilian Koegel and Jonas Helming at May 04, 2015 10:12 AM

Towards Modeling of Distributed Graph Algorithms

by Christian Krause at May 02, 2015 08:50 AM

The most prominent approach for distributed processing of very large graphs today is probably the Bulk Synchronous Parallel (BSP) model. BSP is a bridging model for implementing graph algorithms in such a way that they can be massively parallelized and distributed. One application can be found in the Henshin graph transformation tool, where graph transformation systems can be modeled, and code generated for the BSP framework Apache Giraph.

Although the BSP model is widely accepted and used today, there is no standard implementation framework for BSP algorithms. Frameworks that do provide BSP include Google's Pregel, Apache Giraph and GraphX in Apache Spark. For application developers it is not easy to find out which platform is best suited to their problem. An implementation of a graph algorithm in, say, Apache Giraph cannot be reused in other frameworks, even though the underlying concepts of BSP are the same. This is unfortunate, particularly because the overhead of developing, deploying and testing these algorithms is rather high due to the complexity of the distributed frameworks.

This problem can be solved by introducing a modeling layer for BSP algorithms. Instead of directly implementing graph algorithms in Pregel, Giraph or GraphX, the idea is to use a modeling language that supports the concepts of BSP. One possible approach is to use UML 2 state machines for the modeling. The figure below shows a state machine for a BSP-model of the shortest path algorithm.

Using such a model, one could generate code for different platforms, e.g. Pregel, Giraph or GraphX. The code generators need to be implemented only once and are straightforward to build since the BSP concepts are more or less directly used. Using different code generators, algorithm engineers can automatically derive implementations for different platforms and benchmark them against each other without implementing a single line of code.
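The superstep structure such generators would target can be sketched in plain Java. The following is an illustrative, framework-free mock-up of the BSP shortest path computation (no Pregel/Giraph/GraphX API is used, and all names are made up):

```java
import java.util.*;

// Minimal, framework-independent sketch of a BSP computation:
// single-source shortest paths, where each superstep delivers the
// messages sent in the previous one.
public class BspShortestPath {

    // graph: vertex -> (neighbor -> edge weight)
    public static Map<Integer, Integer> run(Map<Integer, Map<Integer, Integer>> graph, int source) {
        Map<Integer, Integer> dist = new HashMap<>();
        for (Integer v : graph.keySet()) dist.put(v, Integer.MAX_VALUE);

        // superstep 0: the source vertex "receives" distance 0
        Map<Integer, List<Integer>> inbox = new HashMap<>();
        inbox.computeIfAbsent(source, k -> new ArrayList<>()).add(0);

        while (!inbox.isEmpty()) {
            Map<Integer, List<Integer>> outbox = new HashMap<>();
            for (Map.Entry<Integer, List<Integer>> e : inbox.entrySet()) {
                int v = e.getKey();
                int min = Collections.min(e.getValue());
                if (min < dist.get(v)) { // improvement: update state, notify neighbors
                    dist.put(v, min);
                    for (Map.Entry<Integer, Integer> edge : graph.get(v).entrySet()) {
                        outbox.computeIfAbsent(edge.getKey(), k -> new ArrayList<>())
                              .add(min + edge.getValue());
                    }
                }
            }
            inbox = outbox; // barrier: the next superstep sees this round's messages
        }
        return dist;
    }
}
```

Each iteration of the while loop corresponds to one superstep: vertices process the messages delivered at the barrier, update their state, and emit messages for the next round; the computation halts when no messages remain, which is exactly the kind of loop a state-machine-based code generator would emit.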

In Henshin, the code generation for Apache Giraph is quite complex because the concepts of graph transformations are rather different than BSP. One opportunity to simplify it would be to use BSP models as intermediate target language. Specifically, model transformations can be employed to translate a Henshin model into a BSP model. From that point on, platform-specific code generators could be used to generate the final implementation.

Using a modeling approach would enable the reuse of the many graph algorithms already implemented in the existing BSP frameworks. Maybe UML state machines are not the best approach for the modeling -- for instance, one could also come up with a (textual) DSL for BSP. The important point is that an abstraction from the actual implementation platforms is made.

by Christian Krause at May 02, 2015 08:50 AM

Mozilla pushes - April 2015

by Kim Moir at May 01, 2015 04:44 PM

Here's April 2015's  monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.  

The number of pushes decreased from those recorded in the previous month, with a total of 8894. This is because gaia-try is now managed by taskcluster, and thus these jobs no longer appear in the buildbot scheduling databases that this report tracks.


  • 8894 pushes
  • 296 pushes/day (average)
  • Highest number of pushes/day: 528 pushes on Apr 1, 2015
  • 17.87 pushes/hour (highest average)

General Remarks

  • Try has around 58% of all the pushes now that we no longer track gaia-try
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 28% of all the pushes.


  • August 2014 was the month with most pushes (13090  pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes  

I've changed the graphs to only track 2015 data.  Last month they were tracking 2014 data as well but it looked crowded so I updated them.  Here's a graph showing the number of pushes over the last few years for comparison.

by Kim Moir at May 01, 2015 04:44 PM

Integrating Rhapsody in your AUTOSAR toolchain

by Andreas Graf at May 01, 2015 10:31 AM

UML tools such as Enterprise Architect or Rhapsody (and others) are well established in the software development process. Sometimes the modeling guidelines follow a custom modelling approach, e.g. with specific profiles. So when you are modelling AUTOSAR systems, at some point you are faced with the problem of transforming your model to AUTOSAR.

For customer projects, we have analyzed/implemented different strategies.

Artop as an integration tool

First of all, if you are transforming to AUTOSAR, the recommendation is to transform to an Artop model and let Artop do all the serialization. Directly creating the AUTOSAR-XML (.arxml) is cumbersome, error-prone and generally “not-fun”.

Getting data out: Files or API

To access the data in Rhapsody, you could either read the stored files or access the data through the API of Rhapsody. This post describes aspects of the second approach.

Scenario 1: Accessing directly without intermediate storage

In this scenario, the transformation uses the “live” data from a running Rhapsody as data source. Rhapsody provides a Java based API (basically a wrapper to Windows COM-API). So it is very easy to write a transformation from “Rhapsody-Java” to “Artop-Java”. A recommended technology would be the open source Xtend language, since it provides a lot of useful features for that use case (see a description in this blog post).

Scenario 2: Storing the data from Rhapsody locally, transforming from that local representation

In this scenario, the data from Rhapsody is being extracted via the Java-API and stored locally. Further transformation steps can work on that stored copy. A feasible approach is to store the copied data in EMF. With reflection and other approaches, you can create the required .ecore-definitions from the Rhapsody provided Java classes. After that, you can also use transformation technologies that require an .ecore-definition as a basis for the transformation (but you can still use Xtend). The stored data will be very close to the Rhapsody representation of UML.

Scenario 3: Storing the data in “Eclipse UML” ecore, transforming from that local representation

In this scenario, the data is stored in the format of the Eclipse provided UML .ecore files, which represent a UML meta-model that is true to the standard. That means that your outgoing transformation would be more conforming to the standard UML meta-model and you could use other integrations that use that meta-model. However, you would have to map to that UML meta-model first.

There are several technical approaches to that. You can even do the conversion “on-the-fly”, implementing a variant of Scenario 1 with on-the-fly conversion.

Technology as Open Source

The base technologies for the scenarios are available as open source / community source:

  • Eclipse EMF
  • Eclipse Xtend, Qvto (or other transformation languages)
  • Artop (available to AUTOSAR members)


by Andreas Graf at May 01, 2015 10:31 AM

Vaadin & OSGi: managing the classloader

by Florian Pirchner at April 30, 2015 11:27 AM

If you are using Vaadin with OSGi, you need to be aware of an issue related to classloading.

Vaadin changed the classloader that is used to load the UI-class. Before this change, the UI-class was loaded by the classloader of the servlet class. Now the “context class loader” is used to load it.

So if you define your UI-class in the VaadinServlet by annotation, you will get a ClassNotFoundException.

It is pretty easy to fix that issue. The solution is to define the proper classloader in ServletService. Afterwards the UI-class can be loaded properly again.

@VaadinServletConfiguration(ui = ECViewSampleUI.class, productionMode = false)
public class SimpleVaadinServlet extends VaadinServlet {
	@Override
	protected VaadinServletService createServletService(
			DeploymentConfiguration deploymentConfiguration)
			throws ServiceException {
		// see
		ServletService service = new ServletService(this,
				deploymentConfiguration);
		return service;
	}
}

This snippet shows an implementation of VaadinServlet.

It is used to

  1. define the UI-class by annotation
  2. create a custom VaadinServletService to define the proper classloader.


public class ServletService extends VaadinServletService {

	public ServletService(VaadinServlet servlet,
			DeploymentConfiguration deploymentConfiguration)
			throws ServiceException {
		super(servlet, deploymentConfiguration);
	}

	@Override
	public ClassLoader getClassLoader() {
		// return the bundle classloader
		// see
		return ServletService.class.getClassLoader();
	}
}
The overridden getClassLoader() method in ServletService ensures that a proper class loader – the classloader of the bundle containing the custom ServletService – is used to load the UI-class.

Things should work properly afterwards …


Florian Pirchner

by Florian Pirchner at April 30, 2015 11:27 AM

“I am your container”, Darth Sirius

by Melanie Bats at April 30, 2015 10:26 AM

Continuing the Sirius blog posts series, today we will see a small tip: how to create artificial containers in your diagram?

One of the main advantages of Sirius is that the graphical representations are independent from the metamodel’s structure. This means that you can choose not to respect the containment hierarchy of your model when you display it graphically. This is possible thanks to Sirius being based on queries.

In the following example, we define a metamodel of a family:

To begin with, we define a Flat diagram, which displays all the members of the family at the same level:

In the Person mapping, we use the Semantic Candidates Expression to specify which semantic elements must be represented. These expressions returning model elements are called queries. To write these queries, there are different languages provided by default in Sirius: specialized interpreters (var, feature, service), Acceleo, raw OCL or Java. Here, we use the feature interpreter to get, for a family, all the persons referenced by the members reference. You can easily identify interpreted expressions by their yellow background in the Properties tab of an element.

We create a first diagram which represents the flattened Skywalker family:

The next step is to add a level to model the Family as a container. We create a new Family diagram which contains a Family container mapping and a Person mapping as sub nodes:

The Family diagram is created and all the members are represented inside the Skywalker container.

Here we represent graphically the containment reference members. But what to do if we want to create an artificial container which does not exist in the metamodel as a containment reference?

Let’s see! Now imagine that we want to add a level to represent The Force and if the person is related to the dark side or the light side. To do this we create a new ForceSide diagram:

We add a new container to represent the DarkSide of the force :

The dark side must be represented once for each family, so the semantic candidate expression returns var:self, which means the current family. As it should contain persons, it is defined as a FreeForm container.

We need to represent, in the Dark Side container, the persons that are from the dark side of the force. So we define a new sub node Person with the Semantic expression set to: [self.members->select(p|p.oclAsType(Person).dark)/]
This query returns all the members of a family and selects only the Person which has the attribute dark set to true.

Then we reuse the Person mapping to represent the person in its force side container.

Finally, we do the same for the light side of the force: we create a LightSide container and reuse the Person mapping to represent the members of the family in this new container who are influenced by the light side: [self.members->select(p|not p.oclAsType(Person).dark)/]
A new ForceSide diagram is created in the Skywalker family and we discover that only Darth Vader is from the dark side of the force and that his children are driven by the light side of the force.

Thanks to the queries in Sirius, it is easy to create artificial containers which are not related to containment references in the metamodel.

“May the Sirius Force be with you” ;)

The sample code from this example is available on github:


by Melanie Bats at April 30, 2015 10:26 AM

Less testing, same great Firefox taste!

by Kim Moir at April 28, 2015 08:13 PM

Running a large continuous integration farm forces you to deal with many dynamic inputs coupled with capacity constraints. The number of pushes increase.  People add more tests.  We build and test on a new platform.  If the number of machines available remains static, the computing time associated with a single push will increase.  You can scale this for platforms that you build and test in the cloud (for us - Linux and Android on emulators), but this costs more money.  Adding hardware for other platforms such as Mac and Windows in data centres is also costly and time consuming.

Do we really need to run every test on every commit? If not, which tests should be run?  How often do they need to be run in order to catch regressions in a timely manner (i.e. so that we can bisect where a regression occurred)?

Several months ago, jmaher and vaibhav1994 wrote code to analyze our test data and determine the minimum number of tests required to identify regressions.  They named their software SETA (search for extraneous test automation).  They used historical data to determine the minimum set of tests that needed to be run to catch historical regressions.  Previously, we coalesced tests on a number of platforms to mitigate too many jobs being queued for too few machines.  However, this was not the best way to proceed because it reduced the number of times we ran all tests, not just the less useful ones.  SETA allows us to run, on every commit, the subset of tests that have historically caught regressions.  We still run all the test suites, but at a specified interval.
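The core idea can be sketched as a set-cover problem: given which jobs caught which historical regressions, greedily pick a small set of jobs that still catches them all. A minimal, hypothetical sketch (this is not the actual SETA code; the job names and data are invented for illustration):

```python
# Hypothetical sketch of the SETA idea: from historical data mapping each
# regression to the set of test jobs that detected it, greedily choose a
# small set of jobs that still catches every historical regression.

def minimal_job_set(regressions):
    """regressions: dict mapping regression id -> set of jobs that caught it."""
    uncaught = set(regressions)
    chosen = set()
    while uncaught:
        # Count, for each job, how many still-uncaught regressions it catches.
        counts = {}
        for rid in uncaught:
            for job in regressions[rid]:
                counts[job] = counts.get(job, 0) + 1
        best = max(counts, key=counts.get)
        chosen.add(best)
        uncaught = {rid for rid in uncaught if best not in regressions[rid]}
    return chosen

history = {
    "bug-1": {"mochitest-1"},
    "bug-2": {"mochitest-3"},
    "bug-3": {"mochitest-3", "xpcshell"},
}
print(sorted(minimal_job_set(history)))  # ['mochitest-1', 'mochitest-3']
```

With this toy history, mochitest-3 alone covers two regressions, so the greedy pass keeps it and mochitest-1 while marking xpcshell as extraneous.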

SETI – The Search for Extraterrestrial Intelligence by ©encouragement, Creative Commons by-nc-sa 2.0
In the last few weeks, I've implemented SETA scheduling in our buildbot configs to use the data from the analysis that Vaibhav and Joel implemented.  Currently, it's implemented on the mozilla-inbound and fx-team branches, which in aggregate represent around 19.6% (March 2015 data) of total pushes to the trees.  The platforms configured to run fewer pushes for both opt and debug are:
  • MacOSX (10.6, 10.10)
  • Windows (XP, 7, 8)
  • Ubuntu 12.04 for linux32, linux64 and ASAN x64
  • Android 2.3 armv7 API 9

As we gather more SETA data for newer platforms, such as Android 4.3, we can implement SETA scheduling for them as well and further reduce our test load.  We continue to run the full suite of tests on all platforms on branches other than mozilla-inbound and fx-team, such as mozilla-central, try, and the beta and release branches.  If we did miss a regression by reducing the tests, it would still appear on the other branches, such as mozilla-central.  We will continue to update our configs to incorporate SETA data as it changes.

How does SETA scheduling work?
We specify the tests that we would like to run on a reduced schedule in our buildbot configs.  For instance, a config entry specifies that we would like to run a given set of debug tests only on every 10th commit, or when a timeout of 5400 seconds has elapsed since the tests last ran.
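Conceptually, each entry pairs a skip count with a timeout. A hypothetical sketch of what such an entry and its decision rule might look like (the names and structure are invented for illustration; the real configs live in Mozilla's buildbot-configs repository):

```python
# Hypothetical sketch of SETA-style scheduling data in a buildbot config.

SETA_CONFIG = {
    ("mozilla-inbound", "macosx64", "debug"): {
        "skip_count": 10,      # run these suites only on every 10th push...
        "skip_timeout": 5400,  # ...or once 5400 seconds have passed since the last run
        "suites": ["mochitest-3", "reftest", "xpcshell"],
    },
}

def should_run(pushes_since_last, seconds_since_last, entry):
    """Run if we hit the Nth push or the timeout, whichever comes first."""
    return (pushes_since_last >= entry["skip_count"]
            or seconds_since_last >= entry["skip_timeout"])

entry = SETA_CONFIG[("mozilla-inbound", "macosx64", "debug")]
print(should_run(3, 600, entry))   # False: not the 10th push, timeout not reached
print(should_run(10, 600, entry))  # True: 10th push since the suites last ran
```

The timeout acts as a safety net: even on a quiet branch, the skipped suites still run at least every 90 minutes.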

Previously, catlee had implemented scheduling support in buildbot that allowed us to coalesce jobs on a certain branch and platform using the EveryNthScheduler.  However, as it was originally implemented, it didn't allow us to specify individual tests to skip, such as mochitest-3 debug on MacOSX 10.10 on mozilla-inbound.  It would only allow us to skip all the debug or opt tests for a certain platform and branch.

I modified this code to parse the configs and create a dictionary for each test, specifying the interval at which the test should be skipped and the timeout interval.  If a test has these parameters specified, it is scheduled using the EveryNthScheduler instead of the default scheduler.
There are still some quirks to work out, but I think it is working well so far. I'll have some graphs in a future post on how this reduced our test load.
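To make the mechanics concrete, here is a hypothetical sketch of that per-test dictionary and the scheduler choice it drives (names are invented for illustration; the real logic lives in Mozilla's buildbot-configs):

```python
# Hypothetical sketch: flatten config entries into a per-test lookup that
# decides whether a test uses EveryNthScheduler or the default scheduler.

def build_test_lookup(config_entries):
    """config_entries: list of (branch, platform, suite, skip_count, skip_timeout)."""
    lookup = {}
    for branch, platform, suite, skip_count, skip_timeout in config_entries:
        lookup[(branch, platform, suite)] = {
            "skip_count": skip_count,
            "skip_timeout": skip_timeout,
        }
    return lookup

def pick_scheduler(lookup, branch, platform, suite):
    # Tests with SETA parameters get the EveryNthScheduler; everything
    # else keeps the default per-push scheduler.
    return "EveryNthScheduler" if (branch, platform, suite) in lookup else "default"

entries = [("mozilla-inbound", "macosx64", "mochitest-3", 10, 5400)]
lookup = build_test_lookup(entries)
print(pick_scheduler(lookup, "mozilla-inbound", "macosx64", "mochitest-3"))  # EveryNthScheduler
print(pick_scheduler(lookup, "mozilla-central", "macosx64", "mochitest-3"))  # default
```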

Further reading
Joel Maher: SETA – Search for Extraneous Test Automation

by Kim Moir at April 28, 2015 08:13 PM

Releng 2015 program now available

by Kim Moir at April 28, 2015 07:54 PM

Releng 2015 will take place in concert with ICSE in Florence, Italy on May 19, 2015. The program is now available. Register here!

via romana in firenze by ©pinomoscato, Creative Commons by-nc-sa 2.0

by Kim Moir at April 28, 2015 07:54 PM

Collaborative Modeling with Papyrus and CDO (Reloaded)

by Eike Stepper at April 28, 2015 06:35 PM

Since the beginning of this year I've been working on fundamental improvements to the user interface of CDO and its integration with Papyrus. In particular CEA has generously funded the following:
  • Branching and interactive merging
  • Support for offline checkouts
  • Interactive conflict resolution
Most of the new functionality has been implemented directly in CDO and is available for other modeling tools, too. Please enjoy a brief tour of what's in the pipe for the Mars release:

The following screencast shows how Papyrus will integrate with this new CDO user interface:

I hope you like the new concepts and workflows. Feedback is welcome, of course. And I'd like to thank CEA, Kenn Hussey and Christian Damus for their help to make this happen!

by Eike Stepper at April 28, 2015 06:35 PM

Screenshot of the Week: C++ Refactoring

by waynebeaton at April 28, 2015 03:08 PM

My son has just finished up his first year of software development at college. In a demonstration of what I consider cruel and unusual punishment, his first programming language is C++ and his first development environment is Visual C++. I have to assume that the version of Visual C++ that the college unleashed on these unsuspecting students is some sort of reduced-functionality version, because it seems to lack certain functionality that I consider pretty basic, like refactoring.

I learned C years ago, and did some honest-to-goodness work using it, but never did take the time to learn C++, so I used this as an opportunity to close that gap. Naturally, I decided to learn C++ using the Eclipse C/C++ Development Tools (CDT).

The CDT provides some excellent refactoring support.

Renaming a C++ method

Keep in mind the just-sorting-this-stuff-out nature of the work when considering the code in the screenshot.

This screenshot shows the first stage of the Rename refactoring. As expected, this changes the name of the method (function), the declaration in the header file, and any code that calls it. There are many other refactorings available, including ones that extract constants, fields, and functions. Note the Call Hierarchy view on the bottom view stack: use this view to find out how your function interacts with the world (calls and callers). There’s all sorts of cool stuff available.

The Eclipse CDT project has participated in every simultaneous release we’ve done and so it’s no surprise that they’re an important part of the Eclipse Mars Release. Help us test Eclipse Mars by downloading and testing a milestone build.

Epilogue: To my son’s instructors’ credit, they did avoid complex memory management issues, and did get the students to produce some pretty cool and very playable games featuring two-dimensional graphics. Those students that survive the programme are probably going to do well…

Caveat: I never really took the time necessary to properly research the functionalities provided by Visual C++ or spend any significant time using it. I have to assume that it’s very functional once you get comfortable with it.

by waynebeaton at April 28, 2015 03:08 PM

@ApacheParquet Graduating and Mesos with Siri

by Chris Aniszczyk at April 28, 2015 02:52 PM

The last week for me has been fun in open source land outside of me getting two of my wisdom teeth pulled out of my face. On the bright side, I have some pain killers now and also, two notable things happened. First it was nice to finally graduate Parquet out of the Apache Incubator:

It’s been a little over two years since we (Twitter) announced the open source columnar storage project with Cloudera. It’s a great feeling to see a plan come together and to watch this project grow over the years to 60+ contributors while hitting the notable achievement of graduating from the Apache Incubator gauntlet. If there’s any lesson here for me, it’s that it’s much easier to build an open source community when you do it in an independent fashion with at least someone else in the beginning (thanks Cloudera).

Another notable thing that happened was that Apple finally announced that they are using Mesos to power Siri’s massive infrastructure.

In my experience of building open source communities, there are usually public adopters and private adopters. Some companies wish to remain private about the software they use, and that’s fine; it’s understandable when it can be viewed as a competitive advantage. The challenge is how you work with these private adopters when they use one of your open source projects while wanting to collaborate behind the scenes.

Anyways, it’s a great feeling to see Apple opening up a bit about their infrastructure and open source usage after working with them for a while. Hopefully this is a sign of things to come from them. Also, it would be nice if Apple updated Siri so that when you ask what Mesos is, it replies with a funny response and proclaims her love of open source infrastructure technology.

Overall, it’s been a great last week.

by Chris Aniszczyk at April 28, 2015 02:52 PM