Towards Modeling of Distributed Graph Algorithms

by Christian Krause at May 02, 2015 08:50 AM

The most prominent approach for distributed processing of very large graphs today is probably the Bulk Synchronous Parallel (BSP) model. BSP is a bridging model for implementing graph algorithms in such a way that they can be massively parallelized and distributed. One application can be found in the Henshin graph transformation tool, where graph transformation systems can be modeled, and code generated for the BSP framework Apache Giraph.

Although the BSP model is widely accepted and used today, there is no standard implementation framework for BSP algorithms. Frameworks that do provide BSP include Google's Pregel, Apache Giraph and GraphX in Apache Spark. For application developers, it is not easy to find out which platform is best suited to their problem. An implementation of a graph algorithm in, say, Apache Giraph cannot be reused in other frameworks, even though the underlying concepts of BSP are the same. This is unfortunate, particularly because the overhead of developing, deploying and testing these algorithms is rather high due to the complexity of the distributed frameworks.

This problem can be solved by introducing a modeling layer for BSP algorithms. Instead of directly implementing graph algorithms in Pregel, Giraph or GraphX, the idea is to use a modeling language that supports the concepts of BSP. One possible approach is to use UML 2 state machines for the modeling. The figure below shows a state machine for a BSP-model of the shortest path algorithm.

Using such a model, one could generate code for different platforms, e.g. Pregel, Giraph or GraphX. The code generators need to be implemented only once and are straightforward to build, since the BSP concepts are more or less directly used. Using different code generators, algorithm engineers can automatically derive implementations for different platforms and benchmark them against each other without writing a single line of platform-specific code.
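To make the BSP concepts such generators would target concrete, here is a minimal, framework-independent Java sketch of the shortest-path algorithm as a loop of supersteps with message passing. The graph representation is made up for illustration; real implementations would use the vertex API of Pregel, Giraph or GraphX instead of a sequential loop:

```java
import java.util.*;

class BspShortestPath {
    // graph.get(v) maps each neighbor of v to the edge weight
    static Map<Integer, Integer> run(Map<Integer, Map<Integer, Integer>> graph, int source) {
        Map<Integer, Integer> dist = new HashMap<>();
        for (Integer v : graph.keySet()) dist.put(v, Integer.MAX_VALUE);

        // superstep 0: the source vertex receives distance 0
        Map<Integer, List<Integer>> inbox = new HashMap<>();
        inbox.computeIfAbsent(source, k -> new ArrayList<>()).add(0);

        // one loop iteration = one superstep; swapping the message maps
        // at the end plays the role of the global synchronization barrier
        while (!inbox.isEmpty()) {
            Map<Integer, List<Integer>> next = new HashMap<>();
            for (Map.Entry<Integer, List<Integer>> e : inbox.entrySet()) {
                int v = e.getKey();
                int min = Collections.min(e.getValue());
                // vote-to-halt analogue: only an improved distance is propagated
                if (min < dist.get(v)) {
                    dist.put(v, min);
                    for (Map.Entry<Integer, Integer> edge : graph.get(v).entrySet()) {
                        next.computeIfAbsent(edge.getKey(), k -> new ArrayList<>())
                            .add(min + edge.getValue());
                    }
                }
            }
            inbox = next;
        }
        return dist;
    }
}
```

Unreachable vertices keep Integer.MAX_VALUE, mirroring the "infinity" initialization in the usual BSP shortest-path formulation.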

In Henshin, the code generation for Apache Giraph is quite complex because the concepts of graph transformations are rather different from those of BSP. One opportunity to simplify it would be to use BSP models as an intermediate target language. Specifically, model transformations can be employed to translate a Henshin model into a BSP model. From that point on, platform-specific code generators could be used to generate the final implementation.

Using a modeling approach would enable the reuse of the many graph algorithms already implemented in the existing BSP frameworks. Maybe UML state machines are not the best approach for the modeling -- for instance, one could also come up with a (textual) DSL for BSP. The important point is that an abstraction from the actual implementation platforms is made.


Mozilla pushes - April 2015

by Kim Moir at May 01, 2015 04:44 PM

Here's April 2015's  monthly analysis of the pushes to our Mozilla development trees. You can load the data as an HTML page or as a json file.  

The number of pushes decreased from those recorded in the previous month, with a total of 8894. This is because gaia-try is now managed by taskcluster, and thus these jobs no longer appear in the buildbot scheduling databases that this report tracks.


  • 8894 pushes
  • 296 pushes/day (average)
  • Highest number of pushes/day: 528 pushes on Apr 1, 2015
  • 17.87 pushes/hour (highest average)

General Remarks

  • Try has around 58% of all the pushes now that we no longer track gaia-try
  • The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 28% of all the pushes.


  • August 2014 was the month with the most pushes (13090 pushes)
  • August 2014 had the highest pushes/day average with 422 pushes/day
  • July 2014 had the highest average of "pushes-per-hour" with 23.51 pushes/hour
  • October 8, 2014 had the highest number of pushes in one day with 715 pushes

I've changed the graphs to only track 2015 data.  Last month they were tracking 2014 data as well but it looked crowded so I updated them.  Here's a graph showing the number of pushes over the last few years for comparison.


Integrating Rhapsody in your AUTOSAR toolchain

by Andreas Graf at May 01, 2015 10:31 AM

UML tools such as Enterprise Architect or Rhapsody (and others) are well established in the software development process. Sometimes the modeling guidelines follow custom modeling conventions, e.g. with specific profiles. So when you are modeling AUTOSAR systems, at some point you are faced with the problem of transforming your model to AUTOSAR.

For customer projects, we have analyzed and implemented different strategies.

Artop as an integration tool

First of all, if you are transforming to AUTOSAR, the recommendation is to transform to an Artop model and let Artop do all the serialization. Directly creating the AUTOSAR-XML (.arxml) is cumbersome, error-prone and generally “not-fun”.

Getting data out: Files or API

To access the data in Rhapsody, you could either read the stored files or access the data through the API of Rhapsody. This post describes aspects of the second approach.

Scenario 1: Accessing directly without intermediate storage

In this scenario, the transformation uses the “live” data from a running Rhapsody as data source. Rhapsody provides a Java based API (basically a wrapper to Windows COM-API). So it is very easy to write a transformation from “Rhapsody-Java” to “Artop-Java”. A recommended technology would be the open source Xtend language, since it provides a lot of useful features for that use case (see a description in this blog post).

Scenario 2: Storing the data from Rhapsody locally, transforming from that local representation

In this scenario, the data from Rhapsody is being extracted via the Java-API and stored locally. Further transformation steps can work on that stored copy. A feasible approach is to store the copied data in EMF. With reflection and other approaches, you can create the required .ecore-definitions from the Rhapsody provided Java classes. After that, you can also use transformation technologies that require an .ecore-definition as a basis for the transformation (but you can still use Xtend). The stored data will be very close to the Rhapsody representation of UML.
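The reflection idea can be illustrated with a toy Java sketch that extracts property names from getter methods. This is only a stand-in for the real step, which would create EClass/EAttribute instances with the EMF Ecore API; the class name MetamodelSketch is an assumption for illustration:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

class MetamodelSketch {
    // derive property names from no-argument getters, as a stand-in for
    // building the EAttributes/EReferences of an .ecore EClass
    static List<String> properties(Class<?> type) {
        List<String> props = new ArrayList<>();
        for (Method m : type.getMethods()) {
            String n = m.getName();
            if (n.startsWith("get") && n.length() > 3
                    && m.getParameterCount() == 0
                    && m.getDeclaringClass() != Object.class) {
                // decapitalize: getShortName -> shortName
                props.add(Character.toLowerCase(n.charAt(3)) + n.substring(4));
            }
        }
        return props;
    }
}
```

Applied to the Rhapsody-provided Java classes, this kind of scan yields the structural information needed to generate the .ecore definitions mentioned above.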

Scenario 3: Storing the data in “Eclipse UML” ecore, transforming from that local representation

In this scenario, the data is stored in the format of the Eclipse provided UML .ecore files, which represent a UML meta-model that is true to the standard. That means that your outgoing transformation would be more conforming to the standard UML meta-model and you could use other integrations that use that meta-model. However, you would have to map to that UML meta-model first.

There are several technical approaches to that. You can even do the conversion “on-the-fly”, implementing a variant of Scenario 1 with on-the-fly conversion.

Technology as Open Source

The base technologies for the scenarios are available as open source / community source:

  • Eclipse EMF
  • Eclipse Xtend, Qvto (or other transformation languages)
  • Artop (available to AUTOSAR members)



Vaadin & OSGi: managing the classloader

by Florian Pirchner at April 30, 2015 11:27 AM

If you are using Vaadin with OSGi, you need to be aware of an issue related to classloading.

Vaadin changed the classloader that is used to load the UI-class. Before this change, the UI-class was loaded by the classloader of the servlet class. Now the “context class loader” is used to load it.

So if you define your UI-class in the VaadinServlet by annotation, you will get a ClassNotFoundException.

It is pretty easy to fix that issue. The solution is to define the proper classloader in ServletService. Afterwards the UI-class can be loaded properly again.

@VaadinServletConfiguration(ui = ECViewSampleUI.class, productionMode = false)
public class SimpleVaadinServlet extends VaadinServlet {
	@Override
	protected VaadinServletService createServletService(
			DeploymentConfiguration deploymentConfiguration)
			throws ServiceException {
		// use the custom service so the UI-class is loaded
		// with the bundle classloader
		ServletService service = new ServletService(this,
				deploymentConfiguration);
		service.init();
		return service;
	}
}
This snippet shows an implementation of VaadinServlet.

It is used to

  1. define the UI-class by annotation
  2. create a custom VaadinServletService to define the proper classloader.


public class ServletService extends VaadinServletService {

	public ServletService(VaadinServlet servlet,
			DeploymentConfiguration deploymentConfiguration)
			throws ServiceException {
		super(servlet, deploymentConfiguration);
	}

	@Override
	public ClassLoader getClassLoader() {
		// return the bundle classloader instead of the context classloader
		return ServletService.class.getClassLoader();
	}
}
The overridden getClassLoader() method in ServletService ensures that a proper class loader – the classloader of the bundle containing the custom ServletService – is used to load the UI-class.

Things should work properly afterwards …


Florian Pirchner


“I am your container”, Darth Sirius

by Melanie Bats at April 30, 2015 10:26 AM

Continuing the Sirius blog posts series, today we will see a small tip: how to create artificial containers in your diagram?

One of the main advantages of Sirius is that the graphical representations are independent of the metamodel’s structure. This means that you can choose not to respect the containment hierarchy of your model when you display it graphically. This is possible thanks to Sirius being based on queries.

In the following example, we define a metamodel of a family:

To begin with, we define a Flat diagram, which displays all the members of the family at the same level:

In the Person mapping, we use the Semantic Candidates Expression to specify which semantic elements must be represented. These expressions returning model elements are called queries. To write these queries, there are different languages provided by default in Sirius: specialized interpreters (var, feature, service), Acceleo, raw OCL or Java. Here, we use the feature interpreter to get, for a family, all the persons referenced by the members reference. You can easily identify interpreted expressions by their yellow background in the Properties tab of an element.

We create a first diagram which represents the flattened Skywalker family:

The next step is to add a level to model the Family as a container. We create a new Family diagram which contains a Family container mapping and a Person mapping as sub nodes:

The Family diagram is created and all the members are represented inside the Skywalker container.

Here we represent graphically the containment reference members. But what to do if we want to create an artificial container which does not exist in the metamodel as a containment reference?

Let’s see! Now imagine that we want to add a level to represent The Force and if the person is related to the dark side or the light side. To do this we create a new ForceSide diagram:

We add a new container to represent the DarkSide of the force :

The dark side must be represented once for each family, so the semantic candidates expression returns var:self, which means the current family. As it should contain persons, it is defined as a FreeForm container.

We need to represent, in the Dark Side container, the persons that are from the dark side of the force. So we define a new sub node Person with the Semantic expression set to: [self.members->select(p|p.oclAsType(Person).dark)/]
This query returns all the members of a family and selects only the persons whose dark attribute is set to true.
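In plain Java terms, this select is just a filter over the members collection. A minimal sketch, where the Person class is a hypothetical stand-in for the generated metamodel class:

```java
import java.util.List;
import java.util.stream.Collectors;

class Person {
    final String name;
    final boolean dark; // true if the person is on the dark side of the force
    Person(String name, boolean dark) { this.name = name; this.dark = dark; }
}

class DarkSideQuery {
    // equivalent of: self.members->select(p | p.oclAsType(Person).dark)
    static List<Person> darkSide(List<Person> members) {
        return members.stream().filter(p -> p.dark).collect(Collectors.toList());
    }
}
```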

Then we reuse the Person mapping to represent the person in its force side container.

Finally, we do the same for the light side of the force: we create a LightSide container and reuse the Person mapping to represent the members of the family in this new container who are influenced by the light side: [self.members->select(p|not p.oclAsType(Person).dark)/]
A new ForceSide diagram is created in the Skywalker family and we discover that only Darth Vader is from the dark side of the force and that his children are driven by the light side of the force.

Thanks to the queries in Sirius, it is easy to create artificial containers which are not related to containment references in the metamodel.

“May the Sirius Force be with you” ;)

The sample code from this example is available on github:



Less testing, same great Firefox taste!

by Kim Moir at April 28, 2015 08:13 PM

Running a large continuous integration farm forces you to deal with many dynamic inputs coupled with capacity constraints. The number of pushes increases. People add more tests. We build and test on a new platform. If the number of machines available remains static, the computing time associated with a single push will increase. You can scale this for platforms that you build and test in the cloud (for us - Linux and Android on emulators), but this costs more money. Adding hardware for other platforms such as Mac and Windows in data centres is also costly and time consuming.

Do we really need to run every test on every commit? If not, which tests should be run? How often do they need to be run in order to catch regressions in a timely manner (i.e. be able to bisect where the regression occurred)?

Several months ago, jmaher and vaibhav1994 wrote code to analyze the test data and determine the minimum number of tests required to identify regressions. They named their software SETA (search for extraneous test automation). They used historical data to determine the minimum set of tests that needed to be run to catch historical regressions. Previously, we coalesced tests on a number of platforms to mitigate too many jobs being queued for too few machines. However, this was not the best way to proceed because it reduced the number of times we ran all tests, not just the less useful ones. SETA allows us to run on every commit the subset of tests that historically have caught regressions. We still run all the test suites, but at a specified interval.

SETI – The Search for Extraterrestrial Intelligence by ©encouragement, Creative Commons by-nc-sa 2.0
In the last few weeks, I've implemented SETA scheduling in our buildbot configs to use the data from the analysis that Vaibhav and Joel implemented. Currently, it's implemented on the mozilla-inbound and fx-team branches, which in aggregate represent around 19.6% (March 2015 data) of total pushes to the trees. The platforms configured to run fewer pushes for both opt and debug are:
  • MacOSX (10.6, 10.10)
  • Windows (XP, 7, 8)
  • Ubuntu 12.04 for linux32, linux64 and ASAN x64
  • Android 2.3 armv7 API 9

As we gather more SETA data for newer platforms, such as Android 4.3, we can implement SETA scheduling for them as well and further reduce our test load. We continue to run the full suite of tests on all branches other than m-i and fx-team, such as mozilla-central, try, and the beta and release branches. If we did miss a regression by reducing the tests, it would appear on one of these other branches, such as mozilla-central. We will continue to update our configs to incorporate SETA data as it changes.

How does SETA scheduling work?
We specify the tests that we would like to run on a reduced schedule in our buildbot configs.  For instance, this specifies that we would like to run these debug tests on every 10th commit or if we reach a timeout of 5400 seconds between tests.
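The decision boils down to "run the full suite on every Nth push, or when a timeout since the last full run has elapsed". A plain-Java sketch of that logic — the names skipInterval and timeoutSeconds are illustrative, not the actual buildbot config keys:

```java
class SetaScheduler {
    private final int skipInterval;     // run the full suite every Nth push...
    private final long timeoutSeconds;  // ...or after this much time without a full run
    private int pushesSinceFullRun = 0;
    private long lastFullRunEpochSeconds;

    SetaScheduler(int skipInterval, long timeoutSeconds, long nowEpochSeconds) {
        this.skipInterval = skipInterval;
        this.timeoutSeconds = timeoutSeconds;
        this.lastFullRunEpochSeconds = nowEpochSeconds;
    }

    // called on every push; returns true when the full suite should run
    boolean shouldRunFullSuite(long nowEpochSeconds) {
        pushesSinceFullRun++;
        boolean due = pushesSinceFullRun >= skipInterval
                || nowEpochSeconds - lastFullRunEpochSeconds >= timeoutSeconds;
        if (due) {
            pushesSinceFullRun = 0;
            lastFullRunEpochSeconds = nowEpochSeconds;
        }
        return due;
    }
}
```

With skipInterval = 10 and timeoutSeconds = 5400, this reproduces the "every 10th commit or after 5400 seconds" behaviour described above.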

Previously, catlee had implemented a scheduler in buildbot that allowed us to coalesce jobs on a certain branch and platform using the EveryNthScheduler. However, as it was originally implemented, it didn't allow us to specify individual tests to skip, such as mochitest-3 debug on MacOSX 10.10 on mozilla-inbound. It would only allow us to skip all the debug or opt tests for a certain platform and branch.

I modified the scheduling code to parse the configs and create a dictionary for each test specifying the interval at which the test should be skipped and the timeout interval. If a test has these parameters specified, it is scheduled using the EveryNthScheduler instead of the default scheduler.
There are still some quirks to work out but I think it is working out well so far. I'll have some graphs in a future post on how this reduced our test load. 

Further reading
Joel Maher: SETA – Search for Extraneous Test Automation


Releng 2015 program now available

by Kim Moir at April 28, 2015 07:54 PM

Releng 2015 will take place in concert with ICSE in Florence, Italy on May 19, 2015. The program is now available. Register here!

via romana in firenze by ©pinomoscato, Creative Commons by-nc-sa 2.0


Collaborative Modeling with Papyrus and CDO (Reloaded)

by Eike Stepper at April 28, 2015 06:35 PM

Since the beginning of this year I've been working on fundamental improvements to the user interface of CDO and its integration with Papyrus. In particular CEA has generously funded the following:
  • Branching and interactive merging
  • Support for offline checkouts
  • Interactive conflict resolution
Most of the new functionality has been implemented directly in CDO and is available for other modeling tools, too. Please enjoy a brief tour of what's in the pipe for the Mars release:

The following screencast shows how Papyrus will integrate with this new CDO user interface:

I hope you like the new concepts and workflows. Feedback is welcome, of course. And I'd like to thank CEA, Kenn Hussey and Christian Damus for their help to make this happen!


JBoss Tools Alpha2 for Eclipse Mars

by maxandersen at April 28, 2015 04:22 PM

Alpha 2 build for Eclipse Mars M6 is now available at Alpha2 download.


This version of JBoss Tools targets Eclipse Mars 4.5 (M6).

We recommend using the Eclipse 4.5 JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you use our update site directly:

Note: Marketplace entry and Integration Stack tooling will become available from JBoss Central at a later date.

What is new?

Easy Import/Open of projects

We have included our incubation project at Eclipse that makes importing and opening of projects much easier than default Eclipse. No longer do you need to know or guess which of the many import wizards is the right one. With this you just use menu:File[Import Project from Folder], point it at a folder, and it will auto-detect the type of project, import and configure it as best it can.

easyimport filemenu

Once started it will recursively scan the selected folder and report which directories it found.

easyimport wizard

We included this incubation feature to get early feedback - please do give it a try and let us know if it works great or if we detected some projects "badly".

OpenShift v3

Our OpenShift integration now allows you to connect to OpenShift 3 in addition to the existing OpenShift 2 support.

connection wizard server type

Once connected you can browse the OpenShift/Kubernetes data for your application/projects.

view explorer v3

Note: OpenShift v3 is not available to try at this point in time. If you want to try it, you can follow the instructions at OpenShift Origin sample app.

Java EE 7 Batch wizards, content assist, validation and refactoring

In Alpha 1 we introduced support for the Java EE 7 Batch specification, and we are now extending this support with a wizard, content assist, linked navigation, searching and refactoring of Batch elements.


WildFly 9

We’ve added native WildFly 9 runtime detection and server support. You no longer need to use the WildFly 8 adapter and detection will work correctly now.

Content assist for AngularJS Expressions

When editing AngularJS single-page HTML (not templates), the HTML editor now communicates with the preview to provide content assist for AngularJS expressions.


Custom HTML Tag validation

There is now a quickfix for marking custom HTML5 elements to be ignored in validation.


Note: this is not specific to JBoss Tools, it is built into Eclipse M6

Next steps

With Alpha2 out we are heading towards a Beta1.

In Beta1 we are targeting:

  1. OpenShift v3 support for templates

  2. Docker Tooling

  3. Better JavaScript content assist

  4. Making project imports in Eclipse even simpler

  5. And more…​

As always, ask/suggest away and we’ll keep you posted!

Have fun!

Max Rydahl Andersen


Screenshot of the Week: C++ Refactoring

by waynebeaton at April 28, 2015 03:08 PM

My son has just finished up his first year of software development at college. In a demonstration of what I consider cruel and unusual punishment, his first programming language is C++ and his first development environment is Visual C++. I have to assume that the version of Visual C++ that the college unleashed on these unsuspecting students is some sort of reduced-functionality version, because it seems to lack certain functionality that I consider pretty basic, like refactoring.

I learned C years ago, and did some honest-to-goodness work using it, but never did take the time to learn C++, so I used this as an opportunity to close that gap. Naturally, I decided to learn C++ using the Eclipse C/C++ Development Tools (CDT).

The CDT provides some excellent refactoring support.

Renaming a C++ method


Keep in mind the just-sorting-this-stuff-out nature of the work when considering the code in the screenshot.

This screenshot shows the first stage of the Rename refactoring. As expected, this changes the name of the method (function), the declaration in the header file, and any code that calls it. There are many other refactorings available, including ones that extract constants, fields, and functions. Note the Call Hierarchy view on the bottom view stack: use this view to find out how your function interacts with the world (calls and callers). There’s all sorts of cool stuff available.

The Eclipse CDT project has participated in every simultaneous release we’ve done and so it’s no surprise that they’re an important part of the Eclipse Mars Release. Help us test Eclipse Mars by downloading and testing a milestone build.

Epilogue: To my son’s instructors’ credit, they did avoid complex memory management issues, and did get the students to produce some pretty cool and very playable games featuring two-dimensional graphics. Those students that survive the programme are probably going to do well…

Caveat: I never really took the time necessary to properly research the functionalities provided by Visual C++ or spend any significant time using it. I have to assume that it’s very functional once you get comfortable with it.


@ApacheParquet Graduating and Mesos with Siri

by Chris Aniszczyk at April 28, 2015 02:52 PM

The last week for me has been fun in open source land outside of me getting two of my wisdom teeth pulled out of my face. On the bright side, I have some pain killers now and also, two notable things happened. First it was nice to finally graduate Parquet out of the Apache Incubator:

It’s been a little over two years since we (Twitter) announced the open source columnar storage project with Cloudera. It’s a great feeling to see a plan come together and see this project grow over the years with 60+ contributors while hitting the notable achievement of graduating out of the Apache incubator gauntlet. If there’s any lesson here for me, it’s that it is much easier to build an open source community when you do it in an independent fashion with at least someone else in the beginning (thanks Cloudera).

Another notable thing that happened was that Apple finally announced that they are using Mesos to power Siri’s massive infrastructure.

In my experience of building open source communities, there are usually your public adopters and private adopters. There are companies that wish to remain private about the software they use at times and that’s fine, it’s understandable when it can be viewed as a competitive advantage. The challenge is how you work with these private adopters when they use an open source project of yours while wanting to collaborate behind the scenes.

Anyways, it’s a great feeling to see Apple opening up a bit about their infrastructure and open source usage after working with them for a while. Hopefully this is a sign of things to come from them. Also, it would be nice if Apple just updated Siri so that when you ask what Mesos is, she replies with a funny response and proclaims her love of open source infrastructure technology.

Overall, it’s been a great last week.


Kudos to Tony McCrary for his Eclipse JDT icon work

by Lars Vogel at April 28, 2015 07:50 AM

Tony McCrary continues his awesome icon contributions.

This time he managed to re-create the JDT icon set so that the JDT icons look good on a dark background. This work started in January 2014 (!!!) and, as the JDT developers are under high pressure, Tony even generated a gallery comparing the old gif and the new png icons to make the review easier for them.

On the left you see the old gif icons, the right are the new png icons.

JDT icons

JDT icons

See JDT icon work Bug for details.

Many thanks from me personally and the platform.ui team to Tony.


Call 4 papers - where is an open source app?

by Krzysztof (Chris) Daniel at April 27, 2015 03:45 PM

I was recently submitting a number of conference proposals related to my current area of interest, and one thing that struck me was the lack of a rock-solid, easy-to-use call-4-papers application.

Each time I wanted to propose a talk to a conference, I had to create a profile, confirm my e-mail address and provide a lot of details, and only then was I allowed to fill in the actual talk details.

"Don't reinvent the wheel!" they say.

So why, WHY do conference organizers decide to write their OWN c4p application? That makes no sense.

Internet proposal submission is no longer a feature that will distinguish your conference.

That's why I am starting my new, open source application for accepting conference talks. It will not be fancy, but it will work.

Clone it, run it, modify it, USE it.

Contribute if you wish.


Using the Xtend language for M2M transformation

by Andreas Graf at April 26, 2015 02:34 PM

In the last few months, we have been developing a customer project that centers around model-to-model transformation, with the target model being AUTOSAR.

In the initial concept phase, we had two major candidates for the M2M-transformation language: Xtend and QVTO. After doing some evaluations, we decided that for the specific use case, Xtend was the technology of choice.


Comfort

Xtend has a number of features that make writing model-to-model transformations very concise and comfortable. The most important is the concise syntax to navigate over models. This helps to avoid loops that would be required when implementing in Java:

val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
shortName == "DemMemoryDestinationRef"]
Traceability / One-Pass Transformation

Xtend provides so-called "create" methods for creating new target model elements in your transformation. The main usage is to be able to write efficient code without having to implement a multi-pass transformation. This is solved by using an internal cache that returns the same target object if the method is invoked with the same input objects more than once.

However, the internally used caches can also be used to generate tracing information about the relationship from source to target model. We use that both for

  • Writing out trace information in a log file

  • Adding trace information about the source elements to the target elements

Both features have been added on top of "plain" Xtend, because we can use standard Java mechanisms to access them.
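The caching behaviour of a create method, and the trace information derived from it, can be sketched in plain Java. The String/StringBuilder source and target types here are hypothetical stand-ins for metamodel classes; Xtend generates comparable caching code behind the scenes:

```java
import java.util.HashMap;
import java.util.Map;

class TransformWithTrace {
    // one cached target per source element, as an Xtend create method would keep
    private final Map<String, StringBuilder> cache = new HashMap<>();

    // "create" method: repeated calls with the same source return the same target,
    // so a one-pass transformation can safely resolve cross-references
    StringBuilder createTarget(String source) {
        return cache.computeIfAbsent(source, s -> new StringBuilder("target-of-" + s));
    }

    // the cache doubles as source -> target trace information
    Map<String, StringBuilder> trace() {
        return cache;
    }
}
```

Iterating over trace() after the transformation is exactly what writing a trace log file or annotating target elements with their source amounts to.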

In addition, we can also run a static analysis to see which source/target metaclass combinations exist in our codebase.

Performance

Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks in your transformations.

Long-Term Support

Xtend compiles to plain Java. You can just keep the compiled Java code for safety and be totally independent of the Xtend project itself.

Test Support

Xtend compiles to plain Java. You can use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation that are documented in nice reports generated with standard Java tooling.

Code Coverage

Xtend compiles to plain Java. You can use any code coverage tools (such as JaCoCo).

Debugging

Debugger integration is fully supported, so you can step through your code.

Extensibility

Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.

Documentation

You can use standard Javadoc in your Xtend transformations and use the standard tooling to get reports.

Modularity

Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.

Active Annotations

Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements.

Full EMF Support

The Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc.

IDE Integration

The Xtend editors support essential operations such as "Find References", "Go To Declaration" etc.

The Xtend syntax, on the other hand, is not based on any standard. But its performance, modularity and maintenance features are a strong argument for adding it as a candidate for model transformations.


Eclipse Hackathon Hamburg – Greetings

by eselmeister at April 25, 2015 10:35 AM

Yesterday, we had our second Eclipse Hackathon in Hamburg, Germany. It was a great meeting :-).


Stay tuned, the next Hackathon will be in approx. three months.


Save the date: Eclipse DemoCamp Mars June 23rd 2015

by Maximilian Koegel and Jonas Helming at April 24, 2015 10:09 AM

We are pleased to announce the Eclipse DemoCamp Munich 2015 on June 23rd.
The DemoCamp Munich is one of the biggest DemoCamps worldwide and, therefore, an excellent opportunity to showcase all the cool, new, and interesting technology being built by the Eclipse community. This event is open to Eclipse enthusiasts who want to demonstrate what they are doing with Eclipse. The aim is to create an opportunity for you to meet other Eclipse fans in Munich in an informal setting.

We are offering 110 seats; however, we usually receive around 200 registrations. Pre-registration is mandatory, so please be sure to register as soon as possible. To give everyone the same chance, registration for the event will start in one week, on May 4th at exactly 2pm. There you’ll also find detailed information on the location, time and more.

We are looking forward to your registration and seeing you in June!

A big thanks to our sponsors: BSI Business Systems Integration AG, EclipseSource München GmbH, Eclipse Foundation and Capgemini Deutschland GmbH




Lightweight Dialogs in e4 & JavaFX

by Tom Schindl at April 23, 2015 02:25 PM

I’ve just checked in the initial bits to use lightweight dialogs in your e4 + JavaFX applications. You can see it in action in the short video below.

Usage is fairly simple. First you need to have a dialog implementation like this:

static class OpenProjectDialogImpl extends TitleAreaDialog {
  private ListView<Project> list;
  private final CommandService cmdService;

  public OpenProjectDialogImpl(Workbench workbench,
    CommandService cmdService, @Service List<ProjectService> projectServiceList) {
    super("Open project",
      "Open project", "Open an existing project");
    this.cmdService = cmdService;
    list = new ListView<>();
    list.setCellFactory(v ->
      new SimpleListCell<Project>(
        p -> labelExtractor(p, projectServiceList),
        p -> cssProvider(p, projectServiceList)));
  }

  protected void handleOk() {
    if( list.getSelectionModel().getSelectedItem() != null ) {
      cmdService.execute("", Collections.singletonMap("projectId", list.getSelectionModel().getSelectedItem().getProjectId()));
    }
  }
}

and displaying it is nothing more than:

public class OpenProjectDialog {
  public void open(LightWeightDialogService dialogService) {
    dialogService.openDialog(OpenProjectDialogImpl.class, ModalityScope.WINDOW);
  }
}

As you can see in the video, you can choose how the dialog is opened & closed. This is done through an OSGi service of type LightweightDialogTransitionService:

public class FadeDialogTranstionServiceImpl extends FadeDialogTranstionService implements LightweightDialogTransitionService {
  protected void configureFadeIn(FadeTransition transition) {
    // customize the fade-in transition here
  }

  protected void configureFadeOut(FadeTransition transition) {
    // customize the fade-out transition here
  }
}
