April 19, 2014

Switching to Xcore in your Xtext language

This is a follow-up to my previous post, Switching from an inferred Ecore model to an imported one in your Xtext grammar. The rationale for switching to a manually maintained metamodel can be found in that post. In this post, instead of using an Ecore file, we will use Xcore.

Xcore is an extended concrete syntax for Ecore that, in combination with Xbase, transforms it into a fully fledged programming language with high quality tools reminiscent of the Java Development Tools. You can use it not only to specify the structure of your model, but also the behavior of your operations and derived features as well as the conversion logic of your data types. It eliminates the dividing line between modeling and programming, combining the advantages of each.

I took inspiration from Jan Köhnlein’s blog post; after switching to a manually maintained Ecore in Xsemantics, I felt the need to switch further to Xcore, since I had started to write many operation implementations in the metamodel, and while you can do that in Ecore, it is much easier in Xcore :) Moreover, in my case I was starting from an existing language that uses Xbase (not covered in Jan’s post). Things were not easy, but once the procedure works, it is easily reproducible, and I’ll detail it here for a smaller example.

So first of all, let’s create an Xtext project, org.xtext.example.hellocustomxcore (you can find the sources of this example online at https://github.com/LorenzoBettini/Xtext2-experiments); the grammar of the DSL is not important: this is just an example. We will first develop the DSL using the automatic Ecore model inference and later switch to Xcore.

(The language is basically the same as in the previous post.)

The grammar of this example is as follows:

grammar org.xtext.example.helloxcore.HelloXcore with org.eclipse.xtext.xbase.Xbase

generate helloxcore "http://www.xtext.org/example/helloxcore/HelloXcore"

Model:
    hellos+=Hello*
    greetings+=Greeting*;

Hello:
    'Hello' name=ID '!';

Greeting:
    'Greeting' name=ID expression=XExpression;

and we run the MWE2 generator.

To have something working, we also write an inferrer

def dispatch void infer(Model element, IJvmDeclaredTypeAcceptor acceptor, boolean isPreIndexingPhase) {
	acceptor.accept(element.toClass("generated." + element.eResource.URI.lastSegment.split("\\.").head.toFirstUpper)) [
		for (hello : element.hellos) {
			members += hello.toMethod("say" + hello.name.toFirstUpper, hello.newTypeRef(String)) [
				body = '''return "Hello «hello.name»";'''
			]
		}
		for (greeting : element.greetings) {
			members += greeting.toMethod("say" + greeting.name.toFirstUpper, greeting.newTypeRef(String)) [
				body = greeting.expression
			]
		}
	]
}

With this DSL we can write programs of the shape (nothing interesting, this is just an example)

Hello foo!

Greeting bar {
    sayFoo() + "bar"
}

Now, let’s say we want to check in the validator that there are no elements with the same name; since both “Hello” and “Greeting” have the feature name, we can introduce in the metamodel a common interface with the method getName(). OK, we could achieve this also by introducing a fake rule in the Xtext grammar, but let’s do that with Xcore.

Switching to Xcore

Of course, first of all, you need to install Xcore in your Eclipse.

Before we use the export wizard, we must make sure we can open the generated .genmodel with the “EMF Generator” editor (otherwise the export will fail). If, when opening that editor, you get an error about resolving a proxy to JavaVMTypes.ecore like in the following screenshot…


..then we must tweak the generated .genmodel and add a reference to JavaVMTypes.genmodel: open HelloXcore.genmodel with the text editor, and search for the part (only the relevant part of the line is shown)


and add the reference to the JavaVMTypes.genmodel:

usedGenPackages="../../../org.eclipse.xtext.common.types/model/JavaVMTypes.genmodel#//types ../../../org.eclipse.xtext.xbase/model/Xbase.genmodel#//xbase"

Since we’re editing the .genmodel file, we also take the chance to modify the output folder for the model files to emf-gen (see also later in this section for adding emf-gen as a source folder):


And we remove the properties that relate to the edit and the editor plug-ins (since we don’t want to generate them anyway):


Now save the edited file, refresh the file in the workspace by selecting it and pressing F5 (yes, also this operation seems to be necessary), and this time you should be able to open it with the “EMF Generator” editor. We can go on exporting the Xcore file.

We want the files generated by Xcore to be put into the emf-gen source folder; so we add a new source folder to our project, say emf-gen, where all the EMF classes will be generated; we also make sure to include such folder in the build.properties file.

First, we create an .xcore file starting from the generated .genmodel file:

  • navigate to the HelloXcore.genmodel file (it is in the directory model/generated)
  • right click on it and select “Export Model…”
  • in the dialog select “Xcore”
  • The next page should already present you with the right directory URI
  • In the next page select the package corresponding to our DSL, org.xtext.example.helloxcore.helloxcore (and choose the file name for the exported .xcore file, e.g., Helloxcore.xcore)
  • Then press Finish
  • If you get an error about a missing EObjectDescription, remove the generated (empty) Helloxcore.xcore file, and just repeat the Export procedure from the start, and the second time it should hopefully work


The second time, the procedure should terminate successfully with the following result:

  • The Xcore file, Helloxcore.xcore, has been generated in the same directory as the .genmodel file (and is also opened in the Xcore editor)
  • A dependency on org.eclipse.emf.ecore.xcore.lib has been added to the MANIFEST.MF
  • The new source folder emf-gen is full of compilation errors


Remember that the model files will be automatically generated when you modify the .xcore file (one of the nice things of Xcore is indeed the automatic building).

Fixing the Compilation Errors

These compilation errors are expected, since the Java files for the model are present both in the src-gen and in the emf-gen folder. So let’s remove the ones in the src-gen folder (we simply delete the corresponding packages):


After that, everything compiles fine!

Now, you can move the Helloxcore.xcore file in the “model” directory, and remove the “model/generated” directory.

Modifying the mwe2 workflow

In the Xtext grammar, HelloXcore.xtext, we replace the generate statement with an import:

//generate helloxcore "http://www.xtext.org/example/helloxcore/HelloXcore"
import "http://www.xtext.org/example/helloxcore/HelloXcore"

The DirectoryCleaner fragment related to the “model” directory should be removed (otherwise it will remove our Helloxcore.xcore file as well); we don’t need it anymore, now that we have manually removed the generated folder with the generated .ecore and .genmodel files.

Then, in the language part, you need to loadResource the XcoreLang.xcore file, the Ecore and Xbase .ecore and .genmodel files, and finally the Helloxcore.xcore file you have just exported.

We can comment out the ecore.EMFGeneratorFragment (since we manually maintain the metamodel from now on).

The MWE2 file is now as follows (the modifications are marked with “switch to xcore” comments):

Workflow {
    bean = StandaloneSetup {
    	scanClassPath = true
    	platformUri = "${runtimeProject}/.."
    	// The following two lines can be removed, if Xbase is not used.
    	registerGeneratedEPackage = "org.eclipse.xtext.xbase.XbasePackage"
    	registerGenModelFile = "platform:/resource/org.eclipse.xtext.xbase/model/Xbase.genmodel"
    }

    component = DirectoryCleaner {
    	directory = "${runtimeProject}/src-gen"
    }

    // switch to xcore
//    component = DirectoryCleaner {
//    	directory = "${runtimeProject}/model"
//    }

    component = DirectoryCleaner {
    	directory = "${runtimeProject}.ui/src-gen"
    }

    component = DirectoryCleaner {
    	directory = "${runtimeProject}.tests/src-gen"
    }

    component = Generator {
    	pathRtProject = runtimeProject
    	pathUiProject = "${runtimeProject}.ui"
    	pathTestProject = "${runtimeProject}.tests"
    	projectNameRt = projectName
    	projectNameUi = "${projectName}.ui"
    	encoding = encoding
    	language = auto-inject {
    		// switch to xcore
    		loadedResource = "platform:/resource/org.eclipse.emf.ecore.xcore.lib/model/XcoreLang.xcore"
    		loadedResource = "classpath:/model/Xbase.ecore"
    		loadedResource = "classpath:/model/Xbase.genmodel"
    		loadedResource = "classpath:/model/Ecore.ecore"
    		loadedResource = "classpath:/model/Ecore.genmodel"
    		loadedResource = "platform:/resource/${projectName}/model/Helloxcore.xcore"

    		uri = grammarURI

    		// Java API to access grammar elements (required by several other fragments)
    		fragment = grammarAccess.GrammarAccessFragment auto-inject {}

    		// generates Java API for the generated EPackages
    		// switch to xcore
    		// fragment = ecore.EMFGeneratorFragment auto-inject {}

Before running the workflow, you also need to add org.eclipse.emf.ecore.xcore as a dependency in your MANIFEST.MF.

We can now run the mwe2 workflow, which should terminate successfully.

We must now modify the plugin.xml (note that there’s no plugin.xml_gen anymore), so that the org.eclipse.emf.ecore.generated_package extension point refers to our Xcore file:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.0"?>
<plugin>
  <extension point="org.eclipse.emf.ecore.generated_package">
    <package
       uri="http://www.xtext.org/example/helloxcore/HelloXcore"
       class="org.xtext.example.helloxcore.helloxcore.HelloxcorePackage"
       genModel="model/Helloxcore.xcore"/>
  </extension>
</plugin>



Fixing JUnit test problems

As we saw in the previous post, JUnit tests do not work anymore, failing with errors of the form

org.eclipse.xtext.parser.ParseException: java.lang.IllegalStateException: Unresolved proxy http://www.xtext.org/example/helloxcore/HelloXcore#//Hello. Make sure the EPackage has been registered.

All we need to do is to modify the StandaloneSetup in the src folder (NOT the generated one, since it will be overwritten by subsequent MWE2 workflow runs) and override the register method so that it performs the registration of the EPackage (as it used to do before):

public class HelloXcoreStandaloneSetup extends HelloXcoreStandaloneSetupGenerated {

	public static void doSetup() {
		new HelloXcoreStandaloneSetup().createInjectorAndDoEMFRegistration();
	}

	@Override
	public void register(Injector injector) {
		// added after the switch to Xcore
		if (!EPackage.Registry.INSTANCE.containsKey("http://www.xtext.org/example/helloxcore/HelloXcore")) {
			EPackage.Registry.INSTANCE.put("http://www.xtext.org/example/helloxcore/HelloXcore",
					org.xtext.example.helloxcore.helloxcore.HelloxcorePackage.eINSTANCE);
		}
		super.register(injector);
	}
}


And now the JUnit tests run again.

Modifying the metamodel with Xcore

We can now customize our metamodel, using the Xcore editor.

For example, we add the interface Element with the method getName(), and we make both Hello and Greeting implement this interface (they both have getName(), thus the implementation of the interface is automatic).

interface Element {
	op String getName()
}

class Hello extends Element {
	String name
}

class Greeting extends Element {
	String name
	contains XExpression expression
}

Using the Xcore editor is easy, and you have content assist; as soon as you press save, the Java files will be automatically regenerated:


We also add a method getElements() to the Model class, returning an Iterable<Element> (containing both the Hello and the Greeting objects). This time, with Xcore, it is really easy to do so (compare that with the procedure of the previous post, which required the use of an EAnnotation in the Ecore file), since Xcore uses the Xbase expression syntax for defining the body of operations (with full content assist, not to mention automatic insertion of import statements). See also the generated Java code on the right:
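The screenshot is not reproduced here; as a rough sketch (the exact Model features and the operation body are assumptions, not necessarily the post’s exact code), the operation can be written in Xcore along these lines:

```xcore
class Model {
	contains Hello[] hellos
	contains Greeting[] greetings
	// convenience operation: all named elements, Hellos first, then Greetings
	op Element[] getElements() {
		(hellos + greetings).toEList
	}
}
```

The body is a plain Xbase expression; the value of the last expression is returned, and the toEList conversion is provided by the org.eclipse.emf.ecore.xcore.lib runtime library.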


And now we can implement the validator method checking duplicates, using the new getElements() method and the fact that now both Hello and Greeting implement Element:

package org.xtext.example.helloxcore.validation

import org.eclipse.xtext.validation.Check
import org.eclipse.xtext.xbase.typesystem.util.Multimaps2
import org.xtext.example.helloxcore.helloxcore.Element
import org.xtext.example.helloxcore.helloxcore.Model

class HelloXcoreValidator extends AbstractHelloXcoreValidator {

	public static val DUPLICATE_NAME = 'HelloXcoreDuplicateName'

	@Check
	def checkDuplicateElements(Model model) {
		val nameMap = <String, Element>Multimaps2.newLinkedHashListMultimap
		for (e : model.elements)
			nameMap.put(e.name, e)
		for (entry : nameMap.asMap.entrySet) {
			val duplicates = entry.value
			if (duplicates.size > 1) {
				for (d : duplicates)
					error(
						"Duplicate name '" + entry.key + "' (" + d.eClass.name + ")",
						d, null, DUPLICATE_NAME)
			}
		}
	}
}

That’s all! I hope you found this tutorial useful :)


April 18, 2014

Web resource optimization and Maven Profiles

Some time ago, I described how to perform CSS and JS minification using Eclipse and WRO4J, thanks to m2e-wro4j.

In this article, we’ll deploy a Java EE 6 Restful application with an HTML5 front-end on a Wildfly application server. We’ll see how, using m2e-wro4j, m2e-wtp and the JBoss Tools Maven Profile Management UI, you can easily switch between minified and regular build profiles.

Setup your Eclipse-based IDE

First you’ll need to install a couple of things into an Eclipse Java EE installation. I recommend you use Red Hat JBoss Developer Studio, as it comes with m2e, m2e-wtp, the Maven Profile Manager, the Wildfly server adapter and JBoss Central out-of-the-box, but any Eclipse-based installation (Kepler) will do, provided you install the proper plugins.

  • You can download and install Red Hat JBoss Developer Studio 7.1.1 from here

  • or install it over an existing Eclipse (Kepler) installation from the Eclipse Marketplace

Make sure you install the following JBoss Tools features :

  • Maven Profiles Management

  • JBossAS Server

  • JBoss Central

m2e-wro4j can then be installed from JBoss Central’s Software/Update tab :

m2e wro4j installation

Alternatively, you can find and install m2e-wro4j from the Eclipse Marketplace too.

Also make sure you have a Wildfly server installed on your machine.

About m2e-wro4j

The m2e-wro4j connector allows wro4j-maven-plugin to be invoked when .js, .css, .coffee, .less, .sass, .scss, .json, .template or pom.xml files are modified.

Given the following configuration :
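The configuration was shown as a screenshot in the original post; as a hedged sketch (the version number and destination folders are typical defaults, not taken from the article), a wro4j-maven-plugin section in the pom.xml looks like:

```xml
<plugin>
  <groupId>ro.isdc.wro4j</groupId>
  <artifactId>wro4j-maven-plugin</artifactId>
  <version>1.7.2</version>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- default output folders; m2e-wro4j redirects these to
         target/m2e-wtp/web-resources/ when m2e-wtp is present -->
    <cssDestinationFolder>${project.build.directory}/${project.build.finalName}/css/</cssDestinationFolder>
    <jsDestinationFolder>${project.build.directory}/${project.build.finalName}/js/</jsDestinationFolder>
  </configuration>
</plugin>
```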


When m2e-wtp is present, m2e-wro4j automatically translates ${project.build.directory}/${project.build.finalName}/* output directories (the default values used by wro4j-maven-plugin) to ${project.build.directory}/m2e-wtp/web-resources/. This gives you on-the-fly redeployment of optimized resources on WTP controlled servers.

In order to use wro4j-maven-plugin, you need a wro.xxx descriptor (that would be src/main/webapp/WEB-INF/wro.xml by default) and a wro.properties. Read https://code.google.com/p/wro4j/wiki/MavenPlugin for more details.
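As a hedged illustration (the group name, resource paths and processor selection below are made up, not taken from the article), a minimal wro.xml declaring one resource group could look like:

```xml
<!-- src/main/webapp/WEB-INF/wro.xml -->
<groups xmlns="http://www.isdc.ro/wro">
  <group name="app">
    <css>/css/screen.css</css>
    <js>/js/app.js</js>
  </group>
</groups>
```

with a wro.properties next to it selecting the processors to run, e.g.:

```
preProcessors=cssImport
postProcessors=cssMin,jsMin
```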

Create an HTML 5 project

Now let’s get to work. From the JBoss Central Getting Started tab, click on the HTML 5 project icon to create a Maven based web application with a JAX-RS back-end and an HTML 5 front-end.

html5 project jboss central

I used kitchensink as a project name.

This project already has some wro4j-maven-plugin configuration we can use. All we need is to make the switch between regular and minified versions of the build more user friendly.

Enable minification

First, we need to remove one xml snippet from the pom.xml which prevents wro4j-maven-plugin from running during Eclipse builds, thus cancelling m2e-wro4j’s efforts. Go to the minify profile and delete :

              <!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself.-->

In order to use the minified versions of javascript and css files in the app, the original index.html file requires you to (un)comment several lines like :

yep: {
          //assign labeled callbacks for later execution after script loads.
          //we are on mobile device so load appropriate CSS
          "jqmcss": "css/jquery.mobile-1.3.1.min.css",
          // For minification, uncomment this line
          //"mcss": "css/m.screen.min.css"
          // For minification, comment out this line
          "mcss": "css/m.screen.css"
      },
      nope: {
          //we are on desktop
          // For minification, uncomment this line
          //"scss": "css/d.screen.min.css"
          // For minification, comment out this line
          "scss": "css/d.screen.css"
      }

This is clearly not practical if you want to be able to quickly switch between minified and original versions of your files.

Fortunately, we can take advantage of some Maven black magic, also known as web resource filtering, to dynamically use the proper (non-minified) version of these files, depending on the profile we’re building with.

We have 3 things to do :

  1. define a maven property for the minified file extension currently in use

  2. enable web resource filtering in the maven-war-plugin configuration

  3. modify index.html so it uses the aforementioned maven property

Setup the maven property

In the main <properties> block, at the top of your pom.xml, add the following property :

<!-- By default, the original filename will be used -->

Now add a <properties> block to the minify profile :
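The two snippets were shown as images in the original post; a hedged sketch of both (the property name min.ext is a hypothetical choice, not necessarily the one used in the article) looks like:

```xml
<!-- in the main <properties> block: by default, the original filename is used -->
<min.ext></min.ext>

<!-- in the minify profile: switch to the .min variants -->
<profile>
  <id>minify</id>
  <properties>
    <min.ext>.min</min.ext>
  </properties>
  ...
</profile>
```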


Enable web resource filtering

Modify the default maven-war-plugin configuration to enable filtering on html files :

          <!-- Java EE 6 doesn't require web.xml, Maven needs to catch up! -->
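Only the comment line survived above; a hedged sketch of the maven-war-plugin configuration with filtering enabled on HTML files (the include pattern is an assumption) could look like:

```xml
<plugin>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <!-- Java EE 6 doesn't require web.xml, Maven needs to catch up! -->
    <failOnMissingWebXml>false</failOnMissingWebXml>
    <webResources>
      <resource>
        <directory>src/main/webapp</directory>
        <filtering>true</filtering>
        <includes>
          <include>index.html</include>
        </includes>
      </resource>
    </webResources>
  </configuration>
</plugin>
```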

Use the maven property

In index.html, replace at least the following occurrences :

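The from/to table did not survive extraction; assuming a Maven property named min.ext (a hypothetical name for illustration), the replacements follow this pattern:

```html
<!-- before: hardcoded file name -->
"mcss": "css/m.screen.css"
<!-- after: the extension is injected by Maven web resource filtering -->
"mcss": "css/m.screen${min.ext}.css"
```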







You can also apply similar changes to lodash and jquery if you want.

At this point, you should see that the filtered target/m2e-wtp/web-resources/index.html references the original resources, since the minified extension enabled by default is an empty string.

Switch between minified and regular profiles

Let’s see what happens when enabling the minify profile. Ctrl+Alt+P is a shortcut bringing up the Maven Profile Management UI. Just check/uncheck profiles to enable/disable them :

profile selection

Once the minify profile is active, you’ll see that :

  • css/m.screen.min.css, css/d.screen.min.css and js/app.min.js are generated under target/m2e-wtp/web-resources/

  • target/m2e-wtp/web-resources/index.html now references the minified versions of the resources

minified resources

Deploy the application on WildFly

  • Right click on your project and select Run As > Run on Server …

  • Create a new Wildfly Server if necessary, pointing at your locally installed server

  • Once the server is created and deployed, a browser window should open :

deployed application1

If you’re on a desktop, modify the color of the h1 class in src/main/webapp/css/d.screen.css and save. This will trigger wro4j-maven-plugin:run which will regenerate the minified version under target/m2e-wtp/web-resources/css/d.screen.min.css, which in turn will be deployed on Wildfly by the server adapter.

Reloading the page (after the build is complete) will show the changes directly :

deployed application2

Now you can switch back to the regular, non-minified version by hitting Ctrl+Alt+P in the workbench, unselecting the minify profile and waiting for the build to complete. After you reload your browser page, you’ll notice, if you look at the page source, that the minified versions are not referenced anymore.

The minified files still exist under target/m2e-wtp/web-resources/ and are deployed as such. They’re unused, so they’re harmless, but you’d need to perform a clean build to remove them, if necessary.


Dealing with optimized resources in Eclipse is made really easy with a combination of the right tools. Just focus on code, save your work, see the changes instantly. WRO4J and wro4j-maven-plugin can give you much more than what we just saw here (resource aggregation, obfuscation, less processing …). Hopefully you’ll want to give it a try after reading this article, and if you do, don’t forget to give us your feedback.

Issues with m2e-wro4j can be opened at https://github.com/jbosstools/m2e-wro4j/issues.

Issues with the Maven Profile Manager can be opened at :

As always, for all your WRO4J or wro4j-maven-plugin specific issues, I strongly encourage you to :

Have fun!

Next Beta for Luna

We are ready with JBoss Tools 4.2 Beta1 and Red Hat JBoss Developer Studio 8 Beta1.

jbosstools jbdevstudio blog header

This announcement is a bit special since it is on our new website, with a cleaner and more consistent structure.

There is the specific version download available from JBoss Tools 4.2.0.Beta1.

Then there is the Luna download page which shows the latest Stable, Development and nightly download for Eclipse Luna - that is available from Luna releases.

From either of these you can get to the Downloads, updatesites and What’s New!


JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jbdevstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio requires a bit more:

This release requires at least Eclipse 4.4 (Luna) M6 but we recommend using the Eclipse 4.4 JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you either find us on Eclipse Marketplace under "JBoss Tools (Luna)" or "JBoss Developer Studio (Luna)".

For JBoss Tools you can also use our update site directly if you are up for it.


Note: Integration Stack tooling will become available from JBoss Central at a later date.

What is new ?

As always there is more than can be covered in a single blog but here are some of my favorites. You can see everything in the What’s New section for this release.

Refactored and improved Server Tools

The server adapters for JBoss AS, EAP and WildFly have been refactored to support…

  1. runtime-less servers (i.e. no need to have a server locally installed)

  2. deployment over management API (i.e. no need to have local or SSH based access anymore, just Management Api)

  3. start/stop scripts for deploy only servers

  4. Profiles for easier setup


These improvements required significant changes in the server adapter core API and UI. The meat of the code is still the same - just split out to be more reusable and composable.

We would really like to hear from you whether the UI is good, bad, better or worse, and in any case how we can make it even better. We’re obviously interested in knowing whether these features work for you.

You can see more in the Servers What’s New.

OpenShift Downloadable Cartridges

OpenShift has been offering support for custom defined cartridges - now OpenShift tools supports this natively too.

This means you can create or use an existing cartridge by choosing the "Code Anything" option for either Applications or Cartridges.

Downloadable Cartridges

This allows you, for example, to use a cartridge like WildFly 8 or the recent OpenShift on JRebel experiment without leaving the IDE.

More support for multiple Cordova versions

The Cordova runtime support is now extended to also support locally downloaded runtimes, making it possible to use custom builds or distributions that are not directly available from Cordova repositories.


There are also various other fixes which can be viewed at Aerogear What’s New.

p.s. we are contributing these tools to eclipse.org under Project Thym.

The Cordova Simulator also now understands the notion of multiple Cordova runtimes allowing you to test against multiple versions.


Arquillian XML Editor

The Arquillian tooling (for now only available in JBoss Tools) added a Sapphire based editor for the arquillian.xml file format.

For now it is a simple structured editor but we plan on using it for experimenting with Sapphire to provide better XML oriented editors.


This Arquillian editor was based on ideas and initial contribution from Masao Kunii working for Nippon Telegraph and Telephone Corporation (NTT) - thank you!

Forge 2 Goodies

Recent versions of Forge 1 and 2 are now distributed with the tools, and especially the latter brings an interesting feature.

Forge 2 connection profiles now consider Eclipse Data Tools Platform (DTP) connections, meaning you no longer have to configure your database settings in multiple places.

Connection Profiles

This feature is called connection profiles in Forge 2.

Usage tracking

We have learned a lot from our last 3+ years usage tracking and continue to be amazed with how many nationalities, countries and operating system distributions the tools are used in - it has been and continue to be very informative and helpful in guiding our tooling support.

In Beta1 we have gone a step further, learning not only how many people start JBoss Tools, but also which features are being used.

In Beta1 we now collect info about which server types are used and which JBoss Central installs are performed, to be able to see how much and how often these features are used.

We will use that in the future to decide on new development and maintenance work - thus if you love a feature in JBoss Tools then please say yes to usage tracking and use the feature, and we’ll notice.

Note: We are only collecting aggregated summaries of i.e. server types and installs - meaning we cannot use any of this info to identify you. It is truly anonymous usage statistics - we are not the NSA!

Next steps

While we wait for feedback on Beta1, we are already working on what will become Beta2. Some of the things moving here are:

  1. Looking at doing radical changes to how the visual page editor works since XULRunner is not maintained anymore and we need HTML5 support

  2. Getting better (read: much better) javascript content assist with help from tern.java

  3. Improve JavaEE 6 and JavaEE 7 support

  4. Full Java8 support

  5. …and more!

Hope you enjoy it and remember…

Have fun!

Max Rydahl Andersen

JBoss Tools Integration Stack 4.1.4.Final / JBoss Developer Studio Integration Stack 7.0.1.GA

Fuse Tooling goes Final in the Stack!

jbosstools jbdevstudio blog header

The Integration Stack for JBoss Tools Developer Studio is a set of plugins for Eclipse that provides tooling for the following frameworks:

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using graphiti.

  • Drools - A Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing.

  • JBoss ESB - An enterprise service bus for connecting enterprise applications and services.

  • Fuse Apache Camel Tooling - A graphical tool for integrating software components that works with Apache ServiceMix, Apache ActiveMQ, Apache Camel and the FuseSource distributions.

  • jBPM3 - A flexible Business Process Management (BPM) Suite - JBoss Enterprise SOA Platform 5.3.x compatible version.

  • Modeshape - A distributed, hierarchical, transactional and consistent data store with support for queries, full-text search, events, versioning, references, and flexible and dynamic schemas. It is very fast, highly available, extremely scalable, and it is 100% open source.

  • Savara (JBoss Tools only) - A tool for ensuring artifacts defined at different stages of the software development lifecycle are valid against each other, and remain valid through the evolution of the system.

  • SwitchYard - A lightweight service delivery framework providing full lifecycle support for developing, deploying, and managing service-oriented applications.

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services without programming using the Teiid runtime framework.

All of these components have been verified to work with the same dependencies as JBoss Tools 4.1 and Developer Studio 7, so installation is easy.


To install the Integration Stack tools, first install JBoss Developer Studio from the all-in-one installer, bundled and configured out of the box with everything you need to get started. Alternatively, if you already have eclipse-jee-kepler installed, you can install JBoss Developer Studio or JBoss Tools from the Eclipse Marketplace via Help > Eclipse Marketplace…


Once Developer Studio is installed, restart Eclipse and select the Software/Update tab in the JBoss Central view and look for the JBoss Developer Studio Integration Stack installation section. Select the items you’d like to install:


If you want to try out Savara you will need to use the JBoss Tools Integration Stack URL instead:

Note: If you installed into your own Eclipse you should bump up the launch resource parameters:

--launcher.XXMaxPermSize 256m --launcher.appendVmargs -vmargs -Dosgi.requiredJavaVersion=1.6 -XX:MaxPermSize=256m -Xms512m -Xmx1024m

What’s New?

Fuse Apache Camel Tooling has gone .Final

The previous release of the JBDS Integration Stack had a candidate release of Fuse Apache Camel Tooling. The support is now complete and ready to run with the Fuse Active MQ 6.1 release.

Updated Components

Fix release versions of BPMN2 Modeler, SwitchYard and Teiid Designer are also available in this release.

The new JBoss Tools home is live

Don’t miss the new Features tab for up to date information on your favorite Integration Stack component: /features

April 17, 2014

EMFStore 1.2.0 released

We just promoted the 1.2.0 RC to become the EMFStore 1.2.0 release. EMFStore now includes extension points and API which allow you to replace the storage technology on client and server side more simply, based on the EMF resource set abstraction. As a proof of concept we have implemented a mongoDB back-end based on Mongo-EMF.

To try EMFStore and get started please see the updated Getting Started with EMFStore tutorial. Enjoy!




April 16, 2014

Welcome Richard

I’m very happy to introduce the Eclipse Foundation’s newest staff member, Richard Burcher.

Richard will be helping me with “project stuff”. If you take a quick look at Richard’s blog or Twitter feed, you’ll notice that he’s got an affinity for location/geo sorts of things in particular and open source software in general (along with an apparent disdain for capital letters). As part of that, Richard has a bunch of experience working with and growing communities. I intend to fully exploit that experience.

We’ve already begun Richard’s immersion into all things Eclipse. It’s been a fascinating and valuable experience for me to explain how things work around here. Assembling materials for the Eclipse Committer Bootcamp helped me illuminate potential areas for improvement in our process. Working with Richard over the last few days has had an even bigger effect. I’m excited that we’ve already identified potential ways that we can improve and streamline our implementation of the Eclipse Development Process, while honoring its fundamental underlying principles.

I’m excited by the potential for us to do more to help our open source projects evangelize and court their communities.

More on this later. But for now, please join me in warmly welcoming Richard aboard.

How to manage Git Submodules with JGit

For a larger project with Git you may find yourself wanting to share code among multiple repositories, whether it is a shared library between projects or perhaps templates and such used among multiple different products. Git’s built-in answer to this problem is submodules. They allow putting a clone of another repository as a subdirectory within a parent repository (sometimes also referred to as the superproject). A submodule is a repository in its own right. You can commit, branch, rebase, etc. from inside it, just as with any other repository.

JGit offers an API that implements most of the Git submodule commands, and it is this API that I would like to introduce you to.

The Setup

The code snippets used throughout this article are written as learning tests1. Simple tests can help you understand how third-party code works and ease the adoption of new APIs. They can be viewed as controlled experiments that allow you to discover exactly how the third-party code behaves.

A helpful side effect is that, if you keep the tests, they can help you to verify new releases of the third-party code. If your tests cover how you use the library, then incompatible changes in the third-party code will show themselves early on.

Back to the topic at hand: all tests share the same setup. See the full source code for details. There is an empty repository called parent. Next to it there is a library repository. The tests will add this as a submodule to the parent. The library repository has an initial commit with a file named readme.txt in it. A setUp method creates both repositories like so:

Git git = Git.init().setDirectory( new File( "/tmp/path/to/repo" ) ).call();

The repositories are represented through the fields parent and library of type Git. This class wraps a repository and gives access to all Commands available in JGit. As I explained here earlier, each Command class corresponds to a native Git porcelain command. To invoke a command, the builder pattern is used. For example, the result from the Git.commit() method is actually a CommitCommand. After providing any necessary arguments you can invoke its call() method.
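As a minimal sketch of that builder pattern (the message and the empty-commit flag are made up for illustration; setAllowEmpty() is only available in newer JGit versions):

```java
// Git.commit() returns a CommitCommand; arguments are supplied through
// setter methods and call() finally executes the command
RevCommit commit = git.commit()
    .setMessage( "Initial commit" )
    .setAllowEmpty( true ) // permit a commit without staged changes
    .call();
```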

Add a Submodule

The first and obvious step is to add a submodule to an existing repository. Using the setup outlined above, the library repository should be added as a submodule in the modules/library directory of the parent repository.

public void testAddSubmodule() throws Exception {
  String uri 
    = library.getRepository().getDirectory().getCanonicalPath();
  SubmoduleAddCommand addCommand = parent.submoduleAdd();
  addCommand.setURI( uri );
  addCommand.setPath( "modules/library" );
  Repository repository = addCommand.call();
  File workDir = parent.getRepository().getWorkTree();
  File readme = new File( workDir, "modules/library/readme.txt" );
  File gitmodules = new File( workDir, ".gitmodules" );
  assertTrue( readme.isFile() );
  assertTrue( gitmodules.isFile() );
}

The two things the SubmoduleAddCommand needs to know are where the submodule should be cloned from and where it should be stored. The URI (shouldn’t it be called URL?) attribute denotes the location of the repository to clone from, as it would be given to the clone command. And the path attribute specifies in which directory – relative to the parent repository’s work directory root – the submodule should be placed. After the command was run, the work directory of the parent repository looks like this:

The library repository is placed in the modules/library directory and its work tree is checked out. call() returns a Repository object that you can use like a regular repository. This also means that you have to explicitly close the returned repository to avoid leaking file handles.
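Continuing the snippet above, a minimal sketch of that close pattern (a try-with-resources block works as well, since Repository implements AutoCloseable):

```java
Repository repository = addCommand.call();
try {
  // use the submodule repository like any other repository
  System.out.println( repository.getWorkTree() );
} finally {
  // release the file handles held by the repository
  repository.close();
}
```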

The image reveals that the SubmoduleAddCommand did one more thing. It created a .gitmodules file in the root of the parent repository work directory and added it to the index.

[submodule "modules/library"]
path = modules/library
url = git@example.com:path/to/lib.git

If you have ever looked into a Git config file you will recognize the syntax. The file lists all the submodules that are referenced from this repository. For each submodule it stores the mapping between the repository’s URL and the local directory it was pulled into. Once this file is committed and pushed, everyone who clones the repository knows where to get the submodules from (more on that later).


List Submodules
Once we have added a submodule we may want to know that it is actually known by the parent repository. The first test did a naive check in that it verified that certain files and directories existed. But there is also an API to list the submodules of a repository. This is what the code below does:

public void testListSubmodules() throws Exception {
  Map<String,SubmoduleStatus> submodules 
    = parent.submoduleStatus().call();
  assertEquals( 1, submodules.size() );
  SubmoduleStatus status = submodules.get( "modules/library" );
  assertEquals( INITIALIZED, status.getType() );
}
The SubmoduleStatusCommand returns a map of all the submodules in the repository, where the key is the path to the submodule and the value is a SubmoduleStatus. With the above code we can verify that the just-added submodule is actually there and INITIALIZED. The command also allows adding one or more paths to limit the status reporting.
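Limiting the report to particular submodules could be sketched like this, using the path from the setup above:

```java
Map<String, SubmoduleStatus> submodules = parent.submoduleStatus()
    .addPath( "modules/library" ) // report only this submodule
    .call();
```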

Speaking of status, JGit’s StatusCommand isn’t at the same level as native Git. Submodules are always treated as if the command was run with --ignore-submodules=dirty: changes to the work directory of submodules are ignored.

Updating a Submodule

Submodules always point to a specific commit of the repository that they represent. Someone who clones the parent repository sometime in the future will get the exact same submodule state, although the submodule may have new commits upstream.

In order to change the revision, you must explicitly update a submodule as outlined here:

public void testUpdateSubmodule() throws Exception {
  ObjectId newHead = library.commit().setMessage( "msg" ).call();
  File workDir = parent.getRepository().getWorkTree();
  Git libModule = Git.open( new File( workDir, "modules/library" ) );
  libModule.pull().call();
  libModule.close();
  parent.add().addFilepattern( "modules/library" ).call();
  parent.commit().setMessage( "Update submodule" ).call();
  assertEquals( newHead, getSubmoduleHead( "modules/library" ) );
}

This rather lengthy snippet first commits something to the library repository and then pulls that latest commit into the submodule’s work directory within the parent repository.

To make the update permanent, the changed submodule must be added and committed in the parent repository. The commit stores the updated commit-id of the submodule under its path (modules/library in this example). Finally, you usually want to push the changes to make them available to others.

Updating Changes to Submodules in the Parent Repository

Fetching commits from upstream into the parent repository may also change the submodule configuration. The submodules themselves, however, are not updated automatically.

This is what the SubmoduleUpdateCommand solves. Using the command without further parametrization will update all registered submodules. The command will clone missing submodules and check out the commit specified in the configuration. As with other submodule commands, there is an addPath() method to only update submodules within the given paths.
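A minimal sketch, assuming the parent repository and its submodule configuration from the setup above:

```java
// copy the submodule configuration from .gitmodules to .git/config
parent.submoduleInit().call();
// clone missing submodules and check out the configured commits; the
// returned collection holds the paths of the updated submodules
Collection<String> updated = parent.submoduleUpdate().call();
```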

Cloning a Repository with Submodules

You have probably noticed the pattern by now: everything to do with submodules is manual labor. Cloning a repository that has a submodule configuration does not clone the submodules by default. But the CloneCommand has a cloneSubmodules attribute and setting this to true, well, also clones the configured submodules. Internally, the SubmoduleInitCommand and SubmoduleUpdateCommand are executed recursively after the (parent) repository was cloned and its work directory was checked out.
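A sketch of such a clone (URL and target directory are made up for illustration):

```java
Git cloned = Git.cloneRepository()
    .setURI( "git@example.com:path/to/parent.git" )
    .setDirectory( new File( "/tmp/parent-clone" ) )
    .setCloneSubmodules( true ) // also init and update the submodules
    .call();
```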

Removing a Submodule

To remove a submodule you would expect to write something like
git.submoduleRm().setPath( ... ).call();
Unfortunately, neither native Git nor JGit has a built-in command to remove submodules. Hopefully this will be resolved in the future. Until then we must manually remove submodules. If you scroll down to the removeSubmodule() method you will see that it is no rocket science.

First, the respective submodule section is removed from the .gitmodules and .git/config files. Then the submodule entry in the index is also removed. Finally, the changes – .gitmodules and the removed submodule entry in the index – are committed and the submodule content is deleted from the work directory.
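A sketch of those steps with JGit’s config API (the section and path names follow the example above; in older JGit versions some of these calls may differ):

```java
String path = "modules/library";
Repository parentRepository = parent.getRepository();

// remove the submodule section from .git/config
StoredConfig config = parentRepository.getConfig();
config.unsetSection( ConfigConstants.CONFIG_SUBMODULE_SECTION, path );
config.save();

// remove the section from .gitmodules and stage the change
File modulesFile
  = new File( parentRepository.getWorkTree(), Constants.DOT_GIT_MODULES );
FileBasedConfig modulesConfig
  = new FileBasedConfig( modulesFile, parentRepository.getFS() );
modulesConfig.load();
modulesConfig.unsetSection( ConfigConstants.CONFIG_SUBMODULE_SECTION, path );
modulesConfig.save();
parent.add().addFilepattern( Constants.DOT_GIT_MODULES ).call();

// remove the submodule entry from the index and commit everything
parent.rm().setCached( true ).addFilepattern( path ).call();
parent.commit().setMessage( "Remove submodule" ).call();

// finally, delete the submodule content from the work directory
FileUtils.delete( new File( parentRepository.getWorkTree(), path ),
                  FileUtils.RECURSIVE );
```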

For-Each Submodule

Native Git offers the git submodule foreach command to execute a shell command for each submodule. While JGit doesn’t exactly support such a command, it offers the SubmoduleWalk. This class can be used to iterate over the submodules in a repository. The following example fetches upstream commits for all submodules.

public void testSubmoduleWalk() throws Exception {
  int submoduleCount = 0;
  Repository parentRepository = parent.getRepository();
  SubmoduleWalk walk = SubmoduleWalk.forIndex( parentRepository );
  while( walk.next() ) {
    Repository submoduleRepository = walk.getRepository();
    Git.wrap( submoduleRepository ).fetch().call();
    submoduleRepository.close();
    submoduleCount++;
  }
  walk.release();
  assertEquals( 1, submoduleCount );
}

With next() the walk can be advanced to the next submodule. The method returns false if there are no more submodules. When done with a SubmoduleWalk, its allocated resources should be freed by calling release(). Again, if you obtain a Repository instance for a submodule do not forget to close it.

The SubmoduleWalk can also be used to gather detailed information about submodules.
Most of its getters relate to properties of the current submodule like path, head, remote URL, etc.

Sync Remote URLs

We have seen before that submodule configurations are stored in the .gitmodules file at the root of the repository work directory. However, the remote URL can be overridden in .git/config. And then there is the config file of the submodule itself, which in turn can have yet another remote URL. The SubmoduleSyncCommand can be used to reset all remote URLs to the settings in .gitmodules.
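Invoking it is a one-liner; a sketch assuming the parent repository from the setup above:

```java
// reset each submodule's remote URL to the value recorded in .gitmodules
parent.submoduleSync().call();
```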

As you can see, the support for submodules in JGit is almost on par with native Git. Most of its commands are implemented or can be emulated with little effort. And if you find that something is not working or missing, you can always ask the friendly and helpful JGit community for assistance.

  1. The term is taken from the section on ‘Exploring and Learning Boundaries’ in Clean Code by Robert C. Martin.

April 15, 2014

XtextCON Update

XtextCON 2014 is still 40 days away, but I have to announce that ...

We are sold out

Initially I planned for 80 attendees. It turned out that was much too small, so we have added 30 more tickets as the event and venue can handle 110 people without problems. Today we have 112 registrations and since I want to make sure that everyone has an excellent time at this first XtextCON we closed registration today. I'm really sorry if you haven't booked yet but wanted to do so. We'll likely do an XtextCON next year again. 

New Sessions

We have added some really cool new sessions. All in all XtextCON will feature 18 Speakers and 28 sessions in two tracks. The added sessions are:
 - Xtext + Sirius : <3 (Cedric Brun)
 - Xtext + CDO - Does it blend? (Stefan Winkler)
 - Oomph - Automatically Provision a Project-Specific IDE (Eike Stepper, Ed Merks)
 - 3D Modeling with Xtext (Martin Nilsson, Esa Ryhänen)
 - Handle Based Models for Xtext with Handly (Vladimir Piskarev)

Check out the updated program.

EMF Forms: A Question of Effort

A comparison between view modeling and manual UI programming

In my previous blog, I introduced EMF Forms, a subcomponent of EMF Client Platform (ECP), which supports the development of form-based user interfaces based on a view model.  The approach allows the effective development of forms without manual and tedious layout coding or manually binding controls to data models.

The technological basis of EMF Forms has been used actively for more than a year in numerous user projects. In October, with release 1.1.0 of ECP, EMF Forms (still without a name) was presented publicly to the community for the first time. Since October, we’ve been able to win over many new users, we’ve received a lot of feedback and, above all, we’ve continued to develop the software. In this post, I would like to take a close look at EMF Forms. In particular, I would like to share our experience and feedback from user projects and compare EMF Forms to manual UI programming. In this context, the first and perhaps most relevant question about new technology is: does it save effort and how much does it cost?  I start with a short introduction to EMF Forms.  For more details, please refer to the website.  Next, I compare the effort to set up an interface with and without EMF Forms, the effort to create an initial version of the UI and the effort to execute notable changes in the UI. If you already know about EMF Forms, you might want to continue reading here.

What is EMF Forms?

Many business applications are focused on the input and output of data as well as on subsequent data processing. Examples of such data-centric applications can be found in almost all industries, such as in CRM or ERP systems. Regardless of the specific domain, the corresponding data is often presented in form-based UIs. These forms show the contents of one or more entities of the application and their attributes.

Displayed below is a screen shot of a simple example of a form-based UI. It shows a possible form for an entity “Person” with four attributes. Each attribute is identified by a label and a corresponding control (input field). In the example, the attributes are displayed in a simple two-column layout.

image001 EMF Forms: A Question of Effort

Figure 1: Simplified example of a form-based UI for the entity “Person” with four attributes

The implementation of this kind of user interface mainly includes the programming of individual controls such as text boxes, the binding of these controls to the data model and the creation of a layout, i.e., the placement of controls, labels and possible additional layout elements. Although the development of individual controls and their binding is well supported by frameworks such as EMF or data binding, creating and customizing layouts is often a largely manual process. This means that all the visible elements, such as labels and input fields, are created manually in the source code, are bound to the data model and are placed in the layout.

EMF Forms is a radically different approach. Instead of describing a user interface in source code, the UI is expressed by a simple model. It specifies which elements, specifically which attributes of the data model, are displayed at which position in the UI. The actual user interface is then rendered based on this model by interpreting the model. First, the renderer translates the controls in the view model into actual implementations.  A string attribute is displayed, for example, as a text field that is bound to that attribute. Next, the renderer translates the defined structure of the user interface in the view model into a specific layout. Figure 2 shows a very simplified example of a view model that would describe the UI used in the previous example (Figure 1).

image021 EMF Forms: A Question of Effort

Figure 2: A view model describes the form-based UI. This view model is interpreted by a renderer.

Details on the use of EMF Forms, the available view model elements and detailed tutorials can be found on the EMF Forms website. In this blog post, I will share our experiences from projects and feedback from users to compare the approach of EMF Forms with the traditional manual way of programming interfaces. Is EMF Forms’ approach really effective? Of course, there are two ways to create interfaces manually. In the first, manual code can be implemented in a particular UI toolkit. In the other, one can use a UI editor such as WindowBuilder. In general, the second version is of course more efficient but also has the limitation of not allowing use of custom controls or the reuse of interface elements. In the end, UI editors generate source code, a core difference to how EMF Forms works.


The first interesting question when using EMF Forms, of course, is whether the approach is actually more efficient than manually developing interfaces, whether it actually supports the development of a forms-based user interface with less effort.

The view model approach has at first an initial disadvantage: developers must invest time to evaluate the approach, to integrate it and to learn how to use it. Whether this effort is justified, of course, depends on the size of the developed UI, on the complexity and on the number of developers working on the forms.

We were able to observe, however, that the manual programming of user interfaces in most projects is indeed perceived as unnecessarily time-consuming and even an annoying activity. The willingness to adopt a new approach is therefore very high for most developers. In projects in which EMF Forms is already being used for form-based user interfaces, the framework is used throughout the project, including very simple UIs such as for setting dialogs or wizards.

EMF Forms and the view model are explicitly focused on the development of form-based interfaces, therefore it offers a significantly lower complexity level than a traditional UI toolkit. The explicit goal of view modeling is to provide better concepts for describing form-based user interfaces. EMF Forms offers, for example, an item “Control”, which allows developers to specify that a particular attribute from the data model (for example, “First Name”) shall be displayed in the user interface. A control is translated by a renderer into a label or a widget (for example, a text field). If such a control were to be implemented manually without EMF Forms, a label and a text box would have to be created manually.

In EMF Forms, it is sufficient to specify that a particular attribute be displayed. This information is specified in the element “Control”. By placing the control within the structure of the view model, the layout is implicitly defined. The renderer is then responsible for the actual implementation of the UI. Therefore, significantly fewer inputs are required for the specification of an interface in EMF Forms, which is much easier than manual coding in both the initial creation as well as in changing forms. The following screenshot compares the two approaches and shows what would be necessary when creating a similar interface in SWT, with a view model on the left and source code on the right. Of course this is just an example and not statistical proof, but describing UIs in a view model is generally much more concise.

To be fair: on the side of the view model, the tree items shown contain additional information that is not shown in the screenshot. In the example, however, the only additional information specified is which attribute of the data model to display in a certain control. This information can be entered efficiently via a selection dialog. Other attributes of the view model, such as whether a label for a control should be shown, are optional. In the example, the default values are used. When manually developing UIs, those kinds of default options must always be implemented. Furthermore, in the sample code shown in the screenshot, the created widgets are not bound to the data model, which would mean additional effort. In the case of EMF Forms, the renderer takes over this task. Controls are not only bound to the data model, they also provide additional functionality such as input validation, which would otherwise have to be implemented manually.

image03 EMF Forms: A Question of Effort

Figure 3: Comparison between a user interface specified using the view model and the manual implementation in SWT

The initial spark

When considering efficiency, an important criterion is the required effort to create an initial interface that displays all attributes of an entity in a simple layout. Such first versions of forms are particularly helpful for newly defined data entities, for example, to check the data model to see if it is complete. For this use case, the exact layout is often not important yet. The final specification of a user interface for the entities is sometimes developed too early, while the data model is still subject to changes.

When manually developing UIs, UI editors or even UI mock-up tools can be helpful and allow faster results than manual programming. However, the created UIs are not functional; they are not bound to the data model. Using the model-based approach of EMF Forms, user interfaces can also be generated from scratch. In this case, the data model is read and the framework creates a view model on the fly that displays all attributes in a list. This approach is used by default for all entities from the data model for which no explicit view model has been defined yet. Figure 4 shows an example of the generation of a view model from the data model entity user. The generation of a default view model can also be customized, e.g., the default could be a two-column layout. The default view models provide a good starting point for adjusting a user interface step by step, which is the typical process in an agile project. In the following section, we describe the experience with EMF Forms when changes or additions are applied to an existing user interface.

image011 EMF Forms: A Question of Effort

Figure 4: EMF Forms allows the initial generation of a default view model from a data model


When it comes to the development costs of software, the initial cost is typically only half the truth. Equally interesting are the costs when changes occur. Particularly in agile development, changes are an accepted and integral part of the development process. Therefore, a crucial criterion for a framework like EMF Forms is how well the approach supports changing existing form-based user interfaces, either because the layout needs to be adjusted or because there are changes in the underlying data model. To apply changes to an existing user interface, whether in a view model or in manual code, it is first necessary to identify the correct location for the corresponding adjustment. Manually written layout code can be quite difficult to read; the structure of the code often differs significantly from the structure of the user interface. In Figure 3, an example of a two-column layout in SWT, the controls are created line-by-line even though the structure of the interface is column-based. The view model, as a specialized concept for form-based UIs, follows the logic of the user interface more closely and is therefore often easier to read and understand.

There are two different ways to make the actual change in an interface, which makes quite a difference for EMF Forms. The first case is a modification that can be done in the view model. An example of such a change would be moving an attribute inside a form, adding a structural element (for example, a new column) or adding new controls.

Adding a new attribute is the same as initially generating the view model; there is a simple wizard. It’s even easier to change the position of an existing element, be it a control, a group of controls or a whole element of the structure. It can be moved simply by drag and drop in the tree view of the view model.

In these two examples, the model-based approach fully shows its strengths – even highly structured manually written UI code is rarely as simple and understandable as a corresponding view model.

More interesting is the second case in which a change isn’t made directly in the view model but affects the renderer. An example of such a change would be increasing the margins of a rendered control. Many of these settings can be specified in a so-called template model in EMF Forms. Specifying these settings would then affect the layout of the entire application and thus result in a homogeneous look-and-feel.

If a general setting is not (yet) supported, the renderer shipped with EMF Forms can be extended or even replaced with a custom renderer. A typical example of such an adaptation would be adding special controls such as an input field for email addresses. For this purpose, manual UI programming is, of course, still necessary. However, each missing concept has to be implemented only once and can be combined with any existing concept. The additional expense thus refers only to proprietary, custom components.  With manual UI programming, these would need to be developed in any case.

There will always be parts of a form-based UI that are difficult to express in a view model without resulting in a similar complexity to manual UI programming. In these cases, EMF Forms pragmatically concedes and allows embedding of so-called custom areas in a form. This way, very specific parts of the UI can be programmed manually just as before. Of course, it is these types of UI element that should be avoided if possible. In practice, they can often be replaced in the medium term by adapting generic concepts.


This blog post compared the efficiency of programming form-based user interfaces with a model-based approach such as EMF Forms on the one hand and with manual UI programming on the other. It is not surprising that the former comes out better overall. EMF Forms was designed for exactly this purpose – form-based user interfaces – while UI toolkits have to support any type of user interface. Last but not least, I am of course very involved with EMF Forms, so this post cannot be considered an objective comparison. However, we are interested in feedback, even negative. Open source technologies are continually developed and improved only through feedback, especially regarding things that do not work as desired or use cases in which the framework has not been used before. Of course we are happy for positive feedback, too!  For more information on EMF Forms, please visit the EMF project website.

Professional Support

 Open-source software is free of licensing fees. Furthermore, it is easy to adapt and enhance with new features. Nevertheless, using open-source frameworks is not free. Like in closed-source software, no one is an expert on every framework. The total cost of ownership includes training, adoption, enhancement and maintenance of a framework. It might take significantly more time for somebody new to the project to extend a certain feature than for someone who is familiar with the framework. Furthermore, software has to be maintained. Even if this can be done literally by everybody for open-source software, a professional maintenance agreement with fixed response times is often mandatory in an industrial setting to ensure productivity. EclipseSource employs several EMF Forms committers and offers training and professional support. This includes:

  • Evaluation: Let us help to decide whether EMF Forms is the right choice for you. We will evaluate your requirements, assess whether and how they can be matched with EMF Forms and help you estimate the integration effort.

  • Prototyping: Let us provide you with a prototype demonstrating how EMF Forms will work in your domain.

  • Training: Let us teach you how to apply EMF Forms most efficiently in your project, including related technologies such as EMF or ECP.

  • Integration: Let us help you to integrate EMF Forms into your existing application as efficiently as possible.

  • Support: Let us assist your team when solving day-to-day issues such as technical problems or architecture decisions.

  • Sponsored Development and Maintenance: Let us adapt and enhance the framework based on your specific requirements.


Tagged with eclipse, emf, emfforms, rcp

April 14, 2014

RAP CSS Tooling

While I was working on a sample for “Eclipse4 on RAP” I had to use RAP CSS to get a mobile-like behavior.

The CSS/theming support in RAP is quite powerful, but one has to have a web page open to see all the possible attributes applicable to a certain control, which is not how we are used to working in an IDE-dominated world.

We want content-assist, error reporting while typing, … but naturally none of the CSS-Editors know about the rap-specific selectors and properties.

Fortunately, e(fx)clipse has an extensible CSS editor backed by a generic format definition of which properties are available on which selector. All that has to be done to teach it additional CSS selectors and CSS properties is to create a file like this.

If you now:

you should get an editing feeling like the screencast below.

April 11, 2014

OSGi DevCon 2014 Schedule Announced

We are pleased to announce that the OSGi DevCon 2014 Schedule is now available. Register before April 19 to save $400 per ticket. Yes, that's just 8 days away!  Don't forget there is a group discount available if 3 or more of you register at the same time. Click here for the OSGi DevCon 2014 Schedule. As you will see, there is plenty to keep you busy for the 3-day conference (June

EclipseCon Boston 2013 Report

EclipseCon, the international gathering of the Eclipse community, has just come to an end. Couldn't make the trip to Boston? Cédric Brun, Etienne Juliot, Alex Lagarde, Mikaël Barbero, Mélanie Bats and Gaël Blondelle let you relive the conference. Among this year's major themes: Eclipse RCP, Orion and the move to the web, the omnipresence of Modeling and DSLs, Git, and ALM (continuous build and deployment).
