Modeling tools go up to the cloud

February 22, 2018 10:00 AM

TL;DR: at Obeo, we have taken the first steps to bring Sirius to the web. Today we are ready to work with you to bring your own modeling tool up to the cloud.

If you are curious about details, this is the longer story…

As you know, at Obeo, we have been working on modeling tools for a long time now, so we have a lot of experience developing graphical tools. Obeo is a key player in the Modeling ecosystem: we are involved in open source through numerous Eclipse projects: Sirius, Acceleo, EMF/GMF, EcoreTools, EMF Compare… One of our great successes is Sirius: a framework to easily and quickly create a modeling workbench dedicated to your domain.

Today, Sirius relies on the Eclipse platform; consequently, the graphical modelers based on it are desktop applications.

You know that desktop and web applications have been at war since time immemorial…

In the past, we have already won some battles by mixing both approaches and switching from one to the other. For instance, our Enterprise Architecture solution, Obeo SmartEA, provides:

  • a light client experience to explore a model,
  • and a rich application to edit it.

As you may already know, since December we have been working to reconcile the two worlds… For Sirius 6, we are developing a new solution to use Sirius definitions with an HTML renderer within Eclipse workbenches. We will use a hybrid model to get the best of both worlds and make them coexist in the same rich application. In collaboration with Thales, we are prototyping what a Capella graphical modeler would be, using the dynamic visualisation capabilities offered by Sprotty and the Eclipse Layout Kernel. I really appreciate that the TypeFox guys lent us a hand with that; it was truly in the open source spirit!

HTML renderer within Sirius workbenches

I am really happy to announce that now we are going one step further: we are making Sirius evolve to bring modeling tools up to the cloud.

Up

Modeling tools based on Sirius today look like this. You have a graphical editor with a palette to manage elements. (Screenshot: modeling tool today)

Modeling tools based on Sirius tomorrow could look like this. You still have a graphical editor with a palette to manage elements. (Screenshot: modeling tool tomorrow)

The main difference is that tomorrow's Sirius applications will be based on web technologies, and therefore the graphical modelers will be cloud applications. With a cloud application:

  • you no longer need to install it: there is simply no installation process,
  • it does not occupy extra space on your hard drive,
  • your graphical workbench is easy to access from any browser and you can use it from any device,
  • it scales as the workload increases.

There is still a long way to go.

As said before, today our modeling tools are based on Eclipse RCP technologies. We are working on a new architecture to make this work in a client/server world. Sirius will be split into different pieces providing services to clients that render the modeling tool in the browser.

To bring those technologies further to the web, we were convinced from day 1 that we needed a standard communication protocol. A few months ago, an impressive community effort was started around the Language Server Protocol to ease the integration of languages into textual editors. We found that inspiring, and that's why we decided to start working on a Graphical Server Protocol. This initiative defines the protocol between an editor or an IDE and a graphical server that provides graphical features like tools, mappings, layout, layers… We based our work on what was done by TypeFox in Sprotty.

I am glad to reveal our first prototypes of Sirius exposing its services through the Graphical Server Protocol to render, in a browser, diagrams specified with Sirius. Today we can take any existing graphical workbench specified with Sirius and render it in a browser. Not only do we render the graphical elements, but we also interpret at runtime the creation tools available in the palette, according to the ones defined in the Sirius specification. And as we can see in the following video, it is possible to create new model elements with those tools from the palette.

In the next video, just by updating the Sirius specification as usual, you can see in the browser how the elements are rendered, and you can work on the look and feel of your modeler iteratively. Here, at the beginning, all the edges and labels are black. One can update the Sirius specification to set the men's labels and edges in blue. As soon as the configuration is saved, you see the colors change in the browser.

A diagram can provide different layers. This demo shows the capability to enable/disable layers, which filters the graphical elements visible in the diagram and the tools available in the palette accordingly. You see that the layout is automatically recalculated every time a layer is enabled, to provide a smart layout for the diagram at all times.

I believe that both desktop and web applications will continue to coexist for a long time. While each has its pros and cons, what is important is that the application fits users' needs. Users don't care whether your app is native or web-based as long as it does the job properly. We are not giving up one world for the other; we are doing what is needed to get the best of both.

That's why we are really excited that Eclipse Che decided to replace its GWT IDE with a new one relying on modern technologies: Theia. Pursuing the same line as Eclipse RCP, Theia is an open framework that allows users to compose their own applications. I believe that this extensibility model should foster the emergence of a new ecosystem. Thanks to the extensibility offered by Theia, we can envision in the future the development of modeling tools for desktop and web applications equally, and even go further by integrating such tools into cloud frameworks such as Che. (Mockup: a Sirius-based modeler integrated in Che)

One of the expectations of the modeling community is simplified deployment; that was one of our initial goals in moving to cloud-based applications.

As you saw in our videos, we are making Sirius evolve to bring modeling tools to the cloud.

We need you to go beyond.

As open source is in our DNA, most of these changes will be contributed to Sirius. The current Sirius project will evolve to a client/server architecture. Let's work together to bring your tooling to this new platform!

We would also like to invite all of you to participate in the Graphical Server Protocol initiative, to make it an open collaboration providing a common protocol for graphical editors.

Finally, remember that we are available to work with other organizations to bring modeling tools to the web, including integration with web & cloud tooling such as Theia & Che.

Contact us, join us, support us!

We're gonna get it, get it together I know
We're gonna get it, get it together and flow
We're gonna get it, get it together and go
Up, and up, and up

(Coldplay, Up&Up)



Eclipse Newsletter | Boot & Build Eclipse Projects

February 22, 2018 08:51 AM

Featuring articles about Spring Tools, Gradle plugin (Buildship), Dirigible cloud platform, and EMF Parsley.


Countdown: 2 weeks | Complete IoT Developer Survey

February 20, 2018 09:27 AM

Take 10 minutes to complete the fourth annual IoT Developer survey!


Integration Tooling for Eclipse Oxygen

by pleacu at February 19, 2018 02:13 PM

Try our complete Eclipse Oxygen and Red Hat JBoss Developer Studio 11 compatible integration tooling.


JBoss Tools Integration Stack 4.5.2.Final / Developer Studio Integration Stack 11.2.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.5 and Developer Studio 11.

What’s new for this release?

This release provides full Teiid Designer tooling support for JBoss Data Virtualization 6.4 runtime. It provides an updated BPMN2 Modeler and jBPM/Drools for our Business Process Modeling friends. It also provides full synchronization with Devstudio 11.2.0.GA, JBoss Tools 4.5.2.Final and Eclipse Oxygen.2. Please note that SwitchYard is deprecated in this release.

Released Tooling Highlights

JBoss Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.4.2.Final Known Issues Section of the Integration Stack 11.2.0.GA release notes.

Drools/jBPM6 Known Issues

See the Drools 7.5.0.Final Known Issues Section of the Integration Stack 11.2.0.GA release notes.

SwitchYard Highlights

See the SwitchYard 2.4.1.Final Resolved Issues Section of the Integration Stack 11.2.0.GA release notes.

Data Virtualization Highlights

Teiid Designer

See the Teiid Designer 11.1.1.Final Resolved Issues Section of the Integration Stack 11.2.0.GA release notes.

What’s an Integration Stack?

Red Hat JBoss Developer Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by JBoss Developer Studio, with plug-ins specifically for use when developing for other Red Hat JBoss products. It’s where DataVirt Tooling, SOA tooling and BRMS tooling are aggregated. The following frameworks are supported:

JBoss Business Process and Rules Development

JBoss Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat JBoss BRMS and Red Hat JBoss BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing including KIE.

  • jBPM - A flexible Business Process Management (BPM) suite.

JBoss Data Virtualization Development

JBoss Data Virtualization Development plug-ins provide a graphical interface to manage various aspects of Red Hat JBoss Data Virtualization instances, including the ability to design virtual databases and interact with associated governance repositories.

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services without programming using the Teiid runtime framework.

JBoss Integration and SOA Development

JBoss Integration and SOA Development plug-ins provide tooling for developing, configuring and deploying BRMS and SwitchYard to Red Hat JBoss Fuse and Fuse Fabric containers.

  • All of the Business Process and Rules Development plugins, plus SwitchYard. SwitchYard is deprecated as of this release.

  • Fuse Tooling has moved out of the Integration Stack to be a core part of JBoss Tools and Developer Studio.

The JBoss Tools website features tab

Don't miss the Features tab for up-to-date information on your favorite Integration Stack components.

Installation

The easiest way to install the Integration Stack components is through the stand-alone installer or through our JBoss Tools Download Site.

For a complete set of Integration Stack installation instructions, see the Integration Stack Installation Guide.

Let us know how it goes!

Paul Leacu.



Is your JUG an EclipseCon France partner?

by Anonymous at February 19, 2018 11:36 AM

JUG partners are helping to spread the word about EclipseCon France. Find out more about the program on the JUG partner page. And, welcome our first partners from Nantes, Paris, Toulouse and Bordeaux!



CheConf18

February 16, 2018 12:00 PM

A small post to announce that I will be speaking at CheConf18, a one-day conference dedicated to Eclipse Che, an extensible cloud development platform. I am really excited to participate in this event! I will co-present with Stevan Le Meur, a Che maintainer from Red Hat, about Building Extensibility and Community for Che. During our session, you'll get a preview of the work we prototyped at Obeo to bring the modeling stack to the web, and of what class of tools one can envision.

Perhaps you hadn't heard about CheConf so far? No problem: it's not too late to participate. Indeed, there's no need to negotiate with your boss, no need to book a flight and a hotel: this conference happens entirely online and is free!

You just need to register on the website, and you'll be able to join the conference whenever you want. Have a look at the great schedule. And do not forget our session: Wednesday, February 21, 2018, 17:30-18:00 Paris time, wherever you are in the world!

Stay tuned!



Cloud Native and Serverless Landscape

by Chris Aniszczyk at February 16, 2018 10:21 AM

For the last year or so, the CNCF has been exploring the intersection of cloud native and serverless through the CNCF Serverless WG:

As the first artifacts of the working group, we are happy to announce a whitepaper and landscape to bring some clarity to this early and evolving technology space. The CNCF Serverless WG is also working on a draft specification for describing event data in a common way to ease event declaration and delivery, focused on the serverless space. The goal is to eventually present this project to the CNCF TOC to formalize it as an official CNCF project:

We're still in the early days but, in my opinion, serverless is one application/programming model built on cloud native technology. There are some open source efforts out there for serverless, but they tend to be focused on specific projects (e.g., OpenFaaS, kubeless) rather than on collaboration across cloud providers and startups. The CNCF is looking to enable collaboration/projects in this space that adhere to our values. What are our values? See these from our charter:

  • Fast is better than slow. The foundation enables projects to progress at high velocity to support aggressive adoption by users.
  • Open. The foundation is open and accessible, and operates independently of specific partisan interests. The foundation accepts all contributors based on the merit of their contributions, and the foundation’s technology must be available to all according to open source values. The technical community and its decisions shall be transparent.
  • Fair. The foundation will avoid undue influence, bad behavior or “pay-to-play” decision-making.
  • Strong technical identity. The foundation will achieve and maintain a high degree of its own technical identity that is shared across the projects.
  • Clear boundaries. The foundation shall establish clear goals, and in some cases, what the non-goals of the foundation are to allow projects to effectively co-exist, and to help the ecosystem understand where to focus for new innovation.
  • Scalable. Ability to support all scales of deployment, from small developer centric environments to the scale of enterprises and service providers. This implies that in some deployments some optional components may not be deployed, but the overall design and architecture should still be applicable.
  • Platform agnostic. The specifications developed will not be platform specific such that they can be implemented on a variety of architectures and operating systems.

Anyways, if you're interested in this space, I highly recommend you attend the CNCF Serverless WG meetings, which are public and currently happen on a weekly basis.



Presentation: Spring Tools 4 - Eclipse and Beyond

by Martin Lippert, Kris De Volder at February 15, 2018 06:27 PM

Martin Lippert and Kris De Volder introduce and demo a new generation of Spring tools including Spring Tool Suite for Eclipse (STS4), STS4 VS Code and STS4 Atom.



JBoss Tools 4.5.3.AM1 for Eclipse Oxygen.2

by jeffmaury at February 13, 2018 08:44 PM

Happy to announce the 4.5.3.AM1 (Developer Milestone 1) build for Eclipse Oxygen.2.

Downloads available at JBoss Tools 4.5.3 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Minishift Server Adapter

A new server adapter has been added to support upstream Minishift. While the server adapter itself has limited functionality, it is able to start and stop the Minishift virtual machine via its minishift binary. From the Servers view, click New and then type minishift; this will bring up a command to set up and/or launch the Minishift server adapter.


All you have to do is set the location of the minishift binary file, the type of virtualization hypervisor and an optional Minishift profile name.


Once you’re finished, a new Minishift Server adapter will then be created and visible in the Servers view.


Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new OpenShift application and begin developing their AwesomeApp in a highly replicable environment.


Fuse Tooling

New shortcuts in Fuse Integration perspective

Shortcuts for the Java, Launch, and Debug perspectives, as well as basic navigation operations, are now provided within the Fuse Integration perspective.

The result is a set of buttons in the Toolbar:

New Toolbar action

All of the associated keyboard shortcuts are also available, such as Ctrl+Shift+T to open a Java Type.

Performance improvement: Loading Advanced tab for Camel Endpoints

The loading time of the "Advanced" tab in the Properties view for Camel Endpoints is greatly improved.

Advanced Tab in Properties view

Previously, for Camel components that have a lot of parameters, it took several seconds to load the Advanced tab. For example, for the File component it took ~3.5s; it now takes ~350ms, a tenfold reduction in load time. (See this interesting article on response time.)

If you notice other places with slow performance, please file a report using the Fuse Tooling issue tracker. The Fuse Tooling team really appreciates your help: your feedback contributes to our development priorities and improves the Fuse Tooling user experience.

Enjoy!

Jeff Maury



Eclipse tested with a few Gnome themes

by Lorenzo Bettini at February 13, 2018 09:53 AM

In this small blog post I'll show how Eclipse looks in Linux Gnome (Ubuntu 17.10) with a few Gnome themes.

First of all, the default Ubuntu theme, Ambiance, does not make Eclipse look very nice: see the icons, which are "packed" and "compressed" in the toolbar, not to mention the cut-off "Filter Files" textbox in the "Git Staging" view:

Numix has similar problems:

Adwaita (the default Gnome theme), instead, makes it look great:

The same holds for alternative themes; the following screenshots are based on Arc, Pop and Matcha, respectively:

So, in the end, stay away from the default Ubuntu theme 😉



Python 3 and Import Hooks for OSGi Services

by Scott Lewis (noreply@blogger.com) at February 13, 2018 02:15 AM

In a previous post I described using Python for implementing OSGi services. This Python<->Java service bridge allows Python-provided/implemented OSGi services to be called from Java, and Java-provided/implemented OSGi services to be called from Python. OSGi Remote Services provides a standardized way of communicating service meta-data (e.g. service contracts, endpoint meta-data) between Java and Python processes.

As this Java<->Python communication conforms to the OSGi Remote Services specification, everything is completely interoperable with Declarative Services and/or other frameworks based upon OSGi Services. It will also run in any OSGi R5+ environment, including Eclipse, Karaf, OSGi-based web servers, or other OSGi-based environments.

Recently, Python 3 introduced the concept of an import hook. An import hook allows the Python path and the behavior of the Python import statement to be dynamically altered or extended.

In the most recent version (2.7) of the ECF Py4j Distribution Provider, we use import hooks so that Python module imports are resolved by a Java-side OSGi ModuleResolver service. For example, as described in this tutorial, this Python statement

from hello import HelloServiceImpl

imports the hello.py module as a string loaded from within an OSGi bundle. Among other things, this allows OSGi dynamics to be used to add and remove modules from the Python path without stopping and restarting either the Java or the Python processes.
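
To make the mechanism concrete, here is a rough Java sketch of what such a Java-side resolver contract could look like. The ModuleResolver name comes from the post, but the method names and signatures below are invented for illustration and are not the actual ECF API:

// Hypothetical sketch: a Java-side OSGi service that the Python import
// hook delegates to; method names are made up for illustration.
public interface ModuleResolver {

    // Return true if this resolver can supply the given Python module.
    boolean canResolve(String moduleName);

    // Return the module source, e.g. hello.py read from within a bundle.
    String resolveSource(String moduleName);
}

Because such a resolver would be an ordinary OSGi service, bundles could register and unregister implementations at runtime, which is what makes the dynamic addition and removal of Python modules possible.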




Eclipse Vert.x 3.5.1 released!

by vietj at February 13, 2018 12:00 AM

We have just released Vert.x 3.5.1!

Fixes first!

As usual, this release fixes bugs reported in 3.5.0; see the release notes.

JUnit 5 support

This release introduces the new vertx-junit5 module.

JUnit 5 is a rewrite of the famous Java testing framework that brings new interesting features, including:

  • nested tests,
  • the ability to give a human-readable description of tests and test cases (and yes, even use emojis),
  • a modular extension mechanism that is more powerful than the JUnit 4 runner mechanism (@RunWith annotation),
  • conditional test execution,
  • parameterized tests, including from sources such as CSV data,
  • the support of Java 8 lambda expressions in the reworked built-in assertions API,
  • support for running tests previously written for JUnit 4.

Suppose that we have a SampleVerticle verticle that exposes an HTTP server on port 11981. Here is how we can test its deployment as well as the result of 10 concurrent HTTP requests:

@Test
@DisplayName("🚀 Deploy a HTTP service verticle and make 10 requests")
void useSampleVerticle(Vertx vertx, VertxTestContext testContext) {
  WebClient webClient = WebClient.create(vertx);
  Checkpoint deploymentCheckpoint = testContext.checkpoint();

  Checkpoint requestCheckpoint = testContext.checkpoint(10);
  vertx.deployVerticle(new SampleVerticle(), testContext.succeeding(id -> {
    deploymentCheckpoint.flag();

    for (int i = 0; i < 10; i++) {
      webClient.get(11981, "localhost", "/")
        .as(BodyCodec.string())
        .send(testContext.succeeding(resp -> {
          testContext.verify(() -> {
            assertThat(resp.statusCode()).isEqualTo(200);
            assertThat(resp.body()).contains("Yo!");
            requestCheckpoint.flag();
          });
        }));
    }
  }));
}

The test method above benefits from the injection of a working Vertx context, a VertxTestContext for dealing with asynchronous operations, and the guarantee that the execution time is bounded by a timeout, which can optionally be configured using a @Timeout annotation.

The test succeeds when all checkpoints have been flagged. Note that vertx-junit5 is agnostic of the assertions library being used: you may opt for the built-in JUnit 5 assertions or use a 3rd-party library such as AssertJ as we did in the example above.

You can check out the source on GitHub, read the manual and learn from the examples.

Web API Contract enhancements

The vertx-web-api-contract package includes a variety of fixes, from schema $ref resolution to revamped documentation. You can take a look at the list of all fixes/improvements here and all breaking changes here.

From 3.5.1, to load the OpenAPI spec and instantiate the Router, you should use the new method OpenAPI3RouterFactory.create(), which replaces the old methods createRouterFactoryFromFile() and createRouterFactoryFromURL(). This new method accepts relative paths, absolute paths, local URLs with file:// and remote URLs with http://. Note that if you want to refer to a file relative to your jar's root, you can simply use a relative path and the parser will look both outside and inside the jar for the spec.

From 3.5.1, all settings for OpenAPI3RouterFactory behaviour during router generation are grouped in a new object called RouterFactoryOptions. With this object you can:

  • Configure if you want to mount a default validation failure handler and which one (methods setMountValidationFailureHandler(boolean) and setValidationFailureHandler(Handler))
  • Configure if you want to mount a default 501 not implemented handler and which one (methods setMountNotImplementedFailureHandler(boolean) and setNotImplementedFailureHandler(Handler))
  • Configure if you want to mount ResponseContentTypeHandler automatically (method setMountResponseContentTypeHandler(boolean))
  • Configure if you want to fail during router generation when security handlers are not configured (method setRequireSecurityHandlers(boolean))

After initializing the router factory, you can set the RouterFactoryOptions object with routerFactory.setOptions() at any point before calling getRouter().
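
Putting these pieces together, a minimal sketch of the new flow might look like the following; the spec path, port and option values are placeholder assumptions, and only the factory and option method names listed above come from the release:

// Sketch only: load a (placeholder) spec, configure the factory options,
// then materialize the Router and serve it.
OpenAPI3RouterFactory.create(vertx, "petstore.yaml", ar -> {
  if (ar.succeeded()) {
    OpenAPI3RouterFactory routerFactory = ar.result();

    RouterFactoryOptions options = new RouterFactoryOptions()
      .setMountValidationFailureHandler(true)   // mount the default validation failure handler
      .setMountResponseContentTypeHandler(true) // mount ResponseContentTypeHandler automatically
      .setRequireSecurityHandlers(false);       // do not fail when security handlers are missing
    routerFactory.setOptions(options);

    // ... add operation handlers here ...

    Router router = routerFactory.getRouter();
    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  } else {
    ar.cause().printStackTrace();
  }
});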

RxJava deprecation removal

It is important to know that 3.5.x will be the last release with the legacy xyzObservable() methods:

@Deprecated()
public Observable<HttpServer> listenObservable(int port, String host);

has been replaced since Vert.x 3.4 by:

public Single<HttpServer> rxListen(int port, String host);

The xyzObservable() deprecated methods will be removed in Vert.x 3.6.
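
As a minimal migration sketch, assuming an HttpServer instance named server obtained from the rx-ified API, a call site would change like this:

// Before: deprecated RxJava 1 style method, removed in Vert.x 3.6
server.listenObservable(8080, "localhost")
  .subscribe(s -> System.out.println("listening"), Throwable::printStackTrace);

// After: the rx-prefixed replacement returning a Single
server.rxListen(8080, "localhost")
  .subscribe(s -> System.out.println("listening"), Throwable::printStackTrace);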

Wrap up

Vert.x 3.5.1 release notes and breaking changes:

The event bus client using the SockJS bridge is available from NPM, Bower, and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding!



EclipseCon France 2018 | Call for Papers

February 12, 2018 02:15 PM

The call for paper submissions is now open until March 19. We'll see you June 13 - 14 in Toulouse!


The Eclipse Committer Election Workflow

by waynebeaton at February 08, 2018 03:30 PM

In the world of open source, Committers are the ones who hold the keys. Committers decide what code goes into the code base, they decide how a project builds, and they ultimately decide what gets delivered to the adopter community. With awesome power comes awesome responsibility, so it's no mistake that the Open Source Rules of Engagement described by the Eclipse Development Process put Meritocracy on equal footing with Transparency and Openness: becoming a committer isn't necessarily hard, but it does require a demonstration of commitment (committer… commitment… see what I did there?).

There are two ways to become an Eclipse Committer. The first is to be listed as an initial committer on a new project proposal. When projects come to the Eclipse Foundation we need them to actually start with committers, so we include this as part of the bootstrapping. As part of the community vetting of a new project proposal, the listed committers are themselves vetted by the community. That's why we include space for a merit statement for every committer listed on a proposal (in many cases, the merit statement is an implied "these are the people who worked on the code that is being contributed"). In effect, the project proposal process also acts as a committer election that's open to the entire community.

The second way to become a committer is to get voted in via a Committer Election. This starts with a nomination by an existing committer that includes a statement of merit, which usually takes the form of a list of the various contributions that the individual has made to the project. What constitutes a sufficient demonstration of merit varies by project team and PMC. Generally, though, after an individual has made a small number of high-quality contributions that demonstrate that they understand how the project works, it's pretty natural for them to be invited to join the team.

There's actually a third way. In cases where a project is dysfunctional, the project leadership has the option to add and remove committers and project leads. In the rare cases where this option is exercised, it is first discussed on the corresponding Project Management Committee's (PMC) mailing list.

Last week, we rolled out some new infrastructure to support Committer Elections.

Every project page in the Project Management Infrastructure (PMI) includes a block of Committer Tools on the right side of the page. From this block, project committers can perform various actions, including the new Nominate a Committer action.


Committer Tools

Clicking this will bring up the nomination form where the existing committer will provide the name and email address of the nominee along with the statement of merit.


What the committer sees when they nominate a new committer.

When you click the Nominate button, the Committer Election begins by sending a note to the project mailing list inviting existing project committers to vote. Committers visit the election page to cast their vote and—since this is a transparent process—everybody else can watch the election unfold.

According to our election rules, an election ends either when everybody has voted in the affirmative or when seven days have passed. If at the end of the election we have at least three affirmative votes and no negative votes, the vote is considered successful and is passed on to the PMC for approval (note that when a project has fewer than three committers, success is declared if everybody votes in the affirmative). The PMC will validate that the merit statement is sufficient and that the election was executed correctly, and either approve or veto it. PMC-approved elections are passed into the next piece of the workflow: Committer Paperwork.
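
Just to make the rule concrete, here is a toy encoding of the success criteria described above; the class and method names are invented for illustration and are not part of the actual workflow implementation:

// Toy sketch of the election rules; all names are made up.
final class ElectionRules {

    // An election ends when everybody has voted in favor or seven days pass.
    static boolean ended(int committers, int yesVotes, boolean sevenDaysElapsed) {
        return yesVotes == committers || sevenDaysElapsed;
    }

    // Success: no negative votes and at least three affirmative votes,
    // or unanimity when the project has fewer than three committers.
    static boolean successful(int committers, int yesVotes, int noVotes) {
        if (noVotes > 0) return false;
        return committers < 3 ? yesVotes == committers : yesVotes >= 3;
    }
}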

Regardless of how a developer becomes a committer (by vote, by proposal, or by appointment), they are required to complete legal paperwork before we can grant them write access to project resources. The Eclipse Foundation needs to ensure that all committers with write access to the code, websites, and issue tracking systems understand their role in the intellectual property process, and that we have accurate records of the people who are acting as change agents on the projects. Committers must provide documentation asserting that they have read, understood, and will follow the committer guidelines, and must gain their employer's consent to their participation in Eclipse Foundation open source projects.

Our Committer Paperwork process is initiated whenever a developer joins us as a new committer, or—since paperwork is tied to a specific employer—when a committer changes employers.


The exact nature of the paperwork required varies based on the individual's employment status and the Eclipse Foundation membership status of their employer. Again, a full discussion of this is out of scope for this post, but we need to have either an Individual Committer Agreement or a Member Committer Agreement on file for every committer. The workflow guides the new committer through the options.

Note that we’ve just gotten approval on an update to the Individual Committer Agreement that eliminates the need for the companion Eclipse Foundation Committer Employer Consent Form. This should make it easier for new committers to get started. We’re rolling the new version out now.

We owe this great new implementation of this workflow to the tireless efforts of the entire Eclipse IT Team, and especially Eric, Chris, and Matt. Big Thanks!



Starting an open source program office?

by Chris Aniszczyk at February 08, 2018 02:17 PM

To make good on my New Year's resolution of writing more, I recently wrote an article for opensource.com on starting an open source program for your company:

Please check it out and let me know if you have any comments. I'd really like to see us build a future where more companies have formal open source programs; that's a key path towards making open source sustainable for everyone.



Becoming Xtext Co-Project Lead

by Christian Dietrich (dietrich@itemis.de) at February 07, 2018 09:07 AM

I started using Xtext more than 10 years ago. Back then it was a small part of the openArchitectureWare framework. I began using it heavily after the move to Eclipse and became a power user and supporter in the newsgroups and forum. In 2016 I joined the Xtext committer team and worked on the framework for about 50% of my time.

At roughly the same time, parts of the Xtext team moved away from itemis, so the people working on Xtext and their main focus changed. I still think Xtext is a very valuable framework, and it deserves more attention than it currently gets. This is why I stepped up to become a co-project lead for the project: to ensure its management is put on a broader footing.



As Xtext committer and co-project lead, my main goals are the following:

  • Ensure that Xtext and Xtend are actively maintained and will work with future versions of the Eclipse Platform and JDT, as well as with future versions of Java itself (for example, the Java 9 support we are currently working on).
  • Ensure that relevant bugs and performance problems keep being addressed and fixed in a reasonable manner and timespan.
  • Enable more users to contribute to Xtext.
  • Develop new features that make our users' lives easier, and keep up with trends and developments inside and outside the Eclipse ecosystem.
  • Make sure Xtext will continue to be supported and to run smoothly in standalone modes such as LSP, as well as inside the Eclipse IDE.
  • Make sure we have regular releases and keep pace with the release process currently planned for the Eclipse Platform, though we may slow down from the release cadence you were used to in the past.

It is not only the TypeFox guys or our Xtext team at itemis (Karsten, Sebastian, Holger and other colleagues) that drive Xtext. I invite you, the Xtext community, to actively contribute to the framework: not only by filing bugs or giving feedback; I warmly welcome everybody who is willing to contribute fixes or features to Xtext. Get in contact with us and let us work together on a great future for Xtext.

 



EE.next working group | Community review process

February 06, 2018 01:00 PM

Announcing the EE.next working group to support the EE4J projects. Join the review process for the charter.


Introducing the EE.next Working Group

by Mike Milinkovich at February 05, 2018 07:43 PM

As part of our continuing adventures in migrating Java EE to the Eclipse Foundation, I am pleased to announce that the draft of the EE.next Working Group charter has now been posted for community review. Comments and feedback are welcome on the ee4j-community@eclipse.org mailing list. But please, please, pretty please make sure you read the FAQ (also copied below) before you do.

You can think of this EE.next group as the replacement for the Java Community Process for Java EE. It will be the body that the ecosystem can join and participate in at a corporate level. Individuals can also join if they are committers on EE4J projects. EE.next will also be the place where the new specification process will be created and managed, and where specs will be reviewed and approved.

Under the process for establishing Eclipse Foundation working groups, there will now be a community review period lasting a minimum of 30 days.

 

FAQ

What is the purpose of a working group?

An Eclipse Foundation working group is a special-purpose consortium of Eclipse Members interested in supporting a technology domain. Working groups are intended to complement the activities of a collection of Eclipse Foundation open source projects. Open source projects are excellent for many things, but they typically do not do a great job with activities such as marketing, branding, specification and compliance processes, and the like.

What is the role of the PMC versus the working group or the working group Steering Committee?

Eclipse Foundation projects are self-governing meritocracies that set their own technical agendas and plans. The Project Management Committee for an Eclipse top-level project oversees the day-to-day activities of its projects through activities such as reviewing and approving plans, accepting new projects, approving releases, managing committer elections, and the like.

Working groups and their steering committees are intended to complement the work happening in the open source projects with activities that lead to greater adoption, market presence, and momentum. Specifically the role of the working group is to foster the creation and growth of the ecosystem that surrounds the projects.

Working groups do not direct the activities of the projects or their PMC. They are intended to be peer organizations that work in close collaboration with one another.

Who defines and manages technical direction?

The projects manage their technical direction. The PMC may elect to coordinate the activities of multiple projects to facilitate the release of software platforms, for example.

Because the creation of roadmaps and long term release plans can require market analysis, requirements gathering, and resource commitments from member companies, the working group may sponsor complementary activities to generate these plans. However, ultimately it is up to the projects to agree to implement these plans or roadmaps. The best way for a working group to influence the direction of the open source projects is to ensure that they have adequate resources. This can take the form of developer contributions, or under the Member Funded Initiatives programs, working groups can pool funds to contract developers to implement the features they desire.

Why are there so many levels of membership?

Because the Java EE ecosystem is a big place, we want to ensure that there are roles for all of the players in it. We see the roles of the various member classes roughly as follows:

  • Strategic members are the vendors that deliver Java EE implementations. As such they are typically putting in the largest number of contributors, and are leading many of the projects.
  • Influencer members are the large enterprises that rely upon Java EE today for their mission critical application infrastructure, and who are looking to EE.next to deliver the next generation of cloud native Java. They have made strategic investments in this technology, have a massive skills investment in their developers, and want to protect these investments as well as influence the future of this technology.
  • Participant members are the companies that offer complementary products and services within the Java EE ecosystem. Examples include ISVs which build products on Java EE, or system integrators that use these technologies in delivering solutions to their customers.
  • Committer members are the committers working on the various EE4J projects who are also members of the Eclipse Foundation. While the Eclipse bylaws define the criteria for committers to be considered members, in essence committer members are either a) committers who are employees of an EE.next member company, or b) any other committers who have explicitly chosen to join as members. Giving Committer members a role in the working group governance process mimics the governance structure of the Eclipse Foundation itself, where giving committers an explicit voice has been invaluable.

What makes this different from the Java Community Process (JCP)?

The EE.next working group will be the successor organization to the JCP for the family of technologies formerly known as Java EE. It has several features that make it a worthy successor to the JCP:

  1. It is vendor neutral. The JCP was owned and operated first by Sun and later by Oracle. EE.next is designed to be inclusive and diverse, with no organization having any special roles or rights.
  2. It has open intellectual property flows. At the JCP, all IP flowed to the Spec Lead, which was typically Oracle. We are still working out the exact details, but the IP rights with EE.next and EE4J will certainly not be controlled by any for-profit entity.
  3. It is more agile. This is an opportunity to define a 21st century workflow for creating rapidly evolving Java-based technologies. We will be merging the best practices from open source with what we have learned from over 15 years of JCP experience.

Is the WG steering committee roughly equivalent to the JCP Executive Committee?

No, not really. The JCP EC always had two mixed roles: as a technical body overseeing the specification process, and as an ecosystem governance body promoting Java ME, SE, and EE. In EE.next the Steering Committee will be the overall ecosystem governance body. The EE.next Specification Committee will focus solely on the development and smooth operation of the technical specification process.

Does a project have to be approved as a spec before it can start?

That is actually a decision which will be made by the EE4J PMC, not the working group. However, it is a goal of the people and organizations working on creating this working group that the Java EE community move to more of a code-first culture. We anticipate and hope that the EE4J PMC will embrace the incubation of technologies under its banner. Once a technology has been successfully implemented and adopted by at least some in the industry, it can then propose that a specification be created for it.

In addition to the Steering Committee, what other committees exist?

There are four committees comprising the EE.next governance structure – the Steering Committee, the Specification Committee, the Marketing and Brand Committee, and the Enterprise Requirements Committee. A summary of the make-up of each of the committees is in the table below.

Committee | Strategic Member | Influencer Member | Participant Member | Committer Member
Member of the Steering Committee | Appointed | Elected | Elected | Elected
Member of the Specification Committee | Appointed | Elected | Elected | Elected
Member of the Marketing Committee | Appointed | Elected | Elected | Elected
Member of the Enterprise Requirements Committee | Appointed | Appointed | N/A | N/A


A DSL with transitive imports in Xtext, in 5 minutes

by Christian Wehrheim (cwehrheim@itemis.de) at February 05, 2018 02:51 PM

Xtext offers several ways to reference elements in DSLs. One option is to import elements via namespaces. This is done using the ImportedNamespaceAwareLocalScopeProvider and allows the "import" of individual elements or, using wildcards (.*), of all elements of a namespace.

There can, however, be languages in which this behavior is not desired. In those languages, the user explicitly imports one or more resource files in order to access their contents.

A simple DSL with import behavior, thanks to Xtext

A DSL with such import behavior can be created quite easily with Xtext (https://www.itemis.com/en/xtext/) by adding a parser rule with the special attribute name importURI to the DSL. The following example shows a simple DSL that allows names to be defined in arbitrary resources and used in greeting messages.

grammar org.xtext.example.mydsl.MyDsl with org.eclipse.xtext.common.Terminals
generate myDsl "http://www.xtext.org/example/mydsl/MyDsl"
Model:
	includes+=Include*
	names+=Name*
	greetings+=Greeting*;
Include:
	'import' importURI=STRING
	;
Name:
	'def' name=ID
	;
Greeting:
	'Hallo' name=[Name] '!'
	;

We want to send greeting messages to colleagues in our company. Since the company is large and consists of many colleagues working in different areas, we want to create a separate file for each business area containing the names of the respective colleagues. This improves clarity and maintainability.

Only an explicit import of a resource should bring the name definitions it contains into scope, and this should happen as quickly and with as little resource consumption as possible.

The approach here is to use the index, which makes the unnecessary and (for large models) time-consuming loading of resources superfluous. As a first step, we have to write the information about the imported resources into the index. To do so, we implement a class MyDslResourceDescriptionStrategy that extends DefaultResourceDescriptionStrategy. The strings with the URIs of the resources imported in the parser rule Model are joined into a comma-separated string and stored under the key includes in the userData map of the object description in the index.

package org.xtext.example.mydsl

import com.google.inject.Inject
import java.util.HashMap
import org.eclipse.xtext.naming.QualifiedName
import org.eclipse.xtext.resource.EObjectDescription
import org.eclipse.xtext.resource.IEObjectDescription
import org.eclipse.xtext.resource.impl.DefaultResourceDescriptionStrategy
import org.eclipse.xtext.scoping.impl.ImportUriResolver
import org.eclipse.xtext.util.IAcceptor
import org.xtext.example.mydsl.myDsl.Model
import org.eclipse.emf.ecore.EObject

class MyDslResourceDescriptionStrategy extends DefaultResourceDescriptionStrategy {
	public static final String INCLUDES = "includes"
	@Inject
	ImportUriResolver uriResolver

	override createEObjectDescriptions(EObject eObject, IAcceptor<IEObjectDescription> acceptor) {
		if(eObject instanceof Model) {
			this.createEObjectDescriptionForModel(eObject, acceptor)
			return true
		}
		else {
			super.createEObjectDescriptions(eObject, acceptor)
		}
	}

	def void createEObjectDescriptionForModel(Model model, IAcceptor<IEObjectDescription> acceptor) {
		val uris = newArrayList()
		model.includes.forEach[uris.add(uriResolver.apply(it))]
		val userData = new HashMap<String, String>
		userData.put(INCLUDES, uris.join(","))
		acceptor.accept(EObjectDescription.create(QualifiedName.create(model.eResource.URI.toString), model, userData))
	}
}

To use our ResourceDescriptionStrategy, we have to bind it in the MyDslRuntimeModule.

 

package org.xtext.example.mydsl

import org.eclipse.xtext.resource.IDefaultResourceDescriptionStrategy
import org.eclipse.xtext.scoping.IGlobalScopeProvider
import org.xtext.example.mydsl.scoping.MyDslGlobalScopeProvider

class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {
	def Class<? extends IDefaultResourceDescriptionStrategy> bindIDefaultResourceDescriptionStrategy() {
		MyDslResourceDescriptionStrategy
	}
}


So far we have only collected information and stored it in the index. To use it, we additionally need our own IGlobalScopeProvider. For this, we implement a class MyDslGlobalScopeProvider that extends ImportUriGlobalScopeProvider, and override the method getImportedUris(Resource resource). This method returns a LinkedHashSet that ultimately contains all the URIs to be imported into the resource.

Reading the imported resources from the index is done by the method collectImportUris. This method asks the IResourceDescription.Manager for the IResourceDescription of the resource. From it, for each Model element, the string with the URIs of the imported resources stored under the key includes is read from the userData map and split up, and the individual URIs are stored in a set.


package org.xtext.example.mydsl.scoping

import com.google.common.base.Splitter
import com.google.inject.Inject
import com.google.inject.Provider
import java.util.LinkedHashSet
import org.eclipse.emf.common.util.URI
import org.eclipse.emf.ecore.resource.Resource
import org.eclipse.xtext.EcoreUtil2
import org.eclipse.xtext.resource.IResourceDescription
import org.eclipse.xtext.scoping.impl.ImportUriGlobalScopeProvider
import org.eclipse.xtext.util.IResourceScopeCache
import org.xtext.example.mydsl.MyDslResourceDescriptionStrategy
import org.xtext.example.mydsl.myDsl.MyDslPackage

class MyDslGlobalScopeProvider extends ImportUriGlobalScopeProvider {
	private static final Splitter SPLITTER = Splitter.on(',');

	@Inject
	IResourceDescription.Manager descriptionManager;

	@Inject
	IResourceScopeCache cache;

	override protected getImportedUris(Resource resource) {
		return cache.get(MyDslGlobalScopeProvider.getSimpleName(), resource, new Provider<LinkedHashSet<URI>>() {
			override get() {
				val uniqueImportURIs = collectImportUris(resource, new LinkedHashSet<URI>(5))

				val uriIter = uniqueImportURIs.iterator()
				while(uriIter.hasNext()) {
					if (!EcoreUtil2.isValidUri(resource, uriIter.next()))
						uriIter.remove()
				}
				return uniqueImportURIs
			}

			def LinkedHashSet<URI> collectImportUris(Resource resource, LinkedHashSet<URI> uniqueImportURIs) {
				val resourceDescription = descriptionManager.getResourceDescription(resource)
				val models = resourceDescription.getExportedObjectsByType(MyDslPackage.Literals.MODEL)
				
				models.forEach[
					val userData = getUserData(MyDslResourceDescriptionStrategy.INCLUDES)
					if(userData !== null) {
						SPLITTER.split(userData).forEach[uri |
							var includedUri = URI.createURI(uri)
							includedUri = includedUri.resolve(resource.URI)
							uniqueImportURIs.add(includedUri)
						]
					}
				]
				return uniqueImportURIs
			}
		});
	}
}


To use our MyDslGlobalScopeProvider, we again have to bind it in the MyDslRuntimeModule.

package org.xtext.example.mydsl

import org.eclipse.xtext.resource.IDefaultResourceDescriptionStrategy
import org.eclipse.xtext.scoping.IGlobalScopeProvider
import org.xtext.example.mydsl.scoping.MyDslGlobalScopeProvider

class MyDslRuntimeModule extends AbstractMyDslRuntimeModule {
	def Class<? extends IDefaultResourceDescriptionStrategy> bindIDefaultResourceDescriptionStrategy() {
		MyDslResourceDescriptionStrategy
	}
	override Class<? extends IGlobalScopeProvider> bindIGlobalScopeProvider() {
		MyDslGlobalScopeProvider;
	}
}


We start the editor for our little language and begin creating the model files. While doing so, we have the idea of not importing the resources of the different business areas individually, but instead creating one resource that contains all the imports and importing that one. So we create the following resources:

(Screenshots: the Agile, Xtext and Kollegen resources)
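
As an illustration, such resources might look like the following sketch, written in the DSL defined above; the file names and person names are made-up examples, not the contents of the actual screenshots:

// agile.mydsl - the names of the Agile team
def Alice
def Bob

// xtext.mydsl - the names of the Xtext team
def Karsten
def Sebastian

// kollegen.mydsl - a single resource bundling all the imports
import "agile.mydsl"
import "xtext.mydsl"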


When we create the resource with the greeting messages, we notice that the names cannot be resolved.

(Screenshot: the greetings resource with unresolved names)


Why is that? After all, we wrote all the imported resources into the index.

That is correct as far as it goes: all directly imported resources are written into the index. The imports within an imported resource, however, are ignored. The feature we want is called transitive imports: importing a resource implicitly imports all the resources it imports itself.

To enable transitive imports in our language, we have to adapt our MyDslGlobalScopeProvider. Instead of only storing the URI of an imported resource in the set, we additionally call the method collectImportUris recursively for each imported resource, so that its own imports are processed as well.

package org.xtext.example.mydsl.scoping

import com.google.common.base.Splitter
import com.google.inject.Inject
import com.google.inject.Provider
import java.util.LinkedHashSet
import org.eclipse.emf.common.util.URI
import org.eclipse.emf.ecore.resource.Resource
import org.eclipse.xtext.EcoreUtil2
import org.eclipse.xtext.resource.IResourceDescription
import org.eclipse.xtext.scoping.impl.ImportUriGlobalScopeProvider
import org.eclipse.xtext.util.IResourceScopeCache
import org.xtext.example.mydsl.MyDslResourceDescriptionStrategy
import org.xtext.example.mydsl.myDsl.MyDslPackage

class MyDslGlobalScopeProvider extends ImportUriGlobalScopeProvider {
	private static final Splitter SPLITTER = Splitter.on(',');

	@Inject
	IResourceDescription.Manager descriptionManager;

	@Inject
	IResourceScopeCache cache;

	override protected getImportedUris(Resource resource) {
		return cache.get(MyDslGlobalScopeProvider.getSimpleName(), resource, new Provider<LinkedHashSet<URI>>() {
			override get() {
				val uniqueImportURIs = collectImportUris(resource, new LinkedHashSet<URI>(5))

				val uriIter = uniqueImportURIs.iterator()
				while(uriIter.hasNext()) {
					if (!EcoreUtil2.isValidUri(resource, uriIter.next()))
						uriIter.remove()
				}
				return uniqueImportURIs
			}

			def LinkedHashSet<URI> collectImportUris(Resource resource, LinkedHashSet<URI> uniqueImportURIs) {
				val resourceDescription = descriptionManager.getResourceDescription(resource)
				val models = resourceDescription.getExportedObjectsByType(MyDslPackage.Literals.MODEL)
				
				models.forEach[
					val userData = getUserData(MyDslResourceDescriptionStrategy.INCLUDES)
					if(userData !== null) {
						SPLITTER.split(userData).forEach[uri |
							var includedUri = URI.createURI(uri)
							includedUri = includedUri.resolve(resource.URI)
							if(uniqueImportURIs.add(includedUri)) {
								collectImportUris(resource.getResourceSet().getResource(includedUri, true), uniqueImportURIs)
							}
						]
					}
				]
				
				return uniqueImportURIs
			}
		});
	}
}


When we reopen our resource with the greeting messages after this small change, we see that, thanks to the transitive imports, the names can now be resolved.

The example project can be downloaded here.



Use the eclipse-settings-maven-plugin to synchronize prefs files across projects

February 04, 2018 11:00 PM

The question « Should the meta files related to an IDE be committed in the git repository? » is a never-ending fight. According to Aurelien Pupier, the answer to this question is YES (Talk from 2015 - slides and video). I totally agree with him, because settings files like org.eclipse.core.resources.prefs, org.eclipse.jdt.core.prefs, org.eclipse.jdt.ui.prefs or org.eclipse.m2e.core.prefs can contain valuable configuration information that will be shared between all Eclipse IDE users working on the project: code formatting rules, save actions, automated code cleanup tasks, compiler settings…

Enable project specific settings

Even today, a lot of people still prefer not to have the IDE metadata files in their git repository. This means that every coworker needs to configure their IDE and, more importantly, everybody needs to keep the configuration in sync with the team over time.

In both cases (having the settings files in your repo or not), the eclipse-settings-maven-plugin can be interesting for you. The idea is to use Maven to replicate the same prefs files across multiple Maven modules. This way you can distribute the prefs files if they are missing from the git repository. Another use case is distribution across multiple teams (for example at the organization level).

The source for the settings files is a simple Maven artifact located in a Maven repository. With a single Maven command, you can synchronize the prefs files.

Using eclipse-settings-maven-plugin to copy prefs files

If you want to see what the setup looks like, you can refer to my sync-eclipse-settings-example page and the associated GitHub project. I have updated it to use the latest version, published last week by my former colleagues at BSI Business Systems Integration AG.

