On Complexity and Good Intentions

by Mike Milinkovich at March 23, 2018 02:18 PM

We are now about six months into the process of migrating Java EE to the Eclipse Foundation, and I think we’re all learning a lot as we go. I wanted to take a moment and take stock of the scale of this project, its complexity, and where we are.

Java EE is a (roughly) twenty-year-old technology and one of the world’s most successful software platforms. It powers the business-critical applications that run our modern world. Millions of developers work with Java EE technologies every day, and billions of users rely on these systems every day. Throughout its twenty-year history, Java EE has been developed and marketed in a pretty particular way.

At the core of Java EE’s success has been an approach that enabled a multi-vendor ecosystem where enterprises had a choice of compatible implementations from a number of companies.

  • Java EE specifications were developed at the Java Community Process (JCP), under a process in which all intellectual property flowed to the Spec Lead, which was usually Sun (later Oracle). All participants in the specification process are signatories of the Java Specification Participation Agreement, a fairly complex legal document.
  • Progress and innovation in Java EE was largely governed by and driven within the constraints of this specification process.
  • Java EE reference implementations were, for the most part, developed by Oracle as part of the Glassfish (and related) project and made available under the CDDL and GPLv2+Classpath Exception licenses. Most of the developers were from Oracle, and the architectural vision and project management roles were performed by them. Contributors to the projects signed the Oracle Contributor Agreement that gave Oracle joint ownership of all contributions.
  • TCKs were developed entirely by Oracle and were highly confidential and tightly controlled. You had to sign an NDA just to get a copy of the TCK agreement if you were interested in getting access to the TCKs. The agreements were pretty dense and complex legal documents.
  • It was called Java EE. It had a logo that looked like a coffee cup. These trademarks were owned by Oracle and tightly controlled.
  • Generally speaking the big enterprises that used the technology were not involved in its evolution. For the most part, the contributors to the specs and implementations were from the Java EE platform vendors.

Together we are changing every single one of those items above. All at once. While retaining the core value of enabling compatible independent implementations in a multi-vendor ecosystem.

This is big and it is complicated.

I honestly believe that no institution other than the Eclipse Foundation could handle this task. We have the people, the skills, the history, and the knowledge of how the Java ecosystem works. The staff at the Eclipse Foundation are highly skilled and community-minded professionals. Similarly, the team at Oracle, along with the folks from IBM, Payara, Red Hat, Tomitribe and the EE4J PMC are working hard to move this along. Collectively they are working their butts off to support this transition and to make Jakarta EE the platform and community of choice for the next twenty years.

Overall, I believe we’ve been pretty successful at managing the complexity and working hard to communicate our progress and plans. We haven’t always been perfect, a case in point being this past week, when we had a bit of a kerfuffle on our Jakarta community mailing list. Without going into the details, I would say that the root cause was poor communication on my part. I didn’t do a good enough job of communicating the plans and dates for selecting the new logo. My bad.

Chris Aniszczyk, a good friend and open source community colleague of mine, tweeted some months back that “Open source would be a lot more fun if everyone assumed good intentions.” With his wise words in mind, what I want to say is this: what we are collectively undertaking here is a massive and complex task. Mistakes and miscommunications are going to happen. But let’s all assume good intentions, and build a community based on trust, honesty, and respect.


WTP 3.9.3 Released!

March 22, 2018 11:27 PM

Web Tools Platform 3.9.3 has been released! Installation and update can be performed using the Oxygen Update Site or through the Eclipse Marketplace. Release 3.9.3 fixes issues that occur in prior releases or have been reported since 3.9's release. WTP 3.9.3 is featured in the Oxygen.3 Eclipse IDE for Java EE Developers, with selected portions also included in other packages. Adopters can download the R3.9.3 build itself directly. WTP 3.9.3a is planned for mid-April, as part of Oxygen.3a and its support for this week's GA release of Java 10.


Eclipse Newsletter | Code in Different Languages

March 22, 2018 09:13 AM

Read what's new in the Eclipse JDT Language Server and Eclipse PDT (PHP), then learn about Eclipse Xtext and Eclipse Mita (IoT).


LiClipse 4.5.2 released

by Fabio Zadrozny (noreply@blogger.com) at March 21, 2018 07:03 PM

LiClipse 4.5.2 is now out.

The major updates are related to the upgrade of dependencies (such as PyDev and EGit).

On the PyDev front, the major change is initial support for getting type information from .pyi files and a critical fix for the creation of the preferences page.

For EGit, https://wiki.eclipse.org/EGit/New_and_Noteworthy/4.11 has more details!



Eclipse Oxygen.3 IDE Improvements: Java, Gradle and PHP

by howlger at March 21, 2018 02:00 PM

Eclipse Oxygen.3 is the last quarterly update of Oxygen. Thanks to everyone who has contributed in any way! Even though the main focus is already on Photon, which will be released on June 27, it is worth updating your Eclipse IDE (unless you want to test a prerelease version of Photon instead).

As usual, I have made a short video that shows the IDE improvements that I find most noteworthy in action:


Gradle (see also Buildship 2.2):

PHP (see also PDT 5.3):

  • Eclipse Oxygen IDE Improvements: General, Java and Git
  • Eclipse Oxygen.1a IDE Improvements: Java 9, JUnit 5, General, Gradle and PHP
  • Eclipse Oxygen.2 IDE Improvements: Java IDE, Git, C/C++

Together with the previous Oxygen videos, you can view 72 improvements in action, in total about half an hour. Here are the direct chapter links (number of improvements in brackets):

Thank you for watching and happy coding!


EC by Example: Partitioning

by Donald Raab at March 21, 2018 05:53 AM

Learn how to partition a collection using Eclipse Collections.

What is partitioning?

Partitioning is a kind of filtering, except that all elements of a collection are retained. Instead of being included (like Select) or excluded (like Reject), the elements of the collection are split into two collections based on whether they return true or false when passed to a predicate.

A partition contains both selected and rejected elements
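If you only have the JDK at hand, the same two-way split can be sketched with Collectors.partitioningBy from java.util.stream, which returns a plain Map<Boolean, List<T>> instead of the richer Partition types used in this post (the class name below is just for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PartitionDemo {
    public static void main(String[] args) {
        // One pass over the data yields both the matching and the
        // non-matching elements, keyed by the predicate's result.
        Map<Boolean, List<Integer>> parts = Stream.of(1, 2, 3, 4, 5)
                .collect(Collectors.partitioningBy(i -> i % 2 == 0));

        System.out.println(parts.get(true));   // selected evens: [2, 4]
        System.out.println(parts.get(false));  // rejected odds: [1, 3, 5]
    }
}
```

Unlike the Eclipse Collections API below, the Map-based result is not covariant with the source collection type.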

Partitioning a List (Java 8)

public void partitioningLists()
{
    MutableList<Integer> mList = mList(1, 2, 3, 4, 5);
    ImmutableList<Integer> iList = iList(1, 2, 3, 4, 5);

    Predicate<Integer> evens = i -> i % 2 == 0;

    PartitionMutableList<Integer> mutable = mList.partition(evens);

    PartitionImmutableList<Integer> immutable = iList.partition(evens);

    PartitionIterable<Integer> lazy = mList.asLazy().partition(evens);

    ImmutableList<Integer> expectedEvens = iList(2, 4);
    Assert.assertEquals(expectedEvens, mutable.getSelected());
    Assert.assertEquals(expectedEvens, immutable.getSelected());
    Assert.assertEquals(expectedEvens, lazy.getSelected().toList());

    ImmutableList<Integer> expectedOdds = iList(1, 3, 5);
    Assert.assertEquals(expectedOdds, mutable.getRejected());
    Assert.assertEquals(expectedOdds, immutable.getRejected());
    Assert.assertEquals(expectedOdds, lazy.getRejected().toList());
}

Partitioning a List (Java 10)

Here I will take advantage of local variable type inference using the var keyword in Java 10. With a type like PartitionMutableList, using var can significantly reduce the amount of noise in the code.

public void partitioningListsJava10()
{
    var mutableList = mList(1, 2, 3, 4, 5);
    var immutableList = iList(1, 2, 3, 4, 5);

    Predicate<Integer> evens = i -> i % 2 == 0;

    var mutable = mutableList.partition(evens);

    var immutable = immutableList.partition(evens);

    var lazy = mutableList.asLazy().partition(evens);

    var expectedEvens = iList(2, 4);
    Assert.assertEquals(expectedEvens, mutable.getSelected());
    Assert.assertEquals(expectedEvens, immutable.getSelected());
    Assert.assertEquals(expectedEvens, lazy.getSelected().toList());

    var expectedOdds = iList(1, 3, 5);
    Assert.assertEquals(expectedOdds, mutable.getRejected());
    Assert.assertEquals(expectedOdds, immutable.getRejected());
    Assert.assertEquals(expectedOdds, lazy.getRejected().toList());
}

Covariance at play

The return type for partition is determined by the source type. In the case of a MutableList as seen above, the method partition will return a PartitionMutableList. The following is a partial hierarchy of types that exist for partitioning a List. The full hierarchy includes similar relationships for Bag, Set, SortedSet, SortedBag and Stack.

A partial partition hierarchy for Lists

Partitioning a Set (Java 8)

public void partitioningSets()
{
    MutableSet<Integer> mSet = mSet(1, 2, 3, 4, 5);
    ImmutableSet<Integer> iSet = iSet(1, 2, 3, 4, 5);

    Predicate<Integer> evens = i -> i % 2 == 0;

    PartitionMutableSet<Integer> mutable = mSet.partition(evens);

    PartitionImmutableSet<Integer> immutable = iSet.partition(evens);

    PartitionIterable<Integer> lazy = mSet.asLazy().partition(evens);

    ImmutableSet<Integer> expectedEvens = iSet(2, 4);
    Assert.assertEquals(expectedEvens, mutable.getSelected());
    Assert.assertEquals(expectedEvens, immutable.getSelected());
    Assert.assertEquals(expectedEvens, lazy.getSelected().toSet());

    ImmutableSet<Integer> expectedOdds = iSet(1, 3, 5);
    Assert.assertEquals(expectedOdds, mutable.getRejected());
    Assert.assertEquals(expectedOdds, immutable.getRejected());
    Assert.assertEquals(expectedOdds, lazy.getRejected().toSet());
}

Partitioning a Set (Java 10)

public void partitioningSetsJava10()
{
    var mutableSet = mSet(1, 2, 3, 4, 5);
    var immutableSet = iSet(1, 2, 3, 4, 5);

    Predicate<Integer> evens = i -> i % 2 == 0;

    var mutable = mutableSet.partition(evens);

    var immutable = immutableSet.partition(evens);

    var lazy = mutableSet.asLazy().partition(evens);

    var expectedEvens = iSet(2, 4);
    Assert.assertEquals(expectedEvens, mutable.getSelected());
    Assert.assertEquals(expectedEvens, immutable.getSelected());
    Assert.assertEquals(expectedEvens, lazy.getSelected().toSet());

    var expectedOdds = iSet(1, 3, 5);
    Assert.assertEquals(expectedOdds, mutable.getRejected());
    Assert.assertEquals(expectedOdds, immutable.getRejected());
    Assert.assertEquals(expectedOdds, lazy.getRejected().toSet());
}

APIs and features covered in the examples

  1. Partition (Eager and Lazy) — splits a collection into selected and rejected elements based on a given condition. Partition is a terminal operation on LazyIterables, which forces execution to happen.
  2. mList — Creates a MutableList
  3. iList — Creates an ImmutableList
  4. mSet — Creates a MutableSet
  5. iSet — Creates an ImmutableSet
  6. asLazy — Returns a LazyIterable or LazyIntIterable
  7. toList — Converts the target Iterable into a MutableList
  8. toSet — Converts the target Iterable into a MutableSet
  9. var — Local variable Type Inference included in Java 10 (JEP 286)

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


Complete the Jakarta EE Developer Survey

March 16, 2018 11:45 AM

Share your insights on Java EE and help shape the future of Jakarta EE.


Building Domain-specific Languages with Xtext and Xtend

by Hendrik Bünder (buender@itemis.de) at March 15, 2018 02:31 PM

Specifying the requirements of a software system and converting such a specification into executable source code is difficult and error-prone. Requirements specifications written in prose are often ambiguous and hard to understand for developers. Therefore, the process of turning these documents into software is slow and prone to error. Domain-specific languages (DSLs) address this problem by defining a semantically rich notation that describes domain concepts clearly and concisely. From the DSL models the boilerplate code can be deduced, thereby increasing the efficiency of the software development process as well as the overall quality. After giving an overview of the key concepts of a domain-specific language, the domain-specific language framework Eclipse Xtext will be introduced. In addition, it will be illustrated how DSL models can be processed efficiently using Eclipse Xtend.


Domain-specific Languages

Domain-specific languages are an integral part of our daily business. For example, when developers and business experts talk about requirements they will use terms like Client, Contract, or Payment to describe the expected behavior. However, as soon as executable source code is required developers start translating these concepts into classes, data structures, and algorithms. At this point, some of the domain-specific information might get lost in translation. Thereby, business analysts have a hard time ensuring that all their business rules have been translated correctly. In addition, developers struggle with implementing changes, because they first have to understand the changes within the domain model before they can estimate and implement the subsequent changes to the source code.

A domain-specific language is built to describe the concepts of a certain domain concisely with a semantically rich notation. Thereby, terms like Client or Contract are used in a language that is the foundation for domain model enhancements as well as for automated translation into executable software. In contrast to general purpose programming languages, concepts of a certain domain are described on a higher level of abstraction, so that they are understandable for business experts and developers.

Domain-specific languages are likely to be used in the context of model-driven development, however, there are many more usage scenarios. For example, they might be part of a larger software system used to express calculations or configurations. Further, they might be used as a thin layer on top of an existing language to provide feature-rich editor support.

Domain-driven design is an approach to describing the domain concepts in a language shared by business experts and developers. One central concept of this ubiquitous language is 'Entities'. The following example shows a simplistic textual DSL created with Xtext. The Entity DSL allows the specification of real-world entities with their relevant properties in a concise and clear notation.

entity Client {
    contracts : Contract*
}
entity Contract {}

The simple example above shows two entities from the insurance domain that have a relation between each other. By describing the domain concepts on a rather high level of abstraction (notice that there are no programming-language-specific data types or notations), DSLs can be used by non-programmers. The technology-independent language can close the gap between business experts and developers by becoming the common ground for discussing domain concepts, benefiting both sides. On the one hand, the use of a formal language enables business analysts to specify domain concepts in a precise and unambiguous way, a task that is particularly hard using tools such as Word or Excel. On the other hand, the boilerplate code can be deduced from the DSL, accelerating the development process. Moreover, the general code quality increases, because the boilerplate code that is often the main spot for copy-and-paste errors is created automatically. Further, the source code structure stays consistent, benefiting maintenance and future development.

In addition to being the center of the development process, domain-specific languages might be integrated into larger software systems. There are, for example, statechart tools embedding a domain-specific language to describe the input types and the internal variables of a state. By using a formal language with a given set of keywords and language constructs, mature editor support can be provided. Further, the expressions can be interpreted automatically to simulate the model behavior. Thereby, business experts can get immediate feedback without the necessity of a running application. In addition, it is also possible to generate source code, e.g. in Java or C++, from the statechart model.

By providing a concise and semantically rich notation of the domain, DSLs increase efficiency and the overall quality of the product or process. Yet, to be introduced successfully, a DSL requires a mature editor that integrates well with existing processes.

Introducing Xtext

Eclipse Xtext was built to quickly create domain-specific languages including an integrated, feature-rich editor. To be more precise: Xtext is a framework for building language workbenches for textual domain-specific languages.

Let's first have a look at the small but important word "textual". When talking about modeling most of us instinctively remember creating large graphical class diagrams. Instead of modeling lines and boxes on a canvas, textual modeling changes the user interface to a simple, yet feature-rich text editor. Not only creating and maintaining but also sharing - or should I say merging - text files is easier and often well supported by the IDE.

The text files created using the Xtext editor are analyzed by a parser, that instantiates an Ecore model representing the abstract syntax tree (AST). The AST is not only the basis for the Eclipse integration but also allows frameworks such as GEF to automatically create a graphical representation. Although it is easier to create and maintain models via text files, it is often beneficial to have a graphical representation to discuss the broader domain concepts and their relations.

The next thing mentioned by the definition above is the "language workbench". The term aggregates some of the concepts already mentioned. First, a feature-rich editor that offers code-completion, syntax-highlighting, formatting, error detection and so on. Second, a sophisticated language workbench offers different views on the same model as well as navigation and refactoring support. Finally, a language workbench should integrate with existing tools and frameworks to embed the DSL in existing processes. Xtext languages can be integrated into different IDEs such as the Eclipse IDE, IntelliJ IDEA, VSCode, and all editors that support the Language Server Protocol. A feature-rich, well-integrated workbench is a key factor to success for a domain-specific language.

After having spent some time on the benefits of a textual domain-specific language and the corresponding workbench, we will examine how Xtext and Xtend enable you to reach these goals.

Getting Started with Xtext

Xtext is a mature framework built to quickly create domain-specific languages with a sophisticated workbench. Boiled down to the very minimum, an Xtext DSL only requires a grammar file. The powerful grammar defines the language and is the input for a generation process that creates the full infrastructure, including the parser, linker, and type checker, as well as editor support for the Eclipse IDE, any editor that supports the Language Server Protocol, and your favorite web browser.
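The grammar for the little Entity DSL shown earlier might look roughly like this (package name, EPackage URI, and rule details are illustrative assumptions in the style of the 15-minute tutorial, not copied from it):

```xtext
grammar org.example.entities.Entities with org.eclipse.xtext.common.Terminals

generate entities "http://www.example.org/entities"

// A model is just a sequence of entities.
Model:
    entities+=Entity*;

// 'entity' <name> '{' ... '}' with zero or more properties.
Entity:
    'entity' name=ID '{'
        properties+=Property*
    '}';

// A property references another Entity by name; '*' marks a many-valued relation.
Property:
    name=ID ':' type=[Entity] (many?='*')?;
```

From this single file, Xtext generates the parser, the Ecore model, and the editor infrastructure described above.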

Yet, the generated default often has to be customized in order to achieve company- or project-specific behavior. Therefore, the generated parts of the workbench can be customized by providing domain-specific implementations. Typical customizations include custom validations, narrowed proposals during code completion, or code formatting. A good default that is highly customizable enables a fast proof of concept that can evolve over time into a highly specific DSL.

Having talked a lot about Xtext, let's get involved and create our first DSL. First of all, as you may have guessed already, you need an Eclipse workspace with the Xtext framework included. You can find a pre-bundled Eclipse version here, or you can download the required plugins into your existing Eclipse IDE right here. After your Eclipse workspace is all set up, you can start creating your first DSL. Since I don't want to go too much into the details, I recommend trying the Xtext 15-minute tutorial. The tutorial shows how to create the Entity DSL we used in the example above. Further, there is a Domain-Model example that comes with the Xtext plugins, including more than 800 JUnit test cases. The example project provides a good overview of the potential of test-driven development when creating a DSL. The test cases not only cover parsing and validating the text files, but also demonstrate how user interface functionality such as code completion or the outline view can be tested automatically.

Having finished the tutorial, the documentation offers a great overview of the different concepts embodied in Xtext. Further, you should keep an eye on the Eclipse TMF forum where you find answers to many questions. Finally, if you want to contribute to Xtext itself you are kindly invited to provide pull requests to the Xtext GitHub repositories.

Leverage the domain model with Xtend

As shown above, Xtext enables you to create and evolve DSLs quickly. However, at the end of the day a domain-specific model, regardless of whether it is a domain model, an expression, or a configuration, is created to be processed further. At this point, Xtend comes into play. Xtend is a statically typed programming language built with Xtext and compiled to Java. Since it compiles to Java, it integrates seamlessly with existing Java programs and vice versa. Xtend offers powerful features such as template strings, extension methods, and built-in functions such as filter, map, and reduce. Since Xtend is itself a domain-specific language, it enables developers to express concepts available in Java in a concise and semantically rich notation.

Xtend includes many language concepts that are especially beneficial when processing models. First, it offers template strings, which are ideal for generating executable code from a given model.

def generateEntity(Entity entity) '''
    public class «entity.name» {
        «FOR property : entity.properties»«property.generate»«ENDFOR»
    }
'''

def generate(Property property) '''
    private «property.type» «property.name»;
'''

Xtend enables the specification of multi-line strings that contain fixed text parts as well as dynamic parts computed from the given model. The example above shows a very basic multi-line string. Starting with triple quotes, the string contains the static part public class followed by a dynamic part in guillemets, aka « and ». When the string is evaluated at runtime, the dynamic part is replaced by the name of the entity currently in focus. In the class body, the template string contains another guillemet expression that iterates over the properties of the current entity, calling the generate method, which returns a string representing the property type and name. In addition, the Xtend editor also highlights the whitespace as it will appear in the generated file. In contrast to other templating engines, functions to evaluate dynamic values can be included directly in the templates.

Second, another important ingredient of Xtend is its support for lambda expressions. Besides lambda expressions, higher-order functions such as filter, map, and reduce are already shipped with the Xtend language library. The example above shows how the generate method is applied to every property modeled in the current entity. The combination of built-in and custom lambda functions enables concise statements, e.g. for dealing with model-to-model transformations or model simulation.

Finally, there are many more features included in Xtend, such as extension methods, operator overloading, powerful switch expression, polymorphic method invocation, and so on, that make Xtend a conclusive add-on to the Java language.

Besides being a powerful programming language, Xtend provides a compact and semantically rich language for processing domain-specific models. To get a better feeling for the language and its features have a look at this tutorial.


Domain-specific languages are used to express concepts of a certain domain in a concise and semantically rich notation. Employing DSLs enables model simulation, source code generation, and increases the overall quality. As shown above, Xtext is a framework built to quickly create domain-specific languages including a sophisticated and well-integrated editor. Since Xtext is highly customizable it supports the evolution of a DSL from an early prototype version to an individualized mature solution. Finally, the statically typed programming language Xtend provides mature features for model-to-model or model-to-text transformations. All in all, the combination of Xtext and Xtend will enable you to rapidly create your first domain-specific language workbench perfectly tailored for your domain.


Some love for Toolsmiths

by tevirselrahc at March 15, 2018 01:30 PM

Today, my minions added a new page for the unsung heroes of me: the Toolsmiths!

They are those who are brave enough to add capabilities to Papyrus and even to build new modeling tools on top of the Papyrus platform!

They are, of course, all the main developers of the Papyrus modeling platform and the various products in the Papyrus product line, but also those who provide fixes through bugzilla, those who build add-ons to Papyrus, and those who use Papyrus as the base for their own domain/company-specific modeling tools.

Interested in joining this fearless bunch? The Toolsmith page is for you!


Are you interested in writing for this blog? Please let us know!


Eclipse IoT Day Santa Clara | Speakers Announced

March 15, 2018 12:00 PM

We're pleased to announce the speakers for the Eclipse IoT Day Santa Clara, co-located with IoT World 2018 on May 14.


EC by Example: Filtering

by Donald Raab at March 15, 2018 12:26 AM

Learn how to filter a collection using Eclipse Collections.

Filtering: Include or Exclude?

If you have a single method named filter, how do you know whether it is supposed to be an inclusive or an exclusive filter? In Eclipse Collections, there are two filtering methods, named select and reject.
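For comparison, the JDK's single Stream.filter method is always inclusive, so an exclusive filter has to be written with Predicate.negate(). A minimal JDK-only sketch (the class name is illustrative):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class FilterDemo {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);
        Predicate<Integer> evens = i -> i % 2 == 0;

        // The JDK's filter is inclusive, like select in Eclipse Collections...
        List<Integer> selected = numbers.stream()
                .filter(evens)
                .collect(Collectors.toList());

        // ...so the exclusive variant, like reject, needs an explicit negation.
        List<Integer> rejected = numbers.stream()
                .filter(evens.negate())
                .collect(Collectors.toList());

        System.out.println(selected);  // [2, 4]
        System.out.println(rejected);  // [1, 3, 5]
    }
}
```

With two distinctly named methods, the direction of the filter is explicit at the call site.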

Filtering an Object List

public void filteringUsingSelectAndReject()
{
    ExecutorService executor = Executors.newWorkStealingPool();

    MutableList<Integer> mList = mList(1, 2, 3, 4, 5);
    ImmutableList<Integer> iList = iList(1, 2, 3, 4, 5);

    Predicate<Integer> evens = i -> i % 2 == 0;

    MutableList<Integer> evensMutable = mList.select(evens);
    ImmutableList<Integer> evensImmutable = iList.select(evens);
    LazyIterable<Integer> evensLazy = mList.asLazy().select(evens);
    ParallelListIterable<Integer> evensParallel =
        mList.asParallel(executor, 2).select(evens);

    ImmutableList<Integer> expectedEvens = iList(2, 4);
    Assert.assertEquals(expectedEvens, evensMutable);
    Assert.assertEquals(expectedEvens, evensImmutable);
    Assert.assertEquals(expectedEvens, evensLazy.toList());
    Assert.assertEquals(expectedEvens, evensParallel.toList());

    MutableList<Integer> oddsMutable = mList.reject(evens);
    ImmutableList<Integer> oddsImmutable = iList.reject(evens);
    LazyIterable<Integer> oddsLazy = mList.asLazy().reject(evens);
    ParallelListIterable<Integer> oddsParallel =
        mList.asParallel(executor, 2).reject(evens);

    ImmutableList<Integer> expectedOdds = iList(1, 3, 5);
    Assert.assertEquals(expectedOdds, oddsMutable);
    Assert.assertEquals(expectedOdds, oddsImmutable);
    Assert.assertEquals(expectedOdds, oddsLazy.toList());
    Assert.assertEquals(expectedOdds, oddsParallel.toList());
}

Filtering a primitive List

public void filteringPrimitivesUsingSelectAndReject()
{
    MutableIntList mList = IntLists.mutable.with(1, 2, 3, 4, 5);
    ImmutableIntList iList = IntLists.immutable.with(1, 2, 3, 4, 5);

    IntPredicate evens = i -> i % 2 == 0;

    MutableIntList evensMutable = mList.select(evens);
    ImmutableIntList evensImmutable = iList.select(evens);
    LazyIntIterable evensLazy = mList.asLazy().select(evens);

    MutableIntList expectedEvens = IntLists.mutable.with(2, 4);
    Assert.assertEquals(expectedEvens, evensMutable);
    Assert.assertEquals(expectedEvens, evensImmutable);
    Assert.assertEquals(expectedEvens, evensLazy.toList());

    MutableIntList oddsMutable = mList.reject(evens);
    ImmutableIntList oddsImmutable = iList.reject(evens);
    LazyIntIterable oddsLazy = mList.asLazy().reject(evens);

    MutableIntList expectedOdds = IntLists.mutable.with(1, 3, 5);
    Assert.assertEquals(expectedOdds, oddsMutable);
    Assert.assertEquals(expectedOdds, oddsImmutable);
    Assert.assertEquals(expectedOdds, oddsLazy.toList());
}

What other types support Select and Reject?

The Symmetric Sympathy is strong with select and reject.

Select and Reject are available across many types and concerns

Is it possible to filter both inclusively and exclusively in one iteration?

Yes. There is a method called partition. I will show partition in the next blog in this series.

APIs covered in the examples

  1. Select (Eager, Lazy and Parallel) — filters including elements that match a condition
  2. Reject (Eager, Lazy and Parallel) — filters excluding elements that match a condition
  3. mList — Creates a MutableList
  4. iList — Creates an ImmutableList
  5. IntLists.mutable.with — Creates a MutableIntList
  6. IntLists.immutable.with — Creates an ImmutableIntList
  7. asLazy — Returns a LazyIterable or LazyIntIterable
  8. asParallel — Returns a ParallelIterable
  9. toList — Converts the target Iterable into a MutableList

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


Taking a (Tu)Leap!

by tevirselrahc at March 14, 2018 01:09 PM

Thanks to Eclipse, my industry consortium is taking a leap into Enalean’s Tuleap!

Here are the three projects (communities) that have been created along with their descriptions, taken from each project:

  • Papyrus-IC: This project is the Papyrus IC’s way of openly managing the Papyrus projects and products. This is our way of informing the Papyrus community about what we are doing and, in return, of getting feedback from the Papyrus community. This is a source of information for the community and by the community — this is your community!
  • Papyrus-IC-Product: This project is where the Papyrus product line management lives. This is where we do the nitty-gritty work so that you can enjoy our fabulous Papyrus-based products. Unfortunately, most of what we do here can be boring, day-to-day stuff, and we want our developers, designers, and managers to keep their focus, so we keep them in a quiet, private area. But don’t worry, any significant news, decision, development, etc. will be made available in the Papyrus IC project!
  • Papyrus-IC Steering: This project is to help the Papyrus Industry Consortium’s Steering committee manage the consortium projects, products, and assets. This project is private to protect confidential information (e.g., user and supplier confidential information, financial information). But don’t worry, Steering committee information for public consumption will be provided in the Papyrus-IC project!

My minions are still working on the governance of the projects and on transferring information from the Papyrus IC Steering project into the other projects (especially Papyrus-IC), so please be patient with them!


Testing Eclipse’s User Workflows: from OOMPH to Subversive, m2e and WTP

by vzurczak at March 13, 2018 06:23 PM


A few months ago, I worked on automating tests of user workflows that involve Eclipse tooling. The client organization has more than a hundred developers, and they all use common frameworks based on JEE. They all use the same tools, from source version control to m2e and WTP. Eclipse has been their IDE for quite a long time, so some years ago they decided to automate the installation of Eclipse with preconfigured tools and predefined preferences. They initially created their own solution. When OOMPH was released and became Eclipse’s official installer, they quickly dropped their project and adopted OOMPH.

From an OOMPH point of view, this organization has its own catalog and custom setup tasks. Unlike what the installer usually shows, there is only one distribution. Everything works behind a proxy. Non-composite p2 repositories are proxied by Nexus. All the composite p2 repositories (such as the official Eclipse ones) are mirrored using Eclipse from the command line. The installer shows a single product, but in different versions (e.g. Neon, Oxygen…). It also provides several projects: several JDKs, several versions of Tomcat, several versions of Maven, several Eclipse tools, etc. We can really say this organization uses all the features OOMPH provides.

Here is a global overview of what is shown to users.

First screen in Eclipse's installer
Second screen in Eclipse's installer

So, this organization is mostly a group of Eclipse users. Their own developments are quite limited. Their focus is on delivering valid Eclipse distributions to their members and verifying that everything works correctly in their environment. Given this context, my job was to automate things: update site creation (easy with Tycho), preparing the installer for the internal environment, and automating tests that launch the installer, perform a real installation, start the newly installed Eclipse, make it execute several actions a real developer would do, and verify everything works correctly inside this (restrained / controlled) environment.

Let’s take a look at the various parts.

Automating the creation of Custom Installers

This part is not very complicated.
I created a project on GitHub that shows how it works. Basically, we have a Maven module that invokes ANT. The ANT script downloads the official installer binaries from Eclipse.org. It verifies the checksum, unwraps the content, updates the eclipse-inst.ini file, adds predefined preferences (related to the proxy) and rebuilds a package for users. To avoid downloading the binaries every time, we use a local cache (a directory). If a binary already exists, we verify its checksum against the value provided by Eclipse.org. If it matches, our cache is still valid against the Eclipse repositories. Otherwise, the cache may be invalid because a newer version was released. In such a situation, we tell the user to delete the cache before giving it another try.
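The cache check can be sketched like this in shell (the real implementation is an ANT script; the function and file names here are illustrative):

```shell
# Minimal sketch of the installer cache-validation logic described above.
# The function name and file layout are illustrative, not the actual ANT script.
check_cached_installer() {
    archive="$1"          # cached installer binary
    expected_sha512="$2"  # checksum value published by Eclipse.org

    if [ ! -f "$archive" ]; then
        echo "cache miss: download required"
        return 2
    fi
    actual_sha512=$(sha512sum "$archive" | cut -d ' ' -f 1)
    if [ "$actual_sha512" = "$expected_sha512" ]; then
        # Cache is still valid against the Eclipse repositories.
        echo "cache valid"
    else
        # Likely a newer installer was released upstream.
        echo "cache stale: delete $archive and retry" >&2
        return 1
    fi
}
```

A mismatch is treated as "probably a new release" rather than corruption, which is why the advice is to delete the cache and retry.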

Since all of this is a Maven project, it is possible to deploy these installers to a Maven repository.

Automating OOMPH tests with SWT Bot

OOMPH is an SWT application.
So, testing it automatically with SWT Bot immediately made sense. Testing with SWT Bot implies deploying it in the tested application. Fortunately, OOMPH is also an RCP application, which means we can install things with p2. That was the first thing to do. And since I enjoy the Maven + ANT combo, I wrote an ANT script for this (inspired by the one available on Eclipse’s wiki – but much simpler). I also made the tasks reusable so that they can also deploy the bundle with the tests to run.
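What the install script does can be sketched as a direct p2 director invocation (the repository URL, IU name and install location are assumptions, and in the real setup this is wrapped in an ANT exec task rather than run by hand):

```shell
# Sketch only: install SWT Bot into an existing OOMPH installation
# with the p2 director. Paths and the IU name are illustrative.
OOMPH_HOME="/opt/eclipse-installer"
SWTBOT_REPO="http://download.eclipse.org/technology/swtbot/releases/latest/"

DIRECTOR_ARGS="-application org.eclipse.equinox.p2.director \
 -repository $SWTBOT_REPO \
 -installIU org.eclipse.swtbot.eclipse.feature.group \
 -destination $OOMPH_HOME -nosplash"

# The same pattern later installs the bundle that contains the tests.
echo "$OOMPH_HOME/eclipse-inst $DIRECTOR_ARGS"
```

The director can also uninstall units, which is what makes the "recompile and redeploy the test plug-in" profile described later possible.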

The next step was writing a SWT Bot test and running it against the installer.
The first test was very basic. The real job was launching it. Running SWT Bot tests launches a custom application that itself launches Eclipse. Unfortunately, the usual org.eclipse.swtbot.eclipse.junit.headless.swtbottestapplication application did not work: it performs checks on the workbench, and even though OOMPH is an RCP application with SWT widgets, it does not have any workbench. This is why I created a custom application that I embedded with my SWT Bot test. Once there, everything was ready.

1 – I have a bundle with SWT Bot tests. With a feature. With an update site (that can remain local, no need to deploy it anywhere).
2 – I have an ANT script that can install SWT Bot and my test bundle in OOMPH.
3 – I have an ANT script that can launch my custom SWT Bot application and execute my tests in OOMPH.

It works. The skeleton for the project is available on GitHub.
Apart from that, the shape and the Maven and ANT settings are the same as in the real project. I only simplified the tests executed for OOMPH (they would not be meaningful for this article). The main test we wrote deploys Eclipse, but also downloads and unzips specific versions of Maven and Tomcat. Obviously, the catalog is made in such a way that installing these components also updates the preferences so that m2e and WTP can use them.

Notice there are settings in the ANT script that delete user directories (OOMPH puts some resources and information in a cache). To make the tests reliable, it is better to delete them. This can be annoying if you have other Eclipse installations on your machine. In the end, such tests are meant to be executed on a separate infrastructure, e.g. in continuous integration.

Configuring Eclipse for SWT Bot

Once the tests for the installer have run, we have a new Eclipse installation.
And we have other tests to run in it. Just as we did for OOMPH, we have to install SWT Bot in it. The p2 director will help us once again.

Notice that we keep this step separate from the execution of the tests themselves.
Testing OOMPH is quite easy. But the tests written for the IDE are much more complicated, and we need to be able to re-run them. So, the configuration of the new Eclipse installation is kept apart from the test execution.

Writing and Running Tests for Eclipse

In the same manner as for OOMPH, we have a custom plug-in that contains our tests for Eclipse. There is also a feature and the (same) local update site. This plug-in is deployed along with SWT Bot. Launching the tests is almost the same as for OOMPH, except that there is a workbench here, so we can rely on the usual SWT Bot application for Eclipse.

What is more unusual is the kind of test we run here.
I will give you an example. We have a test that…

1. … waits for OOMPH to initialize the workspace (based on the projects selected during the setup – this step is common to all our tests).
2. … opens the SVN perspective
3. … declares a new repository
4. … checks out the last revision
5. … lets m2eclipse import the projects (it is a multi-module project and m2e uses a custom settings.xml)
6. … applies a Maven profile on it
7. … waits for m2eclipse to download (many) artifacts from the organization’s Nexus
8. … waits for the compilation to complete
9. … verifies there is no error on the project
10. … deploys it on the Tomcat server that was installed by OOMPH (through WTP – Run as > Run on Server )
11. … waits for it to be deployed
12. … connects to the new web application (using a POST request)
13. … verifies the content of the page is valid.

This test takes about 5 minutes to run. It involves Eclipse tools, pre-packaged ones too, but also environment services (Nexus, an SVN server, etc.). Unlike what SWT Bot tests usually do, we make integration tests against an environment that is hardly reproducible. It is not just more complex; it must also account for situations like timeouts or slowness. And as usual, there may be glitches in the user interface. As an example, project resources that are managed by SVN have revision numbers and commit author names as a suffix, so you cannot search resources by full label (hence the TestUtils.findPartialLabel methods). Another example: when one expands nodes in the SVN hierarchy, it may take some time for the child resources to be retrieved. And so on.

But the most complicated part was developing these tests.

Iterative Development of these Tests

Usually, SWT Bot tests are developed and tested from the developer’s workspace: right-click on the test class, Run as > SWT Bot test. It opens a new workbench and the test runs. That was not possible here. The Eclipse in which the tests must run is supposed to have been configured by OOMPH. You cannot compile the Maven project if you do not have the right settings.xml. You cannot deploy on Tomcat if it has not been declared in the server preferences. And you cannot set these preferences in the test itself, because verifying that OOMPH set them correctly is part of its job! Again, this is not unit testing but integration testing. You cannot break the chain.

This is why each test is defined in its own Maven profile.
To run scenario 1, we execute…

mvn clean verify -P scenario1

We also added a profile that recompiles the SWT Bot tests and upgrades the plug-in in the Eclipse installation (the p2 director can install and uninstall units at once). Therefore, if I modify a test, I can recompile, redeploy and run it by typing…

mvn clean verify -P recompile-ide-tests -P scenario1

This is far from perfect, but it made development much less painful than going through the full chain on every change.
I wish I could have duplicated preferences from the current workspace when running tests from Eclipse (even if other problems would clearly have arisen). We had 4 important scenarios, and each one is managed separately, in the code and in the Maven configuration.
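The profile-per-scenario layout can be sketched like this in the pom (the profile ids and the maven-antrun wiring are assumptions about the shape, not the project’s actual configuration):

```xml
<!-- Illustrative sketch: each scenario profile binds the SWT Bot launch
     (here via maven-antrun-plugin calling the ANT launch script) to the
     verify phase, so "mvn clean verify -P scenario1" runs one scenario. -->
<profiles>
  <profile>
    <id>scenario1</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <executions>
            <execution>
              <phase>verify</phase>
              <goals><goal>run</goal></goals>
              <configuration>
                <target>
                  <ant antfile="run-tests.xml" target="scenario1"/>
                </target>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
  <!-- scenario2, …, recompile-ide-tests follow the same pattern -->
</profiles>
```

Keeping one profile per scenario is also what makes a build matrix straightforward: each scenario becomes one independent Maven invocation.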


Let’s start with some personal feedback.
I must confess this project was challenging, despite solid experience with Maven, Tycho and SWT Bot. The OOMPH part was not that hard (I only had to dig into SWT Bot and the platform’s code). Testing the IDE itself, with all the involved components and the environment, was more complicated.

Now, the real question is: was it worth the effort?
The answer is globally yes. The good part is that these tests can be run in a continuous integration workflow. That was the idea at the beginning. Even if it is not done (yet), that could (should) be the next step. I have spent quite some time making these tests robust. I must have run them about a thousand times, if not more. And still, one can sometimes fail due to an environment glitch. This is also why we adopted the profile-per-scenario approach: to ease the construction of a build matrix and to be able to validate scenarios separately and/or in parallel. It is also obvious that these tests run faster than manual ones. An experienced tester spends about two hours verifying these scenarios manually; a novice will spend a day. Running the automated tests takes at most 30 minutes, provided you can read the docs and execute 5 successive Maven commands. And these tests can be replicated over several user environments. So, this is globally positive.

Now, there are a few drawbacks. We have not moved to continuous integration yet. For the moment, releases will keep being managed on-demand / on-schedule (so a few times a year). In addition, everything was done for Linux systems. There would be minor adaptations to test the process on Windows (mainly, launching a different installer). We also found minor differences between Eclipse versions. SWT Bot relies heavily on labels, and some labels and buttons changed, for example, between Neon and Oxygen. So, our tests do not work on every Eclipse version. The problem would remain if we tested by hand. Finally, and unlike what it seems when you read them, the written tests remain complex to maintain. So, short- and mid-term benefits might be counterbalanced by a new degree of complexity (easy to use, not so easy to upgrade). Manual tests take time but remain understandable and manageable by many people. Writing or updating SWT Bot tests requires people to be well-trained and patient (did I mention I ran the IDE tests at least a thousand times?). Besides, having automated tests does not remove the need to track tests in TestLink, so manual tests remain documented and maintained. In fact, not all the tests have been automated, only the main and most painful ones.

Anyway, as usual, progress is made up of several steps. This work was one of them. I hope those facing the same issues will find help in this article and in the associated code samples.

by vzurczak at March 13, 2018 06:23 PM

Google Summer of Code 2018

by tsegismont at March 13, 2018 12:00 AM

It’s that time of year again! The Google Summer of Code 2018 submission period has just started!

Submit through the Eclipse organization

This year, the Eclipse Vert.x project participates through the Eclipse organization. Make sure to review our GSoC 2018 ideas and to submit before March 27!

Assessment application

As we did before, we ask candidates to implement a simple Vert.x application. This helps us make sure candidates have a basic understanding of asynchronous programming and the Vert.x toolkit. But submit your proposal even if you are not done with the assessment application! Google will not extend the submission period, but we can continue reviewing assessments while evaluating the submitted proposals.


If you have questions, feel free to ask possible mentors via email or on our community channels.

All the details for this year (and ideas from past years) can be found on the Vert.x GSoC page.

Looking forward to your proposals!

by tsegismont at March 13, 2018 12:00 AM

Last call for EclipseCon France submissions

March 12, 2018 09:00 PM

Deadline to propose a talk is Monday, March 19. Get your talk in now for your chance to be part of a great program!

March 12, 2018 09:00 PM

Eclipse Foundation supports EU funded Brain-IoT Project

March 12, 2018 02:00 PM

Eclipse Foundation Europe Selected to Provide Open Source Community Building Expertise for EU funded IoT Research Project

March 12, 2018 02:00 PM

Hello OpenJ9 on Windows, I didn’t expect you so soon!

by howlger at March 09, 2018 02:30 PM

Faster startup time, lower memory footprint and higher application throughput just by replacing the Java virtual machine (VM)? That sounds too good to be true. So far there has been no real alternative to Oracle’s Java HotSpot VM on Windows. With Eclipse OpenJ9, which emerged from open-sourcing IBM’s J9 VM, there is now an alternative that promises exactly this.

At the end of January the first OpenJDK 9 with Eclipse OpenJ9 nightly builds for Windows were published, but they were not very stable at that time. This week I tested the nightly builds again to run the Eclipse IDE and I was pleasantly surprised: OpenJ9 ran without crashing. Here are my results: the start time of the Eclipse Oxygen.2 Java IDE improves with OpenJ9 from 20 to 17 seconds, with some tuning (see below) even to 12 seconds compared to the Java 9 JDK with Oracle’s HotSpot VM on my more than six-year-old laptop. Also the Windows Task Manager shows less memory used by the Eclipse IDE and tasks like compiling a large project are a bit faster with OpenJ9.

To start the Eclipse IDE with OpenJ9, in eclipse.ini add the following two lines above -vmargs:


Embedding the JDK into an Eclipse installation directory as jre subdirectory does not yet work, but as long as you do not start the Eclipse IDE from the command line from another directory you can use -vm with jre\bin\javaw.exe. To further improve the startup time, add the following two lines below -vmargs:
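As an illustrative sketch of both edits, the relevant part of eclipse.ini could look like this (other existing lines omitted; the -vm value follows the jre\bin\javaw.exe workaround above, and -Xshareclasses / -Xquickstart are OpenJ9 options commonly used to cut startup time, shown here as examples rather than the setup’s exact flags):

```ini
-vm
jre\bin\javaw.exe
-vmargs
-Xshareclasses
-Xquickstart
```

The -vm pair must stay above -vmargs, and the OpenJ9 options below it, as described above.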


The cloning of a GitHub repository fails due to missing certificate authority (CA) certificates. You can fix this OpenJDK 9 issue by replacing the lib\security directory (which contains the cacerts file) with the same directory of an OpenJDK 10 early access build.
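That replacement can be sketched as follows, written as a POSIX-shell sketch for brevity (on Windows the same copy can be done in Explorer or PowerShell); the function name and directory layout are illustrative:

```shell
# Illustrative workaround for the missing CA certificates: swap the
# OpenJDK 9 lib/security directory (which holds cacerts) for the one
# from an OpenJDK 10 early access build, keeping a backup.
replace_security_dir() {
    jdk9="$1"   # OpenJDK 9 + OpenJ9 installation
    jdk10="$2"  # OpenJDK 10 early access build

    mv "$jdk9/lib/security" "$jdk9/lib/security.bak"
    cp -r "$jdk10/lib/security" "$jdk9/lib/security"
}
```

The backup makes it easy to revert once the OpenJDK 9 builds ship with proper CA certificates.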

In the Eclipse IDE that is running on OpenJDK the standard basic text font defaults (for reasons I don’t know) to Courier New 10 instead of Consolas 10. You can change this in Window > Preferences: General > Appearance > Colors and Fonts by selecting Basic > Text Font and pressing Edit… (if you like, you can also use Source Code Pro 9 like the Clean Sheet theme does).

I have not noticed any further differences so far between the Eclipse IDE running on Oracle’s JDK 9 and the Eclipse IDE running on the current OpenJDK 9 with OpenJ9 nightly build. Debugging and hot code replace work as expected.

Many thanks to the OpenJ9 team! I look forward to the final release. It’s great to have two good open source Java virtual machines for Windows. Who knows: had there been only one of the two, perhaps neither would be open source today.

PS: If the Eclipse IDE still starts too slowly for you, have a look at a developer build of the upcoming Eclipse Photon IDE. 😉

by howlger at March 09, 2018 02:30 PM

Eclipse Foundation Announces 2018 Board Member Election Results

March 07, 2018 11:00 PM

Today we are pleased to announce the results of the Eclipse Foundation Sustaining Member and Committer Member elections.

March 07, 2018 11:00 PM

CNCF Annual Report for 2017 and Kubernetes Graduation

by Chris Aniszczyk at March 07, 2018 08:03 PM

We recently published the first annual report for the Cloud Native Computing Foundation (CNCF) which encompassed our community’s work in 2017:

The CNCF is technically a little over two years old, and it was about time we started publishing annual reports on our progress. This is a well-trodden path taken by other open source foundations such as the Eclipse Foundation and Mozilla, so we thank them for the inspiration to be more transparent.

Another thing that we launched this week was the Cloud Native Landscape (interactive edition) and more importantly, the Cloud Native Trailmap which guides you through the journey of becoming cloud native by adopting different projects in the foundation.

Finally, it was fantastic for Kubernetes to be the first project to graduate from the CNCF. What exactly does this mean? It is very akin to graduation in other open source foundations such as the ASF. Graduation is really about confidence in CNCF development processes: a stamp from the CNCF Technical Oversight Committee (TOC) that this is a sustainable, production-ready and mature open source project you can bet your business on. As projects mature in the CNCF, following solid open source governance processes and becoming widely adopted, expect to see more projects graduate in the future.

by Chris Aniszczyk at March 07, 2018 08:03 PM

JBoss Tools 4.5.3.AM2 for Eclipse Oxygen.3

by jeffmaury at March 07, 2018 09:39 AM

Happy to announce the 4.5.3.AM2 (Developer Milestone 2) build for Eclipse Oxygen.3.

Downloads available at JBoss Tools 4.5.3 AM2.

What is New?

Full info is at this page. Some highlights are below.


CDK and Minishift Server Adapter better developer experience

When working with both CDK and upstream Minishift, it is recommended to distinguish environments through the MINISHIFT_HOME variable. It was previously possible to use this parameter, but it required a two-step process:

  • first create the server adapter (through the wizard)

  • then change the MINISHIFT_HOME in the server adapter editor

It is now possible to set this parameter from the server adapter wizard, so everything is correctly set up when you create the server adapter.

Let’s see an example with the CDK server adapter.

From the Servers view, select the new Server menu item and enter cdk in the filter:

cdk server adapter wizard

Select Red Hat Container Development Kit 3.2+

cdk server adapter wizard1

Click the Next button:

cdk server adapter wizard2

The MINISHIFT_HOME parameter can be set here and comes with a default value.

Fuse Tooling

Display Fuse version corresponding to Camel version proposed

When you create a new project, you select the Camel version from a list. Now, the list of Camel versions includes the Fuse version to help you choose the version that corresponds to your production version.

Fuse Version also displayed in drop-down list close to Camel version

Update validation for similar IDs between a component and its definition

Starting with Camel 2.20, you can use similar IDs for the component name and its definition unless the specific property "registerEndpointIdsFromRoute" is provided. The validation process checks the Camel version and the value of the "registerEndpointIdsFromRoute" property.

For example:

<from id="timer" uri="timer:timerName"/>
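A hypothetical route sketching the collision that the validator looks for (illustration only, not taken from the tooling's test suite):

```xml
<!-- The "from" id reuses the name of the Camel timer component.
     Before Camel 2.20, or when registerEndpointIdsFromRoute is set,
     the validation process flags this as a conflict. -->
<route>
  <from id="timer" uri="timer:timerName"/>
  <log id="log" message="tick"/>
</route>
```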

Improved guidance in method selection for factory methods on Global Bean

When selecting a factory method on a global bean, a lot of possibilities used to be proposed in the user interface. The list of factory methods for a global bean is now limited to only those methods that match the constraints of the bean’s global definition type (bean or bean factory).

Customize EIP labels in the diagram

The Fuse Tooling preferences page for the Editor view includes a new "Preferred Labels" option.

Fuse Tooling editor preference page

Use this option to define the label of EIP components (except endpoints) shown in the Editor’s Design view.

Dialog for defining the display text for an EIP


Credentials Framework

Sunsetting jboss.org credentials

The Download Runtimes and CDK Server Adapter components used the credentials framework to manage credentials. However, the JBoss.org credentials cannot be used anymore, as the underlying service used by these components no longer supports them.

The credentials framework still supports the JBoss.org credentials in case other services / components require or use this credentials domain.


Aerogear component deprecation

The Aerogear component has been marked as deprecated, as its source code is no longer maintained. It is still available in Red Hat Central and may be removed in the future.


Arquillian component removal

The Arquillian component has been removed from Red Hat Central, as it was deprecated a while ago.


BrowserSim component deprecation

The BrowserSim component has been marked as deprecated, as its source code is no longer maintained. It is still available in Red Hat Central and may be removed in the future.


Freemarker component removal

The Freemarker component has been removed from Red Hat Central, as it was deprecated a while ago.


LiveReload component deprecation

The LiveReload component has been marked as deprecated, as its source code is no longer maintained. It is still available in Red Hat Central and may be removed in the future.


Jeff Maury

by jeffmaury at March 07, 2018 09:39 AM