
Cloud Native Predictions for 2021 and Beyond

by Chris Aniszczyk at January 19, 2021 04:08 PM

I hope everyone had a wonderful holiday break, as the first couple weeks of January 2021 have been pretty wild, from insurrections to new COVID strains. In cloud native land, the CNCF recently released its annual report on all the work we accomplished last year. I recommend everyone take the opportunity to go through the report; we had a solid year given the wild pandemic circumstances.

As part of my job, I have a unique and privileged vantage point on cloud native trends, given all the member companies and developers I work with, so I figured I’d share my thoughts on where things will be going in 2021 and beyond:

Cloud Native IDEs

As a person who has spent a decent portion of his career working on developer tools inside the Eclipse Foundation, I am nothing but thrilled with the recent progress of the state of the art. In the future, the development lifecycle (code, build, debug) will happen mostly in the cloud rather than in your local Emacs or VSCode setup. You will end up getting a full dev environment for every pull request, pre-configured and connected to its own deployment to aid your development and debugging needs. Concrete examples of this technology today are GitHub Codespaces and GitPod. While GitHub Codespaces is still in beta, you can try this experience live today with GitPod, using Prometheus as an example. In a minute or so, you have a completely live development environment with an editor and preview environment. The wild thing is that this development environment (workspace) is described in code and shareable with other developers on your team like any other code artifact.
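To make “workspace described in code” concrete, here is a minimal sketch of a GitPod configuration file checked into a repository. The keys are GitPod’s; the build commands and port are hypothetical project values:

```yaml
# Hypothetical .gitpod.yml: the dev environment, defined as code in the repo
tasks:
  - init: make build        # runs once, when the workspace is first created
    command: make run       # runs on every workspace start
ports:
  - port: 9090              # expose the running app and open a preview pane
    onOpen: open-preview
```

Anyone opening a pull request then gets the same pre-built, pre-configured environment, because the definition travels with the code.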

In the end, I expect to see incredible innovation in the cloud native IDE space over the next year, especially as GitHub Codespaces comes out of beta and becomes more widely available, so developers can experience this new concept and fall in love.

Kubernetes on the Edge

Kubernetes was born through usage across massive data centers, but it will evolve for new environments just like Linux did. End users eventually stretched the Linux kernel to support a variety of new deployment scenarios, from mobile to embedded and more. I strongly believe Kubernetes will go through a similar evolution, and we are already witnessing telcos (and startups) explore Kubernetes as an edge platform by transforming VNFs into Cloud Native Network Functions (CNFs), along with open source projects like k3s, KubeEdge, k0s, LFEdge, Eclipse ioFog and more. The forces driving hyperscaler clouds to support telcos and the edge, combined with the ability to reuse cloud native software and build upon an already large ecosystem, will cement Kubernetes as a dominant platform in edge computing over the next few years.

Cloud Native + Wasm

WebAssembly (Wasm) is a nascent technology, but I expect it to become a growing utility and workload in the cloud native ecosystem, especially as WASI matures and as Kubernetes is used more as an edge orchestrator as described previously. One use case is powering an extension mechanism, like what Envoy does with filters and LuaJIT. Instead of dealing with Lua directly, you can work with a smaller optimized runtime that supports a variety of programming languages. The Envoy project is currently on its journey to adopting Wasm, and I expect a similar pattern in any environment where scripting languages are a popular extension mechanism: they will be wholesale replaced by Wasm in the future.

On the Kubernetes front, there are projects like Krustlet from Microsoft that are exploring how a WASI-based runtime could be supported in Kubernetes. This shouldn’t be too surprising as Kubernetes is already being extended via CRDs and other mechanisms to run different types of workloads like VMs (KubeVirt) and more.

Also, if you’re new to Wasm, I recommend the new intro course from the Linux Foundation that goes over the space, along with its excellent documentation.

Rise of FinOps (CFM)

The coronavirus outbreak has accelerated the shift to cloud native. At least half of companies are accelerating their cloud plans amid the crisis; nearly 60% of respondents said cloud usage would exceed prior plans owing to the COVID-19 pandemic (State of the Cloud Report 2020). On top of that, Cloud Financial Management (or FinOps) is a growing issue and concern for many companies, and honestly has come up in about half of my discussions over the last six months with companies navigating their cloud native journey. You can also argue that cloud providers aren’t incentivized to make cloud financial management easier, as that would make it easier for customers to spend less; however, the true pain, in my opinion, is the lack of open source innovation and standardization around cloud financial management (all the clouds do cost management differently). In the CNCF context, there aren’t many open source projects trying to make FinOps easier; there is the KubeCost project, but it’s fairly early days.

Also, the Linux Foundation recently launched the “FinOps Foundation” to help drive innovation in this space, and they have some great introductory materials. I expect to see a lot more open source projects and specifications in the FinOps space in the coming years.

More Rust in Cloud Native

Rust is still a young and niche programming language, especially if you look at programming language rankings from RedMonk as an example. However, my feeling is that you will see Rust in more cloud native projects over the coming year, given that a handful of CNCF projects are already taking advantage of Rust and that it keeps popping up in interesting infrastructure projects like the Firecracker microVM. While the CNCF currently has a supermajority of projects written in Golang, I expect Rust-based projects to be on par with Go-based ones in a couple of years as the Rust community matures.

GitOps + CD/PD Grows Significantly

GitOps is an operating model for cloud native technologies, providing a set of best practices that unify deployment, management and monitoring for applications (the term was originally coined by Alexis Richardson of Weaveworks fame). The most important aspect of GitOps is describing the desired system state, versioned in Git, in a declarative fashion; that essentially enables a complex set of system changes to be applied correctly and then verified (via a nice audit log enabled by Git and other tools). From a pragmatic standpoint, GitOps improves developer experience, and with the growth of projects like Argo, GitLab, Flux and so on, I expect GitOps tools to hit the enterprise more this year. If you look at the data from, say, GitLab, GitOps is still a nascent practice where the majority of companies haven’t explored it yet, but as more companies move to adopt cloud native software at scale, GitOps will naturally follow in my opinion. If you’re interested in learning more about this space, I recommend checking out the newly formed GitOps Working Group in CNCF.
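To make the “desired state versioned in Git” idea concrete, here is a minimal sketch (the app name, image, and registry are hypothetical): a Kubernetes Deployment manifest committed to a Git repository, which a GitOps agent such as Flux or Argo CD continuously reconciles against the cluster.

```yaml
# deploy/app.yaml -- the desired state, versioned in Git.
# A GitOps agent watches the repo and reconciles the cluster back to this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3                  # changing this in Git *is* the deployment action
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.4.2
```

Every change lands as a commit or pull request, so the Git history doubles as the audit log mentioned above.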

Service Catalogs 2.0: Cloud Native Developer Dashboards

The concept of a service catalog isn’t a new thing; some of us older folks who grew up in the ITIL era may remember things such as CMDBs (the horror). However, with the rise of microservices and cloud native development, the ability to catalog services and index a variety of real-time service metadata is paramount to driving developer automation. This can include using a service catalog to understand ownership, handle incident management, manage SLOs and more.

In the future, you will see a trend towards developer dashboards that are not only a service catalog, but provide the ability to extend the dashboard through a variety of automation features, all in one place. The canonical open source examples of this are Backstage (from Spotify) and Clutch (from Lyft); however, any company with a fairly modern cloud native deployment tends to have a platform infrastructure team that has tried to build something similar. As the open source developer dashboards mature with a large plug-in ecosystem, you’ll see accelerated adoption by platform engineering teams everywhere.
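For a flavor of what a catalog entry looks like in practice, here is a minimal sketch of a Backstage catalog-info.yaml using its Component descriptor format (the service name and owning team are hypothetical):

```yaml
# catalog-info.yaml -- registers a service in the developer dashboard
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service          # hypothetical service
  description: Handles payment processing
spec:
  type: service
  lifecycle: production
  owner: team-payments            # ownership drives incident routing, SLOs, etc.
```

Because entries like this live next to the code, the catalog stays current as services evolve.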

Cross Cloud Becomes More Real

Kubernetes and the cloud native movement have demonstrated that cloud native and multi-cloud approaches are possible in production environments; the data is clear that “93% of enterprises have a strategy to use multiple providers like Microsoft Azure, Amazon Web Services, and Google Cloud” (State of the Cloud Report 2020). The fact that Kubernetes has matured over the years along with the cloud market will hopefully unlock programmatic cross-cloud managed services. A concrete example of this approach is embodied in the Crossplane project, which provides an open source cross-cloud control plane that takes advantage of the Kubernetes API’s extensibility to enable cross-cloud workload management (see “GitLab Deploys the Crossplane Control Plane to Offer Multicloud Deployments”).

Mainstream eBPF

eBPF allows you to run programs in the Linux kernel without changing the kernel code or loading a module; you can think of it as a sandboxed extension mechanism. eBPF has enabled a new generation of software that extends the behavior of the Linux kernel to support a variety of things, from improved networking to monitoring and security. The downside of eBPF historically is that it requires a modern kernel version, and for a long time that just wasn’t a realistic option for many companies. However, things are changing, and even newer versions of RHEL finally support eBPF, so you will see more projects take advantage of it. If you look at the latest container report from Sysdig, you can see the adoption of Falco rising recently; while the report may be a bit biased given it comes from Sysdig, the trend is reflected in production usage. So stay tuned and look for more eBPF-based projects in the future!

Finally, Happy 2021!

I have a few more predictions and trends to share, especially around end user driven open source, service mesh cannibalization/standardization, Prometheus+OTel, KYC for securing the software supply chain and more, but I’ll save those for more detailed posts; nine predictions are enough to kick off the new year! Anyways, thanks for reading, and I hope to see everyone at KubeCon + CloudNativeCon EU in May 2021, registration is open!


DSL Forge, dead or (still) alive?

by alajmi at January 19, 2021 02:22 PM

It has been a long time since the last post I published on the DSL Forge blog. Since the initial release back in 2014 and the “hot” context of that time, a lot of water has flowed under the bridge. Over the last couple of years, a lot of effort has been spent on the Coding Park platform, a commercial product based on DSL Forge. Unfortunately, not all the developments made since then have been integrated into the open-source repository.

Anyway, I’ve finally managed to find some time to clean up the repository and fix some bugs, so it’s up to date now and still available under the EPL licence on GitHub.

There are several reasons why the project has not progressed the way we wanted at the beginning. Let’s take a step back and think about what happened.

Lack of ambition

One of the reasons why the adoption of cloud-based tools has not taken off is the standstill, and sometimes the lack of ambition, of top managers in big industry corporations who traditionally use Eclipse technologies to build their internal products. Many companies have huge legacy desktop applications built on top of Eclipse RCP. Despite the push over the last 5 years to encourage organizations to move to the web/cloud, very few have eventually taken action.

No standard cloud IDE

Another reason is the absence of a “standard” platform which is unanimously supported for building new tools on top of it. Of course, there are some nice cloud IDEs flourishing under the Eclipse Foundation umbrella, such as Dirigible (SAP), Theia (TypeFox), or Che (Codenvy, then Red Hat), but it’s still unclear to customers which of these is the winning horse. Today, Theia seems better positioned than its competitors, judging by the number of contributors and the big tech companies that push the technology forward, such as IBM, SAP, and Red Hat, just to name a few. However, the frontier between these cloud IDEs is still confusing: Theia uses the workspace component of Che, and later Theia became the official UI of Che. Theia is somehow based on VS Code, but then has its own extension mechanism, etc.


In the meantime, there have been attempts to standardize the client/server exchange protocol for text editing, with Microsoft’s Language Server Protocol (LSP), and later with a variant of LSP to support graphical editing (GLSP). Pushing standards is a common strategy to make stakeholders in a given market collaborate in order to optimize their investments; however, like in any other standard-focused community, there is a difference between theory and practice. Achieving complete interoperability is quite unrealistic, because developing the editor front-end requires a lot of effort already, and even with the LSP in mind, it is common to end up developing the same functionality specifically for each editor, which is not always the top priority of commercial projects or startups willing to reduce their time-to-market.

The cost of migration

As said earlier, there is a large amount of legacy source code built on Eclipse RCP. The sustainability of this code is of strategic importance for many corporations, and unfortunately, most of it is written in Java and relies on SWT. Migrating this code is expensive, as it implies rewriting a big part of it in JavaScript with a particular technical stack/framework in mind. It’s a long journey, architects have a lot of technical decisions to make along the way, and there is no guarantee that they will have made the right ones in the long run.

The decline of the Eclipse IDE

Friends of Eclipse, don’t be upset! Having worked with a lot of junior developers over the last 5 years, I have noticed the Eclipse IDE is no longer of interest to many of them. A few years ago, Eclipse was best known for being a good Java IDE, back in the times when IBM was a driving force in the community. Today, the situation is different; Microsoft’s VS Code has established itself as the code editor of choice. It is still incomprehensible to see the poor performance of the Eclipse IDE, especially at startup. It is urgent that one of the cloud IDEs mentioned above take over.

The high volatility of web technologies

We see new frameworks and new trends in web development technologies every day. For instance, the RIA frameworks that appeared in the early 2010s ultimately had a short life, especially with the rise of newer frameworks such as React and Angular. Server-side rendering is now part of history. One consequence of this was the slowdown of investments in RIA-based frameworks, including the Eclipse Remote Application Platform (RAP). Today, RAP is still under maintenance; however, its scalability is questionable and its rendering capabilities look outdated compared to newer web frameworks. The incredible pace at which web technologies evolve is one of the factors that make decision makers hesitate to invest in cloud-based modeling tools.

The end of a cycle

As a large part of legacy code must be rewritten in JavaScript or one of its variants (TypeScript, JSX, …), many historical developers (today’s senior developers) with a background in Java have found themselves overwhelmed by the rise of new paradigms coming from the culture of web development. In legacy desktop applications, it is common to see the UI code, be it SWT or Swing, mixed in with the business logic. Of course, architects have always tried to separate the concerns as much as possible, but the same paradigms, structures, and programming language are used everywhere. With the new web frameworks, the learning curve is so steep that senior developers struggle to get to grips with the new paradigms and coding style.


Over the last 10 years, EMF has become an industry-proven standard for model persistence; however, it is quite unknown in the web development community. The most widely used format for data exchange on the web is JSON, and even though the facilities that come with EMF are advanced compared to the tooling support of JSON, the reality is that achieving complete bidirectionality between EMF and JSON is not always guaranteed. That being said, EclipseSource are doing a great job in this area thanks to their work on the framework.

Where is DSL Forge in all of this?

The DSL Forge project will continue to exist as long as it serves users. First, because the tool is still used in academic research. With a variety of legacy R&D prototypes built on RCP, it is easy to quickly get a web-based client thanks to the web port of the SWT library, which does almost 90% of the job. Moreover, the framework is still used in commercial products, particularly in the fields of cybersecurity and education. For example, the Coding Park platform, initially developed on Eclipse RAP, is still marketed under this technology stack.

Originally, DSL Forge was seen as a port of Xtext to the web that relies on the ACE editor; this is half true, as it also has a nice ANTLR/ACE integration. The tool released in 2014 was ahead of its time. Companies were not ready to make the leap (a lot are still in this situation now, even with all the progress made), the demand was not mature enough, and the small number of contributors was a barrier to adoption. Given all of that, we made our own path outside the software development tools market. Meanwhile, the former colleagues from Itemis (now at TypeFox) did a really good job: not only have they built a flawless cloud IDE, but they have also managed to forge strategic partnerships which are contributing to the success of Theia. Best of luck to Theia and the incredible team at TypeFox!

To conclude

Today, Plugbee is still supporting the maintenance of DSL Forge to guarantee the sustainability of customer products.

For now, if you are looking to support a more modern technical stack, your best bet is to start with the Xtext servlet. For example, we have integrated the servlet into a Spring Boot/React application, and it works like a charm. The only effort needed to achieve the integration was to properly bind the Xtext services to the ACE editor. This work was done as part of the new release of Coding Park. The code will be extracted and made publicly available on the DSL Forge repository soon. If you are interested in this kind of integration, feel free to get in touch.

Finally, if you are interested in using Eclipse to build custom modeling tools or to migrate existing products to the web, please have a look at our training offer or feel free to contact us.


The Eclipse Foundation’s Move to Europe: Membership Impacts

by Mike Milinkovich at January 15, 2021 08:00 AM

This is a continuation of yesterday’s Welcome to the Eclipse Foundation AISBL blog.

Yesterday, we announced that we completed our move to European-based governance with the creation of Eclipse Foundation AISBL, a Belgian international nonprofit association. In this post, I wanted to take this opportunity to provide an overview of the membership-impacting changes associated with our move to Europe. 

Part of the transition effort has involved updating our membership documents and bylaws to reflect European-based governance and currency. All of these new documents are available on our governance documents page.  Here’s a quick summary of the key changes of relevance to members:

  • Day-to-day interactions don’t change, including the work done in our projects and working groups. 
  • Members will be asked to switch their membership from the U.S. organization to the new Belgian international non-profit organization. We will reach out to members over the next few months to make this happen. You can review the draft membership agreement.
  • As of October 1st of last year, all membership fees are now restated in euros.  Existing members’ fees are being discounted by 10% from October 1, 2020 through September 30, 2021 to help compensate for currency exchange rates.
  • The Solutions membership level is renamed to Contributing to better reflect the diverse group of organizations that participate in, and contribute to, the Eclipse Foundation ecosystem.
  • We have established new bylaws to reflect Belgian laws.

For details about these, and other changes for members, see my previous blog, and our frequently asked questions.

We will be contacting all of our members and committers to update their agreements with the new Belgian entity. This may include your membership agreement, committer agreements, and working group participation agreements as applicable. 

If you have questions or feedback, feel free to reach out to me or to our team. Thank you for your support!


Welcome to the Eclipse Foundation AISBL

by Mike Milinkovich at January 14, 2021 06:00 AM

Today, we’re announcing that the Eclipse Foundation has successfully completed all of the necessary formalities and has formally established the Eclipse Foundation AISBL, an international non-profit association based in Brussels, Belgium.

As a European-based global organization, the Eclipse Foundation is in the ideal position to build on the growing momentum of strategic open source in Europe and on our strength in the region to support open source innovation globally.

Today’s announcement is the culmination of months of work since we first announced our intent to establish ourselves in Europe in May 2020. I want to thank everyone who has had a hand in making our legal transition to Europe a reality. There have been many aspects to consider and a lot of work behind the scenes to get all of the required pieces in place. And the journey isn’t over yet! I will be publishing a second blog post shortly discussing what this means for our members and committers. Tl;dr: keep doing what you’re doing.

Building on Our Strength in Europe Advances Open Source Innovation Globally

The Eclipse Foundation is the largest open source software foundation in Europe in terms of staff, projects, developers, and members. We have more than 170 members and more than 900 committers based in Europe. And we’re already home to a number of publicly funded European research projects that enable academics, subject matter experts, and large organizations to collaborate and build on research results to benefit corporations and the public.

We see a huge opportunity to build on our strong membership base, active developer community, and strong institutional relationships in Europe to enable the free flow of open software innovation throughout the world. Everyone will benefit from more choices and greater diversity of open source software technologies to build on.

As the Eclipse Foundation continues to grow — we added 75 new members in 2020 alone — the choices, diversity, and benefits will multiply. The future of open source has never looked brighter.

Europe Has Embraced Open Source Software

The strategic value of open source software is recognized across European government organizations, corporations, and publicly funded institutions:

  • The European Commission considers open source initiatives to be strategically important to drive digital and industrial transformations that will help to shape Europe’s digital future.
  • Leading European corporations, including Bosch, Daimler TSS, IBM, and SAP — all founding members of the Eclipse Foundation AISBL — see open source collaboration as an important way to accelerate innovation and increase their competitive edge.
  • Academic and research institutions are increasingly using open source software as a catalyst for innovation.

All of these organizations see the benefits of joining forces with each other, and with organizations around the world, to collaborate on open source software innovation. Many already see the Eclipse Foundation as the right place to foster global industry collaboration on open source projects in strategic technology areas, such as cloud, edge computing, artificial intelligence, connected vehicles, telecom, and IoT.

Get More Information

To provide more insight into our legal move to Europe and what it means for Eclipse Foundation members, we’ve developed a number of resources we think you’ll find helpful. I will also be publishing a post tomorrow with further details for members.

This is a big day for the Eclipse Foundation and its community. I want to thank all of my colleagues on the staff and our Board that helped make this possible.


Converter methods in Eclipse Collections

by Donald Raab at January 13, 2021 06:05 AM

Converting from one type of collection to another

Mind map image created by Kenji Hiranabe in Astah UML. Included here with his permission.

Converting from one type to another type

In Eclipse Collections there are many different collection types. There are Mutable and Immutable collection types. There are Object and primitive collection types. There are types of List, Set, Bag, Stack, Map, BiMap, Multimap. There are so many things you can do with all of the Eclipse Collections types and APIs. But how do you convert from one type to another?

To convert a collection to another type, find methods with the prefix “to”

Methods that begin with “to” will copy the contents of a collection to a specific type and will have a linear time cost.
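The copying semantics matter: the converted collection is independent of the source. As a plain-JDK analogy (Eclipse Collections’ converter methods behave the same way), converting copies the elements into a new collection in linear time:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConverterCopy {
    public static void main(String[] args) {
        // Converting copies the source in O(n); the result is independent.
        List<Integer> source = new ArrayList<>(List.of(1, 2, 2, 3));
        Set<Integer> asSet = new HashSet<>(source); // analogous to toSet()

        asSet.remove(1); // mutating the copy...
        System.out.println(source.contains(1)); // ...leaves the source intact: true
        System.out.println(asSet.contains(1));  // false
    }
}
```

This is why you can safely hand a converted collection to other code without worrying about aliasing the original.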

Converter Method Symmetry

I’m going to show the many paths you can take to get from one type to another using Eclipse Collections in code. I have built a code kata for this particular purpose.


I will demonstrate converter methods using three different APIs available in Eclipse Collections — RichIterable, IntIterable (works for all primitive Iterables) and Collectors2.

There is good symmetry between these APIs, but in the process of implementing the kata, I discovered some things were missing so I opened new issues for the missing APIs in the Eclipse Collections GitHub repo.

RichIterable Converter Methods

The following methods are available on RichIterable (and subtypes) and can be used to convert from one collection type to another.

Methods: toList, toSet, toBag, toStack, toMap, toSortedList, toSortedListBy, toSortedSet, toSortedSetBy, toSortedBag, toSortedBagBy, toSortedMap, toSortedMapBy, toArray, toString

1. RichIterable: toList

public void toList()
{
    Interval interval = Interval.oneTo(5);
    // Convert interval to a MutableList<Integer>
    MutableList<Integer> list = interval.toList();
    Assert.assertEquals(Lists.mutable.with(1, 2, 3, 4, 5), list);
}

2. RichIterable: toSet

public void toSet()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 2, 3, 3);
    // Convert list to a MutableSet<Integer>
    MutableSet<Integer> set = list.toSet();
    Assert.assertEquals(Sets.mutable.with(1, 2, 3), set);
}

3. RichIterable: toBag

public void toBag()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 2, 3, 3);
    // Convert list to a MutableBag<Integer>
    MutableBag<Integer> bag = list.toBag();
    Assert.assertEquals(Bags.mutable.with(1, 2, 2, 3, 3), bag);
}

4. OrderedIterable: toStack

public void toStack()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3);
    // Convert list to a MutableStack<Integer>
    MutableStack<Integer> stack = list.toStack();
    Assert.assertEquals(Stacks.mutable.with(1, 2, 3), stack);
    // Pop 3 elements off the stack
    Assert.assertEquals(list.toReversed(), stack.pop(3));
}

5. RichIterable: toMap

public void toMap()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3);
    // Convert list to a MutableMap<String, Integer> where the keys
    // are the String value of the element, and the values are
    // the Integer value
    MutableMap<String, Integer> map =
            list.toMap(String::valueOf, i -> i);
    Assert.assertEquals(
            Maps.mutable.with("1", 1, "2", 2, "3", 3), map);
}

6. RichIterable: toSortedList

public void toSortedList()
{
    MutableList<Integer> list = Lists.mutable.with(5, 3, 1, 4, 2);
    // Convert list to a sorted MutableList<Integer>
    MutableList<Integer> forward = list.toSortedList();
    // Convert list to a MutableList<Integer> sorted in reverse
    MutableList<Integer> reverse =
            list.toSortedList(Comparator.reverseOrder());
    Assert.assertEquals(Lists.mutable.with(1, 2, 3, 4, 5), forward);
    Assert.assertEquals(Lists.mutable.with(5, 4, 3, 2, 1), reverse);
}

7. RichIterable: toSortedListBy

public void toSortedListByLastName()
{
    // Convert this.people to a MutableList<Person> sorted by
    // lastName
    MutableList<Person> sorted =
            this.people.toSortedListBy(Person::getLastName);
}

8. RichIterable: toSortedSet

public void toSortedSet()
{
    MutableList<Integer> list = Lists.mutable.with(5, 3, 1, 4, 2);
    // Convert list to a sorted MutableSortedSet<Integer>
    MutableSortedSet<Integer> forward = list.toSortedSet();
    // Convert list to a MutableSortedSet<Integer> sorted in reverse
    MutableSortedSet<Integer> reverse =
            list.toSortedSet(Comparator.reverseOrder());
    Assert.assertEquals(
            SortedSets.mutable.with(1, 2, 3, 4, 5), forward);
    Assert.assertEquals(
            SortedSets.mutable.with(5, 4, 3, 2, 1), reverse);
}

9. RichIterable: toSortedSetBy

public void toSortedSetByFirstName()
{
    // Convert this.people to a MutableSortedSet<Person> sorted by
    // firstName
    MutableSortedSet<Person> sorted =
            this.people.toSortedSetBy(Person::getFirstName);
}

10. RichIterable: toSortedBag

public void toSortedBag()
{
    MutableList<Integer> list = Lists.mutable.with(5, 3, 1, 4, 2);
    // Convert list to a sorted MutableSortedBag<Integer>
    MutableSortedBag<Integer> forward = list.toSortedBag();
    // Convert list to a MutableSortedBag<Integer> sorted in reverse
    MutableSortedBag<Integer> reverse =
            list.toSortedBag(Comparator.reverseOrder());
    Assert.assertEquals(
            SortedBags.mutable.with(1, 2, 3, 4, 5), forward);
    Assert.assertEquals(
            SortedBags.mutable.with(5, 4, 3, 2, 1), reverse);
}

11. RichIterable: toSortedBagBy

public void toSortedBagByAge()
{
    // Convert this.people to a MutableSortedBag<Person> sorted by
    // age
    MutableSortedBag<Person> sorted =
            this.people.toSortedBagBy(Person::getAge);
}

12. RichIterable: toSortedMap

public void toSortedMap()
{
    MutableList<Integer> list = Lists.mutable.with(3, 1, 2);
    // Convert list to a MutableSortedMap<String, Integer> where
    // the keys are the String value of the Integer and the values
    // are the Integer values
    MutableSortedMap<String, Integer> map =
            list.toSortedMap(String::valueOf, i -> i);
    Assert.assertEquals(
            SortedMaps.mutable.with("1", 1, "2", 2, "3", 3), map);
}

13. RichIterable: toSortedMapBy

public void toSortedMapByLastName()
{
    // Convert this.people to MutableSortedMap<String, Person>
    // where the keys are the last name of the person
    // and the values are the person, and the keys are sorted on
    // their uppercase String value
    MutableSortedMap<String, Person> map =
            this.people.toSortedMapBy(
                    String::toUpperCase,
                    Person::getLastName,
                    person -> person);
}

14. RichIterable: toArray

public void toArray()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3);
    // Convert the list to an Integer array
    Integer[] array = list.toArray(new Integer[3]);
    Assert.assertArrayEquals(new Integer[]{1, 2, 3}, array);
}

15. RichIterable: toString

public void toStringTest()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3);
    // Convert the list to a String
    String toString = list.toString();
    // Convert the list to a String with "[", ", ", "]" as
    // separators using makeString
    String makeString = list.makeString("[", ", ", "]");
    Assert.assertEquals("[1, 2, 3]", toString);
    Assert.assertEquals("[1, 2, 3]", makeString);
}

Primitive Iterable Converter Methods

The following methods are available on IntIterable and other subtypes of PrimitiveIterable. These methods can be used to convert from one primitive collection type to another.

Methods: toList, toSet, toBag, toSortedList, toArray, toString

1. IntIterable: toList

public void toList()
{
    IntInterval interval = IntInterval.oneTo(5);
    // Convert interval to a MutableIntList
    MutableIntList list = interval.toList();
    // Convert list to an ImmutableIntList
    ImmutableIntList immutableIntList = list.toImmutable();
    Assert.assertEquals(IntLists.mutable.with(1, 2, 3, 4, 5), list);
    Assert.assertEquals(list, immutableIntList);
}

2. IntIterable: toSet

public void toSet()
MutableIntList list = IntLists.mutable.with(1, 2, 2, 3, 3);
// Convert list to a MutableIntSet
MutableIntSet set = list.toSet();
// Convert set to an ImmutableIntSet
ImmutableIntSet immutableIntSet = set.toImmutable();
Assert.assertEquals(IntSets.mutable.with(1, 2, 3), set);
Assert.assertEquals(set, immutableIntSet);

3. IntIterable: toBag

public void toBag()
MutableIntList list = IntLists.mutable.with(1, 2, 2, 3, 3);
// Convert list to a MutableIntBag
MutableIntBag bag = list.toBag();
// Convert bag to an ImmutableIntBag
ImmutableIntBag immutableIntBag = bag.toImmutable();
Assert.assertEquals(IntBags.mutable.with(1, 2, 2, 3, 3), bag);
Assert.assertEquals(bag, immutableIntBag);

4. IntIterable: toSortedList

public void toSortedList()
MutableIntList list = IntLists.mutable.with(5, 3, 1, 4, 2);
// Convert list to a sorted MutableIntList
MutableIntList sorted = list.toSortedList();
Assert.assertEquals(IntLists.mutable.with(1, 2, 3, 4, 5), sorted);
// Sort the sorted list in reverse order
MutableIntList forward = sorted.sortThis();
MutableIntList reversed = sorted.sortThisBy(i -> -i);
Assert.assertEquals(IntLists.mutable.with(5, 4, 3, 2, 1), reversed);

5. IntIterable: toArray

public void toArray()
MutableIntList list = IntLists.mutable.with(1, 2, 3);
int[] array = list.toArray(new int[3]);
Assert.assertArrayEquals(new int[]{1, 2, 3}, array);

6. IntIterable: toString

public void toStringTest()
MutableIntList list = IntLists.mutable.with(1, 2, 3);
String toString = list.toString();
String makeString = list.makeString("[", ", ", "]");
Assert.assertEquals("[1, 2, 3]", toString);
Assert.assertEquals("[1, 2, 3]", makeString);

Collectors2 Converter Methods

The following methods are available on Collectors2 and can be used to convert from a Java Stream to an Eclipse Collection type.

Methods: toList, toSet, toBag, toStack, toMap, toSortedList, toSortedListBy, toSortedSet, toSortedSetBy, toSortedBag, toSortedBagBy, makeString

1. Collectors2: toList

public void toList()
Stream<Integer> interval = IntStream.rangeClosed(1, 5).boxed();
// Convert interval to a MutableList<Integer> using Collectors2
MutableList<Integer> list =
interval.collect(Collectors2.toList());
Assert.assertEquals(Lists.mutable.with(1, 2, 3, 4, 5), list);

2. Collectors2: toSet

public void toSet()
List<Integer> list = List.of(1, 2, 2, 3, 3);
// Convert list to a MutableSet<Integer> using Collectors2
MutableSet<Integer> set =
list.stream().collect(Collectors2.toSet());
Assert.assertEquals(Sets.mutable.with(1, 2, 3), set);

3. Collectors2: toBag

public void toBag()
List<Integer> list = List.of(1, 2, 2, 3, 3);
// Convert list to a MutableBag<Integer> using Collectors2
MutableBag<Integer> bag =
list.stream().collect(Collectors2.toBag());
Assert.assertEquals(Bags.mutable.with(1, 2, 2, 3, 3), bag);

4. Collectors2: toStack

public void toStack()
List<Integer> list = List.of(1, 2, 3);
// Convert list to a MutableStack<Integer> using Collectors2
MutableStack<Integer> stack =
list.stream().collect(Collectors2.toStack());
Assert.assertEquals(Stacks.mutable.with(1, 2, 3), stack);

5. Collectors2: toMap

public void toMap()
List<Integer> list = List.of(1, 2, 3);
// Convert list to a MutableMap<String, Integer> where the keys
// are the String value of the element, and the values are
// the Integer value using Collectors2
MutableMap<String, Integer> map =
list.stream().collect(
Collectors2.toMap(String::valueOf, i -> i));
Assert.assertEquals(
Maps.mutable.with("1", 1, "2", 2, "3", 3), map);

6. Collectors2: toSortedList

public void toSortedList()
List<Integer> list = List.of(5, 3, 1, 4, 2);
// Convert list to a sorted MutableList<Integer> using
// Collectors2
MutableList<Integer> forward =
list.stream().collect(Collectors2.toSortedList());
// Convert list to a MutableList<Integer> sorted in reverse
// order using Collectors2
MutableList<Integer> reverse =
list.stream().collect(
Collectors2.toSortedList(Comparator.reverseOrder()));
Assert.assertEquals(Lists.mutable.with(1, 2, 3, 4, 5), forward);
Assert.assertEquals(Lists.mutable.with(5, 4, 3, 2, 1), reverse);

7. Collectors2: toSortedListBy

public void toSortedListByLastName()
// Convert this.people to a MutableList<Person> sorted by last
// name using Collectors2
MutableList<Person> sorted =
this.people.stream().collect(
Collectors2.toSortedListBy(Person::getLastName));

8. Collectors2: toSortedSet

public void toSortedSet()
List<Integer> list = List.of(5, 3, 1, 4, 2);
// Convert list to a sorted MutableSortedSet<Integer> using
// Collectors2
MutableSortedSet<Integer> forward =
list.stream().collect(Collectors2.toSortedSet());
// Convert list to a MutableSortedSet<Integer> sorted in
// reverse order using Collectors2
MutableSortedSet<Integer> reverse =
list.stream().collect(
Collectors2.toSortedSet(Comparator.reverseOrder()));
Assert.assertEquals(
SortedSets.mutable.with(1, 2, 3, 4, 5), forward);
Assert.assertEquals(
SortedSets.mutable.with(5, 4, 3, 2, 1), reverse);

9. Collectors2: toSortedSetBy

public void toSortedSetByFirstName()
// Convert this.people to a MutableSortedSet<Person> sorted by
// firstName using Collectors2
MutableSortedSet<Person> sorted =
this.people.stream().collect(
Collectors2.toSortedSetBy(Person::getFirstName));

10. Collectors2: toSortedBag

public void toSortedBag()
List<Integer> list = List.of(5, 3, 1, 4, 2);
// Convert list to a sorted MutableSortedBag<Integer>
// using Collectors2
MutableSortedBag<Integer> forward =
list.stream().collect(Collectors2.toSortedBag());
// Convert list to a MutableSortedBag<Integer> sorted in
// reverse order using Collectors2
MutableSortedBag<Integer> reverse =
list.stream().collect(
Collectors2.toSortedBag(Comparator.reverseOrder()));
Assert.assertEquals(
SortedBags.mutable.with(1, 2, 3, 4, 5), forward);
Assert.assertEquals(
SortedBags.mutable.with(5, 4, 3, 2, 1), reverse);

11. Collectors2: toSortedBagBy

public void toSortedBagByAge()
// Convert this.people to a MutableSortedBag<Person> sorted by
// age using Collectors2
MutableSortedBag<Person> sorted =
this.people.stream().collect(
Collectors2.toSortedBagBy(Person::getAge));

12. Collectors2: makeString

public void toStringTest()
List<Integer> list = List.of(1, 2, 3);
// Convert the list to a String
String toString = list.toString();
// Convert the list to a String with "[", ", ", "]" as separators
// using makeString on Collectors2
String makeString =
list.stream().collect(
Collectors2.makeString("[", ", ", "]"));
Assert.assertEquals("[1, 2, 3]", toString);
Assert.assertEquals("[1, 2, 3]", makeString);

And there’s more…

I covered a lot of the converter methods available in Eclipse Collections, and in the process discovered a few that are currently missing. There are a few more methods available, like toImmutable and toReversed, and Collectors2 has equivalent immutable Collector implementations to match the mutable ones I used in the examples. Another method that you can use to convert a collection to another collection type is named into.

<R extends Collection<T>> R into(R target);

This method is available on RichIterable and will take any target Collection you give it as long as it extends java.util.Collection. The collection you pass in as an argument is the collection you will get back, with all the elements of the source collection added to it. This should give you as much flexibility as you need to convert from one collection type to another.
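Because into takes any java.util.Collection as its target, the same pattern can be sketched with plain JDK types. The helper below is hypothetical — it is not part of Eclipse Collections — and only mirrors the behavior described above:

```java
import java.util.Collection;
import java.util.List;
import java.util.TreeSet;

public class IntoExample
{
    // Hypothetical stand-in for RichIterable.into(): the target
    // collection you pass in is the collection you get back, with
    // all elements of the source added to it.
    public static <T, R extends Collection<T>> R into(Collection<T> source, R target)
    {
        target.addAll(source);
        return target;
    }

    public static void main(String[] args)
    {
        List<Integer> list = List.of(3, 1, 2, 2);
        // Pour the list into a TreeSet: sorted and deduplicated
        TreeSet<Integer> sorted = into(list, new TreeSet<>());
        System.out.println(sorted); // prints [1, 2, 3]
    }
}
```

Choosing the target type at the call site is what gives the method its flexibility: the same source can be poured into a set, a sorted set, or any other Collection implementation.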

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.

Converter methods in Eclipse Collections was originally published in Javarevisited on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Donald Raab at January 13, 2021 06:05 AM

Testing Eclipse RCP applications with the Mockito JUnit 5 extension

by Patrick at January 12, 2021 09:29 PM

In an earlier post I showed how to add Mockito to Eclipse target definitions. This allows plug-in developers to write and run Mockito-based unit tests both inside the Eclipse IDE and in Maven Tycho builds.

There was, however, one missing piece to this functionality – the ability to run tests using the Mockito JUnit 5 extension. The specific issue was that the mockito-junit-jupiter JAR was not structured as a bundle that would work with an OSGi classloader.

In the most recent version of Mockito (3.7) this has now been fixed and Eclipse RCP developers can now begin to use the extension.

Adding Mockito to your Eclipse RCP target platform

In the past, a new version of Mockito (or any third-party library) would be difficult to incorporate into your target platform. Luckily, there has also been a recent change in the m2e PDE tooling that allows us to include Maven artifacts directly.

So instead of waiting for Mockito to be updated in Eclipse Orbit or wrapping it in a custom plug-in, we can now simply add Mockito and its dependencies (Byte Buddy and Objenesis) as Maven target entries.

After reloading the target, you can add a dependency in your test fragment manifest to the org.mockito.junit-jupiter bundle and start writing tests.
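For reference, the fragment's MANIFEST.MF then needs the extension bundle (and its dependencies) on the classpath. A sketch — the exact bundle symbolic names are illustrative and may differ in your generated target metadata:

```
Require-Bundle: org.mockito.junit-jupiter,
 org.mockito.mockito-core,
 org.junit.jupiter.api
```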

@ExtendWith(MockitoExtension.class)
class HelloWorldService_Mockito_ExtensionMock_Test {

	@Mock
	private HelloWorldService helloWorldService;

	@Test
	void test() {
		// Stub the mock created by the Mockito JUnit 5 extension
		Mockito.when(helloWorldService.sayHello()).thenReturn("Hello!");

		assertEquals("Hello!", helloWorldService.sayHello());
	}
}

Wrapping up

I’ve updated the projects in the GitHub repository with the new target definition locations and a new test exercising the Mockito JUnit 5 extension.

by Patrick at January 12, 2021 09:29 PM

Help Shape the Future of IoT and Edge by Completing Our Survey

by Thabang Mashologu at January 12, 2021 01:55 PM

If your business is deploying or using commercial IoT and edge computing solutions, please take a few minutes before February 28 to complete our 2021 IoT and Edge Commercial Adoption Survey. With your input, everyone in the IoT ecosystem will have better insight into the requirements, priorities, and challenges organizations face as they adopt and use commercial IoT and edge solutions, including those that incorporate open source technologies.

The IoT and Edge Commercial Adoption Survey goes beyond our IoT Developer Survey to provide insight into the overall IoT industry landscape. This is the second year we’re running a survey focused on IoT adoption, and the first year it includes questions about edge technologies and edge computing workloads.


Survey Results Help IoT and Edge Stakeholders Focus Their Efforts

The survey results give organizations and enterprises of all sizes deeper insight into:

  • Forecasts for IoT and edge market growth 
  • Challenges and potential barriers that impact market development and size
  • The latest IoT and edge computing market trends
  • The mix of proprietary and open source solutions being used in IoT solutions and where open source software provides key benefits
  • How edge computing is being incorporated into IoT solutions
  • Strategies other companies are using to increase their IoT footprint

This insight will help all IoT and edge ecosystem stakeholders — software vendors, platform vendors, solution providers, and manufacturing organizations — make strategic and technology decisions that meet business and industry needs.
The survey results will also influence the roadmaps of the Eclipse IoT ecosystem and the Edge Native Working Group, helping to ensure they remain focused on the top requirements and priorities for commercial IoT solutions.

Our 2019 IoT Commercial Adoption Survey revealed a number of interesting trends, including the fact that 60 percent of respondents were factoring open source into their IoT development plans. We also learned that IoT development is predominantly fueled by investments from industrial markets, such as energy management, building automation, smart cities, industrial automation, and agriculture. For more insight into the 2019 results, read Mike Milinkovich’s blog on the topic.


Add Your Voice to the IoT and Edge Commercial Adoption Survey


The 2021 IoT and Edge Commercial Adoption Survey is open January 12 to February 28. It should take 10 minutes or less of your time to complete. The more people that respond to the survey, the broader the view we can provide.

Start the survey now.


Get Involved in Eclipse Foundation IoT and Edge Communities


The Eclipse Foundation IoT and Edge Native communities are thriving environments with dozens of open source technology projects that address real-world issues and provide the basis for commercial IoT solutions.

To learn more about the industry-scale collaboration happening in the Eclipse IoT Working Group, visit the Eclipse IoT website.

To learn how the edge native community at the Eclipse Foundation is delivering production-ready platforms for edge native application development, operation, and management, visit the Edge Native Working Group website.


by Thabang Mashologu at January 12, 2021 01:55 PM

Including Maven artifacts in an Eclipse RCP target platform

by Patrick at January 11, 2021 09:59 PM

One of the most difficult issues facing Eclipse RCP developers has been how to consume third-party libraries. We often want to use JAR files not available as OSGi plug-ins (missing OSGi metadata) or that are not available in a p2 repository.

So far, our options have included:

  • Accessing a JAR file in Eclipse Orbit. For many years Eclipse committers have been adding metadata to third-party JARs and making them available in a p2 repository (thank you, btw). The problem with this approach is that many libraries are not available, and those that are may not be the desired version.
  • Using a PDE wizard to generate an Eclipse Plug-in from one or more JAR files. This works, but it’s ugly and you have to maintain these projects over time, including checking them into your VCS.

Neither of these options has been a good long-term solution for Eclipse RCP developers.

The new Maven target location type

I’m happy to say that Eclipse RCP developers can now directly access all of the libraries available in Maven repositories, whether these libraries contain OSGi metadata or not.

In our Eclipse RCP applications we typically manage our dependencies through target definition files. These files list the repositories and artifacts that our application depends on and compiles with. Basically, this is the Eclipse RCP version of Maven dependencies specified in POMs.

Target definitions are made up of target locations which can be one of four types. For all practical purposes, the only type that really mattered was Software Site, which is a p2 repository location. This is the only target location type available in Maven Tycho builds, so the other location types are not very useful.

Now there is finally a fifth target location type: Maven. This type is honored both inside the Eclipse IDE and in Tycho builds (versions 2.1 or later).

Installing the Maven location type

The new Maven location type is contributed by the m2e PDE feature. The first step to accessing the location type is to install this feature if it’s not already present.

In your Eclipse IDE, choose Help > Install New Software… Add the update site below and then select the m2e PDE Integration feature.

Once the feature is installed and you restart Eclipse, you should see the new target location type in the Target Definition Editor.

Using the Maven location type

Selecting the new location type after clicking Add in the Target Definition Editor brings you to a dialog allowing you to enter the Maven GAV coordinates for the JAR.

After completing the dialog, the target will resolve and you should see the JAR (now an OSGi bundle) in your target.
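As a concrete illustration, a Maven location in the .target file looks roughly like the fragment below. Treat it as a sketch: the attribute names and schema depend on your m2e PDE version, and the artifact shown is just an example.

```xml
<location includeSource="true" missingManifest="generate" type="Maven">
	<groupId>org.mockito</groupId>
	<artifactId>mockito-core</artifactId>
	<version>3.7.7</version>
	<type>jar</type>
</location>
```

The missingManifest="generate" setting is what lets plain JARs without OSGi metadata be turned into usable bundles on the fly.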

Now you can create dependencies on this new bundle and start using it directly. No more Orbit, no more ugly JAR wrapper projects.

I’d like to thank Christoph Läubrich for doing the work that made this possible. According to this forum post there is still more work that can be done to improve the functionality, and if this matters to your organization please consider providing support!

by Patrick at January 11, 2021 09:59 PM

Member Case Study: Obeo Accelerates Growth With a Strategic Membership

by Thabang Mashologu at January 08, 2021 02:00 PM

Happy New Year! It's a new year and we have a new case study to inspire your entrepreneurial open source journey.

In 2009, just two years after Obeo joined the Eclipse Foundation, the company took a bold step and became a Strategic member with a representative on the Eclipse Foundation Board of Directors. The move expanded Obeo’s influence and visibility, and has helped the company grow internationally. We’re sharing Obeo’s story in our latest member case study.


Obeo specializes in open source solutions to create and transform complex industrial systems, IT applications, and corporate digital operations. Open source software has been key to Obeo’s product strategy since the company was created in 2005 in France. Today, Obeo employs dozens of people, partners with major industry players, and has a global customer base.


Obeo CEO, Cédric Brun, says the reputation the company built by participating in the Eclipse Foundation has been key to the company’s growth, particularly its growth outside of France. 


“Our involvement in the Eclipse Foundation helped us to be recognized in our technical niche in just a few years,” Brun says.


In our member case study, Brun provides insight into why Obeo decided to upgrade to Strategic membership in the Eclipse Foundation, the advantages of being on the Board of Directors with global technology leaders, and the new opportunities and business relationships that have resulted.


Read the Obeo case study here.


To learn more about the benefits of Eclipse Foundation membership, visit our membership page.


Share Your Open Source Success Story


All Eclipse Foundation members are invited to share their experiences with vendor-neutral, open source collaboration at the Eclipse Foundation in a member case study. To get involved, email


If you missed our earlier member case studies, use the links below to download them:

·  Payara Services Gains an Equal Footing With Industry Leaders

·  Cedalo Finds New Opportunities in the Eclipse IoT Ecosystem


Next up: How itemis has built its automotive industry business through Eclipse Foundation membership.


Join Us for the Entrepreneurial Open Source Workshop


Obeo CEO Cédric Brun and Cedalo CEO Philipp Struss will be joined by Gael Blondelle, our VP of Ecosystem Development, for the Entrepreneurial Open Source Workshop on January 14. Attend the event to hear about real-world lessons learned on the power and value of open source collaboration in entrepreneurship and building businesses.


by Thabang Mashologu at January 08, 2021 02:00 PM

ECF 3.14.19 released - simplify remote service discovery via properties

by Scott Lewis at January 08, 2021 12:21 AM

ECF 3.14.19 has been released.

Along with the usual bug fixes, this release includes new documentation on the use of properties for discovering and importing remote services. The docs describe the use of properties files for simplifying the import of remote services.

This capability is especially useful for Eclipse RCP clients accessing Jax-RS/REST remote services.

Patrick Paulin describes a production usage in his blog posting here.

by Scott Lewis at January 08, 2021 12:21 AM

The next 5 years for Eclipse Collections

by Donald Raab at January 06, 2021 11:20 PM

My top 25 wish list for the future of Eclipse Collections development

9 Years OSS, 5 years at Eclipse Foundation

Eclipse Collections has existed as an open source project on GitHub for a total of 9 years. Eclipse Collections has been a project at the Eclipse Foundation for 5 years. There have been 4 major versions of Eclipse Collections released, and there were 7 major versions of GS Collections prior to that.

The open source community has done a lot of work on this amazing library, and I would like to thank everyone who has contributed and continues to contribute their time, spirit and code. There is plenty more that can be done to evolve the library, and it will continue to be work done by the community for the community. This makes it kind of hard to predict anything, but I can easily make a wish list.

More of this, Less of that

It’s always nice to consider what new features should be added to a library, but in order to move forward we also have to pay down the technical debt.

Here’s my wish list for Eclipse Collections for the next 5 years.

Technical Debt

  • Combine Unit Tests and Unit Tests Java 8 modules
  • Replace Scala Unit Tests with Java Equivalent Tests
  • Remove JMH Scala Tests
  • Refresh and update the Eclipse Collections Reference Guide
  • Improve JavaDoc
  • Move slow running unit tests to acceptance tests
  • Replace legacy anonymous inner classes with lambdas and method references
  • Clean up performance tests

Java Upgrades

  • Upgrade library to Java 11 in next three years
  • Leverage Java Module System fully
  • Leverage Local Variable Type Inference sparingly to improve readability
  • Test and integrate with Project Valhalla features

New Containers

  • DataFrames
  • Trees
  • More primitive collection types
  • Lazy Collections (specific types like LazyList)
  • Off-heap Collections
  • Persistent Collections (Functional)

New APIs

  • Improve symmetry between object and primitive APIs
  • More converter methods between types
  • Implement more Parallel APIs

Performance Tuning

  • Optimize APIs from JDK

IDE Refactoring Support

  • Refactor from Java Streams to Eclipse Collections
  • Refactor from for-loops to Eclipse Collections
  • Refactor from Object collections to Primitive Collections

Growing the community

I would love to see more projects using and benefitting from the engineering investment we have collectively made in Eclipse Collections. Here’s my top 10 Project list that I would like to see include the goodness of Eclipse Collections.

The future is up to you, the contributor

Is there something missing from my wish list that you would like to see in Eclipse Collections? You have the power to impact and influence with your voice and your keyboard. Eclipse Collections is open for contributions, and we’d love to have more contributors helping shape the future of the library.

Have a Happy, Safe and Healthy New Year!

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.

by Donald Raab at January 06, 2021 11:20 PM

JBoss Tools 4.18.0.AM1 for Eclipse 2020-09

by jeffmaury at December 23, 2020 10:10 PM

Happy to announce 4.18.0.AM1 (Developer Milestone 1) build for Eclipse 2020-09.

Downloads available at JBoss Tools 4.18.0 AM1.

What is New?

Full info is at this page. Some highlights are below.


The New Quarkus project wizard now supports codestarts: a new codestart option allows extensions that support this feature to contribute sample code to the generated project. It is enabled by default and is accessible from the second step in the wizard:



Devfile based deployments

The Application Explorer view is now based on odo 2.x, which allows deployments to be based on devfiles (developer-oriented manifest files). The components from the default odo registry are listed along with the legacy S2I components:


It is also now possible to bootstrap from an empty project as the components from the registry may expose starter projects (sample code that will initialize your empty project).


Hibernate Tools

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.25.Final and Hibernate Tools version 5.4.25.Final.

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.20.Final and Hibernate Tools version 5.3.20.Final.

Server Tools

Wildfly 22 Server Adapter

A server adapter has been added to work with Wildfly 22.

CDI Tools

Eclipse Microprofile support

CDI Tools now support Eclipse Microprofile. Eclipse Microprofile related assets are checked against @Inject injections points and are validated according to rules specified in various Eclipse Microprofile specifications.

Forge Tools

Forge Runtime updated to 3.9.8.Final

The included Forge runtime is now 3.9.8.Final.


Jeff Maury

by jeffmaury at December 23, 2020 10:10 PM

2021 Predictions: Open Source Cloud Development Tools as the New Standard

by Brian King at December 22, 2020 01:35 PM

During 2020 we began to witness the coming of age of open source Cloud Development Tool technologies and increased adoption of those technologies, along with the acceleration of remote workplace practices. My prediction is that 2021 will be a tipping point for Open Source Cloud IDEs and associated technologies.

Innovation happens around the edges, and that certainly has been, and continues to be, the case as more tooling has started moving to the cloud in recent years. Gitpod, an Eclipse Theia adopter, has been around for more than two years, adding value to their product to make developers' lives easier. With native GitLab integration announced recently, automated dev environments for common daily coding tasks are now available to more and more developers.

On the other hand, enterprise tooling offerings have been quick to adopt open source cloud-based technologies to advance their own innovations. RedHat’s Codeready Workspaces, SAP’s Business Application Studio, and Broadcom’s Che4Z (all based on Eclipse open source projects) are just a few examples. Open source cloud-based tools are revitalizing domains like Java and mainframe development and will continue to do so for other domains. Strong interest is being seen in domains such as embedded, modeling / diagramming, and workspace management, just to name a few.

In October, technologists working on cutting edge features and infrastructure gathered together at the IDE Summit. The goal was to start tackling some of the technical challenges that exist today and start to utilize some new technologies to make IDE tooling even more powerful. In the ECD Tools Working Group, we are in a privileged position to not only observe what is happening in this industry, but to also actively participate and shape future outcomes. Here are our top three predictions for 2021.

2021 will serve as the “tipping point” for cloud-based software development

A wholesale move to the cloud driven by the era of COVID-19 and remote work, combined with the increased adoption of cloud-based tools like Eclipse Theia and the upcoming release of GitHub Codespaces, accelerates the trend toward cloud-based development tools. Traditional tools will have a long tail, but the point of no return has been reached. No one is going back to on-premises solutions.

Enterprise DevOps Teams will adopt a hybrid environment, with a mix of open source and proprietary solutions 

Historically, companies have been “all in” on either proprietary solutions or, not wanting to be locked in, they have built their own and used open source solutions. While there are options to satisfy both those approaches, a trend we have been noticing more is a hybrid of the two. For enterprises, part of this is explained by going where the momentum is, and part by wanting to get the right tools to their developers in the right place at the right time. A great example is the extension ecosystem, where VS Code extensions can now be used not only in VS Code, but in multiple products that support the Open VSX Registry. Open source innovation allows teams to pick and choose what works best for their specific needs.

Cloud development tools will breathe new life into “legacy” domains 

Cloud development tools will continue to drive a renewed interest in extending the life of infrastructure running older architectures such as mainframes running COBOL and other languages. Many teams are using cloud-based tools to train a younger generation of developers to maintain, and build on, this installed base. Even ubiquitous languages like Java are making a big comeback because of cloud tools.

by Brian King at December 22, 2020 01:35 PM

Release 5.6

December 22, 2020 12:00 AM

New version 5.6 has been released.

Release is of Type A


  • MongoDB Client API
  • MongoDB DAO API
  • MongoDB based persistence for Application Templates
  • MongoDB based full-stack Application Templates
  • Case Sensitive support for the Persistence Layer
  • Sidebar Navigation support for Entity Data Modeler


  • Case Sensitive support for Constraints
  • Missing Git commit info
  • SAP Kyma local database configuration
  • SAP Cloud Foundry related fixes
  • PostgreSQL related fixes
  • ActiveMQ version update to support PostgreSQL
  • Kafka parameters configurable via environment
  • Minor fixes


  • 57K+ Users
  • 82K+ Sessions
  • 186 Countries
  • 412 Repositories in DirigibleLabs



December 22, 2020 12:00 AM

A New Era for the Open VSX Registry

by Brian King at December 18, 2020 11:57 AM

Last week on December 9, we completed the transition of the public instance of the Open VSX Registry at to the Eclipse Foundation. I previously wrote about this, and the implications for you, especially if you publish extensions. This post is a followup with some reminders and further information.

The handover went smoothly, with very little disruption of service; and at the time of this writing, the site is running well with a healthy number of requests coming in. This would not have been possible without the hard work and months of preparation by the project team. Special mention and thanks to Sharon Corbett for project management on the Eclipse Foundation side; Miro Spönemann of TypeFox, who developed the bulk of the new features; Mikael Barbero for setting up the new infrastructure and integrating changes; and Christopher Guindon for extending the Eclipse authorization API. 

Publisher Requirements

This is an important reminder for extension publishers. If you are a publisher to the Open VSX Registry, there are two items that will require your attention if you have not already done so:

  • You will be required to accept the Eclipse Publisher Agreement

  • You will need to ensure that your extensions are published under a license

More details on how to do this can be found in the previous post; and I would also urge you to read the documentation for publishing extensions.

You have until January 8 to complete these steps, and if not done by that date then published extensions will be deactivated, which means they are no longer available from the UI and API. If you complete the requirements at a later date, the deactivated extensions will be reactivated.

Namespace Changes

Due to increasing security concerns by adopters of Open VSX, namespaces can no longer be public. Starting Dec. 17 2020, only members of a namespace have the authority to publish.

This change has the following consequences:

  • When someone creates a namespace, they automatically become a contributor of that namespace.
  • Extensions are shown as verified in the UI if the publishing user is a member of the namespace and the namespace has at least one owner. Otherwise the extensions are shown as unverified with a warning icon and an explanatory banner.
  • Namespaces with no members are considered as orphaned (previously they were public).
  • All previous publishers to an orphaned namespace have been added as contributors of that namespace.
  • Orphaned namespaces with no published extensions have been deleted.

This change does not affect the publishing process if you create the namespace yourself. For more information on namespaces, refer to the Namespace Access documentation page.

Issue and Requests

The source and configuration of was moved to the Eclipse Foundation organization on Github and is now at Issues about availability of the website, namespace requests and deletion requests should be reported at that repository.

Legal questions, complaints and reports of abuse should be sent to

Get Involved

Here are some ways to get involved in the Open VSX project right now:

Today, there’s growing momentum around open source tools and technologies that support Visual Studio (VS) Code extensions. Leading global organizations are adopting these tools and technologies. This momentum has spurred demand for a marketplace without restrictions and limitations. Thanks for joining us on the journey as we build this out, and we look forward to continued innovation from you in 2021.

by Brian King at December 18, 2020 11:57 AM

WTP 3.20 Released!

December 16, 2020 08:55 PM

The Eclipse Web Tools Platform 3.20 has been released! Installation and updates can be performed using the Eclipse IDE 2020-12 Update Site or through the Eclipse Marketplace. Release 3.20 is included in the 2020-12 Eclipse IDE for Enterprise Java Developers, with selected portions also included in several other packages. Adopters can download the R3.20 p2 repository directly and combine it with the necessary dependencies.

More news

December 16, 2020 08:55 PM

2020 - What a year for Eclipse Dirigible!

by Nedelcho Delchev at December 16, 2020 12:00 AM

It has been a challenging, but at the same time an incredible year for Eclipse Dirigible in terms of progress, contribution and adoption.

2020 in numbers:

  • 12 releases, from 4.2 to 5.6, including one major release (5.0)
  • 138 issues fixed
  • 11 blog posts
  • 10 new APIs - Content, Template Engine, Execute, Lifecycle, SOAP, Websocket, Kafka Consumer, Kafka Producer, MongoDB Client, MongoDB DAO
  • 11K+ users from 143 countries in 2020
  • 412 repositories in DirigibleLabs to date

Notable new features


  • Terminal replaced with xterm.js
  • Monaco (VSCode Editor) set as default editor
  • GraalVM engine introduced and set as default
  • Debug View replaced with Chrome Dev Tools
  • Git functionality re-architecture


Conferences & Social Media

Derivative work

Stay safe and healthy!

See you all in 2021! 🥳

by Nedelcho Delchev at December 16, 2020 12:00 AM

Announcing Eclipse Ditto Release 1.5.0

December 10, 2020 12:00 AM

Wrapping up this crazy year, the Ditto team is happy to announce the next feature update of Ditto 1.x: Eclipse Ditto 1.5.0

1.5.0 focuses on:

  • Desired properties management (CRUD)
  • Addition of “cloudevents” HTTP endpoint
  • Ditto internal pub/sub supports using a “grouping” concept which improves Ditto’s scalability capabilities
  • Issuing “weak Acknowledgements” when a command requesting acks was filtered out by Ditto (improvement of “at least once” delivery scenarios)
  • Feature ID may be used in header mappings of connections

Please have a look at the 1.5.0 release notes for more detailed information on the release.


The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

Also the Ditto Java client’s artifacts were published to Maven central.

The Docker images have been pushed to Docker Hub:

Kubernetes ready: Helm chart

In order to run Eclipse Ditto in a Kubernetes environment, it is best to rely on the official Helm chart and deploy Ditto via the Helm package manager.


The Eclipse Ditto team

December 10, 2020 12:00 AM

Using properties to simplify discovery of OSGi Remote Services

by Scott Lewis ( at December 09, 2020 05:38 AM

OSGi Remote Services are discovered by ECF's Remote Services implementation in two ways:  

1. Via a network discovery protocol provider such as:  Zeroconf, jSLP, etcd, Zookeeper, or some custom protocol

2. Via an xml format known as an Endpoint Description Extender Format (EDEF)

 The EDEF format is specified by the OSGi Remote Service Admin specification.   

When importing an EDEF-defined remote service, it's typically necessary to construct the entire EDEF file 'by hand' rather than having the EDEF generated automatically. This can be quite complicated: some properties are required, others are optional, and it's not obvious what all of the values must be for a successful import.

A new capability has been added to ECF's Remote Service Admin implementation that allows EDEF Properties to be used with the EDEF, thus simplifying the creation of remote service consumers that use EDEF for import.

This capability was added to support the usage of JaxRS Remote Services in an Eclipse RCP client.  See a description of this use case here.

by Scott Lewis ( at December 09, 2020 05:38 AM

LiClipse 7.1.0 released (improved Dark theme, LiClipseText and PyDev updates)

by Fabio Zadrozny ( at December 08, 2020 07:52 PM

I'm happy to announce that LiClipse 7.1.0 is now available for download.

LiClipse is now based on Eclipse 4.17 (2020-09); one really nice feature is that this now enables dark scrollbars for trees on Windows.

I think an image may be worth a thousand words here, so below is a screenshot showing what the LiClipse Dark theme looks like (on Windows) with the changes!

This release also updates PyDev to 8.1.0, which provides support for Python 3.9 as well as quick-fixes to convert strings to f-strings, among many other things (see: for more details).

Another upgraded dependency is LiClipseText 2.2.0, which now provides grammars to support TypeScript, RobotFramework and JSON by default.

by Fabio Zadrozny ( at December 08, 2020 07:52 PM

ECA Validation Update for Gerrit

December 08, 2020 05:45 PM

We are planning to install a new version of our Gerrit ECA validation plugin this week in an effort to reduce errors when a contribution is validated.

With this update, we are moving our validation logic to our new ECA Validation API that we created for our new Gitlab instance.

We are planning to push these changes live on Wednesday, December 9 at 16:00 GMT, though there is no planned downtime associated with this update.

Our plan is to revert back to a previous version of the plugin if we detect any anomalies after deploying this change.

Please note that we are also planning to apply these changes to our GitHub ECA validation app in Q1 of 2021. You can expect more news about this in the new year!

For those interested, the code for the API and the plugin are open-source and can be seen at git-eca-rest-api and gerrit-eca-plugin.

Please use our GitHub issue to discuss any concerns you might have with this change.

December 08, 2020 05:45 PM

Become an Eclipse Technology Adopter

December 04, 2020 05:50 PM

Did you know that organizations — whether they are members of the Eclipse Foundation or not — can be listed as Eclipse technology adopters?

In November 2019, the Eclipse IoT working group launched a campaign to promote adopters of Eclipse IoT technologies. Since then, more than 60 organizations have shown their support for various Eclipse IoT projects.

With that success in mind, we decided to build a new API service responsible for managing adopters for all our projects.

If needed, this new service will allow us to create an Adopters page for each of our working groups. This is something that we are currently working on for Eclipse Cloud Development Tools. Organizations that wish to be listed on this new page can submit their request today by following our instructions.

On top of that, every Eclipse project can now leverage our JavaScript plugin to display logos of adopters without committing them in their website git repository.

As an example, you can check out the Eclipse Ditto website.

What Is Changing?

We are migrating logos and related metadata to a new repository. This means that adopters of Eclipse IoT technologies will be asked to submit their request to this new repository. This change is expected to occur on December 10, 2020.

We plan on updating our documentation to point new users to this new repository. If an issue is created in the wrong repository, we will simply move it to the right location.

The process is very similar with this new repository but we did make some improvements:

  1. The path where we store logos is changing.
  2. The file format is changing from .yml to .json to reduce user errors.
  3. The structure of the file was modified to make it easier for an organization to adopt multiple projects.

We expect this change to go uninterrupted to our users. The content of the Eclipse IoT Adopters page won’t change and the JavaScript widget hosted on will continue to work as is.

Please create an issue if you have any questions or concerns regarding this migration.

How Can My Organization Be Listed as an Adopter of Eclipse Technology?

The preferred way to become an adopter is with a pull request:

  1. Add a colored and a white organization logo to static/assets/images/adoptors. We expect logos to be submitted as .svg and they must have a transparent background. The file size should be less than 20kb since we are planning to use them on the web!
  2. Update the adopter JSON file: config/adopters.json. Organizations can be easily marked as having multiple adopted projects across different working groups, no need to create separate entries for different projects or working groups!

The alternative way to become an adopter is to submit an issue with your logo and the project name that your organization has adopted.

How Can We List Adopters on Our Project Website?

We built a JavaScript plugin to make this process easier.


Include our plugin in your page:

<script src="//"></script>

Load the plugin:

project_id: "[project_id]"

Create an HTML element containing the chosen selector:

<div class="eclipsefdn-adopters"></div>
  • By default, the selector’s value is eclipsefdn-adopters.


{
  project_id: "[project_id]",
  selector: ".eclipsefdn-adopters",
  ul_classes: "list-inline",
  logo_white: false
}
Attribute  | Type    | Default              | Description
-----------|---------|----------------------|---------------------------------------------------------------
project_id | String  | (required)           | Select adopters from a specific project ID.
selector   | String  | .eclipsefdn-adopters | Define the selector that the plugin will insert adopters into.
ul_classes | String  | (none)               | Define classes that will be assigned to the ul element.
logo_white | Boolean | false                | Whether or not to use the white version of the logo.

For more information, please refer to our documentation.

A huge thank you to Martin Lowe for all his contributions to this project! His hard work and dedication were crucial for getting this project done on time!

December 04, 2020 05:50 PM

Add Checkstyle support to Eclipse, Maven, and Jenkins

by Christian Pontesegger ( at December 02, 2020 08:52 AM

After PMD and SpotBugs we will have a look at Checkstyle integration into the IDE and our maven builds. Parts of this tutorial are already covered by Lars' tutorial on Using the Checkstyle Eclipse plug-in.

Step 1: Add Eclipse IDE Support

First install the Checkstyle Plugin via the Eclipse Marketplace. Before we enable the checker, we need to define a ruleset to run against. As in the previous tutorials, we will set up project-specific rules backed by one ruleset that can also be used by maven later on.

Create a new file for your rules in <yourProject>.releng/checkstyle/checkstyle_rules.xml. If you are familiar with writing rules just add them. In case you are new, you might want to start with one of the default rulesets of checkstyle.
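As a hedged starting point, a minimal ruleset could look like the following sketch (the DTD header is the one used by current Checkstyle versions, and the chosen checks are just illustrative examples):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
<!-- minimal example ruleset; replace the checks with your own selection -->
<module name="Checker">
    <!-- file-level check -->
    <module name="FileLength"/>

    <!-- AST-based checks run inside the TreeWalker -->
    <module name="TreeWalker">
        <module name="UnusedImports"/>
        <module name="EmptyBlock"/>
        <module name="ConstantName"/>
    </module>
</module>
```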

Once we have some rules, we need to add them to our projects. Therefore right click on a project and select Checkstyle/Activate Checkstyle. This will add the project nature and a builder. To make use of our common ruleset, create a file <project>/.checkstyle with the following content.

<?xml version="1.0" encoding="UTF-8"?>

<fileset-config file-format-version="1.2.0" simple-config="false" sync-formatter="false">
	<local-check-config name="Skills Checkstyle" location="/yourProject.releng/checkstyle/checkstyle_rules.xml" type="project" description="">
		<additional-data name="protect-config-file" value="false"/>
	</local-check-config>
	<fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
		<file-match-pattern match-pattern=".java$" include-pattern="true"/>
	</fileset>
</fileset-config>

Make sure to adapt the name and location attributes of local-check-config according to your project structure.

Checkstyle will now run automatically on builds or can be triggered manually via the context menu: Checkstyle/Check Code with Checkstyle.

Step 2: Modifying Rules

While we had to do our setup manually, we can now use the UI integration to adapt our rules. Select the Properties context entry from a project and navigate to Checkstyle, page Local Check Configurations. There select your ruleset and click Configure... The following dialog allows you to add/remove rules and to change rule properties. All your changes are backed by the checkstyle_rules.xml file we created earlier.

Step 3: Maven Integration

We need to add the Maven Checkstyle Plugin to our build. Therefore add the following section to your master pom:


<!-- enable checkstyle code analysis -->
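The plugin section itself did not survive in this excerpt; a minimal sketch could look like this (the plugin version and the relative path to the ruleset are assumptions you need to adjust):

```xml
<plugin>
            <!-- the ruleset shared with the Eclipse plugin -->
            <!-- run the analysis during the verify phase -->
```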


In the configuration we address the ruleset we also use for the IDE plugin. Make sure that the relative path fits to your project setup. In the provided setup execution is bound to the verify phase.

Step 4: File Exclusions

Excluding files has to be handled differently for the IDE and Maven. The Eclipse plugin allows you to define inclusions and exclusions via file-match-pattern entries in the .checkstyle configuration file. To exclude a certain package use:

  <fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
    <file-match-pattern match-pattern=".java$" include-pattern="true"/>
    <file-match-pattern match-pattern="org.yourproject.generated.package.*$" include-pattern="false"/>
  </fileset>

In maven we need to add exclusions via the plugin configuration section. Typically such exclusions would go into the pom of a specific project and not the master pom:

<!-- remove generated resources from checkstyle code analysis -->
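A sketch of such a per-project exclusion (the package pattern is illustrative; the plugin's excludes parameter takes comma-separated path patterns):

```xml
<plugin>
        <!-- comma-separated patterns of files to skip -->
```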


Step 5: Jenkins Integration

If you followed my previous tutorials on code checkers, then this is business as usual: use the warnings-ng plugin on Jenkins to track our findings:

	recordIssues tools: [checkStyle()]

Try out the live chart on the skills project.

by Christian Pontesegger ( at December 02, 2020 08:52 AM

Jakarta EE Community Update November / December 2020

by Tanja Obradovic at December 01, 2020 08:14 PM

This month is certainly focused on celebrations all around! We are super excited about the Jakarta EE 9 release and upcoming JakartaOne Livestream 2020 event! Register Now and join us on December 8th to learn more about Jakarta EE 9 and find out what’s coming in 2021.


Welcome to Jakarta EE 9

As indicated previously, on November 20th the Jakarta EE 9 specifications were completed! This release will be the base for any future development and innovation for Cloud Native Java. Congratulations to the Jakarta EE Working Group and the broader community that made this release happen! Special thanks go to Kevin Sutter (IBM), along with Steve Milledge (Payara), who were leading the effort on this release.

Please check out the links below.

Platform Spec:

Web Profile Spec:

Jakarta EE Specs Home:

The first compatible product for the Jakarta EE 9 release is Eclipse GlassFish v6-RC2, but we are expecting many others to follow, and we'll publish them on the Compatible Products page.


Encourage All to Adopt New jakarta.* Namespace

If you have migrated your custom application to Jakarta EE 9 and adopted the new jakarta.* namespace, or you are a vendor that is adopting the new jakarta.* namespace, please let us know. You can tweet about it using #embraceJakarta and we'll be happy to promote you!

To help vendors make the transition, the Jakarta EE community has developed a data sheet summarizing the namespace migration challenge and opportunity. You can download the datasheet here.


Register for JakartaOne Livestream Today

Please visit for more information on the Program, speakers and all Studio Jakarta EE sessions with Ivar Grimstad, me (Tanja Obradovic) and special guests!

Register today to reserve your spot. JakartaOne Livestream is a great way to learn more about the technical benefits and architectural advances that become possible with cloud native Java, Jakarta EE, Eclipse MicroProfile, and Java EE technologies.

Remember this is not just a technical virtual conference, but a celebration of the community achievements and the collective work and effort delivering the Jakarta EE 9 release! We celebrated the milestone release with a cupcake; for this one I will bake a cake, and I surely hope you will do the same. If you do, please send us pictures so we can share them with the community! Let's see whose cake will win the best Jakarta EE 9 cake title! Here is something that can help you decorate your cake.

For live event updates, speaker announcements, news, and more, follow @JakartaOneConf on Twitter.


Book Your Jakarta EE Virtual Tour

Jakarta EE Developer Advocate, Ivar Grimstad, and I (Tanja Obradovic) are continuing with Jakarta EE Virtual Tour, providing one-hour talks on Jakarta EE 9 and beyond to Java communities.

 January 2021 is already quite busy with the scheduled sessions, but don’t hesitate to contact me ( if you’d like us to present at your Java User Group (JUG) or Meetup event throughout 2021.



Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:

·  Social media: Twitter, Facebook, LinkedIn Group

·  Mailing lists:,, project mailing lists, slack workspace

·  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, Hashtag Jakarta EE

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

You can find the complete list of channels here.

To help shape the future of open source, cloud native Java, get involved in the Jakarta EE Working Group.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.


by Tanja Obradovic at December 01, 2020 08:14 PM

EMF JSON mapper at!

by Jonas Helming and Maximilian Koegel at November 30, 2020 12:27 PM

Do you want to convert EMF model instances into JSON or vice versa? Do you want to make EMF data available...

The post EMF JSON mapper at! appeared first on EclipseSource.

by Jonas Helming and Maximilian Koegel at November 30, 2020 12:27 PM

Add SpotBugs support to Eclipse, Maven, and Jenkins

by Christian Pontesegger ( at November 24, 2020 06:01 PM

SpotBugs (successor of FindBugs) is a tool for static code analysis, similar to PMD. Both tools help to detect bad code constructs which might need improvement. As they partly detect different issues, they may well be combined and used simultaneously.

Step 1: Add Eclipse IDE Support

The SpotBugs Eclipse Plugin can be installed directly via the Eclipse Marketplace.

After installation, projects can be configured to use it from the project's Properties context menu. Navigate to the SpotBugs category and enable all checkboxes on the main page. Further, set Minimum rank to report to 20 and Minimum confidence to report to Low.

Once done, SpotBugs immediately scans the project for problems. Found issues are displayed as custom markers in editors. They are also visible in the Bug Explorer view as well as in the Problems view.

SpotBugs also comes with label decorations on elements in the Package Explorer. If you do not like these, disable all Bug count decorator entries in Preferences/General/Appearance/Label Decorations.

Step 2: Maven Integration

Integration is done via the SpotBugs Maven Plugin. To enable it, add the following section to your master pom:


<!-- enable spotbugs code analysis -->
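The plugin section was stripped from this excerpt; a hedged sketch (the version number is an assumption, check for the latest release):

```xml
<plugin>
    <!-- scan thoroughly and report even low-confidence findings -->
            <!-- run the spotbugs goal during the verify phase -->
```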



The execution entry takes care that the spotbugs goal is automatically executed during the verify phase. If you remove the execution section, you have to call the spotbugs goal separately:

mvn spotbugs:spotbugs

Step 3: File Exclusions

You might have code that you do not want to get checked (e.g. generated files). Exclusions need to be defined in an XML file. A simple filter on package level looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<FindBugsFilter>
	<!-- skip EMF generated packages -->
	<Match>
		<Package name="~org\.eclipse\.skills\.model.*" />
	</Match>
</FindBugsFilter>

See the documentation for a full description of filter definitions.

Once defined, this file can be used from the SpotBugs Eclipse plugin as well as from the maven setup.

To simplify the maven configuration we can add the following profile to our master pom:

<!-- apply filter when filter file exists -->

<!-- enable spotbugs exclude filter -->
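A sketch of such a profile, assuming the filter file is located at .settings/spotbugs-exclude.xml as mentioned in the text (Maven's file-based profile activation does the check):

```xml
<!-- apply filter when filter file exists -->
        <!-- activate only when the exclusion file is present -->
            <!-- enable spotbugs exclude filter -->
```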


It gets enabled automatically when a file .settings/spotbugs-exclude.xml exists in the current project.

Step 4: Jenkins Integration

Like with PMD, we again use the warnings-ng plugin on Jenkins to track our findings:

	recordIssues tools: [spotBugs(useRankAsPriority: true)]

Try out the live chart on the skills project.

Final Thoughts

PMD is smoother on integration as it stores its rulesets in a common file which can be shared by maven and the Eclipse plugin. SpotBugs currently requires rulesets to be managed separately. Still, both can be implemented in a way that users automatically get the same warnings in maven and the IDE.

by Christian Pontesegger ( at November 24, 2020 06:01 PM

My main update site moved

by Andrey Loskutov ( at November 23, 2020 08:51 AM

My host provider GMX decided that the free hosting they offered for over a decade no longer fits their portfolio (for some security reasons) and simply switched my domain off.


... for security reasons, we regularly modernize our product portfolio.
In the course of this, we would like to inform you that we are terminating your webspace with your subdomain name effective 19.11.2020.

Because of that, the Eclipse update site for all my plugins has now moved:



In the same way, my "home" page has moved to

(Github obviously has no issues with free hosting).

That means anyone who used to have my main update site in scripts / Oomph setups has to change them to use the new URL instead.

I'm sorry for that, but that is nothing I could change.

by Andrey Loskutov ( at November 23, 2020 08:51 AM

Weak acknowledgments to decouple signal publishers and subscribers

November 16, 2020 12:00 AM


Ditto 1.2.0 introduced at-least-once delivery via acknowledgement requests.
It increased coupling between the publisher and the subscriber of signals in that the subscriber is no longer at liberty to filter for the signals it is interested in. Instead, the subscriber must consume all signals in order to fulfill acknowledgement requests and prevent endless redelivery.

To combat the problem, Ditto 1.4.0 made acknowledgement labels unique and introduced the requirement for subscribers to declare their acknowledgement labels, thereby identifying each subscriber.
It is now possible for Ditto to issue weak acknowledgements on behalf of the subscriber whenever it decides to not consume a signal. That allows subscribers to configure RQL and namespace filters freely without causing any futile redelivery.

Note: Weak acknowledgements are available since Ditto 1.5.0.

What it is

A weak acknowledgement is issued by Ditto for any acknowledgement request that will not be fulfilled now or ever without configuration change.
A weak acknowledgement is identified by the header ditto-weak-ack: true.

The status code of weak acknowledgements is 200 OK; it signifies that no redelivery is to be made on their account.

A weak acknowledgement may look like this in Ditto protocol:

{
  "topic": "com.acme/xdk_53/things/twin/acks/my-mqtt-connection:my-mqtt-topic",
  "headers": {
    "ditto-weak-ack": true
  },
  "path": "/",
  "value": "Acknowledgement was issued automatically, because the subscriber is not authorized to receive the signal.",
  "status": 200
}

How it works

Since Ditto 1.4.0, subscribers of twin events or live signals are required to declare unique acknowledgement labels they are allowed to send. The labels of acknowledgement requests then identify the intended subscribers.
If an intended subscriber exists but does not receive the signal for non-transient reasons, Ditto issues a weak acknowledgement for that subscriber.
Such reasons may be:

  • The intended subscriber is not authorized to receive the signal by policy;
  • The intended subscriber did not subscribe for the signal type (twin event, live command, live event or live message);
  • The intended subscriber filtered the signal out by its namespace or RQL filter;
  • The intended subscriber dropped the signal because its payload mapper produced nothing.


The distributed nature of cluster pub/sub means that weak acknowledgements are not always issued correctly.
They are only eventually correct in the sense that some time after a change to the publisher-subscriber pair, the issued weak acknowledgements will reflect the change.
Such changes include:

  • Opening and closing of Websocket or other connections acting as the subscriber;
  • Subscribing and unsubscribing for different signal types via Websocket;
  • Modification of connections via the connectivity API;
  • Migration of a connection from one Ditto cluster member to another due to load balancing.


Please get in touch if you have feedback or questions towards this new concept of weak acknowledgements.


The Eclipse Ditto team

November 16, 2020 12:00 AM

What’s new in Fabric8 Kubernetes Java client 4.12.0

by Rohan Kumar at October 30, 2020 07:00 AM

The recent Fabric8 Kubernetes Java client 4.12.0 release includes many new features and bug fixes. This article introduces the major features we’ve added between the 4.11.0 and 4.12.0 releases.

I will show you how to get started with the new VolumeSnapshot extension, CertificateSigningRequests, and Tekton triggers in the Fabric8 Tekton client (to name just a few). I’ll also point out several minor changes that break backward compatibility with older releases. Knowing about these changes will help you avoid problems when you upgrade to the latest version of Fabric8’s Java client for Kubernetes or Red Hat OpenShift.

How to get the new Fabric8 Java client

You will find the most current Fabric8 Java client release on Maven Central. To start using the new Java client, add it as a dependency in your Maven pom.xml. For Kubernetes, the dependency is:


For OpenShift, it’s:
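The dependency snippets were lost in this excerpt; they look roughly like this sketch (coordinates io.fabric8:kubernetes-client and io.fabric8:openshift-client; the version is taken from the article title):

```xml
<!-- Kubernetes -->

<!-- OpenShift -->
```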


Breaking changes in this release

We have moved several classes for this release, so upgrading to the new version of the Fabric8 Kubernetes Java client might not be completely smooth. The changes are as follows:

  • We moved the CustomResourceDefinition to io.fabric8.kubernetes.api.model.apiextensions.v1 and io.fabric8.kubernetes.api.model.apiextensions.v1beta1.
  • We moved SubjectAccessReview, SelfSubjectAccessReview, LocalSubjectAccessReview, and SelfSubjectRulesReview to io.fabric8.kubernetes.api.model.authorization.v1 and io.fabric8.kubernetes.api.model.authorization.v1beta1.
  • The io.fabric8.tekton.pipeline.v1beta1.WorkspacePipelineDeclaration is now io.fabric8.tekton.pipeline.v1beta1.PipelineWorkspaceDeclaration.
  • We introduced a new interface, WatchAndWaitable, which is used by WatchListDeletable and other interfaces. This change should not affect you if you are using the Fabric8 Kubernetes Java client’s domain-specific language (DSL).

The new VolumeSnapshot extension

You might know about the Fabric8 Kubernetes Java client extensions for Knative, Tekton, Istio, and Service Catalog. In this release, we've added a new Container Storage Interface (CSI) VolumeSnapshot extension. To start using the new extension, add the following dependency to your Maven pom.xml:


Once you’ve added the dependency, you can start using the VolumeSnapshotClient. Here’s an example of how to create a VolumeSnapshot:

try (VolumeSnapshotClient client = new DefaultVolumeSnapshotClient()) {
    System.out.println("Creating a volume snapshot");
    // ... (the rest of the snippet was truncated in this excerpt)
}

Spin up a single pod with

Just like you would with kubectl run, you can quickly spin up a pod with the Fabric8 Kubernetes Java client. You only need to provide a name and image:

try (KubernetesClient client = new DefaultKubernetesClient()) {"default").withName("hello-openshift")
            // ... (image configuration and the terminating call were truncated in this excerpt)
}

Authentication API support

A new authentication API lets you use the Fabric8 Kubernetes Java client to query a Kubernetes cluster. You should be able to use the API for all operations equivalent to kubectl auth can-i. Here’s an example:

try (KubernetesClient client = new DefaultKubernetesClient()) {
    SelfSubjectAccessReview ssar = new SelfSubjectAccessReviewBuilder()
            // ... (the spec with the resource attributes was truncated in this excerpt)
            .build();

    ssar = client.authorization().v1().selfSubjectAccessReview().create(ssar);

    System.out.println("Allowed: " + ssar.getStatus().getAllowed());
}

OpenShift 4 resources

The Fabric8 Kubernetes Java client now supports all of the new OpenShift 4 resources in its OpenShift model. Additional resources added in,,, and are also available within the OpenShift model. Here is an example of using PrometheusRule to monitor a Prometheus instance:

try (OpenShiftClient client = new DefaultOpenShiftClient()) {
    PrometheusRule prometheusRule = new PrometheusRuleBuilder()
            // ... (the rule definition was truncated in this excerpt)
            .build();

    PrometheusRuleList prometheusRuleList = client.monitoring().prometheusRules().inNamespace("rokumar").list();
    System.out.println(prometheusRuleList.getItems().size() + " items found");
}

Certificate signing requests

We’ve added a new entry point, certificateSigningRequests(), in the main KubernetesClient interface. This means you can use CertificateSigningRequest resources in all of your applications developed with Fabric8:

try (KubernetesClient client = new DefaultKubernetesClient()) {

    CertificateSigningRequest csr = new CertificateSigningRequestBuilder()
            // ... (metadata and the request payload were truncated in this excerpt; the chain includes)
            .addNewUsage("client auth")
            // ...
            .build();
}
Custom resource definitions

We’ve moved the apiextensions/v1 CustomResourceDefinition (CRD) to the io.fabric8.kubernetes.api.model.apiextensions.v1beta1 and io.fabric8.kubernetes.api.model.apiextensions.v1 packages. You can now use CustomResourceDefinition objects inside apiextensions() like this:

try (KubernetesClient client = new DefaultKubernetesClient()) {
    client.apiextensions().v1().customResourceDefinitions().list()
            .getItems().forEach(crd -> System.out.println(crd.getMetadata().getName()));
}

Creating bootstrap project templates

We’ve provided a new, built-in way to create a project with all of the role bindings you need. It works like OpenShift’s oc adm create-bootstrap-project-template command. Specify the parameters that the template requires in the DSL method. The method then creates the Project and related RoleBindings for you:

try (OpenShiftClient client = new DefaultOpenShiftClient()) {
    client.projects().createProjectAndRoleBindings("default", "Rohan Kumar", "default", "developer", "developer");
}

Tekton model 0.15.1

We’ve updated the Tekton model to version 0.15.1 so that you can take advantage of all the newest upstream features and enhancements for Tekton. This example creates a simple Task and TaskRun to echo “hello world” in a pod. Instead of YAML, we use the Fabric8 TektonClient:

try (TektonClient tkn = new DefaultTektonClient()) {
    // Create Task
    // ... (the Task builder chain was truncated in this excerpt; it contains a step with)
    //         .withArgs("Hello World")

    // Create TaskRun
    // ... (the TaskRun builder chain was truncated in this excerpt)
}
When you run this code, you will see the Task and TaskRun being created. The TaskRun, in turn, creates a pod, which prints the “Hello World” message:

tekton-java-client-demo : $ tkn taskrun list
NAME                        STARTED         DURATION     STATUS
echo-hello-world-task-run   2 minutes ago   19 seconds   Succeeded
tekton-java-client-demo : $ kubectl get pods
NAME                                  READY   STATUS      RESTARTS   AGE
echo-hello-world-task-run-pod-4gczw   0/1     Completed   0          2m17s
tekton-java-client-demo : $ kubectl logs pod/echo-hello-world-task-run-pod-4gczw
Hello World

Tekton triggers in the Fabric8 Tekton client

The Fabric8 Tekton client and model now support Tekton triggers. You can use triggers to automate the creation of Tekton pipelines. All you have to do is embed your triggers in the Tekton continuous deployment (CD) pipeline. Here is an example of using the Fabric8 Tekton client to create a Tekton trigger template:

try (TektonClient tkn = new DefaultTektonClient()) {
    // ... (most of the TriggerTemplate builder chain was truncated in this excerpt;
    //      it declares parameters such as)
    //         .withDescription("The git repository url")
    //         .withDescription("The git revision")
    //         .withDescription("The message to print")
    //         .withDefault("This is default message")
    //         .withDescription("The Content-Type of the event")
    //      and embeds a PipelineRun resource template:
    //         .withResourcetemplates(Collections.singletonList(new PipelineRunBuilder()
    //                 .withValue(new ArrayOrString("$(tt.params.message)"))
    //                 .withValue(new ArrayOrString("$(tt.params.contenttype)"))
    //         ...
}

Automatically refresh OpenID Connect tokens

If your Kubernetes provider uses OpenID Connect tokens (like IBM Cloud), you don’t need to worry about your tokens expiring. The new Fabric8 Kubernetes Java client automatically refreshes your tokens by contacting the OpenID Connect provider listed in your ~/.kube/config.
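
As a sketch, assuming the standard kubectl OpenID Connect auth-provider layout, the kubeconfig entry the client reads looks like this (the issuer and client values are illustrative, not tied to any particular provider):

```yaml
# ~/.kube/config (excerpt): the client uses refresh-token against
# idp-issuer-url to obtain a new id-token when the current one expires
users:
- name: oidc-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://example.com/identity
        client-id: kube
        id-token: <current-id-token>
        refresh-token: <refresh-token>
```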

Support for Knative 0.17.2 and Knative Eventing Contrib

For this release, we’ve updated the Knative model to the latest version. We also added new support for the additional resources from Knative Eventing Contrib, which involves sources and channel implementations that integrate with Apache CouchDB, Apache Kafka, Amazon Simple Queue Service (AWS SQS), GitHub, GitLab, and so on.

Here’s an example of creating an AwsSqsSource using KnativeClient:

try (KnativeClient client = new DefaultKnativeClient()) {
    AwsSqsSource awsSqsSource = new AwsSqsSourceBuilder()
            .withNewMetadata().withName("awssqs-sample-source").endMetadata()
            .withNewSpec()
            .withQueueUrl("https://sqs.us-east-1.amazonaws.com/1234567890/test-queue")
            .withNewAwsCredsSecret("credentials", "aws-credentials", true)
            .withSink(new ObjectReferenceBuilder()
                    .withApiVersion("messaging.knative.dev/v1alpha1")
                    .withKind("Channel")
                    .withName("awssqs-test")
                    .build())
            .endSpec()
            .build();
    client.awsSqsSources().inNamespace("default").createOrReplace(awsSqsSource);
}

Get involved!

There are a few ways to get involved with the development of the Fabric8 Kubernetes Java client:


The post What’s new in Fabric8 Kubernetes Java client 4.12.0 appeared first on Red Hat Developer.

by Rohan Kumar at October 30, 2020 07:00 AM

Jakarta EE Community Update October 2020

by Tanja Obradovic at October 28, 2020 01:55 PM

This month’s Jakarta EE round-up includes news about the latest Jakarta EE 9-compatible product, Jakarta EE 9 specification status, JakartaOne Livestream and Jakarta EE Virtual Tour 2020 (and 2021!) dates, community calls, and more. Keep reading to get all the details.


Another Jakarta EE 8 Compatible Product

Great news! The FUJITSU Software Enterprise Application Platform has achieved full platform compatibility with Jakarta EE 8. That is the ninth Jakarta EE 8-compatible product!

For the complete list of Jakarta EE 8 platform and web profile compatible products, click here.


Encourage Developer Tool and Platform Providers to Migrate to the New Namespace

If you have a preferred developer tool vendor or a platform provider please consider asking them to migrate to the new Jakarta namespace so you can continue to use them with Jakarta EE 9 and beyond. Also, this is a great time to start planning migration of your enterprise applications to the new Jakarta namespace!

 To help vendors make the transition, the Jakarta EE community has developed a data sheet summarizing the namespace migration challenge and opportunity. You can download the data sheet here.


All but Five of the Jakarta EE 9 Specifications Are in Ballot!

As we get closer to the November 20 General Availability release date for Jakarta EE 9, here’s a summary of the latest status on specification approvals. Following the Jakarta EE Specification Process (JESP), we now have more than half of the specifications approved as Ratified Final Specifications, 8 specifications being voted on, and 4 specifications about to start the ballot. The only one still waiting is the final Jakarta EE Platform specification, expected to go to ballot at the end of next week. We are right on track for the November 20 release.

Completed: 57 percent (or 20 specifications)

Jakarta Concurrency
Jakarta Persistence
Jakarta Web Services Metadata
Jakarta Activation
Jakarta Bean Validation
Jakarta Dependency Injection
Jakarta Expression Language
Jakarta JSON Processing
Jakarta Servlet
Jakarta SOAP with Attachments
Jakarta Authentication
Jakarta Authorization
Jakarta Debugging Support for Other Languages
Jakarta JSON Binding
Jakarta Mail
Jakarta Contexts and Dependency Injection (CDI)
Jakarta XML Web Services Specification
Jakarta Batch
Jakarta Security
Jakarta Server Faces


In the ballot process: 26 percent or 8 specifications

Jakarta Messaging
Jakarta WebSocket
Jakarta Server Pages
Jakarta XML Binding
Jakarta RESTful Web Services
Jakarta Transactions
Jakarta Connectors
Jakarta Standard Tag Library

About to start the ballot process: 14 percent or 4 specifications

Jakarta Interceptors
Jakarta Enterprise Beans
Jakarta Enterprise Web Services
Jakarta EE Web Profile

Updates in progress: 3 percent or 1 specification

Jakarta EE Platform

The chart below provides a visual summary of our progress.



Register for JakartaOne Livestream Today

Be sure to reserve Tuesday, December 8, to attend JakartaOne Livestream. This year’s virtual event will include demos and interviews as well as a keynote address by Eclipse Foundation Executive Director, Mike Milinkovich.

The program committee is now reviewing the submitted papers — thanks to everyone who submitted — and you can expect to see the event details and program schedule in early November.

Register today to reserve your spot. JakartaOne Livestream is a great way to learn more about the technical benefits and architectural advances that become possible with cloud native Java, Jakarta EE, Eclipse MicroProfile, and Java EE technologies.

For live event updates, speaker announcements, news, and more, follow @JakartaOneConf on Twitter.


JakartaOne Livestream - Spanish: Watch the Replay

The JakartaOne Livestream Spanish event on October 12 was a huge success with 513 registered individuals so far. More than 300 people attended the live event and almost 200 more have watched the replay.

The event included a keynote address, vendor talks about compatible implementations by Red Hat, Tomitribe, Payara, and Oracle, and five technical talks.

To see the session topics, click here.

To watch the session replays, click here. Note that the sessions are delivered in Spanish.


Book Your Jakarta EE Virtual Tour

Jakarta EE Developer Advocate, Ivar Grimstad, and I (Tanja Obradovic) have started our Jakarta EE Virtual Tour, providing one-hour talks on Jakarta EE 9 and beyond to Java communities.

The current schedule for the Jakarta EE Virtual Tour is shown below, but there are still openings, and the tour will continue in 2021, so don’t hesitate to contact me if you’d like us to present at your Java User Group (JUG) or Meetup event.

Upcoming Events

Eclipse Foundation staff and community members will be participating in a number of upcoming events related to Java and Jakarta EE. Here are brief summaries to help you choose the ones you want to attend.

Java Community Online Conference (JCON) 2020

·  Speaker: Gaël Blondelle, Managing Director, Eclipse Foundation Europe GmbH, and Vice President, Ecosystem Development at the Eclipse Foundation

·  Topic: Cloud Native Java at the Eclipse Foundation - Not your parents' Eclipse!

·  Date: Thursday October 29, 2020, 15:00-16:00 CET

Cloud Native Development Panel Discussion Meetup

·  Speakers: Niklas Heidloff from IBM and Rudy De Busscher from Payara with me (Tanja Obradovic) as moderator

·  Topic: All things cloud native

·  Date: Tuesday October 27, 2020, 17:00-18:00 GMT+1

KubeCon + CloudNativeCon North America

·  Speakers: The Eclipse Foundation will host a virtual community booth with cloud native Java experts on hand, and community members will participate in the booth chat sessions, so be sure to visit us, meet community experts, and ask questions.

·  Topics: Live talks, demos, and Q&A sessions

·  Dates: November 17-20


Jakarta EE Community Calls

The Jakarta EE community hosted two calls in October. If you weren’t able to join the calls live, we’ve provided very brief summaries and links to the recordings below.

Jakarta EE Working Group Members’ Call

On October 6, the Jakarta EE Steering Committee hosted a call with Jakarta EE Working Group members to discuss the following topics:

·  Welcome Jakarta EE members: Will Lyons and David Blevins

·  Introduction of Jakarta EE Working Group committees: Will Lyons (Steering), Paul Buck (Specification), Neil Patterson (Marketing)

·  Jakarta EE to date: Will Lyons, Tanja Obradovic

·  Jakarta EE 9: Kevin Sutter

·  Opportunities and benefits for members: Tanja Obradovic

·  Jakarta EE Working Group experiences and how we can do better: Eric Meng, Ruslan Synytsky, Rob Tompkins, and others

Access the recording here.

Public Steering Committee Call

On October 13, Jakarta EE Steering Committee members provided the following updates during the J4K conference:

·  Jakarta EE 9: Kevin Sutter

·  JakartaOne Livestream 2020: Tanja Obradovic

·  Jakarta EE 10: Ivar Grimstad

Access the recording here.

Join Our Upcoming Calls

Jakarta EE community calls are open to everyone! For upcoming dates and connection details, see the Jakarta EE Community Calendar.

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations:

·  October call presentations

·  The complete playlist


Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:

·  Social media: Twitter, Facebook, LinkedIn Group

·  Mailing lists: the Jakarta EE community and working group mailing lists, plus project mailing lists

·  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

You can find the complete list of channels here.

To help shape the future of open source, cloud native Java, get involved in the Jakarta EE Working Group.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.


by Tanja Obradovic at October 28, 2020 01:55 PM

T-4:00 OCP is ready to go

October 17, 2020 10:00 AM

Atop the Sirius-II Mission’s rocket is the Obeo Cloud Platform (OCP), with two flavors – OCP Modeler and OCP Publication – safely strapped inside. The Obeo Cloud Platform is built as an open-core product relying on the open source Eclipse Sirius project (EPL 2.0 licence) and more precisely on the Sirius Web component. As I explained in my previous post, Sirius Web is a framework from Obeo for building cloud graphical modelers for a dedicated DSL.

OCP Publication

OCP Publication exposes models in a read-only mode for fast access from OSLC clients or web browsers. The first version of this product is ready to be deployed to our customers at the end of October.

One of the first use cases we developed, Publication for Capella, reunites two of our chosen fields: Capella and model servers. Publication for Capella provides a tight integration between OSLC-compliant ALM tools (Polarion, Doors Next…) and the MBSE workbench Capella. It enables fine-grained traceability between your requirements and your system design.

OCP Modeler

OCP Modeler is a unique technology for easily developing custom, state-of-the-art modeling tools deployed to the cloud. The Obeo Cloud Platform Modeler is a Sirius Web build extended with enterprise features, deployable on public or private clouds or on premise, and including support and upgrade guarantees. The Obeo Cloud Platform also provides collaborative features: authentication, live collaboration, webhooks, and more.

To get more details, attend the OCP Modeler’s launch on October 21 at 2:00 p.m. CET.

October 17, 2020 10:00 AM

Eclipse Che vs. VS Code (online|codespaces)

by Jonas Helming and Maximilian Koegel at October 14, 2020 12:39 PM

Have you heard about Eclipse Che and wonder how it compares to VS Code Online or “VS Code Codespaces”? What are...

The post Eclipse Che vs. VS Code (online|codespaces) appeared first on EclipseSource.

by Jonas Helming and Maximilian Koegel at October 14, 2020 12:39 PM

T-8:00 1st stage Sirius Web loading begins

October 13, 2020 10:00 AM

Source code is flowing into the first stage of the Obeo rocket. Our goal is to bring the spirit of Sirius into a new technological space: Sirius Web is the Cloud-based evolution of Sirius, 100% open source.

The Sirius Web engine combines the open source components EMF-JSON and Sirius Components. These components will be available under the Sirius project, with the source code on GitHub:

  • EMF-JSON is a small library to serialize EMF models to JSON.
  • The Sirius Components repository provides backend and frontend components.
  • The Sirius Web repository combines the open source Sirius components to provide a graphical modeler sample application.

How do you create a cloud-ready modeler based on Sirius Web?

  1. Define your metamodel thanks to EMF.
  2. Provide a Sirius configuration to specify the mapping between the different concepts of your DSL and how they should be represented graphically.
  3. Register the metamodel & the Sirius specification in the Sirius Web application.
  4. Build and launch the application!

As a result, you get a graphical modeler dedicated to your DSL rendered in your browser.

To get more details about Sirius Web and how to run it for your own DSL, attend Obeo’s rocket liftoff. Launch remains on schedule for October 21 at 2:00 p.m. CET.

October 13, 2020 10:00 AM

JBoss Tools and Red Hat CodeReady Studio for Eclipse 2020-09

by jeffmaury at October 13, 2020 09:57 AM

JBoss Tools 4.17.0 and Red Hat CodeReady Studio 12.17 for Eclipse 2020-09 are here waiting for you. Check it out!



Red Hat CodeReady Studio comes with everything pre-bundled in its installer. Simply download it from our Red Hat CodeReady product page and run it like this:

java -jar codereadystudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) CodeReady Studio require a bit more:

This release requires at least Eclipse 4.17 (2020-09), but we recommend using the latest Eclipse 4.17 (2020-09) JEE bundle, as you then get most of the dependencies preinstalled.

Java 11 is now required to run Red Hat CodeReady Studio or JBoss Tools (this is a requirement from Eclipse 4.17), so make sure to select a Java 11 JDK in the installer. You can still work with pre-Java 11 JDKs/JREs and projects in the tool.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat CodeReady Studio".

For JBoss Tools, you can also use our update site directly.

What is new?

Our main focus for this release was improved tooling for the Quarkus framework, improvements for container-based development, and bug fixing. Eclipse 2020-09 itself has a lot of new cool stuff, but let me highlight just a few updates in both Eclipse 2020-09 and the JBoss Tools plugins that I think are worth mentioning.


OpenShift Container Platform 4.6 support

With the new OpenShift Container Platform (OCP) 4.6 now available, JBoss Tools is compatible with this major release in a transparent way. Just define your connection to your OCP 4.6-based cluster as you did before for an OCP 3 cluster, and use the tooling!


Support for YAML configuration file

Quarkus supports configuration through the YAML format. For more information, see the Quarkus documentation.

In order to use it, follow these steps:

  • create a Quarkus project using the new Quarkus wizard

  • create a new application.yaml or application.yml next to the existing application.properties in src/main/resources

The editor will open and you will get content assist and syntax validation.
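
For illustration, a minimal application.yml of the kind you might create here (the property names are standard Quarkus configuration keys; the values are made up):

```yaml
# src/main/resources/application.yml
quarkus:
  http:
    port: 8081   # overrides the default 8080
  log:
    level: INFO
```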

Server Tools

WildFly 21 Server Adapter

A server adapter has been added to work with WildFly 21. It adds support for Java EE 8, Jakarta EE 8, and MicroProfile 3.3.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.21.Final and Hibernate Tools version 5.4.21.Final.

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.18.Final and Hibernate Tools version 5.3.18.Final.


Views, Dialogs and Toolbar

Adjustable view fonts

The font used for tree and table views can now be customized with a font preference. This preference is called "Tree and Table font for views" and can be found in Window > Preferences > General > Appearance > Colors and Fonts under the "View and Editor Folders" category.

adjustable view font preference

The Project Explorer is an example of a view that gets affected by this font preference.

adjustable view font
Remove gifs from views

Several years ago, the icons of the platform views were migrated to .png files. Because already-opened views keep a reference to the image, the .gif files were left in the code. These have now been removed. If you have been using the same workspace for multiple years and view icons are missing due to this removal, close and reopen the affected view.

Default changed for confirm on exit for last window

By default, Eclipse now closes if you select the close icon on the last window, without an additional confirmation dialog. If you want a confirmation dialog, you can enable it via Window > Preferences > General > Startup and Shutdown > Confirm exit when closing last window.

Workbench models created in releases before 2014 are not automatically converted

Workbench models (workbench.xmi) stored in workspaces created with releases before 2014 and never opened with a later release are not automatically converted anymore if opened with the 2020-09 release.

Text Editors

Multiple Last Edit Locations

Previous Edit Location navigation (formerly named Last Edit Location) is now expanded to remember multiple edit locations.

The last 15 edit locations are now remembered. For convenience, similar edit locations in close proximity to each other are also merged so that each of the 15 remembered locations remains distinct.

multiple last edit locations

How to use

Two new keyboard shortcuts are introduced:

  • Ctrl+Alt+LEFT_ARROW (or on Mac Ctrl+Opt+LEFT_ARROW) navigates to the most recent edit location, just as Ctrl+Q always has in prior releases.

    However, now continuing to hold Ctrl+Alt and then pressing LEFT_ARROW again begins a traversal through the history of prior edit locations, with each additional press of LEFT_ARROW moving a step further back in history. Once traversal stops, future Ctrl+Alt+LEFT_ARROW actions are now temporarily anchored to this older historical location for easy exploration of that code region.

    The classic Ctrl+Q mapping has been likewise enhanced with this new functionality, so that Ctrl+Q and Ctrl+Alt+LEFT_ARROW are synonymous.

  • Ctrl+Alt+RIGHT_ARROW (or on Mac Ctrl+Opt+RIGHT_ARROW) conversely moves the anchor forward through edit history, so after traversing backward with Ctrl+Alt+LEFT_ARROW, you can go forward again by holding Ctrl+Alt and repeatedly pressing RIGHT_ARROW. A new menu item has likewise been added for this forward navigation as well.

New edit locations are always inserted at the end, so original historical ordering is always maintained. New edits also reset the last location "anchor" back to the most recent edit, so that pressing Ctrl+Alt+LEFT_ARROW once again brings you to the most recent edit rather than a historical one.

Printing editor content adds date in header

Printing editor content now includes the current date in addition to the filename in the header of each printed page.

print header date

Themes and Styling

Improved GTK light theme

The GTK light theme has been updated to align better with the default GTK3 Adwaita theme.


gtk light old


gtk light new
Windows menus are styled in the dark theme

SWT now natively styles the menu under Windows in the dark theme.


menu background old


menu background dark
Drop-down boxes (Combos) are styled under Windows in the dark theme

SWT now natively styles drop-down boxes under Windows in the dark theme.


combo win32 dark old


combo win32 dark new
Selection highlighter for dark theme

The active tab selection highlighter has been enabled for Eclipse’s default dark themes. This will help users identify which tab is active at a glance.

dark selection highlighter
Selection highlighter for tables under Windows in the dark theme

SWT now natively supports selection highlighter in tables under Windows in the dark theme.

selection highlight


Filter null bytes from console output

The interpretation of ASCII control characters in the Console View was extended to recognize the null byte (\0). If interpretation is enabled, any null byte is stripped and not shown in the Console View. This is most relevant on Linux, where a null byte in the Console View causes anything after it on the same line to not be rendered.

This feature is disabled by default. You can enable it on the Run/Debug > Console preference page.

General Updates

Builds for Linux AArch64 (aka Arm64) added

Binaries for Linux AArch64 (Arm64) are available for testing. With the rising popularity of this architecture, people can continue using the Eclipse IDE even when changing their machine.

Java Development Tools (JDT)

Java 15 Support

Java 15

Java 15 is out and Eclipse JDT supports Java 15 for 4.17 via Marketplace.

The release notably includes the following Java 15 features:

  • JEP 378: Text Blocks (Standard).

  • JEP 384: Records (Second Preview).

  • JEP 375: Pattern Matching for Instanceof (Second Preview).

  • JEP 360: Sealed Classes (Preview).

Please note that the preview option must be enabled for preview language features. For an informal introduction to the support, please refer to the Java 15 Examples wiki.


Collapse all nodes in JUnit view

JUnit view now provides a context-menu option to collapse all nodes:

junit collapse all
Sort test results by execution time

JUnit view now provides the ability to sort results by execution time. By default, results will be sorted by execution order. Choosing Sort By > Execution Time from the JUnit View menu will reorder the results once all tests are complete. While tests are still running, they will be shown in execution order.

junit sort time before

Sorting by execution time results in:

junit sort time after

Java Editor

Substring/Subword matches for types

Content Assist now fully supports both substring and subword matches for types:

substring types

Substring matches are always shown and subword matches can be enabled/disabled with the existing Show subword matches option on the Java > Editor > Content Assist preference page.

Optimization tab

A new Optimization tab has been added that gathers cleanups that improve runtime performance: the existing lazy operator cleanup and the regex precompiler cleanup.

regex preferences

A new clean up has been added that makes use of Objects.equals() to implement the equals(Object) method.

It reduces the code and improves readability. The cleanup is only available for Java 7 or higher. Although this kind of comparison is almost exclusively seen in the equals(Object) method, it can also reduce code in other methods.

To select the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog select Use Objects.equals() in the equals method implementation on the Unnecessary Code tab.

objects equals preferences

For the given code:

objects equals before

You get this after the clean up:

objects equals after
Precompiles the regular expressions

A new clean up has been added that optimizes the regular expression execution by precompiling it.

It replaces some usages of java.lang.String with usages of java.util.regex.Pattern. The cleanup is applied only when it is certain that the string is used as a regular expression; if there is any doubt, nothing is done. The regular expression must be explicitly used several times for the cleanup to be worthwhile.

To select the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog select Precompiles reused regular expressions on the Optimization tab.

regex preferences

For the given code:

regex before

You get this after the clean up:

regex after
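
A sketch of the kind of rewrite this cleanup performs (the class and names are illustrative):

```java
import java.util.regex.Pattern;

// Illustrative class; the cleanup hoists a repeatedly used regex into a
// precompiled java.util.regex.Pattern constant.
class LineSplitter {
    // Before: each call ran input.split("\\s+"), recompiling the pattern.
    // After the clean up, the pattern is compiled exactly once:
    private static final Pattern WHITESPACE = Pattern.compile("\\s+");

    static String[] words(String input) {
        return WHITESPACE.split(input);
    }
}
```
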
String.format quickfix

A new quickfix has been added to replace string concatenation with String.format, similar to the existing ones for StringBuilder and MessageFormat.

String.format quickfix
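
A sketch of the rewrite the quickfix performs (the method names and values are illustrative):

```java
// Illustrative methods; the quickfix turns the concatenation in the first
// method into the String.format call in the second.
class Greeting {
    static String beforeFix(String name, int count) {
        return "User " + name + " has " + count + " items";
    }

    static String afterFix(String name, int count) {
        return String.format("User %s has %d items", name, count);
    }
}
```
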
Method reference quickfix

A new quickfix has been added to create missing methods for method references.

The current restriction is that this quickfix is only available in the current class.

Expect the current implementation to work on simple cases only.

Method references involving nested generics or type parameters might not resolve correctly.

methodreference 1

Java Views and Dialog

Toggle Code Minings From Find Actions Menu

The code minings within an editor can be enabled/disabled through the Find Actions menu (Ctrl+3).

toggle code minings

Java Formatter

Assert statement wrapping

A new setting in the Formatter profile controls line wrapping of assert statements. A line wrap can be added between the assert condition and its error message. The setting can be found in the Profile Editor (Preferences > Java > Code Style > Formatter > Edit…​) in the Line Wrapping > Wrapping Settings > Statements > 'assert' messages node.

formatter wrap assert


Anonymous class instance in evaluation

The JDT debugger is now capable of inspecting/evaluating expressions with anonymous class instances.

anon instance inspection code
anon instance inspection
JEP 358: Helpful NullPointerExceptions

The JDT debugger now has a checkbox option to activate the command-line support for JEP 358. This is disabled below Java 14 and enabled by default for Java programs launched with Java 14 and above.



The JVM is now capable of analyzing which variable was null at the point of a NullPointerException and describing that variable with a null-detail message in the NPE.
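
A minimal illustration (the class and field names are made up): launch with the JEP 358 flag -XX:+ShowCodeDetailsInExceptionMessages and the resulting NPE message pinpoints the null reference.

```java
// Hypothetical class; calling firstNameLength() throws an NPE whose
// null-detail message (with -XX:+ShowCodeDetailsInExceptionMessages)
// names the exact null reference.
class HelpfulNpeDemo {
    static String[] names; // never initialized, stays null

    static int firstNameLength() {
        // Throws NullPointerException with a message along the lines of:
        // Cannot load from object array because "HelpfulNpeDemo.names" is null
        return names[0].length();
    }
}
```
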

Actual type in Variables view

The option Show Type Names in the Variables and Expressions views now displays the value’s actual type instead of its declared type. This simplifies debugging, especially when the variable detail (toString()) is shown as the label for all variables.

To enable Show Type Names in the Variables view, column mode must be disabled (View Menu > Layout > Show Columns).


Object s = "some string";
Collection<?> c = Arrays.asList(s, 1);
// breakpoint
variables actual type

And more…​

You can find more noteworthy updates on this page.

What is next?

With JBoss Tools 4.17.0 and Red Hat CodeReady Studio 12.17 out, we are already working on the next release.


Jeff Maury

by jeffmaury at October 13, 2020 09:57 AM

e(fx)clipse 3.7.0 is released

by Tom Schindl at October 12, 2020 06:50 PM

We are happy to announce that e(fx)clipse 3.7.0 has been released. This release contains the following repositories/subprojects:

There are almost no new features (e.g., the new boxshadow); this release mostly contains bugfixes that are very important if you use OpenJFX in an OSGi environment.

For those of you who already use our pom-first approach, the new bits have been published, and the sample application has been updated to use the latest release.

by Tom Schindl at October 12, 2020 06:50 PM

Getting started with Eclipse GEF – the Mindmap Tutorial

by Tamas Miklossy ( at October 12, 2020 06:00 AM

The Eclipse Graphical Editing Framework is a toolkit to create graphical Java applications, either integrated into Eclipse or standalone. The most common use of the framework is to develop diagram editors, like the simple mindmap editor we will create in the GEF Mindmap Tutorial series. Currently, the tutorial consists of 6 parts and altogether 19 steps. They are structured as follows:


Part I – The Foundations

  • Step 1: Preparing the development environment
  • Step 2: Creating the model
  • Step 3: Defining the visuals


  • Step 4: Creating the GEF parts
  • Step 5: Models, policies and behaviors
  • Step 6: Moving and resizing a node

Part III – Adding nodes and connections

  • Step 7: Undo and redo operations
  • Step 8: Creating new nodes
  • Step 9: Creating connections

Part IV – Modifying and removing nodes

  • Step 10: Deleting nodes (1)
  • Step 11: Modifying nodes
  • Step 12: Creating feedback
  • Step 13: Deleting nodes (2)

Part V – Creating an Eclipse editor

  • Step 14: Creating an Eclipse editor
  • Step 15: Undo, redo, select all and delete in Eclipse
  • Step 16: Contributing toolbar actions

Part VI – Automatic layouting

  • Step 17: Automatic layouting via GEF layout
  • Step 18: Automatic layouting via Graphviz DOT
  • Step 19: Automatic layouting via the Eclipse Layout Kernel

You can register for the tutorial series using the link below. The article How to set up Eclipse tool development with OpenJDK, GEF, and OpenJFX describes the necessary steps to properly set up your development environment.

Your feedback regarding the Mindmap Tutorial (and the Eclipse GEF project in general) is highly appreciated. If you have any questions or suggestions, please let us know via the Eclipse GEF forum, or create an issue on Eclipse Bugzilla.

For further information, we recommend taking a look at the Eclipse GEF blog articles and watching the Eclipse GEF session from EclipseCon Europe 2018.


by Tamas Miklossy ( at October 12, 2020 06:00 AM

Eclipse Collections 10.4.0 Released

by Nikhil Nanivadekar at October 09, 2020 08:36 PM

View of the Grinnell Glacier from overlook point after a grueling 9 mile hike

This is a release which we had not planned for, but we released it nonetheless.

This must be the first time since we open sourced Eclipse Collections that we performed two releases within the same month.

Changes in Eclipse Collections 10.4.0

There are only two changes in the 10.4.0 release compared to the feature-rich 10.3.0 release:

  • Added CharAdapter.isEmpty(), CodePointAdapter.isEmpty(), CodePointList.isEmpty(), as JDK-15 introduced CharSequence.isEmpty().
  • Fixed Javadoc errors.

Why was release 10.4.0 necessary?

In today’s rapid-deployment world, it should not be novel for a project to perform multiple releases. However, the Eclipse Collections maintainer team performs releases when one or more of the criteria below are satisfied:

  1. A bulk of features are ready to be released
  2. A user requests a release for their use case
  3. JDK-EA compatibility is breaking
  4. It has been more than 6 months since the last release

The Eclipse Collections 10.4.0 release was necessary due to point #3. Eclipse Collections participates in the Quality Outreach program of Open JDK. As a part of this program the library is expected to test the Early Access (EA) versions of Java and identify potential issues in the library or the JDK. I had missed setting up the JDK-15-EA builds until after Eclipse Collections 10.3.0 was released. After setting up the JDK-15-EA builds on 16 August 2020, I found compiler issues in the library due to isEmpty() added as a default method on CharSequence. Stuart Marks has written an in-depth blog of why this new default method broke compatibility. So, we had 2 options, let the library not be compatible with JDK-15, or release a new version with the fix. The Eclipse Collections team believes in supporting Java versions from Java 8 to Java-EA. After release 10.3.0, we had opened a new major version target (11.0.0), but the changes required did not warrant a new major version. So, we decided to release 10.4.0 with the fixes to support JDK-15. Eclipse Collections 10.4.0 release is compatible with JDK-15 and JDK-16-EA.
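
To see the nature of the break, here is a minimal sketch (the types are invented for illustration; they are not the actual Eclipse Collections classes): a class implementing CharSequence alongside another interface that defines its own default isEmpty() inherits unrelated defaults on JDK 15 and no longer compiles until it overrides the method explicitly.

```java
// Invented types for illustration; not the actual Eclipse Collections classes.
interface PrimitiveSeq {
    int size();
    default boolean isEmpty() { return size() == 0; }
}

class CharSeqAdapter implements CharSequence, PrimitiveSeq {
    private final String s;
    CharSeqAdapter(String s) { this.s = s; }

    public int length() { return s.length(); }
    public int size() { return s.length(); }
    public char charAt(int i) { return s.charAt(i); }
    public CharSequence subSequence(int a, int b) { return new CharSeqAdapter(s.substring(a, b)); }
    public String toString() { return s; }

    // Required on JDK 15+: CharSequence now also provides a default isEmpty(),
    // so the class inherits unrelated defaults and must pick one explicitly.
    @Override
    public boolean isEmpty() { return s.isEmpty(); }
}
```
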

Thank you

To the vibrant and supportive Eclipse Collections community on behalf of contributors, committers, and maintainers for using Eclipse Collections. We hope you enjoy Eclipse Collections 10.4.0.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions.

Show your support, star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set, and Map. It also has additional data structures like Multimap, Bag, and an entire primitive collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

Photo of the blog: I took the photo after hiking to the Grinnell Glacier overlook point. It was a strenuous hike, but the view from the top made it worth it. I picked this photo to convey the sense of accomplishment of completing a release in a short amount of time.

Eclipse Collections 10.4.0 Released was originally published in Oracle Groundbreakers on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Nikhil Nanivadekar at October 09, 2020 08:36 PM

Obeo's Chronicles, Autumn 2020

by Cédric Brun at October 06, 2020 12:00 AM

I can’t believe we are already looking at Q4. I have so much news to share with you!

Eclipse Sirius, Obeo Cloud Platform and Sirius Web:

This last summer we had the pleasure of organizing SiriusCon. This one-day event is, each year, an opportunity for the modeling community to share their experience, and for the development team to provide visibility on what is currently being worked on and how we see the future of the technology. SiriusCon reached 450 attendees from 53 different countries thanks to 13 fabulous speakers!

The latest edition was special to us: it used to be organized at the end of each year, but we decided to postpone it for a few months to be ready for an announcement very close to our heart. We've been working on bringing what we love about Sirius to the web for quite a few years already, and we have reached a point where we have a promising product. Now is the time to accelerate. Mélanie Bats announced it during the conference: we are releasing "Sirius Web" as open source and have officially started the countdown!

The announcement at SiriusCon 2020

The reactions to this announcement were fantastic with a lot of excitement within the community.

I am myself very excited for several reasons:

Firstly, I expect this decision will be, just like the open-source release of Sirius Desktop in 2013, a key factor leading to the creation of hundreds of graphical modelers, in the way currently demonstrated by the Sirius Gallery, but now easily accessible through the web and leveraging all the capabilities this platform brings.

Our vision is to empower the tool specifier from the data structure and tool definition up to the deployment and exploitation of a modeling tool, directly from the browser, end to end and in an integrated and seamless way.

We are not there yet, but as you'll see, the technology is already quite promising.

Obeo Cloud Platform Modeler

Secondly, for Obeo this decision strengthens our product-based business model while being faithful to our “open core” approach. We will offer, through Obeo Cloud Platform a Sirius Web build extended with Enterprise features, to deploy on public, private clouds or on premise and including support and upgrade guarantees.

Obeo Cloud Platform Offer

Since the announcement, the team has been working on Sirius Web to publish it as an open-source product so that you can start experimenting as soon as EclipseCon 2020. Mélanie will present this in detail during her talk: "Sirius Web: 100% open source cloud modeling platform".

EclipseCon 2020

Hint: there's still time to register for EclipseCon 2020, but do it quickly! The program committee did an excellent job in setting up an exciting program thanks to your many submissions; don't miss it!

Capella Days Online is coming up!

That's not all! Each day we see Eclipse Capella get more and more adoption across the globe, and this open-source product now has its own 4-day event: Capella Days Online 2020!

A unique occasion to get experience reports from multiple domains: space systems (CNES and GMV), rail and transportation (Virgin Hyperloop, Nextrail and Vitesco Technologies), healthcare (Siemens and Still AB), waste collecting with The SeaCleaners, and all of that in addition to aerospace, defence and security with Thales Group. The program is packed with high-quality content: 12 sessions over 4 days from October 12th to 15th, with more than 500 attendees already registered. Join us and register!

Capella Days
Capella Days Program

SmartEA 6.0 supports ArchiMate 3.1 and keeps rising!

We use those open-source technologies, like Eclipse Sirius, Acceleo, EMF Compare, M2doc and many more in our “off the shelf” software solution for Enterprise Architecture: Obeo SmartEA.

SmartEA 6.0

This spring we released SmartEA 6.0, which got the ArchiMate 3.1 certification and brought, among many other improvements: new modeling capabilities, extended user management, enhanced BPMN modeling and a streamlined user experience.

Our solution is a challenger on the market and convinces more and more customers. Stay tuned, I should be able to share a thrilling announcement soon!

World Clean Up Day and The SeaCleaners

In a nutshell: an excellent dynamic on many fronts and exciting challenges ahead! This is all made possible thanks to the energy and cohesion of the Obeo team in this weird, complex and unusual time. We are committed to the environment and to reducing plastic waste; as such, we took part in the World Cleanup Day in partnership with The SeaCleaners. Beyond the impact of this action, which means so much to us, it was also a fun moment of sharing!

#WeAreObeo at the World Cleanup Day

Obeo's Chronicles, Autumn 2020 was originally published by Cédric Brun at CEO @ Obeo on October 06, 2020.

by Cédric Brun at October 06, 2020 12:00 AM

MapIterable.getOrDefault() : New but not so new API

by Nikhil Nanivadekar at September 23, 2020 02:30 AM

MapIterable.getOrDefault() : New but not so new API

Sunset at Port Hardy (June 2019)

Eclipse Collections comes with its own List, Set, and Map implementations. These implementations extend the JDK List, Set, and Map implementations for easy interoperability. In Eclipse Collections 10.3.0, I introduced a new API, MapIterable.getOrDefault(). In Java 8, Map.getOrDefault() was introduced, so what makes it a new API for Eclipse Collections 10.3.0? Technically, it is a new but not so new API! Consider the code snippets below, prior to Eclipse Collections 10.3.0:

MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() does not compile

As you can see in the code, MutableMap has getOrDefault() available, however ImmutableMap does not have it. But there is no reason why ImmutableMap should not have this read-only API. I found that MapIterable already had getIfAbsentValue() which has the same behavior. Then why did I still add getOrDefault() to MapIterable?

I added MapIterable.getOrDefault() mainly for easy interoperability. Firstly, most Java developers will be aware of the getOrDefault() method, whereas only Eclipse Collections users would be aware of getIfAbsentValue(). Providing the same API as the JDK reduces the need to learn a new API. Secondly, even though getOrDefault() is available on MutableMap, it was not available on the highest Map interface of Eclipse Collections. Thirdly, I got to learn about a Java compiler check which I had not experienced before. I will elaborate on this check in a bit more detail because I find it interesting.

After I added getOrDefault() to MapIterable, various Map interfaces in Eclipse Collections started giving compiler errors with messages like: inherits unrelated defaults for getOrDefault(Object, V) from types MapIterable and java.util.Map. This I thought was cool: at compile time, the Java compiler ensures that if an API has a default implementation in more than one interface in a multi-interface scenario, Java will not silently decide which implementation to pick, but will instead raise compiler errors. Hence, Java ensures at compile time that there is no ambiguity regarding which implementation will be used at runtime. How awesome is that?! To fix the compile-time errors, I had to add a default implementation on each interface which gave the errors. I always believe compiler errors are better than runtime exceptions.
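This check can be reproduced with plain JDK interfaces; the names below are made up for illustration. A class that inherits the same default method from two unrelated interfaces will not compile until it overrides the method itself, and the override can delegate explicitly to one parent via Interface.super:

```java
public class UnrelatedDefaultsDemo {
    interface ReadableMap<V> {
        default V getOrDefault(Object key, V defaultValue) { return defaultValue; }
    }

    interface CachingMap<V> {
        default V getOrDefault(Object key, V defaultValue) { return defaultValue; }
    }

    // Without this override the class fails to compile:
    // "inherits unrelated defaults for getOrDefault(Object, V)
    //  from types ReadableMap and CachingMap"
    static class MyMap<V> implements ReadableMap<V>, CachingMap<V> {
        @Override
        public V getOrDefault(Object key, V defaultValue) {
            // Explicitly pick one of the two inherited default implementations.
            return ReadableMap.super.getOrDefault(key, defaultValue);
        }
    }

    public static void main(String[] args) {
        MyMap<String> map = new MyMap<>();
        System.out.println(map.getOrDefault("missing", "fallback")); // fallback
    }
}
```

The override is exactly the kind of "default implementation on the interfaces which gave the errors" described above, just shrunk to a toy example.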

Stuart Marks has put together an awesome blog which covers the specifics of such scenarios. I suggest reading that for in-depth understanding of how and why this behavior is observed.

Post Eclipse Collections 10.3.0 the below code samples will work:

MapIterable.getOrDefault() compiles and works fine
MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() compiles and works fine
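Since MapIterable.getOrDefault() mirrors the java.util.Map contract, the behavior is easy to pin down with the plain JDK type: the default value is returned only when no mapping exists for the key, so a key explicitly mapped to null still yields null. A quick sketch against java.util.HashMap:

```java
import java.util.HashMap;
import java.util.Map;

public class GetOrDefaultDemo {
    public static void main(String[] args) {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 34);
        ages.put("bob", null); // an explicit null mapping

        // Present key: the mapped value wins over the default.
        System.out.println(ages.getOrDefault("alice", -1)); // 34
        // Absent key: the default is returned.
        System.out.println(ages.getOrDefault("carol", -1)); // -1
        // Key mapped to null: getOrDefault returns null, not the default.
        System.out.println(ages.getOrDefault("bob", -1));   // null
    }
}
```

This is the same behavior the text describes for getIfAbsentValue().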

Eclipse Collections 10.3.0 was released on 08/08/2020 and is one of our most feature-packed releases. The release constitutes numerous contributions from the Java community.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions.

Show your support, star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set, and Map. It also has additional data structures like Multimap, Bag, and an entire primitive collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

by Nikhil Nanivadekar at September 23, 2020 02:30 AM

Migrating from Fabric8 Maven Plugin to Eclipse JKube 1.0.0

by Rohan Kumar at September 21, 2020 07:00 AM

The recent release of Eclipse JKube 1.0.0 means that the Fabric8 Maven Plugin is no longer supported. If you are currently using the Fabric8 Maven Plugin, this article provides instructions for migrating to JKube instead. I will also explain the relationship between Eclipse JKube and the Fabric8 Maven Plugin (they’re the same thing) and introduce the highlights of the new Eclipse JKube 1.0.0 release. These migration instructions are for developers working on the Kubernetes and Red Hat OpenShift platforms.

Eclipse JKube is the Fabric8 Maven Plugin

Eclipse JKube and the Fabric8 Maven Plugin are one and the same. Eclipse JKube was first released in 2014 under the name of Fabric8 Maven Plugin. The development team changed the name when we pre-released Eclipse JKube 0.1.0 in December 2019. For more about the name change, see my recent introduction to Eclipse JKube. This article focuses on the migration path to JKube 1.0.0.

What’s new in Eclipse JKube 1.0.0

If you are hesitant about migrating to JKube, the following highlights from the new 1.0.0 release might change your mind:

Fabric8 Maven Plugin generates both Kubernetes and Red Hat OpenShift artifacts, and it automatically detects and deploys resources to the underlying cluster. But developers who use Kubernetes don’t need OpenShift artifacts, and OpenShift developers don’t need Kubernetes manifests. We addressed this issue by splitting Fabric8 Maven Plugin into two plugins for Eclipse JKube: Kubernetes Maven Plugin and OpenShift Maven Plugin.

Eclipse JKube migration made easy

Eclipse JKube has a migrate goal that automatically updates Fabric8 Maven Plugin references in your pom.xml to the Kubernetes Maven Plugin or OpenShift Maven Plugin. In the next sections, I’ll show you how to migrate a Fabric8 Maven Plugin-based project to either platform.

Replace the code for the Fabric8 Maven Plugin with either the code for the Kubernetes Maven Plugin or the OpenShift Maven Plugin.

For demonstration purposes, we can use my old random generator application, which displays a random JSON response at a /random endpoint. To start, clone this repository:

$ git clone
$ cd fmp-demo-project

Then build the project:

$ mvn clean install

Eclipse JKube migration for Kubernetes users

Use the following goal to migrate to Eclipse JKube's Kubernetes Maven Plugin. Note that we have to specify the complete groupId and artifactId, because the plugin is not yet declared in the pom.xml:

$ mvn org.eclipse.jkube:kubernetes-maven-plugin:migrate

Here are the logs for the migrate goal:

fmp-demo-project : $ mvn org.eclipse.jkube:kubernetes-maven-plugin:migrate
[INFO] Scanning for projects...
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] --- kubernetes-maven-plugin:1.0.0-rc-1:migrate (default-cli) @ random-generator ---
[INFO] k8s: Found Fabric8 Maven Plugin in pom with version 4.4.1
[INFO] k8s: Renamed src/main/fabric8 to src/main/jkube
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  3.154 s
[INFO] Finished at: 2020-09-08T19:32:01+05:30
[INFO] ------------------------------------------------------------------------
fmp-demo-project : $

You'll notice that all of the Fabric8 Maven Plugin references have been replaced by references to Eclipse JKube. The Kubernetes Maven Plugin is the same as the Fabric8 Maven Plugin. The only differences are the k8s goal prefix and the fact that it generates only Kubernetes manifests.
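For reference, after migration the plugin declaration in your pom.xml will look roughly like this (a sketch: the version shown is an assumption and should match whatever the migrate goal resolved for your project):

```xml
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.0.0</version>
</plugin>
```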

Once you’ve installed the Kubernetes Maven Plugin, you can deploy your application as usual:

$ mvn k8s:build k8s:resource k8s:deploy

Eclipse JKube migration for OpenShift users

Use the same migration process for the OpenShift Maven Plugin as you would for the Kubernetes Maven Plugin. Run the migrate goal, but with the OpenShift Maven Plugin specified:

$ mvn org.eclipse.jkube:openshift-maven-plugin:migrate

Here are the logs for this migrate goal:

fmp-demo-project : $ mvn org.eclipse.jkube:openshift-maven-plugin:migrate
[INFO] Scanning for projects...
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] --- openshift-maven-plugin:1.0.0-rc-1:migrate (default-cli) @ random-generator ---
[INFO] k8s: Found Fabric8 Maven Plugin in pom with version 4.4.1
[INFO] k8s: Renamed src/main/fabric8 to src/main/jkube
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  4.227 s
[INFO] Finished at: 2020-09-08T19:41:34+05:30
[INFO] ------------------------------------------------------------------------

This goal replaces all of your Fabric8 Maven Plugin references with references to OpenShift Maven Plugin. You can then deploy your application to Red Hat OpenShift just as you normally would:

$ mvn oc:build oc:resource oc:deploy


See the Eclipse JKube migration guide for more about migrating from the Fabric8 Maven Plugin on OpenShift or Kubernetes. Feel free to create a GitHub issue to report any problems that you encounter during the migration. We really value your feedback, so please report bugs, ask for improvements, and tell us about your migration experience.

Whether you are already using Eclipse JKube or just curious about it, don’t be shy about joining our welcoming community:


The post Migrating from Fabric8 Maven Plugin to Eclipse JKube 1.0.0 appeared first on Red Hat Developer.

by Rohan Kumar at September 21, 2020 07:00 AM

After eight: How to set up Eclipse tool development with OpenJDK, GEF, and OpenJFX for newer Java versions

by Svenja Wendler at September 18, 2020 08:00 AM

This article describes solutions to possible stumbling blocks in the transition from Oracle JDK to OpenJDK 11 in PDE development with GEF and JavaFX.

Eclipse development with Java and JavaFX

"Legacy – with concrete feet into the future," I read in an announcement, and wondered whether Java 8 will soon be walking into the technological future in concrete shoes. But before that happens, we prefer to strip them off and migrate to an up-to-date Java version.

Below, we will focus on an Eclipse-based application with JavaFX components. The conversion is to be made to the latest Java LTS version, i.e. Java 11. The following survey shows that many developers are still cautious about migrating to higher versions of Java:


JavaFX is no longer a JRE component from Java 11

The first hurdle is already apparent when switching to Java 11, because JavaFX is no longer part of the JDK, either at Oracle or in the open source distribution OpenJDK. There are several solutions to this problem. One would be to use a JDK distribution that delivers Java 11 with JavaFX, such as Bellsoft's Liberica JDK. However, this article focuses on using e(fx)clipse and the OpenJFX SDK.

We use JavaFX in our YAKINDU products and have successfully converted the development of the GEF framework to the following configuration:

In the following, we first convert our development environment to OpenJDK 11 with OpenJFX and e(fx)clipse, and then turn to the conversion of our development itself, including compiler and launch configurations.

Transforming the development environment

We download and install a new Eclipse IDE, ideally the package for Eclipse committers.

We unpack OpenJDK 11 to any directory.

We download the OpenJFX SDK and store it in a directory.

We install e(fx)clipse at least in version 3.6.0 in our Eclipse environment (Update-Site:

We exit Eclipse and insert the following lines below the "-vmargs" line into the eclipse.ini file ("--add-modules=ALL-SYSTEM" does not need to be re-inserted if already present):

Files\Java\javafx-sdk-11.0.2\lib --add-modules=ALL-SYSTEM

Note: We adjust the path to the OpenJFX libraries according to the operating system and the location in the file system. We don't use quotation marks, even if the path contains spaces; otherwise, the setting would be silently ignored. Furthermore, the path must not end with a slash or backslash. The changes to the eclipse.ini must be made after the installation of e(fx)clipse, otherwise Eclipse will not start again.
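For example, on Windows the inserted lines could look like this (the `-Defxclipse.java-modules.dir` property name is an assumption based on the e(fx)clipse documentation; adapt the path to your own system):

```
-Defxclipse.java-modules.dir=C:\Program Files\Java\javafx-sdk-11.0.2\lib
--add-modules=ALL-SYSTEM
```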

If OpenJDK 11 is the only JDK installed, nothing else needs to be changed. If OpenJDK 11 is not installed but only unpacked, or if other Java versions are installed on the computer, then the following lines should also be inserted in the eclipse.ini, directly above the "-vmargs" line:

-vm
/path/to/jdk-11.0.5+10/Contents/Home/bin (adapt to your directory)

Now let's start Eclipse again. We then install the end-user tools of GEF DOT via the eclipse release update site (for example,

These use JavaFX (and its SWT integration), so we can check whether our installation worked. If successful, the "DOT Graph" view should look like this:

A possible source of error here are the settings in the eclipse.ini, which we should go through again step by step.

Once the IDE runs successfully with OpenJDK 11, OpenJFX 11 and e(fx)clipse, we can take care of the workspace and the runtime.

Transforming the development

To complete the conversion, the following must be done:

  • Set OpenJDK as JRE to use
  • Ensure that this JRE is used as an execution environment
  • Set the openjfx-libs folder in the e(fx)clipse preferences

After these changes, the workspace should compile. The following section describes these steps in more detail.

We set OpenJDK as the runtime environment in the Eclipse preferences. To do this, we select Window → Preferences → Java → Installed JREs → Add… and specify the path to the JDK directory.

We make sure that this JDK is applied to the execution environment we set. If necessary, we may remove all other JDKs to ensure that the OpenJDK is actually used:

We set the OpenJFX SDK in the preferences for e(fx)clipse. Above, we saved the OpenJFX SDK to a directory. Its lib directory must be entered in the Eclipse preferences (Window → Preferences → JavaFX) as the JavaFX 11+ SDK. This should be the same path as before in the eclipse.ini. This setting makes your Eclipse aware of the JavaFX libraries for development.

Now everything is done to compile the workspace. If we want to start the application in the runtime, there is still a small thing to do.

We'll add the following VM arguments in the launch configuration; they are the same ones that we previously entered in the eclipse.ini:


If necessary, you can load the sources of the GEF framework into the workspace and try out the above points directly. For more information on GEF development, see the developer documentation page:


The procedure above can be used to switch existing Eclipse applications to OpenJDK 11 and OpenJFX 11 with e(fx)clipse.

Are there any comments or questions about this approach?

We welcome any kind of feedback.

by Svenja Wendler at September 18, 2020 08:00 AM

WTP 3.19 Released!

September 16, 2020 10:55 PM

The Eclipse Web Tools Platform 3.19 has been released! Installation and updates can be performed using the Eclipse IDE 2020-09 Update Site or through the Eclipse Marketplace. Release 3.19 is included in the 2020-09 Eclipse IDE for Enterprise Java Developers, with selected portions also included in several other packages. Adopters can download the R3.19.0 p2 repository directly and combine it with the necessary dependencies.

More news

September 16, 2020 10:55 PM

Browser like BoxShadow for JavaFX coming with e(fx)clipse 3.7.0

by Tom Schindl at September 16, 2020 09:54 AM

Using a box shadow is a very common thing in modern UIs, so it might not be surprising that designers defining UIs often use them heavily.

Unfortunately, JavaFX has no 100% compatible effect, and even worse, the one that comes closest (DropShadow) leads to a massive performance hit, as shown in this video.

On the left-hand side is a Node that has a DropShadow effect applied to it, and you notice that once the effect is applied, the animation isn't smooth any more. On the right-hand side you see a new Node we'll release with e(fx)clipse 3.7.0, which provides a new BoxShadow node (named BoxShadow2).

Besides being a huge performance win, the BoxShadow node uses the same semantics as its browser counterpart, so you can port CSS definitions to your JavaFX application.

For completeness, here's the code for the demo video.

package org.eclipse.fx.ui.controls.demo;

import org.eclipse.fx.ui.controls.effects.BoxShadow2;

import javafx.animation.Animation;
import javafx.animation.Animation.Status;
import javafx.animation.TranslateTransition;
import javafx.application.Application;
import javafx.geometry.Insets;
import javafx.geometry.Pos;
import javafx.scene.Node;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.effect.DropShadow;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.Region;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;
import javafx.util.Duration;

public class InTheShadow extends Application {

	public static void main(String[] args) {
		launch(args);
	}

	@Override
	public void start(Stage primaryStage) throws Exception {
		BorderPane p = new BorderPane();
		p.setPadding(new Insets(20));

		Button shadowMe = new Button("Toggle Shadow");

		Region pane;

		if (Boolean.getBoolean("efxclipse-shadow")) {
			BoxShadow2 shadow = new BoxShadow2(createComplexUI());
			pane = shadow;
		} else {
			pane = new StackPane(createComplexUI());
		}

		shadowMe.setOnAction(evt -> toggleShadow(pane));

		p.setTop(shadowMe);
		p.setCenter(pane);

		Scene s = new Scene(p, 1200, 800);
		primaryStage.setTitle("efxclipse-shadow: " + Boolean.getBoolean("efxclipse-shadow"));
		primaryStage.setScene(s);;
	}

	private void toggleShadow(Region pane) {
		if (pane instanceof BoxShadow2) {
			BoxShadow2 s = (BoxShadow2) pane;
			// toggle the shadow on the BoxShadow2 node
			// (the BoxShadow2-specific call is omitted here)
		} else {
			if (pane.getEffect() != null) {
				pane.setEffect(null);
			} else {
				DropShadow dropShadow = new DropShadow();
				dropShadow.setColor(Color.color(0.4, 0.5, 0.5));
				pane.setEffect(dropShadow);
			}
		}
	}

	private Node createComplexUI() {
		StackPane pane = new StackPane();
		pane.setStyle("-fx-background-color: white");

		for (int i = 0; i < 100; i++) {
			Button b = new Button("Button " + i);
			b.setTranslateX(i % 100);
			pane.getChildren().add(b);
		}

		Button animated = new Button("Animated");
		StackPane.setAlignment(animated, Pos.BOTTOM_CENTER);
		pane.getChildren().add(animated);

		TranslateTransition t = new TranslateTransition(Duration.millis(1000), animated);
		t.setByX(300); // animation distance chosen for the demo
		t.setAutoReverse(true);
		t.setCycleCount(Animation.INDEFINITE);
		animated.setOnAction(evt -> {
			if (t.getStatus() == Status.RUNNING) {
				t.stop();
			} else {;
			}
		});

		return pane;
	}
}

by Tom Schindl at September 16, 2020 09:54 AM

N4JS goes LSP

by n4js dev at September 08, 2020 11:00 AM

A few weeks ago we started to publish a VSCode extension for N4JS to the VSCode Marketplace. This was one of the last steps on our road to supporting LSP-based development tools. We made this major change for several reasons that affected both users and developers of N4JS.

An N4JS project in VSCode with the N4JS language extension

Our language extension for N4JS is hosted at the Microsoft VSCode Marketplace and will be updated regularly by our Jenkins jobs. Versions will be kept in sync with the language version, compiler version and version of the N4JS libraries to avoid incompatible setups. At the moment, the LSP server supports all main features of the language server protocol (LSP) such as validation, content assist, outline view, jump to definition and implementation, open symbol, the rename refactoring and many more. In addition, it will also generate output files whenever a source change is detected. We therefore heavily improved the incremental LSP builder of the Xtext framework and plan to migrate back those changes to the Xtext repository. For the near future we plan to work on stability, performance and also to support some of the less frequently used LSP features.

When looking back, development of N4JS has been based on the Xtext framework from the start and thus it was straightforward to build an Eclipse-based IDE as our main development tool. Later on, we also implemented a headless compiler used for manual and automated testing from the command line. The development of the compiler already indicated some problems stemming from the tight integration of the Eclipse and the Xtext frameworks together with our language specific implementations. To name an example, we had two separate builder implementations: one for the IDE and the other for the headless compiler. Since the Eclipse IDE is using a specific workspace and project model, we also had two implementations for this abstraction. Another important problem we faced with developing an Eclipse-based IDE was that at some points we had to implement UI tests using the SWTBot framework. For us, SWTBot tests turned out to be very hard to develop, to maintain, and to keep from becoming flaky. Shifting to LSP-based development tools, i.e. the headless compiler and an LSP server, allows us to overcome the aforementioned problems.

Users of N4JS now have the option to either use our extension for VSCode or integrate our LSP server into their favorite IDE themselves, even into the Eclipse IDE. They also benefit from more lightweight tools regarding disk size and start-up performance, as well as a better integration into well-known tools from the JavaScript development ecosystem.

by n4js dev at September 08, 2020 11:00 AM

No Java? No Problem!

by Ed Merks at August 18, 2020 07:50 AM

For the 2020-09 Eclipse Simultaneous Release, the Eclipse IDE will require Java 11 or higher to run.  If the user doesn't have that installed, Eclipse simply won't start, instead popping up this dialog: 

That of course raises the question: what should I do now? The Eclipse Installer itself is an Eclipse application, so it too will fail to start for the same reason. At least on Windows the Eclipse Installer is distributed as a native executable, so it will open a semi-helpful page in the browser to direct the user to find a suitable JRE or JDK to install, rather than popping up the above dialog.

Of course we are concerned that many users will update 2020-06 to 2020-09 only to find that Eclipse fails to start afterwards because they are currently running with Java 8.  But Mickael Istria has planned ahead for this as part of the 2020-06 release, adding a validation check during the update process to determine if the current JVM is suitable for the update, thereby helping prevent this particular problem.

Now that JustJ is available for building Eclipse products with an embedded JRE, we can do even better.  Several of the Eclipse Packaging Project's products will include a JustJ JRE in the packages for 2020-09, i.e., the C/C++, Rust, and JavaScript packages.  Also the Eclipse Installer for 2020-09 will provide product variants that include a JustJ JRE.  So they all will simply run out of the box regardless of which version of Java is installed and of course even when Java is not installed at all.

Even better, the Eclipse Installer will provide JustJ JREs as choices in the dialogs. A user who does not have Java installed will be offered JustJ JRE 14.0.2 as the default JRE.

Choices of JustJ JREs will always be available in the Eclipse Installer; it will be the default only if no suitable version of Java is currently installed on the machine.

Eclipse Installers with an embedded JustJ JRE will be available starting with 2020-09 M3 for all supported platforms.  For a sneak preview, you can find them in the nightly builds folder.  The ones with "-jre" in the name contain an embedded JRE (and the ones with "-restricted" in the name will only install 2020-09 versions of the products).

It was a lot of work getting this all in place, both building the JREs and updating Oomph's build to consume them.  Not only that, just this week I had to rework EMF's build so that it functions with the latest platform where some of the JDT bundles have incremented their BREEs to Java 11.  There's always something disruptive that creates a lot of work.  I should point out that no one funds this work, so I often question how this is all actually sustainable in the long term (not to mention questioning my personal sanity).

I did found a small GmbH here in Switzerland.  It's very pretty here!

If you need help, consider that help is available. If no one pays for anything, at some point you will only get what you pay for, i.e., nothing. But that's a topic for another blog...

by Ed Merks at August 18, 2020 07:50 AM

Dogfooding the Eclipse Dash License Tool

by waynebeaton at July 22, 2020 03:43 PM

There’s background information about this post in my previous post. I’ve been using the Eclipse Dash License Tool on itself.

$ mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 7 items.
Found 6 items.
Querying ClearlyDefined for license data for 1 items.
Found 1 items.
Vetted license information was found for all content. No further investigation is required.
$ _

Note that in this example, I’ve removed the paths to try and reduce at least some of the clutter. I also tend to add a filter to sort the dependencies and remove duplicates (| sort | uniq), but that’s not required here so I’ve left it out.

The message that “[v]etted license information was found for all content”, means that the tool figures that all of my project’s dependencies have been fully vetted and that I’m good to go. I could, for example, create a release with this content and be fully aligned with the Eclipse Foundation’s Intellectual Property Policy.

The tool is, however, only as good as the information that it’s provided with. Checking only the Maven build completely misses the third party content that was introduced by Jonah’s helpful contribution that helps us obtain dependency information from a yarn.lock file.

$ cd yarn
$ node index.js | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 1 items.
Found 0 items.
Querying ClearlyDefined for license data for 1 items.
Found 0 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.

$ _

So… oops. Missed one.

Note that the updates to the IP Policy include a change that allows project teams to leverage third-party content (that they believe to be license compatible) in their project code during development. All content must be vetted by the IP due diligence process before it may be leveraged by any release. So the project in its current state is completely onside, but the license of that identified bit of content needs to be resolved before we can declare a proper release as defined by the Eclipse Foundation Development Process.

This actually demonstrates why I opted to create the tool as a CLI that takes a flat list of dependencies as input: we use all sorts of different technologies, and I wanted to focus the tool on providing license information for arbitrary lists of dependencies.

I’m sure that Denis will be able to rewrite my bash one-liner in seven keystrokes, but here’s how I’ve combined the two so that I can get a complete picture with a “single” command:

$ { mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" ; cd yarn && node index.js; } | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 8 items.
Found 6 items.
Querying ClearlyDefined for license data for 2 items.
Found 1 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.
$ _

I have some work to do before I can release. I’ll need to engage with the Eclipse Foundation’s IP Team to have that one bit of content vetted.

As a side effect, the tool generates a DEPENDENCIES file. The dependency file lists all of the dependencies provided in the input in ClearlyDefined coordinates along with license information, whether or not the content is approved for use or is restricted (meaning that further investigation is required), and the authority that determined the status.

maven/mavencentral/org.glassfish/jakarta.json/1.1.6, EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0, approved, emo_ip_team
maven/mavencentral/commons-codec/commons-codec/1.11, Apache-2.0, approved, CQ15971
maven/mavencentral/org.apache.httpcomponents/httpcore/4.4.13, Apache-2.0, approved, CQ18704
maven/mavencentral/commons-cli/commons-cli/1.4, Apache-2.0, approved, CQ13132
maven/mavencentral/org.apache.httpcomponents/httpclient/4.5.12, Apache-2.0, approved, CQ18703
maven/mavencentral/commons-logging/commons-logging/1.2, Apache-2.0, approved, CQ10162
maven/mavencentral/org.apache.commons/commons-csv/1.8, Apache-2.0, approved, clearlydefined
npm/npmjs/@yarnpkg/lockfile/1.1.0, unknown, restricted, none

Most of the content was vetted by the Eclipse Foundation’s IP Team (the entries marked “CQ*” have corresponding entries in IPZilla), one was found in ClearlyDefined, and one requires further investigation.

The tool produces good results. But, as I stated earlier, it’s only as good as the input that it’s provided with and it only does what it is designed to do (it doesn’t, for example, distinguish between prerequisite dependencies and dependencies of “works with” dependencies; more on this later). The output of the tool is obviously a little rough and could benefit from the use of a proper configurable logging framework. There’s a handful of other open issues for your consideration.

by waynebeaton at July 22, 2020 03:43 PM

Why ServiceCaller is better (than ServiceTracker)

July 07, 2020 08:00 PM

My previous post spurred a reasonable amount of discussion, and I promised to also talk about the new ServiceCaller which simplifies a number of these issues. I also thought it was worth looking at what the criticisms were because they made valid points.

The first observation is that it’s possible to use both DS and ServiceTracker to track ServiceReferences instead. In this mode, the services aren’t triggered by default; instead, they only get accessed upon resolving the ServiceTracker using the getService() call. This isn’t the default out of the box, because you have to write a ServiceTrackerCustomizer adapter that intercepts the addingService() call to wrap the ServiceReference for future use. In other words, if you change:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);

to the slightly more verbose:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class,
  new ServiceTrackerCustomizer<Runnable, Wrapped<Runnable>>() {
    public Wrapped<Runnable> addingService(ServiceReference<Runnable> ref) {
      return new Wrapped<>(ref, bundleContext);
    }
    public void modifiedService(ServiceReference<Runnable> ref, Wrapped<Runnable> service) {
    }
    public void removedService(ServiceReference<Runnable> ref, Wrapped<Runnable> service) {
    }
  });

static class Wrapped<T> {
  private ServiceReference<T> ref;
  private BundleContext context;
  public Wrapped(ServiceReference<T> ref, BundleContext context) {
    this.ref = ref;
    this.context = context;
  }
  public T getService() {
    try {
      return context.getService(ref);
    } finally {
      context.ungetService(ref);
    }
  }
}

Obviously, no practical code uses this approach because it’s too verbose, and if you’re in an environment where DS services aren’t widely used, the benefits of the deferred approach are outweighed by the quantity of additional code that needs to be written in order to implement this pattern.

(The code above is also slightly buggy; we’re getting the service, returning it, then ungetting it afterwards. We should really just be using it during that call instead of returning it in that case.)

Introducing ServiceCaller

This is where ServiceCaller comes in.

The approach of the ServiceCaller is to optimise out the over-eager dereferencing of the ServiceTracker approach, and apply a functional approach to calling the service when required. It also has a mechanism to do single-shot lookups and calling of services; helpful, for example, when logging an obscure error condition or other rarely used code path.

This allows us to elegantly call functional interfaces in a single line of code:

Class<?> callerClass = getClass();
ServiceCaller.callOnce(callerClass, Runnable.class, Runnable::run);

This call looks for Runnable service types, as visible from the caller class, and then invokes the given function, as a lambda, on the resolved service. We can use a method reference (as in the above case) or supply a Consumer<T>, which will be passed the service that is resolved from the lookup.

Importantly, this call doesn’t acquire the service until the callOnce call is made. So, if you have an expensive logging factory, you don’t have to initialise it until the first time it’s needed – and even better, if the error condition never occurs, you never need to look it up. This is in direct contrast to the ServiceTracker approach (which actually needs more characters to type) that accesses the services eagerly, and is an order of magnitude better than having to write a ServiceTrackerCustomiser for the purposes of working around a broken API.

However, note that such one-shot calls are not the most efficient way of doing this, especially if it is to be called frequently. So the ServiceCaller has another mode of operation; you can create a ServiceCaller instance, and hang onto it for further use. Like its single-shot counterpart, this will defer the resolution of the service until needed. Furthermore, once resolved, it will cache that instance so you can repeatedly re-use it, in the same way that you could do with the service returned from the ServiceTracker.

private ServiceCaller<Runnable> service;
public void start(BundleContext context) {
  this.service = new ServiceCaller<>(getClass(), Runnable.class);
}
public void stop(BundleContext context) {
  this.service.unget();
}
public void doSomething() {;
}

This doesn’t involve significantly more effort than using the ServiceTracker that’s widely in use in Eclipse Activators at the moment, yet will defer the lookup of the service until it’s actually needed. It’s obviously better than writing many lines of ServiceTrackerCustomiser and performs better as a result, and is in most cases a type of drop-in replacement. However, unlike ServiceTracker (which returns you a service that you can then do something with afterwards), this call provides a functional consumer interface that allows you to pass in the action to take.

Wrapping up

We’ve looked at why ServiceTracker has problems with eager instantiation of services, and the complexity of code required to do it the right way. A scan of the Eclipse codebase suggests that outside of Equinox, there are very few uses of ServiceTrackerCustomiser and there are several hundred calls to ServiceTracker(xxx,yyy,null) – so there’s a lot of improvements that can be made fairly easily.

This pattern can also be used to push down the acquisition of the service from a generic Plugin/Activator level call to where it needs to be used. Instead of standing this up in the BundleActivator, the ServiceCaller can be used anywhere in the bundle’s code. This is where the real benefit comes in; by packaging it up into a simple, functional consumer, we can use it to incrementally rid ourselves of the various BundleActivators that take up the majority of Eclipse’s start-up.

A final note on the ServiceCaller – it’s possible that when you run the callOnce method (or the call method if you’re holding on to it) a service instance won’t be available. If that’s the case, the call method returns false; if a service is found and processed, it returns true. For some operations, a no-op is fine behaviour if the service isn’t present – for example, if there’s no LogService then you’re probably going to drop the log event anyway – but the return value allows you to take whatever corrective action you need.

It does mean that if you want to capture return state from the method call then you’ll need an alternative approach. The easiest way is to have a final Object[] result = new Object[1]; before the call, and then the lambda can assign the return value to the array. That’s because local state captured by lambdas needs to be a final reference, but a final reference to a mutable single-element array allows us to poke a single value back. You could of course use a different class for the array, depending on your requirements.
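As a plain-Java illustration of that capture trick – note that the call helper below is a stand-in for a call(Consumer) style API, not the real ServiceCaller:

```java
import java.util.function.Consumer;

public class CaptureResult {
  // Stand-in for ServiceCaller.call: hands a service to the consumer and
  // returns true when a service was found and processed.
  static boolean call(Consumer<Runnable> consumer) {
    consumer.accept(() -> { /* the "service" */ });
    return true;
  }

  public static void main(String[] args) {
    // A final reference to a mutable one-element array lets the lambda
    // poke a result back out to the enclosing scope.
    final Object[] result = new Object[1];
    boolean found = call(service -> result[0] = "computed");
    System.out.println(found + ":" + result[0]);
  }
}
```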

So, we have seen that ServiceCaller is better than ServiceTracker, but can we do even better than that? We certainly can, and that’s the purpose of the next post.

July 07, 2020 08:00 PM

Why ServiceTracker is Bad (for DS)

July 02, 2020 08:00 PM

In a presentation I gave at EclipseCon Europe in 2016, I noted that there were problems when using ServiceTracker; on slide 37 of my presentation I noted that the open() call:

  • is a blocking call
  • results in DS activating services

Unfortunately, not everyone agrees because it seems insane that ServiceTracker should do this.

Unfortunately, ServiceTracker is insane.

The advantage of Declarative Services (aka SCR, although no-one calls it that) is that you can register services declaratively, but more importantly, the DS runtime will present the existence of the service but defer instantiation of the component until it’s first requested.

The great thing about this is that you can have a service which does many class loads or timely actions and defer its use until the service is actually needed. If your service isn’t required, then you don’t pay the cost for instantiating that service. I don’t think there’s any debate that this is a Good Thing and everyone, so far, is happy.


The problem, specifically when using ServiceTracker, is that you have to follow a multi-step process to use it:

  1. You create a ServiceTracker for your particular service class
  2. You call open() on it to start looking for services
  3. Time passes
  4. You acquire the service from the ServiceTracker to do something with it

There is a generally held mistaken belief that the DS component is not instantiated until you hit step 4 in the above. After all, if you’re calling the service from another component – or even looking up the ServiceReference yourself – that’s what would happen.

What actually happens is that the DS component is instantiated in step 2 above. That’s because the open() call – which is nicely thread-safe, by the way, in the way that getService() isn’t – starts looking for services and then caches the initially tracked services, which causes DS to instantiate the component for you. Since DS components often have a default, no-arg constructor, this generally escapes most people’s attention.

If your component’s constructor – or, more importantly, the fields therein – causes many classes to be loaded or performs substantial work or calculation, the fact that you’re hitting a synchronized call can take a non-trivial amount of time. And since this is typically in an Activator.start() method, it means that your nicely delay-until-it’s-needed component is now on the critical path of this bundle’s start-up, despite not actually needing the service right now.
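The cost difference is easy to see in plain Java. This is not the OSGi API, just a sketch of eager versus deferred construction using a Supplier, with a counter standing in for expensive class loading:

```java
import java.util.function.Supplier;

public class EagerVsLazy {
  static int constructed = 0;

  // Stands in for a component whose constructor loads many classes.
  static class ExpensiveComponent {
    ExpensiveComponent() { constructed++; }
  }

  public static void main(String[] args) {
    // Eager: the cost is paid immediately, like open() forcing DS instantiation.
    ExpensiveComponent eager = new ExpensiveComponent();
    System.out.println("after eager: " + constructed);

    // Deferred: nothing is constructed until first use, like lazy DS activation.
    Supplier<ExpensiveComponent> lazy = ExpensiveComponent::new;
    System.out.println("before use: " + constructed);
    lazy.get();
    System.out.println("after use: " + constructed);
  }
}
```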

This is one of the main problems in Eclipse’s start-up; many, many thousands of classes are loaded too eagerly. I’ve been working over the years to try and reduce the problem but it’s an uphill struggle and bad patterns (particularly the use of Activator) are endemic in a non-trivial subset of the Eclipse ecosystem. Of course, there are many fine and historical reasons why this is the case, not the least of which is that we didn’t start shipping DS in the Eclipse runtime until fairly recently.

Repo repro

Of course, when you point this out, not everyone is aware of this subtle behaviour. And while opinions may differ, code does not. I have put together a sample project which has two bundles:

  • Client, which has an Activator (yeah I know, I’m using it to make a point) that uses a ServiceTracker to look for Runnable instances
  • Runner, which has a DS component that provides a Runnable interface

When launched together, as soon as the open() method is called, you can see the console printing the "Component has been instantiated" message. This is despite the Client bundle never actually using the service that the ServiceTracker causes to be obtained.

If you run it with the system property -DdisableOpen=true, open() is not called, and the component is not instantiated.

This is a non-trivial reason why Eclipse startup can be slow. There are many, many uses of ServiceTracker to reach out to other parts of the system, and regardless of whether these are lazy DS components or have been actively instantiated, the use of open() causes them all to be eagerly activated, even before they’re needed. We can migrate Eclipse’s services to DS (and in fact, I’m working on doing just that) but until we eliminate the ServiceTracker from various Activators, we won’t see the benefit.

The code in the github repository essentially boils down to:

public void start(BundleContext bundleContext) throws Exception {
  serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);
  if (!Boolean.getBoolean("disableOpen")) {
; // This will cause a DS component to be instantiated even though we don't use it
  }
}

Unfortunately, there’s no way to use ServiceTracker to listen to lazily activated services, and as an OSGi standard, the behaviour is baked in to it.

Fortunately, there’s a lighter-weight tracker you can use called ServiceCaller – but that’s a topic for another blog post.


Using open() on a ServiceTracker will cause lazily instantiated DS components to be activated eagerly, before the service is used. Instead of using ServiceTracker, try moving your service out to a DS component, and then DS will do the right thing.

July 02, 2020 08:00 PM

How to install RDi in the latest version of Eclipse

by Wim at June 30, 2020 03:57 PM

Monday, June 29, 2020
In this blog, I am going to show you how to install IBM RDi into the latest and the greatest version of Eclipse. If you prefer to watch a video then scroll down to the end. **EDIT** DOES NOT WORK WITH ECLIPSE 2020/09 AND HIGHER.

Read more

by Wim at June 30, 2020 03:57 PM

Quarkus – Supersonic Subatomic IoT

by Jens Reimann at June 30, 2020 03:22 PM

Quarkus is advertised as a “Kubernetes Native Java stack, …”, so we put it to the test and checked what benefits we can get by replacing an existing service from the IoT components of EnMasse, the cloud-native, self-service messaging system.

The context

For quite a while, I wanted to try out Quarkus. I wanted to see what benefits it brings us in the context of EnMasse. The IoT functionality of EnMasse is provided by Eclipse Hono™, which is a micro-service based IoT connectivity platform. Hono is written in Java, makes heavy use of Vert.x, and the application startup and configuration are orchestrated by Spring Boot.

EnMasse provides the scalable messaging back-end, based on AMQP 1.0. It also takes care of the Eclipse Hono deployment alongside EnMasse, wiring up the different services based on an infrastructure custom resource. In a nutshell, you create a snippet of YAML, and EnMasse takes care and deploys a messaging system for you, with first-class support for IoT.

Architecture diagram, explaining the tenant service.
Architectural overview – showing the Tenant Service

This system requires a service called the “tenant service”. That service is responsible for looking up an IoT tenant whenever the system needs to validate that a tenant exists or when its configuration is required. Like all the other services in Hono, this service is implemented using the default stack, based on Java, Vert.x, and Spring Boot. Most of the implementation is based on Vert.x alone, using its reactive and asynchronous programming model. Spring Boot is only used for wiring up the application, using dependency injection and configuration management. So this isn’t a typical Spring Boot application; it is neither using Spring Web nor any of the Spring Messaging components. And the reason for choosing Vert.x over Spring in the past was performance. Vert.x provides excellent performance, which we tested a while back in our IoT scale test with Hono.

The goal

The goal was simple: make it use fewer resources while providing the same functionality. We didn’t want to re-implement the whole service from scratch. And while the tenant service is specific to EnMasse, it still uses quite a lot of the base functionality coming from Hono. And we wanted to re-use all of that, as we did with Spring Boot. So this wasn’t one of those nice “greenfield” projects, where you can start from scratch, with a nice and clean “Hello World”. This code is embedded in two bigger projects, passes system tests, and has a history of its own.

So, change as little as possible and get out as much as we can. What else could it be?! And just to understand from where we started, here is a screenshot of the metrics of the tenant service instance on my test cluster:

Screenshot of original resource consumption.
Metrics for the original Spring Boot application

Around 200MiB of RAM, a little bit of CPU, and not much to do. As mentioned before, the tenant service only gets queries to verify the existence of a tenant, and the system will cache this information for a bit.

Step #1 – Migrate to Quarkus

To use Quarkus, we started to tweak our existing project, to adopt the different APIs that Quarkus uses for dependency injection and configuration. And to be fair, that mostly meant saying good-bye to Spring Boot specific APIs, going for something more open. Dependency Injection in Quarkus comes in the form of CDI. And Quarkus’ configuration is based on Eclipse MicroProfile Config. In a way, we didn’t migrate to Quarkus, but away from Spring Boot specific APIs.

First steps

We started by adding the Quarkus Maven plugin and some basic dependencies to our Maven build, and off we went.
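For reference, the Maven side of that looks roughly like the following sketch; the version property is illustrative, so check the Quarkus documentation for current coordinates:

```xml
<!-- Sketch of wiring the Quarkus Maven plugin into an existing build;
     the version property is illustrative. -->
<plugin>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-maven-plugin</artifactId>
  <version>${quarkus.version}</version>
  <executions>
    <execution>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```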

And while replacing dependency injection was a rather smooth process, the configuration part was a bit more tricky. Both Hono and MicroProfile Config have a rather opinionated view of configuration, which made it problematic to enhance the Hono configuration in a way that made MicroProfile happy. So for the first iteration, we ended up wrapping the Hono configuration classes to make them play nice with MicroProfile. However, this is something that we intend to improve in Hono in the future.

Packaging the JAR into a container was no different than with the existing version. We only had to adapt the EnMasse operator to provide application arguments in the form Quarkus expected them.

First results

From a user perspective, nothing has changed. The tenant service still works the way it is expected to work and provides all the APIs as it did before. Just running with the Quarkus runtime, and the same JVM as before:

Screenshot of resource consumption with Quarkus in JVM mode.
Metrics after the conversion to Quarkus, in JVM mode

We can directly see a drop of 50MiB, from 200MiB to 150MiB of RAM; that isn’t bad. CPU isn’t really different, though. There is also a slight improvement in startup time, from ~2.5 seconds down to ~2 seconds. But that isn’t a real game-changer, I would say, considering that ~2.5 seconds of startup time is actually not too bad for a Spring Boot application; other services take much longer.

Step #2 – The native image

Everyone wants to do Java “native compilation”. I guess the expectation is that native compilation makes everything go much faster. There are different tests by different people, comparing native compilation and JVM mode, and the outcomes vary a lot. I don’t think that “native images” are a silver bullet to performance issues, but still, we have been curious to give it a try and see what happens.

Native image with Quarkus

Enabling native image mode in Quarkus is trivial. You need to add a Maven profile, set a few properties and you have native image generation enabled. With setting a single property in the Maven POM file, you can also instruct the Quarkus plugin to perform the native compilation step in a container. With that, you don’t need to worry about the GraalVM installation on your local machine.
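A minimal sketch of such a profile follows; the property names are my understanding of the Quarkus configuration, so treat them as assumptions to verify against the Quarkus documentation:

```xml
<profile>
  <id>native</id>
  <properties>
    <!-- Tell Quarkus to produce a native image... -->
    <quarkus.package.type>native</quarkus.package.type>
    <!-- ...and to run GraalVM inside a container, so no local installation is needed. -->
    <quarkus.native.container-build>true</quarkus.native.container-build>
  </properties>
</profile>
```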

Native image generation can be tricky, we knew that. However, we didn’t expect this to be as complex as being “Step #2”. In a nutshell, creating a native image compiles your code to CPU instructions, rather than JVM bytecode. In order to do that, it traces the call graph, and it fails to do so when it comes to reflection in Java. GraalVM supports reflection, but you need to provide the information about the types, classes, and methods that participate in the reflection system from the outside. Luckily, Quarkus provides tooling to generate this information during the build. Quarkus knows about constructs like de-serialization in Jackson and can generate the required information for GraalVM to compile this correctly.
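For the cases that tooling can’t detect, GraalVM accepts the reflection information as a JSON configuration file. A hedged sketch, with a hypothetical class name, looks like this:

```json
[
  {
    "name": "com.example.tenant.TenantDto",
    "allDeclaredConstructors": true,
    "allDeclaredMethods": true,
    "allDeclaredFields": true
  }
]
```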

However, the magic only works in areas that Quarkus is aware of. So we did run into some weird issues, strange behavior that was hard to track down. Things that worked in JVM mode all of a sudden were broken in native image mode. Not all the hints are in the documentation. And we also didn’t read (or understand) all of the hints that are there. It takes a bit of time to learn, and with a lot of help from some colleagues (many thanks to Georgios, Martin, and of course Dejan for all the support), we got it running.

What is the benefit?

After all the struggle, what did it give us?

Screenshot of resource consumption with Quarkus in native image mode.
Metrics when running as native image Quarkus application

So, we are down another 50MiB of RAM. Starting from ~200MiB, down to ~100MiB. That is only half the RAM! Also, this time, we see a reduction in CPU load. While in JVM mode (both Quarkus and Spring Boot), the CPU load was around 2 millicores, now the CPU is always below that, even during application startup. Startup time is down from ~2.5 seconds with Spring Boot, to ~2 seconds with Quarkus in JVM mode, to ~0.4 seconds for Quarkus in native image mode. Definitely an improvement, but still, neither of those times is really bad.

Pros and cons of Quarkus

Switching to Quarkus was no problem at all. We found a few areas in the Hono configuration classes to improve. But in the end, we can keep the original Spring Boot setup and have Quarkus at the same time. Possibly other Microprofile compatible frameworks as well, though we didn’t test that. Everything worked as before, just using less memory. And except for the configuration classes, we could pretty much keep the whole application as it was.

Native image generation was more complex than expected. However, we also saw some real benefits. And while we didn’t do any performance tests on that, here is a thought: if the service has the same performance as before, the fact that it requires only half the memory and half the CPU cycles allows us to run twice the number of instances now, doubling throughput as we scale horizontally. I am really looking forward to another scale test, since we did all other kinds of optimizations as well.

You should also consider that the process of building a native image takes quite a lot of time. For this rather simple service, it takes around 3 minutes on an above-average machine, just to build the native image. I did notice some decent improvement when trying out GraalVM 20.0 over 19.3, so I would expect more improvements in the toolchain over time. Things like hot code replacement while debugging are not possible with the native image profile, though. It is a different workflow, and that may take a bit to adapt to. However, you don’t need to commit to either way. You can still have both at the same time. You can work with JVM mode and the Quarkus development mode, and then enable the native image profile whenever you are ready.

Taking a look at the size of the container images, I noticed that the native image isn’t smaller (~85 MiB), compared to the uber-JAR file (~45 MiB). Then again, our “java base” image alone is around ~435 MiB. And it only adds the JVM on top of the Fedora minimal image. As you don’t need the JVM when you have the native image, you can go directly with the Fedora minimal image, which is around ~165 MiB, and end up with a much smaller overall image.


Switching our existing Java project to Quarkus wasn’t a big deal. It required some changes, yes. But those changes also mean using more open APIs, governed by the Eclipse Foundation’s development process, compared to using Spring Boot specific APIs. And while you can still use Spring Boot, changing the configuration to Eclipse MicroProfile opens up other possibilities as well. Not only Quarkus.

Just by taking a quick look at the numbers, comparing the figures from Spring Boot to Quarkus with native image compilation: RAM consumption was down to 50% of the original, CPU usage also was down to at least 50% of original usage, and the container image shrank to ~50% of the original size. And as mentioned in the beginning, we have been using Vert.x for all the core processing. Users that make use of the other Spring components should see more considerable improvement.

Going forward, I hope we can bring the changes we made to the next versions of EnMasse and Eclipse Hono. There is a real benefit here, and it provides you with some awesome additional choices. And in case you don’t like to choose, the EnMasse operator has some reasonable defaults for you 😉

Also see

This work is based on the work of others. Many thanks to:

The post Quarkus – Supersonic Subatomic IoT appeared first on ctron's blog.

by Jens Reimann at June 30, 2020 03:22 PM

Updates to the Eclipse IP Due Diligence Process

by waynebeaton at June 25, 2020 07:23 PM

In October 2019, The Eclipse Foundation’s Board of Directors approved an update to the IP Policy that introduces several significant changes in our IP due diligence process. I’ve just pushed out an update to the Intellectual Property section in the Eclipse Foundation Project Handbook.

I’ll apologize in advance that the updates are still a little rough and require some refinements. Like the rest of the handbook, we continually revise and rework the content based on your feedback.

Here’s a quick summary of the most significant changes.

License certification only for third-party content. This change removes the requirement to perform deep copyright and provenance review and scanning for anomalies for third-party content unless it is being modified and/or there are special considerations regarding the content. Instead, the focus for third-party content is on license compatibility only, which had previously been referred to as Type A due diligence.

Leverage other sources of license information for third-party content. With this change to license certification only for third-party content, we are able to leverage existing sources of license information. That is, the requirement that the Eclipse IP Team personally review every bit of third-party content has been removed and we can now leverage other trusted sources.

ClearlyDefined is a trusted source of license information. We currently have two trusted sources of license information: The Eclipse Foundation’s IPZilla and ClearlyDefined. The IPZilla database has been painstakingly built over most of the lifespan of the Eclipse Foundation; it contains a vast wealth of deeply vetted information about many versions of many third-party libraries. ClearlyDefined is an OSI project that combines automated harvesting of software repositories and curation by trusted members of the community to produce a massive database of license (and other) information about content.

Piggyback CQs are no longer required. CQs had previously been used for tracking both the vetting process and the use of third-party content. With the changes, we are no longer required to track the use of third-party content using CQs, so piggyback CQs are no longer necessary.

Parallel IP is used in all cases. Previously, our so-called Parallel IP process, the means by which project teams could leverage content during development while the IP Team completed their due diligence review, was available only to projects in the incubation phase and only for content with specific conditions. This is no longer the case: full vetting is now always applied in parallel, in all cases.

CQs are not required for third-party content in all cases. In the case of third-party content due diligence, CQs are now only used to track the vetting process.

CQs are no longer required before third-party content is introduced. Previously, the IP Policy required that all third-party content must be vetted by the Eclipse IP Team before it can be used by an Eclipse Project. The IP Policy updates turn this around. Eclipse project teams may now introduce new third-party content during a development cycle without first checking with the IP Team. That is, a project team may commit build scripts, code references, etc. to third-party content to their source code repository without first creating a CQ to request IP Team review and approval of the third-party content. At least during the development period between releases, the onus is on the project team to, with reasonable confidence, ensure any third-party content that they introduce is license compatible with the project’s license. Before any content may be included in any formal release the project team must engage in the due diligence process to validate that the third-party content licenses are compatible with the project license.

History may be retained when an existing project moves to the Eclipse Foundation. We had previously required that the commit history for a project moving to the Eclipse Foundation be squashed and that the initial contribution be the very first commit in the repository. This is no longer the case; existing projects are now encouraged (but not required) to retain their commit history. The initial contribution must still be provided to the IP Team via CQ as a snapshot of the HEAD state of the existing repository (if any).

The due diligence process for project content is unchanged.

If you notice anything that looks particularly wrong or troubling, please either open a bug report, or send a note to EMO.

by waynebeaton at June 25, 2020 07:23 PM

Eclipse JustJ

by Ed Merks ( at June 25, 2020 08:18 AM

I've recently completed the initial support for provisioning the new Eclipse JustJ project, complete with a logo for it.

I've learned several new technologies and honed existing technology skills to make this happen. For example, I've previously used Inkscape to create nicer images for Oomph; a *.png with alpha is much better than a *.gif with a transparent pixel, particularly with the vogue, dark-theme fashion trend, which for old people like me feels more like the old days of CRT monitors than something modern, but hey, to each their own. In any case, a *.svg is cool, definitely looks great at every resolution, and can easily be rendered to a *.png.

By the way, did you know that artwork derivative of Eclipse artwork requires special approval? Previously the Eclipse Board of Directors had to review and approve such logos, but now our beloved, supreme leader, Mike Milinkovich, is empowered to do that personally.

Getting to the point where we can redistribute JREs at Eclipse has been a long and winding road. This of course required Board approval, and your elected Committer Representatives helped push that to fruition last year. Speaking of which, there is an exciting late-breaking development: the move of AdoptOpenJDK to Eclipse Adoptium. This will be an important source of JREs for JustJ!

One of the primary goals of JustJ is to provide JREs via p2 update sites such that a product build can easily incorporate a JRE into the product. With that in place, the product runs out-of-the-box regardless of the JRE installed on the end-user's computer, which is particularly useful for products that are not Java-centric where the end-user doesn't care about the fact that Eclipse is implemented using Java.  This will also enable the Eclipse Installer to run out-of-the-box and will enable the installer to create an installation that, at the user's discretion, uses a JRE provided by Eclipse. In all cases, this includes the ability to update the installation's embedded JRE as new ones are released.

The first stage is to build a JRE from a JDK using jlink.  This must run natively on the JDK's actual supported operating system and hardware architecture.  Of course we want to automate this step, and all the steps involved in producing a p2 repository populated with JREs.  This is where I had to learn about Jenkins pipeline scripts.  I'm particularly grateful to Mikaël Barbero for helping me get started with a simple example.  Now I am a pipeline junkie, and of course I had to learn Groovy as well.
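That first stage can be sketched in plain Java: since Java 9, the JDK's own jlink tool is exposed in-process through java.util.spi.ToolProvider. This is only an illustrative sketch, not JustJ's actual build script; the module list and flags are my assumptions:

```java
import java.util.spi.ToolProvider;

public class JlinkSketch {
    public static int buildJre(String outputDir) {
        // Locate the jlink tool shipped with the running JDK (Java 9+).
        ToolProvider jlink = ToolProvider.findFirst("jlink")
                .orElseThrow(() -> new IllegalStateException("jlink not found; run on a JDK"));
        // Produce a trimmed runtime image containing only java.base.
        // A real JRE for running Eclipse would list many more modules.
        return jlink.run(System.out, System.err,
                "--add-modules", "java.base",
                "--strip-debug", "--no-header-files", "--no-man-pages",
                "--output", outputDir);
    }

    public static void main(String[] args) {
        int code = buildJre(args.length > 0 ? args[0] : "justj-jre");
        System.out.println("jlink exit code: " + code);
    }
}
```

A real JustJ JRE would of course include far more modules, and the pipeline runs this per platform on the matching agents.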

In the initial stage, we generate the JREs themselves, and that involves using shell scripts effectively. I'm not a big fan of shell scripts, but they're a necessary evil. I authored a single script that produces JREs on all the supported operating systems, one that I can run locally on Windows and on my two virtual boxes as well. The pipeline itself needs to run certain stages on specific agents so that their steps are performed on the appropriate operating system and hardware. I'm grateful to Robert Hilbrich of DLR for supporting JustJ's builds with their organization's resource packs! He's also been kind enough to be one of our first test guinea pigs building a product with a JustJ JRE. The initial stage produces a set of JREs.

In the next stage, JREs need to be wrapped into plugins and features to produce a p2 repository via a Maven/Tycho build. This is a huge amount of boilerplate scaffolding that is error-prone to author and challenging to maintain, especially when providing multiple JRE flavors. So of course we want to automate the generation of this scaffolding as well. Naturally, if we're going to generate something, we need a model to capture the boiled-down essence of what needs to be generated. So I whipped together an EMF model and used JET templates to sketch out the scaffolding. With the super cool JET Editor, these are really easy to author and maintain. This stage is described in the documentation and produces a p2 update site. The sites are automatically maintained, and the index pages are automatically generated.

To author nice documentation I had to learn PHP much better. It's really quite cool and very powerful, particularly for producing pages with dynamic content. For example, I used it to implement more flexible browsing support, so that one can really see all the files present, even when there is an index.html or index.php in the folder. In any case, there is now lots of documentation for JustJ describing everything in detail, and it was authored with the help of PHP scaffolding.

Last but not least, there is an Oomph setup to automate the provisioning of a full development environment, along with a tutorial that describes everything in that workspace in detail. There's no excuse not to contribute. While authoring this tutorial, I found that creating nice, appropriately-clipped screen captures is super annoying and very time consuming, so I dropped a little goodie into Oomph to make that easier. You might want to try it. Just add "-Dorg.eclipse.oomph.ui.screenshot=<some-folder-location>" to your eclipse.ini to enable it. Then, if you hit Ctrl twice quickly, screen captures will be produced immediately based on where your application currently has focus. If you hit Shift twice quickly, screen captures will be produced after a short delay. This allows you to bring up a menu from the menu bar, from a toolbar button, or a context menu, and capture that menu. In all cases, the captures include the "simulated" mouse cursor and start with the "focus", expanding outward to the full enclosing window.

The bottom line: JustJ generates everything given just a set of URLs to JDKs as input, and it maintains everything automatically. It even provides an example of how to build a product with an embedded JRE to get you started quickly. And thanks to some test guinea pigs, we know it really works as advertised.

On the personal front, during this time period I finished my move to Switzerland. Getting up early here is a feast for the eyes! The movers were scurrying around my apartment on the same day as the 2020-06 release, which was also the same day as one of the Eclipse Board meetings. That was a little too much to juggle at once!

At this point, I can make anything work and I can make anything that already works work even better. Need help with something?  I'm easy to find...

by Ed Merks at June 25, 2020 08:18 AM

Clean Sheet Service Update (0.8)

by Frank Appel at May 23, 2020 09:25 AM

Written by Frank Appel

Thanks to a community contribution we’re able to announce another Clean Sheet Service Update (0.8).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real-world setup using the Clean Sheet theme (click on the image to enlarge). Eclipse IDE Look and Feel: Clean Sheet Screenshot For more information, please refer to the feature's landing page, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.


Clean Sheet Service Update (0.8)

This service update fixes a rendering issue with ruler numbers. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to issue #87 for more details.

Clean Sheet Installation

Drag the 'Install' link from the Eclipse Marketplace entry to your running Eclipse workspace (this requires the Eclipse Marketplace Client), or install manually:


Select Help > Install New Software.../Check for Updates.
P2 repository software site: @
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet


On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.8) appeared first on Code Affine.

by Frank Appel at May 23, 2020 09:25 AM

Clean Sheet Service Update (0.7)

by Frank Appel at April 24, 2020 08:49 AM

Written by Frank Appel

It’s been a while, but today we’re happy to announce a Clean Sheet Service Update (0.7).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real-world setup using the Clean Sheet theme (click on the image to enlarge). Eclipse IDE Look and Feel: Clean Sheet Screenshot For more information, please refer to the feature's landing page, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.


Clean Sheet Service Update (0.7)

This service update provides the long-overdue JRE 11 compatibility on Windows platforms. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to issues #88 and #90 for more details.

Clean Sheet Installation

Drag the 'Install' link from the Eclipse Marketplace entry to your running Eclipse workspace (this requires the Eclipse Marketplace Client), or install manually:


Select Help > Install New Software.../Check for Updates.
P2 repository software site: @
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet


On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.7) appeared first on Code Affine.

by Frank Appel at April 24, 2020 08:49 AM

Using the remote OSGi console with Equinox

by Mat Booth at April 23, 2020 02:00 PM

You may be familiar with the OSGi shell you get when you pass the "-console" option to Equinox on the command line. Did you know you can also use this console over Telnet sessions or SSH sessions? This article shows you the bare minimum needed to do so.
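As a rough illustration of the local and remote variants (the flags and bundle name below are recalled from the Equinox documentation rather than taken from the article, so treat the details as assumptions):

```
# Local interactive console on stdin/stdout:
java -jar org.eclipse.osgi_<version>.jar -console

# Same console listening on a local TCP port instead:
java -jar org.eclipse.osgi_<version>.jar -console 2223
# then connect with: telnet localhost 2223

# SSH access additionally requires the org.eclipse.equinox.console.ssh
# bundle (plus its JAAS configuration) to be installed and started.
```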

by Mat Booth at April 23, 2020 02:00 PM

EclipseCon 2020 CFP is Open

April 16, 2020 08:30 PM

If you are interested in speaking, our call for proposals is now open. Please visit the CFP page for information on how to submit your talk.

April 16, 2020 08:30 PM

Add Your Voice to the 2020 Jakarta EE Developer Survey

April 07, 2020 01:00 PM

Our third annual Jakarta EE Developer Survey is now open and I encourage everyone to take a few minutes and complete the survey before the April 30 deadline.

April 07, 2020 01:00 PM

Eclipse Oomph: Suppress Welcome Page

by kthoms at March 19, 2020 04:37 PM

I am frequently spawning Eclipse workspaces with Oomph setups, and the first thing I do when a new workspace is provisioned is close Eclipse’s welcome page. I wanted to suppress that for a current project setup, so I started searching for where Eclipse stores the preference that disables the intro page. The location of that preference is within the workspace directory at

.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

The content of the preference file is

eclipse.preferences.version=1
showIntro=false

So to make Oomph create the preference file before the workspace is started the first time, use a Resource Creation task and set the Target URL to

${workspace.location|uri}/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

Then put the above mentioned preference content as the Content value.

by kthoms at March 19, 2020 04:37 PM

MPS’ Quest of the Holy GraalVM of Interpreters

by Niko Stotz at March 11, 2020 11:19 PM

A vision how to combine MPS and GraalVM

Way too long ago, I prototyped a way to use GraalVM and Truffle inside JetBrains MPS. I hope to pick up this work soon. In this article, I describe the grand picture of what might be possible with this combination.

Part I: Get it Working

Step 0: Teach Annotation Processors to MPS

Truffle uses Java Annotation Processors heavily. Unfortunately, MPS doesn’t support them during its internal Java compilation. The feature request doesn’t show any activity.

So, we have to do it ourselves. A little less time ago, I started with an alternative Java Facet to include Annotation Processors. I just pushed my work-in-progress state from 2018. As far as I remember, there were no fundamental problems with the approach.

Optional Step 1: Teach Truffle Structured Sources

For Truffle, all executed programs stem from a Source. However, this Source can only provide Bytes or Characters. In our case, we want to provide the input model. The prototype just put the Node id of the input model as a String into the Source; later steps resolved the id against MPS API. This approach works and is acceptable; directly passing the input node as object would be much nicer.
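A toy sketch of that resolution step in plain Java; all names here are invented for illustration, and the real code would go through the MPS API rather than a plain map:

```java
import java.util.Map;

public class NodeIdResolution {
    // Stand-in for an MPS model node.
    record ModelNode(String id, String concept) {}

    // The Truffle Source carries only the node id as characters;
    // a registry on the MPS side resolves it back to the actual node.
    static ModelNode resolve(String sourceCharacters, Map<String, ModelNode> registry) {
        ModelNode node = registry.get(sourceCharacters.trim());
        if (node == null) {
            throw new IllegalArgumentException("unknown node id: " + sourceCharacters);
        }
        return node;
    }
}
```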

Step 2: Implement Truffle Annotations as MPS Language

We have to provide all additional hints as Annotations to Truffle. They are complex enough, so we want to leverage MPS’ language features to directly represent all Truffle concepts.

This might be a simple one-to-one representation of Java Annotations as MPS Concepts, but I’d guess we can add some more semantics and checks. Such feedback within MPS should simplify the next steps: Annotation Processors (and thus, Truffle) have only limited options to report issues back to us.

We use this MPS language to implement the interpreter for our DSL. This results in a TruffleLanguage for our DSL.

Step 3: Start Truffle within MPS

At the time when I wrote the proof-of-concept, a TruffleLanguage had to be loaded at JVM startup. To my understanding, Truffle overcame this limitation. I haven’t looked into the current possibilities in detail yet.

I can imagine two ways to provide our DSL interpreter to the Truffle runtime:

  1. Always register MpsTruffleLanguage1, MpsTruffleLanguage2, etc. as placeholders. This would also work at JVM startup. If required, we can register additional placeholders with one JVM restart.
    All non-colliding DSL interpreters would be MpsTruffleLanguage1 from Truffle’s point of view. This works, as we know the MPS language for each input model, and can make sure Truffle uses the right evaluation for the node at hand. We might suffer a performance loss, as Truffle would have to manage more evaluations.

    What are non-colliding interpreters? Assume we have a state machine DSL, an expression DSL, and a test DSL. The expression DSL is used within the state machines; we provide an interpreter for both of them.
    We provide two interpreters for the test DSL: One executes the test and checks the assertions, the other one only marks model nodes that are covered by the test.
    The state machine interpreter, the expression interpreter, and the first test interpreter are non-colliding, as they never want to execute on the same model node. All of them go to MpsTruffleLanguage1.
    The second test interpreter does collide, as it wants to do something with a node also covered by the other interpreters. We put it to MpsTruffleLanguage2.

  2. We register every DSL interpreter as a separate TruffleLanguage. Nice and clean one-to-one relation. In this scenario, we would probably have to get Truffle Language Interop right. I have not yet investigated this topic.
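The first option essentially amounts to a dispatch table inside each placeholder language. A toy sketch with invented names:

```java
import java.util.Map;
import java.util.function.ToIntFunction;

public class PlaceholderDispatch {
    // Stand-in for an input model node, tagged with its MPS language.
    record InputNode(String mpsLanguage, int payload) {}

    // One registered placeholder ("MpsTruffleLanguage1") hosts several
    // non-colliding interpreters, keyed by the MPS language of the node.
    private final Map<String, ToIntFunction<InputNode>> evaluators;

    public PlaceholderDispatch(Map<String, ToIntFunction<InputNode>> evaluators) {
        this.evaluators = evaluators;
    }

    public int evaluate(InputNode node) {
        ToIntFunction<InputNode> evaluator = evaluators.get(node.mpsLanguage());
        if (evaluator == null) {
            throw new IllegalStateException("no interpreter for " + node.mpsLanguage());
        }
        return evaluator.applyAsInt(node);
    }
}
```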

Step 4: Translate Input Model to Truffle Nodes

A lot of Truffle’s magic stems from its AST representation. Thus, we need to translate our input model (a.k.a. DSL instance, a.k.a. program to execute) from MPS nodes into Truffle Nodes.
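As a deliberately simplified illustration of such a translation, in plain Java: MpsNode and ExecNode are stand-ins invented for this sketch, whereas real Truffle nodes subclass com.oracle.truffle.api.nodes.Node.

```java
import java.util.List;

public class AstTranslation {
    // Stand-in for an MPS node: a concept name plus child nodes.
    record MpsNode(String concept, List<MpsNode> children) {}

    // Stand-in for a Truffle node: something executable over its children.
    interface ExecNode { int execute(); }

    record Literal(int value) implements ExecNode {
        public int execute() { return value; }
    }
    record Add(ExecNode left, ExecNode right) implements ExecNode {
        public int execute() { return left.execute() + right.execute(); }
    }

    // Recursive translation from the input model to the executable AST.
    static ExecNode translate(MpsNode node) {
        return switch (node.concept()) {
            case "plus" -> new Add(translate(node.children().get(0)),
                                   translate(node.children().get(1)));
            default -> new Literal(Integer.parseInt(node.concept()));
        };
    }
}
```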

Ideally, the Truffle AST would dynamically adopt any changes of the input model — like hot code replacement in a debugger, except we don’t want to stop the running program. From Truffle’s point of view this shouldn’t be a problem: It rewrites the AST all the time anyway.

DclareForMPS seems a fitting technology. We define mapping rules from MPS node to Truffle Node. Dclare makes sure they are in sync, and input changes are propagated optimally. These rules could either be generic, or be generated from the interpreter definition.

We need to take care that Dclare doesn’t try to adapt the MPS nodes to Truffle’s optimizing AST changes (no back-propagation).

We require special handling for edge cases of MPS → Truffle change propagation, e.g. the user deletes the currently executed part of the program.

For memory optimization, we might translate only the entry nodes of our input model immediately. Instead of the actual child Truffle Nodes, we’d add special nodes that translate the next part of the AST.
Unloading the not required parts might be an issue. Also, on-demand processing seems to conflict with Dclare’s rule-based approach.

Part II: Adapt to MPS

Step 5: Re-create Interpreter Language

The MPS interpreter framework removes even more boilerplate from writing interpreters than Truffle does. The same language concepts should be built again as an abstraction on top of the Truffle Annotation DSL. This would be a new language aspect.

Step 6: Migrate MPS Interpreter Framework

Once we have the Truffle-based interpreter language, we want to use it! Also, we don’t want to rewrite all our nice interpreters.

I think it’s feasible to automatically migrate at least large parts of the existing MPS interpreter framework to the new language. I would expect some manual adjustment, though. That’s the price we would have to pay for a two-orders-of-magnitude performance improvement.

Step 7: Provide Plumbing for BaseLanguage, Checking Rules, Editors, and Tests

Using the interpreter should be as easy as possible. Thus, we have to provide the appropriate utilities:

  • Call the interpreter from any BaseLanguage code.
    We would have to make sure we get language / model loading and dependencies right. This should be easier with Truffle than with the current interpreter, as most language dependencies are only required at interpreter build time.
  • Report interpreter results in Checking Rules.
    Creating warnings or errors based on the interpreter’s results is a standard use-case, and should be supported by dedicated language constructs.
  • Show interpreter results in an editor.
    As another standard use-case, we might want to show the interpreter’s results (or a derivative) inside an MPS editor. Especially for long-running or asynchronous calculations, getting this right is tricky. Dedicated editor extensions should take care of the details.
  • Run tests that involve the interpreter.
    Yet another standard use-case: our DSL defines both calculation rules and examples. We want to assure they are in sync, meaning executing the rules in our DSL interpreter and comparing the results with the examples. This must work both inside MPS, and in a headless build / CI test environment.

Step 8: Support Asynchronous Interpretation and/or Caching

The simple implementation of interpreter support accepts a language, parameters, and a program (a.k.a. input model), and blocks until the interpretation is complete.

This working mode is useful in various situations. However, we might want to run long-running interpretations in the background, and notify a callback once the computation is finished.

Example: An MPS editor uses an interpreter to color a rule red if it is not in accordance with a provided example. This interpretation result is very useful, even if it takes several seconds to calculate. However, we don’t want to block the editor (or even whole MPS) for that long.

Extending the example, we might also want to show an error on such a rule. The typesystem runs asynchronously anyways, so blocking is not an issue. However, we now run the same expensive interpretation twice. The interpreter support should provide configurable caching mechanisms to avoid such waste.
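The caching idea can be sketched as a plain-Java memoizing wrapper; the names are invented for illustration, and the real implementation would live in the interpreter support:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class InterpreterCache<K, V> {
    private final Map<K, CompletableFuture<V>> cache = new ConcurrentHashMap<>();
    private final Function<K, V> interpreter;

    public InterpreterCache(Function<K, V> interpreter) {
        this.interpreter = interpreter;
    }

    // Both the editor and the typesystem ask for the same key; the expensive
    // interpretation runs once, and both callers share the pending future.
    public CompletableFuture<V> resultFor(K programKey) {
        return cache.computeIfAbsent(programKey,
                k -> CompletableFuture.supplyAsync(() -> interpreter.apply(k)));
    }
}
```

Callers attach a callback via thenAccept instead of blocking, which is exactly what the editor scenario above needs.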

Both asynchronous interpretation and caching benefit from proper language extensions.

Step 9: Integrate with MPS Typesystem and Scoping

Truffle needs to know about our DSL’s types, e.g. for resolving overloaded functions or type casting. We already provide this information to the MPS typesystem. I didn’t look into the details yet; I’d expect we could generate at least part of the Truffle input from MPS’ type aspect.

Truffle requires scoping knowledge to store variables in the right stack frame (and possibly other things I don’t understand yet). I’d expect we could use the resolved references in our model as input to Truffle. I’m less optimistic to re-use MPS’ actual scoping system.

For both aspects, we can amend the missing information in the Interpreter Language, similar to the existing one.

Step 10: Support Interpreter Development

As DSL developers, we want to make sure we implemented our interpreter correctly. Thus, we write tests; they are similar to other tests involving the interpreter.

However, if they fail, we don’t want to debug the program expressed in our DSL, but our interpreter. For example, we might implement the interpreter for a switch-like construct and forget to handle an implicit default case.

Using a regular Java debugger (attached to our running MPS instance) is of only limited use, as we would have to debug through the highly optimized Truffle code. We cannot use Truffle’s debugging capabilities, as they work on the DSL.
There might be ways to attach a regular Java debugger, running inside MPS in a different thread, to its own JVM. Combining direct debugger access with our knowledge of the interpreter’s structure, we might be able to provide sensible stepping through the interpreter to the DSL developer.

Simpler ways to support the developers might be providing traces through the interpreter, or ship test support where the DSL developer can assure specific evaluators were (not) executed.

Step 11: Create Language for Interop

Truffle provides a framework to describe any runtime in-memory data structure as Shape, and to convert them between languages. This should be a nice extension of MPS’ multi-language support into the runtime space, supported by an appropriate Meta-DSL (a.k.a. language aspect).

Part III: Leverage Programming Language Tooling

Step 12: Connect Truffle to MPS’ Debugger

MPS contains the standard interactive debugger inherited from the IntelliJ platform.

Truffle exposes a standard interface for interactive debuggers of the interpreted input. It takes care of the heavy lifting from Truffle AST to MPS input node.

If we ran Truffle in a different thread than the MPS debugger, we should manage to connect both parts.

Step 13: Integrate Instrumentation

Truffle also exposes an instrumentation interface. We could provide standard instrumentation applications like “code” coverage (in our case: DSL node coverage) and tracing out-of-the-box.

One might think of nice visualizations:

  • Color node background based on coverage
  • Mark the currently executed part of the model
  • Project runtime values inline
  • Show traces in trace explorer

Other possible applications:

  • Snapshot mechanism for current interpreter state
  • Provide traces for offline debugging, and play them back

Part IV: Beyond MPS

Step 14: Serialize Truffle Nodes

If we could serialize Truffle Nodes (before any run-time optimization), we would have an MPS-independent representation of the executable DSL. Depending on the serialization format (implement Serializable, custom binary format, JSON, etc.), we could optimize for use-case, size, loading time, or other priorities.
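A round-trip sketch with toy node classes; real Truffle nodes are not java.io.Serializable, so this only illustrates the idea of persisting a pre-optimization AST:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class NodeSerialization {
    // Toy stand-ins for pre-optimization AST nodes.
    sealed interface Node extends Serializable permits Const, Plus {}
    record Const(int value) implements Node {}
    record Plus(Node left, Node right) implements Node {}

    static byte[] save(Node ast) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(ast);
        }
        return bytes.toByteArray();
    }

    static Node load(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Node) in.readObject();
        }
    }
}
```

Swapping Java serialization for a custom binary format or JSON is exactly the size/loading-time trade-off mentioned above.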

Step 15: Execute DSL stand-alone without Generator

Assume an insurance calculation DSL.
Usually, we would implement

  • an interpreter to execute test cases within MPS,
  • a Generator to C to execute on the production server,
  • and a Generator to Java to provide a preview for the insurance agent.

With serialized Truffle Nodes, we need only one interpreter.

Part V: Crazy Ideas

Step 16: Step Back Debugger

By combining Instrumentation and debugger, it might be feasible to provide step-back debugging.

In the interpreter, we know the complete global state of the program, and can store deltas (to reduce memory usage). For quite some DSLs, this might be sufficient to store every intermediate state and thus arbitrary debug movement.
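A toy sketch of such a delta store, in plain Java with invented names: each step records only the previous value of the variable it changed, so stepping back is just replaying deltas in reverse.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class StepBackState {
    private final Map<String, Integer> state = new HashMap<>();
    // One delta per mutation: the variable name and its previous value.
    private final Deque<Map.Entry<String, Integer>> deltas = new ArrayDeque<>();

    public void set(String variable, int value) {
        deltas.push(Map.entry(variable, state.getOrDefault(variable, 0)));
        state.put(variable, value);
    }

    public void stepBack() {
        Map.Entry<String, Integer> delta = deltas.pop();
        state.put(delta.getKey(), delta.getValue());
    }

    public int get(String variable) {
        return state.getOrDefault(variable, 0);
    }
}
```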

Step 17: Side Step Debugger

By stepping back through our execution and following different execution paths, we could explore alternate outcomes. The different execution path might stem from other input values, or hot code replacement.

Step 18: Explorative Simulations

If we had a side step debugger, nice support to project interpretation results, and a really fast interpreter, we could run explorative simulations on lots of different execution paths. This might enable legendary interactive development.

by Niko Stotz at March 11, 2020 11:19 PM

Postmortem - February 7 storage and authentication outage

by Denis Roy at February 20, 2020 04:12 PM

On Friday, February 7 2020, we suffered a severe service disruption to many of our web properties when our primary authentication server and file server experienced a hardware failure.

For 90 minutes, our main website was mostly available, as was our Bugzilla bug tracking tool, but logging in was not possible. The Wiki, Eclipse Marketplace, and other web properties were degraded. Git and Gerrit were both completely offline for 2 hours and 18 minutes. Authenticated access to Jiro, our Jenkins+Kubernetes-based CI system, was not possible, and builds that relied on Git access failed during that time.

There was no data loss, but there were data inconsistencies. A dozen Git repositories and Gerrit code changes were in an inconsistent state due to replication schedules, but thanks to the distributed nature of Git, the code commits were still in local developer Git repositories, as well as on the failed server, which we were eventually able to revive (in an offline environment). Data inconsistencies were more severe in our LDAP accounts database, where dozens of users were unable to log in, and in some isolated cases, users reported that their account was reverted back to old data from years prior.

In hindsight, we feel this outage could have, and should have, been avoided. We’ve identified several measures we must enact to prevent such unplanned outages in the future. Furthermore, our communication and incident-handling processes proved to be flawed; they will be scrutinized and improved to ensure our community is better informed during unplanned incidents.

Lastly, we’ve identified aging hardware and Single Points of Failure (SPoF) that must be addressed.


File server & authentication setup

At the center of the Eclipse infra is a pair of servers that handle 2 specific tasks:

  • Network Attached Storage (NAS) via NFS

  • User Authentication via OpenLDAP

The server pair consists of a primary system, which handles all the traffic, and a hot spare. Both servers are configured identically for production service, but the spare sits idle, receiving data periodically from the primary. This specific architecture was originally implemented in 2005, with periodic hardware upgrades over time.


Timeline of events

Friday Feb 7 - 12:33pm EST: Fred Gurr (Eclipse Foundation IT/Releng team) reports on the Foundation’s internal Slack channel that something is happening to the infra. Denis observes many “Flaky” status reports but is in transit and cannot investigate further. Webmaster Matt Ward investigates.

12:43pm: Matt confirms that our primary nfs/ldap server is not responding, and activates “Plan A: assess and fix”.

12:59pm: Denis reaches a computer and activates “Plan B: prepare for Failover” while Matt works on Plan A. The “Sorry, we are down” page is served for all flaky services except the main website, which continues to be served successfully by our nginx cache.

1:18pm: The standby server is ready to assume the “primary” role.

1:29pm: Matt makes the call for failover, as the severity of the hardware failure is not known, and not easily recoverable.

1:49pm: The main website, Bugzilla, Marketplace, and the Wiki return to stable service on the new primary.

2:18pm: Git and Gerrit return to stable service.

2:42pm: Our Kubernetes/OpenShift cluster is updated to the latest patchlevel and all CI services restarted.

4:47pm: All legacy JIPP servers are restarted, and all other remaining services report functional.  At this time, we are not aware of any issues.

During the weekend, Matt continues to monitor the infra. Authentication issues crop up over the weekend, which are caused by duplicated accounts and are fixed by Matt.

Monday, 4:49am EST: Mikaël Barbero (Eclipse Foundation IT/Releng team) reports that there are more duplicate users in LDAP who cannot log into our systems. This is now a substantial issue. The duplicates are fixed systematically with an LDAP duplicate finder, but the process is very slow.

10:37am: First Foundation broadcast on the cross-project mailing list that there is an issue with authentication.

Tuesday, 9:51am: Denis blogs about the incident and posts a message to the mailing list about the ongoing authentication issues. The message, however, is held for moderation and is not distributed until many hours later.

Later that day: Most duplicated accounts have been removed, and just about everything is stabilized. We do not yet understand the source of the duplicates.

Wednesday: duplicate removals continue, as well as investigation into the cause.

Thursday 9:52am: We file a dozen bugs against projects whose Git and Gerrit repos may be out of sync. Some projects had already re-pushed or rebased their missing code patches and resolved the issue as FIXED.

Friday, 2:58pm: All remaining duplicates are removed. Our LDAP database is fully cleaned. The failed server re-enters production as the hot standby - even though its hardware is not reliable. New hardware is sourced and ordered.


Hardware failure

The physical servers behind our NAS/LDAP setup are server-class hardware: 2U chassis with redundant power supplies, ECC (error checking and correction) memory, and RAID-5 disk arrays with a battery-backed RAID controller. Both the primary and standby servers were put into production in 2011.

On February 7, the primary server experienced a kernel crash from the RAID controller module. The RAID controller detected an unrecoverable ECC memory error. The entire server became unresponsive.

As originally designed in 2005, periodic (batched) data updates from the primary to the hot spare were simple to set up and maintain. This method also had a distinct advantage over live replication: rapid recovery in case of erasure (accidental or malicious) or data tampering. Of course, this came at the cost of possible data loss. However, it was deemed that critical data (in our case, source code) susceptible to loss during that short window was also available on developer workstations.

Failover and return to stability

As the standby server was prepared for production service, the reasons for the crash on the primary server were investigated. We assessed the possibility of continuing service on the primary; that course of action would have provided the fastest recovery with the fewest surprises later on.

As the nature of the hardware failure remained unknown, failover was the only option. We confirmed that some data replication tasks had run less than one hour prior to failure, and all data replication was completed no later than 3 hours prior. IP addresses were updated, and one by one, services that depended on NFS and authentication were restarted to flush caches and minimize any potential for an inconsistent state.

At about 4:30pm, or four hours after the failure, both webmasters were confident that the failover was successful, and that very little dust would settle over the weekend.

Authentication issues

Throughout the weekend, we had a few reports of authentication issues -- which were expected, since we failed over to a standby authentication source that was at least 12 hours behind the primary. These issues were fixed as they were reported, and nothing seemed out of place.

On Monday morning, Feb 10th, the Foundation’s Releng team reported that several committers had authentication issues to the CI systems. We then suspected that something else was at play with our authentication database, but it was not clear to us what had happened, or what the magnitude was. The common issue was duplicate accounts -- some users had an account in two separate containers simultaneously, which prevented users from being able to authenticate. These duplicates were removed as rapidly as we could, and we wrote scripts to identify old duplicates and purge them -- but with >450,000 accounts, it was time-consuming.

At that time, we got so wrapped up in trying to understand and resolve the issue that we completely underestimated its impact on the community, and we were absolutely silent about it.


Problem solved

On Friday afternoon, February 14, we were finally able to clean up all the duplicate accounts and understand why they existed in the first place.

Prior to December 2011, our LDAP database only contained committer accounts. In December 2011, we imported all the non-committer accounts from Bugzilla and Wiki into an LDAP container we named “Community”. This allowed us to centralize authentication around a single source of truth: LDAP.

All new accounts were, and are, created in the Community container, and are moved into the Committer container if/when their owner becomes an Eclipse committer.

Our primary->secondary LDAP sync mechanism was altered, at that time, to sync the Community container as well -- but it was purely additive. Once you had an account in Community, it was there for life on the standby server, even if you became a committer later on, or if you ever changed your email address. This was the source of the duplicate accounts on the standby server.

A new server pair was ordered on February 14, 2020. These servers will be put into production service as soon as possible, and the old hardware will be recommissioned to clustered service. With these new machines, we believe our existing architecture and configuration can continue to serve us well over the coming months and years.


Take-aways and proposed improvements

Although the outage didn’t last incredibly long (2 hours from failure to the beginning of restored service), we feel it shouldn’t have occurred in the first place. Furthermore, we’ve identified key areas where our processes can be improved - notably, in how we communicate with you.

Here are the action items we’re committed to implementing in the near term, to improve our handling of such incidents:

  • Communication: Improved Service Status page. The current status page gives a picture of what’s going on, but with an improved service, we can communicate the nature of outages, the impact, and the estimated time until service is restored.

  • Communication: Internally, we will improve communication within our team and establish a maintenance log, whereby members of the team can discover the work that has been done.

  • Staffing: we will explore the possibility of an additional IT hire, enhancing our collective skillset and enabling us to spend more overall time on the quality and reliability of the infrastructure.

  • Aging Hardware: we will put top priority on resolving aging single points of failure (SPoF), and be more strict about not running hardware past its reasonable life expectancy.

    • In the longer term, we will continue our investment in replacing SPoF with more robust technologies. This applies to authentication, storage, databases and networking.

  • Process and procedures: we will allocate more time to testing our disaster recovery and business continuity procedures. Such tests would likely have revealed the LDAP sync bug.

We believe that these steps will significantly reduce unplanned outages such as the one that occurred on February 7. They will also help us ensure that, should a failure occur, we recover and return to a state of stability more rapidly. Finally, they will help you understand what is happening, and what the timelines to restore service are, so that you can plan your work tasks and remain productive.

by Denis Roy at February 20, 2020 04:12 PM

Anatomy of a server failure

by Denis Roy at February 11, 2020 02:51 PM

Last Friday, Feb 7 at around 12:30pm (Ottawa time), I received a notification from Fred Gurr (part of our release engineering team) that something was going on with the infra. The multitude of colours on the Eclipse Service Status page confirmed it -- many of our services and tools were either slow, or unresponsive.

After some initial digging, we discovered that the primary backend file server (housing Git, Gerrit, web session data, and a lot of files for our various web properties) was not responding. It was also host to our accounts database -- the center for all user authentication.

Jumping into action

It's a well-rehearsed routine for my colleague Matt Ward and me -- he worked on assessing the problem and identifying the fix, while I worked on Plan B - failover to our hot standby. At around 1:35pm, roughly 1 hour into the outage, Matt made the call -- failover is the only option, as a hardware component has failed. 20 minutes later, most services had either recovered or were well on their way.

But the failover is not perfect. Data is sync'ed every 2 hours. Account and authentication info is replicated nightly. This was a by-design strategy decision, as it offers us a recovery window in case of data erasure, corruption or unauthorized access.

Lessons learned

The failed server was put in service in 2011, celebrating its *gasp* ninth year of 24/7 service. That is a few years too many, and although it (and its standby counterpart) were slated for replacement in 2017, the effort was pushed back to make room for competing priorities. In a moment of bitter irony, the failed hardware was planned to be replaced in the second quarter of this year -- mere months away. We gambled with the house, we lost.

Cleaning up

Today, there is much dust to settle. Our authentication database has some gremlins that we need to fix, and there could be a few missing commits that were not replicated.

We also need to source replacement hardware for the failed component, so that we can re-enable our hot standby. At the same time, we need to immediately source replacement servers for those 2011 dinosaurs. They've served us well, but their retirement is long overdue.

by Denis Roy at February 11, 2020 02:51 PM

Interfacing null-safe code with legacy code

by Stephan Herrmann at February 06, 2020 07:38 PM

When you adopt null annotations like these, your ultimate hope is that the compiler will tell you about every possible NullPointerException (NPE) in your program (except for tricks like reflection or bytecode weaving etc.). Hallelujah.

Unfortunately, most of us use libraries which don’t have the blessing of annotation based null analysis, simply because those are not annotated appropriately (neither in source nor using external annotations). Let’s for now call such code: “legacy”.

In this post I will walk through the options to warn you about the risks incurred by legacy code. The general theme will be:

Can we assert that no NPE will happen in null-checked code?

I.e., if your code consistently uses null annotations, and has passed analysis without warnings, can we be sure that NPEs can only ever be thrown in the legacy part of the code? (NPEs inside legacy code are still to be expected, there’s nothing we can change about that).

Using existing Eclipse versions, one category of problems would still go undetected, whereby null-checked code could still throw NPE. This has recently been fixed.

Simple data flows

Let’s start with simple data flows, e.g., when your program obtains a value from legacy code, like this:
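A minimal sketch of such a flow, assuming the legacy API is java.util.Properties.getProperty (consistent with the javadoc quoted just below); the class and property names are my own:

```java
import java.util.Properties;

public class LegacyFlow {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Legacy API: nullness is unspecified. Under @NonNullByDefault, assigning
        // this to a @NonNull String draws the "unchecked conversion" warning.
        String greeting = props.getProperty("greeting");
        System.out.println(greeting == null); // prints "true": the property is not set
    }
}
```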


You shouldn’t be surprised, the javadoc even says: “The method returns null if the property is not found.” While the compiler doesn’t read javadoc, it can recognize that a value with unspecified nullness flows into a variable with a non-null type. Hence the warning:

Null type safety (type annotations): The expression of type ‘String’ needs unchecked conversion to conform to ‘@NonNull String’

As we can see, the compiler warned us, so we are urged to fix the problem in our code. Conversely, if we pass any value into a legacy API, all the bad that can happen would happen inside legacy code, so there is nothing to be done for our stated goal.

The underlying rule is: legacy values can be safely assigned to nullable variables, but not to non-null variables (example Properties.getProperty()). On the other hand, any value can be assigned to a legacy variable (or method argument).

Put differently: values flowing from null-checked to legacy pose no problems, whereas values flowing the opposite direction must be assumed to be nullable, to avoid problems in null-checked code.

Enter generics

Here be dragons.

As a minimum requirement we now need null annotations with target TYPE_USE (“type annotations”), but we have had those since 2014. Good.


Here we obtain a List<String> value from a Legacy class, where indeed the list names is non-null (as can be seen by successful output from names.size()). Still things are going south in our code, because the list contained an unexpected null element.
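The example itself does not survive in this text; a hypothetical reconstruction of the runtime behaviour (class and element names are my own) could look like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class Legacy {
    // Not annotated: the returned list itself is non-null,
    // but it hides a null element.
    static List<String> getNames() {
        return new ArrayList<>(Arrays.asList("Alice", null, "Bob"));
    }
}

public class GenericsFlow {
    public static void main(String[] args) {
        List<String> names = Legacy.getNames();
        System.out.println(names.size()); // prints 3: the list itself is non-null
        try {
            for (String name : names) {
                System.out.println(name.length()); // NPE on the hidden null element
            }
        } catch (NullPointerException e) {
            System.out.println("NPE in null-checked code");
        }
    }
}
```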

To protect us from this problem, I marked the entire class as @NonNullByDefault, which causes the type of the variable names to become List<@NonNull String>. Now the compiler can again warn us about an unsafe assignment:

Null type safety (type annotations): The expression of type ‘List<String>’ needs unchecked conversion to conform to ‘List<@NonNull String>’

This captures the situation where a null value, wrapped in a non-null container value (the list), is passed from legacy to null-checked code.

Here’s a tricky question:

Is it safe to pass a null-checked value of a parameterized type into legacy code?

In the case of simple values, we saw no problem, but the following example tells us otherwise once generics are involved:
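A hypothetical sketch of such a legacy method (printNames is named in the post; the list contents and surrounding code are my own assumptions):

```java
import java.util.ArrayList;
import java.util.List;

class LegacyPrinter {
    // Legacy code sees a plain List<String> and can happily insert null.
    static void printNames(List<String> names) {
        names.add(null);
        for (String n : names) {
            System.out.println(n);
        }
    }
}

public class BoundaryBreach {
    public static void main(String[] args) {
        // In the annotated original, this would be a List<@NonNull String>.
        List<String> names = new ArrayList<>();
        names.add("Alice");
        LegacyPrinter.printNames(names); // hidden data flow back into our list
        try {
            for (String name : names) {
                System.out.println(name.length()); // NPE in "null-checked" code
            }
        } catch (NullPointerException e) {
            System.out.println("NPE in null-checked code");
        }
    }
}
```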

Again we have a list of type List<@NonNull String>, so dereferencing values obtained from that list should never throw NPE. Unfortunately, the legacy method printNames() managed to break our contract by inserting null into the list, resulting in yet another NPE thrown in null-checked code.

To describe this situation it helps to draw boundaries not only between null-checked and legacy code, but also to draw a boundary around the null-checked value of parameterized type List<@NonNull String>. That boundary is breached when we pass this value into legacy code, because that code will only see List<String> and happily invoke add(null).

This is where I recently introduced a new diagnostic message:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<String>’

By passing names into legacy code, we enable a hidden data flow in the opposite direction. In the general case, this introduces the risk of NPE in otherwise null-checked code. Always?


Java would be a much simpler language without wildcards, but a closer look reveals that wildcards don’t just help with type safety but also with null-safety. How so?

If the legacy method were written using a wildcard, it would not be (easily) possible to sneak in a null value, here are two attempts:
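A sketch of the two attempts, with both offending lines commented out so the class compiles (the surrounding demo code is my own):

```java
import java.util.Arrays;
import java.util.List;

class WildcardLegacy {
    // With a wildcard parameter, legacy code cannot (easily) sneak in values:
    static void printNames(List<?> names) {
        // names.add("Eve");  // attempt 1: outright Java type error
        // names.add(null);   // attempt 2: accepted by javac, but Eclipse warns
        for (Object n : names) {
            System.out.println(n);
        }
    }
}

public class WildcardDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Alice", "Bob");
        WildcardLegacy.printNames(names); // safe: the wildcard blocks the hidden write
    }
}
```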

The first attempt is an outright Java type error. The second triggers a warning from Eclipse, despite the lack of null annotations:

Null type mismatch (type annotations): ‘null’ is not compatible to the free type variable ‘?’

Of course, compiling the legacy class without null-checking would still bypass our detection, but chances are already better.

If we add an upper bound to the wildcard, like in List<? extends CharSequence>, not much is changed. A lower bound, however, is an invitation for the legacy code to insert null at whim: List<? super String> will cause names.add() to accept any String, including the null value. That’s why Eclipse will also complain against lower bounded wildcards:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<? super String>’

Comparing to raw types

It has been suggested to treat legacy (not null-annotated) types like raw types. Both are types with a part of the contract ignored, thereby causing risks for parts of the program that still rely on the contract.

Interestingly, raw types are more permissive in the parameterized-to-raw conversion. We are generally not protected against legacy code inserting an Integer into a List<String> when passed as a raw List.
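A small illustration of this (my own, not from the post): legacy code receiving a raw List can insert an Integer into a List<String>, with nothing but an unchecked warning at the call site:

```java
import java.util.ArrayList;
import java.util.List;

class RawLegacy {
    @SuppressWarnings({"rawtypes", "unchecked"})
    static void fill(List anyList) {
        anyList.add(Integer.valueOf(42)); // raw type: nothing stops this
    }
}

public class RawDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        RawLegacy.fill(strings); // only an unchecked warning here
        Object first = strings.get(0);
        System.out.println(first.getClass().getSimpleName()); // prints "Integer"
    }
}
```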

More interestingly, using a raw type as a type argument produces an outright Java type error, so my final attempt at hacking the type system failed:



We have seen several kinds of data flow with different risks:

  • Simple values flowing checked-to-legacy don’t cause any specific headache
  • Simple values flowing legacy-to-checked should be treated as nullable to avoid bad surprises. This is checked.
  • Values of parameterized type flowing legacy-to-checked must be handled with care at the receiving side. This is checked.
  • Values of parameterized type flowing checked-to-legacy add more risks, depending on:
    • nullness of the type argument (@Nullable type argument has no risk)
    • presence of wildcards, unbounded or lower-bounded.

Eclipse can detect all mentioned situations that would cause NPE to be thrown from null-checked code – the capstone to be released with Eclipse 2020-03, i.e., coming soon …

by Stephan Herrmann at February 06, 2020 07:38 PM

Eclipse and Handling Content Types on Linux

by Mat Booth at February 06, 2020 03:00 PM

Getting deep desktop integration on Linux.

by Mat Booth at February 06, 2020 03:00 PM

Remove SNAPSHOT and Qualifier in Maven/Tycho Builds

by Lorenzo Bettini at February 05, 2020 10:20 AM

Before releasing Maven artifacts, you remove the -SNAPSHOT from your POMs. If you develop Eclipse projects and build with Maven and Tycho, you have to keep the versions in the POMs and the versions in the MANIFEST, feature.xml and other Eclipse project artifacts consistent. Typically, when you release an Eclipse p2 site, you don’t remove the .qualifier in the versions: Eclipse bundle and feature versions are processed automatically, and the .qualifier is replaced with a timestamp. But if you want to release some Eclipse bundles also as Maven artifacts (e.g., to Maven Central), you have to remove the -SNAPSHOT before deploying (or they will still be considered snapshots, of course 🙂), and you have to remove the .qualifier in the Eclipse bundles accordingly.

To do that, in an automatic way, you can use a combination of Maven plugins and of tycho-versions-plugin.

I’m going to show two different ways of doing that. The example used in this post can be found here:

First method

The idea is to use the goal parse-version of the org.codehaus.mojo:build-helper-maven-plugin. This will store the parts of the current version in some properties (by default, parsedVersion.majorVersion, parsedVersion.minorVersion and parsedVersion.incrementalVersion).

Then, we can pass these properties appropriately to the goal set-version of the org.eclipse.tycho:tycho-versions-plugin.

This is the Maven command to run:

mvn \
  build-helper:parse-version org.eclipse.tycho:tycho-versions-plugin:set-version \
  -DnewVersion=${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.incrementalVersion}

The goal set-version of the Tycho plugin will take care of updating the versions (without the -SNAPSHOT and .qualifier) both in POMs and in Eclipse projects’ metadata.

Second method

Alternatively, we can use the goal set (with argument -DremoveSnapshot=true) of the org.codehaus.mojo:versions-maven-plugin. Then, we use the goal update-eclipse-metadata of the org.eclipse.tycho:tycho-versions-plugin, to update Eclipse projects’ versions according to the version in the POM.

This is the Maven command to run:

mvn \
  versions:set -DgenerateBackupPoms=false -DremoveSnapshot=true \
  org.eclipse.tycho:tycho-versions-plugin:update-eclipse-metadata

The first goal will change the versions in POMs while the second one will change the versions in Eclipse projects’ metadata.

Configuring the plugins

As usual, it’s best practice to configure the used plugins (in this case, their versions) in the pluginManagement section of your parent POM.

For example, in the parent POM of the example project we have:
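A hedged sketch of such a pluginManagement section; the plugin versions below are illustrative placeholders, not necessarily the ones used in the actual project:

```xml
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>build-helper-maven-plugin</artifactId>
      <version>3.0.0</version>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>versions-maven-plugin</artifactId>
      <version>2.7</version>
    </plugin>
    <plugin>
      <groupId>org.eclipse.tycho</groupId>
      <artifactId>tycho-versions-plugin</artifactId>
      <version>${tycho-version}</version>
    </plugin>
  </plugins>
</pluginManagement>
```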




In the end, choose the method you prefer. Please keep in mind that these goals are not meant to be used during a standard Maven lifecycle, that’s why we ran them explicitly.

Furthermore, the goal set of the org.codehaus.mojo:versions-maven-plugin might give you some headaches if the structure of your Maven/Eclipse projects differs from the default one based on nested directories. In particular, if you have an aggregator project different from the parent project, you will have to pass additional arguments or set the versions in separate commands (e.g., first on the parent, then on the other modules of the aggregator, etc.)

by Lorenzo Bettini at February 05, 2020 10:20 AM

JDT without Eclipse

January 16, 2020 11:00 PM

The JDT (Java Development Tools) is an important part of Eclipse IDE but it can also be used without Eclipse.

For example the Spring Tools 4, which is nowadays a cross-platform tool (Visual Studio Code, Eclipse IDE, …), relies heavily on the JDT behind the scenes. If you would like to know more, I recommend this podcast episode with Spring Tools lead Martin Lippert.

A second well-known example is the Java Formatter, which is also part of the JDT. For a long time there have been Maven and Gradle plugins that perform the same formatting as the Eclipse IDE, but as part of the build (often with the possibility to break the build when the code is wrongly formatted).

Reusing the JDT has been easier since 2017, when it was decided to publish each release and its dependencies on Maven Central (with the following groupIds: org.eclipse.jdt, org.eclipse.platform). Stephan Herrmann did a lot of work to achieve this goal. I blogged about this in Use the Eclipse Java Development Tools in a Java SE application, and I have pushed a simple example where the Java Formatter is used in a plain main(String[]) method, built by a classic minimal Maven project: java-formatter.

Workspace or not?

When using the JDT in a headless application, two cases need to be distinguished:

  1. Some features (the parser, the formatter…) can be used in a simple Java main method.

  2. Other features (search index, AST rewriter…) require a workspace. This implies that the code must run inside an OSGi runtime.

To illustrate this aspect, I took some of the examples provided by the Programcreek site in the blog post series Eclipse JDT Tutorials and adapted them so that each code snippet can be executed inside a JUnit test. This is the Programcreek examples project.

I have split the unit-tests into two projects:

  • programcreek-standalone for the ones that do not require OSGi. The Maven project is really simple (using the default conventions everywhere)

  • programcreek-osgi for the ones that must run inside an OSGi runtime. The bnd Maven plugins are configured in the pom.xml to take care of the OSGi stuff.

If you run the tests with Maven, they will work out of the box.

If you would like to run them inside an IDE, you should use one that starts OSGi when executing the tests (the same way the Maven build does). To get a bnd-aware IDE, you can use Eclipse IDE for Java Developers with the additional Bndtools plugin installed, but there are other possibilities.

Source code can be found on GitHub: programcreek-examples

January 16, 2020 11:00 PM

Oracle made me a Stackoverflow Guru

by Stephan Herrmann at January 16, 2020 06:40 PM

Just today Oracle helped me to become a “Guru” on Stackoverflow! How did they do it? By doing nothing.

In former times, I was periodically enraged when Oracle didn’t pay attention to the feedback I was giving them during my work on ecj (the Eclipse Compiler for Java) – at least not the attention that I had hoped for (to be fair: there was a lot of good communication, too). At that time I still hoped I could help make Java a language that is completely and unambiguously defined by specifications. Meanwhile I have recognized that Java is at least three languages: the language defined by the JLS etc., the language implemented by javac, and the language implemented by ecj (with no chance of making ecj conform to both of the others). I realized that we were not done with Java 8 even 3 years after its release. Three more years later it’s still much the same.

So let’s move on: haven’t things improved in subsequent versions of Java? One of the key new rules in Java 9 is that

“If [a qualified package name] does not name a package that is uniquely visible to the current module (§7.4.3), then a compile-time error occurs”.

Simple and unambiguous. That’s what compilers have to check.

Except: javac doesn’t check for uniqueness if one of the modules involved is the “unnamed module”.

In 2018 there was some confusion about this, and during discussion on stackoverflow I raised this issue to the jigsaw-dev mailing list. A bug was raised against javac, confirmed to be a bug by spec lead Alex Buckley. I summarized the situation in my answer on stackoverflow.

This bug could have been easily fixed in javac version 12, but it wasn’t. Meanwhile, upvotes on my answer on Stackoverflow started coming in. The same for Java 13. The same for Java 14. And yet, no visible activity on the javac bug. You need ecj to find out whether your program violates this rule of the JLS.

Today the 40th upvote earned me the “Guru” tag on stackoverflow.

So, please Oracle, keep that bug unresolved, it will earn me a lot of reputation for a bright future – by doing: nothing 🙂

by Stephan Herrmann at January 16, 2020 06:40 PM

Building and running Equinox with maven without Tycho

January 12, 2020 11:00 PM

Eclipse Tycho is a great way to let Maven build PDE-based projects. But the Plug-in Development Environment (PDE) model is not the only way to work with OSGi.

In particular, for the past two or three years the Eclipse Platform jars (including the Equinox jars) have been regularly published on Maven Central (check the artifacts with org.eclipse.platform as groupId).

I was looking for an alternative to P2 and to the target-platform mechanism.


Bnd and Bndtools are always mentioned as potential alternatives to PDE (I attended several talks discussing this at EclipseCon 2018: Migrating from PDE to Bndtools in Practice, From Zero to a Professional OSGi Project in Minutes). So I decided to explore this path.

This StackOverflow question caught my attention: How to start with OSGi. I had a close look at the answer provided by Peter Kriens (the founder of the Bnd and Bndtools projects), where he discusses the different possible setups:

  • Maven Only

  • Gradle Only

  • Eclipse, M2E, Maven, and Bndtools

  • Eclipse, Bndtools, Gradle

Even in the "Maven Only" or "Gradle Only" setups, the proposed solution relies on plugins using bnd under the hood.

How to start?

My project is quite simple, and the dependencies are already on Maven Central. I will not have a complex use-case with multiple versions of the same library or with platform-dependent artifacts. So fetching the dependencies with Maven is sufficient.

I decided to try the "Maven Only" model.

How to start?

I was not sure I understood how to use the different bnd Maven plugins: bnd-maven-plugin, bnd-indexer-maven-plugin, bnd-testing-maven-plugin, bnd-export-maven-plugin.

Luckily I found the slides of the Bndtools and Maven: A Brave New World workshop (given at EclipseCon 2017) and the corresponding git repository: osgi-community-event2017.

The corresponding effective-osgi Maven archetypes used during the workshop are still working well. I could follow the step-by-step guide (in the readme of the Maven archetypes project). I got everything working as described, and I could find enough explanations about the generated projects. I think I understood what I did, and that is very important when you start.

After some cleanup and a switch from Apache Felix to Eclipse Equinox, I got my running setup and I answered my question: "How to start with OSGi without PDE and Tycho".

The corresponding code is in this folder: effectiveosgi-example.

January 12, 2020 11:00 PM

4 Years at The Linux Foundation

by Chris Aniszczyk at January 03, 2020 09:54 AM

Late last year marked the 4th year anniversary of the formation of the CNCF and me joining The Linux Foundation:

As we enter 2020, it’s amusing for me to reflect on my decision to join The Linux Foundation a little over 4 years ago when I was looking for something new to focus on. I spent about 5 years at Twitter which felt like an eternity (the average tenure for a silicon valley employee is under 2 years), focused on open source and enjoyed the startup life of going from a hundred or so engineers to a couple of thousand. I truly enjoyed the ride, it was a high impact experience where we were able to open source projects that changed the industry for the better: Bootstrap (changed front end development for the better), Twemoji (made emojis more open source friendly and embeddable), Mesos (pushed the state of art for open source infrastructure), co-founded TODO Group (pushed the state of corporate open source programs forward) and more!

When I was looking for change, I wanted to find an opportunity where I could have more impact than at just one company. I had some offers from FAANG companies and amazing startups but eventually settled on the nonprofit Linux Foundation because I wanted to build an open source foundation from scratch, teach other companies about open source best practices and assumed nonprofit life would be a bit more relaxing than diving into a new company (I was wrong). Also, I was thoroughly convinced that an openly governed foundation pushing Kubernetes, container specifications and adjacent independent cloud native technologies would be the right model to move open infrastructure forward.

As we enter 2020, I realize that I’ve been with one organization for a long time and that puts me on edge as I enjoy challenges, chaos and dread anything that makes me comfortable or complacent. Also, I have a strong desire to focus on efforts that involve improving the state of security and privacy in a connected world, participatory democracy, climate change; also anything that pushes open source to new industries and geographies.

While I’m always happy to entertain opportunities that align to my goals, the one thing that I do enjoy at the LF is that I’ve had the ability to build a variety of new open source foundations improving industries and communities: CDF, GraphQL Foundation, Open Container Initiative (OCI), Presto Foundation, TODO Group, Urban Computing Foundation and more.

Anyways, thanks for reading and I look forward to another year of bringing open source practices to new industries and places, the world is better when we are collaborating openly.

by Chris Aniszczyk at January 03, 2020 09:54 AM

An update on Eclipse IoT Packages

by Jens Reimann at December 19, 2019 12:17 PM

A lot has happened since I last wrote about the Eclipse IoT Packages project. We had some great discussions at EclipseCon Europe, and started to work together online, developing new ideas in the process. Right before the end of the year, I think it is a good time to give an update, and peek a bit into the future.


One of the first things we wanted to get started on was a home for the content we plan on creating. An important piece of the puzzle is to explain to people what we have in mind. Not only to people who want to try out the various Eclipse IoT projects, but also to possible contributors. And in the end, an important goal of the project is to attract interested parties, whether they consume our ideas or grow them even further.


So we now have a logo and a homepage, built using templates in a continuous build system. We are in a position to start focusing on the actual content, and on the more tricky tasks and questions ahead. And should you want to create a PR for the homepage, you are more than welcome. There is also already some content explaining the main goals, the way we want to move forward, and a demo of a first package: “Package Zero”.


While the homepage is a good entry point for people to learn about Eclipse IoT and packages, our GitHub repository is the home for the community. Having some great discussions on GitHub quickly brought up the need for a community call and a more direct communication channel.

If you are interested in the project, come and join our bi-weekly community call. It is a quick 30-minute call at 16:00 CET, open to everyone, repeating every two weeks starting 2019-12-02.

The URL to the call is: You can also subscribe to the community calendar to get a reminder.

In between calls, we have a chat room eclipse/packages on Gitter.

Eclipse IoT Helm Chart Repository

One of the earliest discussions we had was around the question of how and where we want to host the Helm charts. We would prefer not to author them ourselves, but to let the projects contribute them. After all, the IoT Packages project has the goal of enabling you to install a whole set of Eclipse IoT projects with only a few commands. So the focus is on the integration, and the expert knowledge required for creating a project's Helm chart is in the actual projects.

On the other hand, having a one-stop shop for getting your Eclipse IoT Helm charts sounds pretty convenient. So why not host our own Helm chart repository?

Thanks to a company called Kiwigrid, who contributed a CI pipeline for validating charts, we could easily extend our existing homepage publishing job to also publish Helm charts. As a first chart, we published the Eclipse Ditto chart. And, as expected with Helm, installing it is as easy as a single helm install command.

Of course having a single chart is only the first step, and publishing a single Helm chart isn’t that impressive. But getting agreement in the community, getting the validation and publishing pipeline set up, and attracting new contributors, that is definitely a big step in the right direction.


I think that we now have a good foundation, for moving forward. We have a place called “home”, for documentation, code and community. And it looks like we have also been able to attract more people to the project.

While our first package, “Package Zero”, still isn’t complete, it should be pretty close. Creating a first, joint deployment of Hono and Ditto is our immediate focus. And we will continue to work towards a first release of “Package Zero”. Finding a better name is still an item on the list.

Having this foundation in place also means that the time is right for you to think about contributing your own Eclipse IoT package. Contributions are always welcome.

The post An update on Eclipse IoT Packages appeared first on ctron's blog.

by Jens Reimann at December 19, 2019 12:17 PM

Eclipse m2e: How to use a WORKSPACE Maven installation

by kthoms at November 27, 2019 09:39 AM

Today a colleague of mine asked me about the Maven Installations preference page in Eclipse. There is an entry WORKSPACE there, which is disabled and shows NOT AVAILABLE. He wanted to know how to enable a workspace installation of Maven.

Since neither of us could find the documentation of the feature, I dug into the m2e sources and found the class MavenWorkspaceRuntime. The relevant snippets are the method getMavenDistribution() and the MAVEN_DISTRIBUTION constant:

private static final ArtifactKey MAVEN_DISTRIBUTION = new ArtifactKey(
      "org.apache.maven", "apache-maven", "[3.0,)", null); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$

protected IMavenProjectFacade getMavenDistribution() {
  try {
    VersionRange range = VersionRange.createFromVersionSpec(getDistributionArtifactKey().getVersion());
    for(IMavenProjectFacade facade : projectManager.getProjects()) {
      ArtifactKey artifactKey = facade.getArtifactKey();
      if(getDistributionArtifactKey().getGroupId().equals(artifactKey.getGroupId()) //
          && getDistributionArtifactKey().getArtifactId().equals(artifactKey.getArtifactId())//
          && range.containsVersion(new DefaultArtifactVersion(artifactKey.getVersion()))) {
        // found a workspace project matching the expected Maven distribution coordinates
        return facade;
      }
    }
  } catch(InvalidVersionSpecificationException e) {
    // can't happen
  }
  return null;
}

From here you can see that m2e looks for workspace (Maven) projects and tries to find one that has the coordinates org.apache.maven:apache-maven:[3.0,).

So the answer to how to enable a WORKSPACE Maven installation is: import the project apache-maven into the workspace. Here is how to do it:

  1. Clone Apache Maven from
  2. Optionally: check out a release tag
    git checkout maven-3.6.3
  3. Perform File / Import / Existing Maven Projects
  4. As Root Directory select the apache-maven subfolder in your Maven clone location

Now you will have the project that m2e searches for in your workspace:

And the Maven Installations preference page lets you now select this distribution:

by kthoms at November 27, 2019 09:39 AM

Eclipse startup time improved

November 05, 2019 12:00 AM

I’m happy to report that the Eclipse SDK integration builds start in less than 5 seconds (~4900 ms) on my machine into an empty workspace. IIRC this used to be around 9 seconds two years ago. 4.13 (which was already quite a bit improved) used around 5800 ms (6887 ms with EGit and Marketplace). For recent improvements in this release, see the release notes. Thanks to everyone who contributed.

November 05, 2019 12:00 AM

Setup a Github Triggered Build Machine for an Eclipse Project

by Jens v.P. at October 29, 2019 12:55 PM

Disclaimer 1: This blog post literally is a "web log", i.e., it is my log about setting up a Jenkins machine with a job that is triggered on a Github pull request. A lot of parts have been described elsewhere, and I link to the sources I used here. I also know that nowadays (e.g., new Eclipse build infrastructure) you usually do that via docker -- but then you need to configure docker, in which

by Jens v.P. at October 29, 2019 12:55 PM

LiClipse 6.0.0 released

by Fabio Zadrozny at October 25, 2019 06:59 PM

LiClipse 6.0.0 is now out.

The main change is that many dependencies have been updated:

- it's now based on Eclipse 4.13 (2019-09), which is a pretty nice upgrade (in my day-to-day use it feels smoother than previous versions, although I know this sounds pretty subjective).

- PyDev was updated to 7.4.0, so, Python 3.8 (which was just released) is now already supported.


by Fabio Zadrozny at October 25, 2019 06:59 PM

Qt World Summit 2019 Berlin – Secrets of Successful Mobile Business Apps

by ekkescorner at October 22, 2019 12:39 PM

Qt World Summit 2019

Meet me at Qt World Summit 2019 in Berlin


I’ll speak about development of mobile business apps with

  • Qt 5.13.1+ (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10


Qt World Summit 2019 Conference App

As a little appetizer I developed a conference app. For how to download it from the Google Play Store or Apple's App Store, and for some more screenshots, see here.


sources at GitHub

cu in Berlin

by ekkescorner at October 22, 2019 12:39 PM

A nicer icon for Quick Access / Find Actions

October 20, 2019 12:00 AM

Finally we use a decent icon for Quick Access / Find Actions. This is now a button in the toolbar which allows you to trigger arbitrary commands in the Eclipse IDE.

October 20, 2019 12:00 AM

A Tool for Jakarta EE Package Renaming in Binaries

by BJ Hargrave at October 17, 2019 09:26 PM

In a previous post, I laid out my thinking on how to approach the package renaming problem which the Jakarta EE community now faces. Regardless of whether the community chooses big bang or incremental, there are still existing artifacts in the world using the Java EE package names that the community will need to use together with the new Jakarta EE package names.

Tools are always important to take the drudgery away from developers. So I have put together a tool prototype which can be used to transform binaries such as individual class files and complete JARs and WARs to rename uses of the Java EE package names to their new Jakarta EE package names.

The tool is rule-driven, which is nice since the Jakarta EE community still needs to define the actual package renames for Jakarta EE 9. The rules also allow users to control which class files in a JAR/WAR are transformed. Different users may want different rules depending upon their specific needs. And the tool can be used for any package renaming challenge, not just the specific Jakarta EE package renames.
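To give a feel for what rule-driven renaming means, here is a small, self-contained sketch of the core idea: a map of package-prefix rules applied with longest-prefix-wins semantics. This is not the tool's actual API — the class, method, and rule names are made up for illustration, and the real tool operates on class-file bytecode rather than plain strings:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of rule-driven package renaming (not the tool's API).
public class PackageRenameSketch {

    // Applies the most specific (longest) matching prefix rule, so that
    // narrow rules can override broader ones.
    static String rename(String className, Map<String, String> rules) {
        String bestFrom = null;
        for (String from : rules.keySet()) {
            if (className.startsWith(from + ".")
                    && (bestFrom == null || from.length() > bestFrom.length())) {
                bestFrom = from;
            }
        }
        return bestFrom == null
                ? className
                : rules.get(bestFrom) + className.substring(bestFrom.length());
    }

    public static void main(String[] args) {
        Map<String, String> rules = new LinkedHashMap<>();
        rules.put("javax.servlet", "jakarta.servlet"); // illustrative rule
        System.out.println(rename("javax.servlet.http.HttpServlet", rules));
        // prints jakarta.servlet.http.HttpServlet
        System.out.println(rename("javax.crypto.Cipher", rules));
        // prints javax.crypto.Cipher (no matching rule, left unchanged)
    }
}
```

Note how a name with no matching rule passes through untouched — that is what lets users scope the transformation to only the packages they care about.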

The tool provides an API allowing it to be embedded in a runtime to dynamically transform class files during the class loader definition process. The API also supports transforming JAR files. A CLI is also provided to allow use from the command line. Ultimately, the tool can be packaged as Gradle and Maven plugins to incorporate in a broader tool chain.

Given that the tool is a prototype, and there is much work to be done in the Jakarta EE community regarding the package renames, I have started a list of TODOs in the project's issues for known work items.

Please try out the tool and let me know what you think. I am hoping that tooling such as this will ease the community cost of dealing with the package renames in Jakarta EE.

PS. Package renaming in source code is also something the community will need to deal with. But most IDEs are pretty good at this sort of thing, so I think there is probably sufficient tooling in existence for handling the package renames in source code.

by BJ Hargrave at October 17, 2019 09:26 PM

I’ll never forget that first EclipseCon meeting with you guys and Disney characters all around and…

by Doug Schaefer at October 16, 2019 01:18 AM

I’ll never forget that first EclipseCon meeting with you guys and Disney characters all around and the music. And all the late nights in the Santa Clara bar and summits and meetings talking until no one else was left. Great times indeed. Until we meet again Michael!

by Doug Schaefer at October 16, 2019 01:18 AM

Missing ECE already? Bring back a little of it - take the survey!

by Anonymous at October 15, 2019 09:22 PM

We hope you enjoyed the 2019 version of EclipseCon Europe and OSGi Community Event as much as we did.

Please share your thoughts and feedback by completing the short attendee survey. We read all responses, and we will use them to improve next year's event.

Speakers, please upload your slides to your session page. Attendees really appreciate this!

by Anonymous at October 15, 2019 09:22 PM

JShell in Eclipse

by Jens v.P. at October 08, 2019 12:16 PM

Java 9 introduced a new command line tool: JShell. This is a read–eval–print loop (REPL) for Java with some really nice features. For programmers I would assume writing a test is the preferred choice, but for demonstrating something (in a class room for example) this is a perfect tool if you are not using a special IDE such as BlueJ (which comes with its own REPL). The interesting thing about
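The same REPL engine behind the jshell command-line tool is also available programmatically through the standard jdk.jshell API (JDK 9+). A minimal sketch, assuming a stock JDK where the jdk.jshell module is present:

```java
import jdk.jshell.JShell;
import jdk.jshell.SnippetEvent;
import java.util.List;

// Evaluate a snippet with the jdk.jshell API, the engine behind the
// jshell command-line tool.
public class JShellSketch {
    public static void main(String[] args) {
        try (JShell shell = JShell.create()) {
            List<SnippetEvent> events = shell.eval("int answer = 6 * 7;");
            // value() returns the snippet's result as a String
            System.out.println(events.get(0).value()); // prints 42
        }
    }
}
```

This embeddability is part of what makes JShell interesting beyond the command line — an IDE or teaching tool can host the same evaluator.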

by Jens v.P. at October 08, 2019 12:16 PM

Back to the top