
Introducing Oniro: A Vendor Neutral, Open Source OS for Next-Gen Devices

by Mike Milinkovich at October 26, 2021 12:01 PM

It’s a rare event when a new operating system comes along. And it’s even rarer to have the opportunity to influence the direction of that OS at its earliest stages. So I’m delighted to tell you that today we are announcing a new working group and top-level project that gives you that opportunity. The Oniro community will nurture and evolve the Oniro operating system, a transparent, vendor-neutral, and independent OS for the next generation of distributed systems.

The Oniro OS will provide a true, community-driven open source solution that runs on a wider spectrum of devices than today’s operating systems. And it will make it far easier to integrate different types of next-gen hardware and software.

Architected to Go Beyond Today’s Operating Systems

The Oniro OS can run on more devices than current operating systems because it features a multi-kernel architecture:

  • A Linux Yocto kernel allows the OS to run on larger embedded devices, such as Raspberry Pi-class devices 
  • A Zephyr kernel allows the OS to run on highly resource-constrained devices, such as a coffee maker or a thermostat

With the ability to run the same OS on different classes of devices, Oniro will provide an ideal solution to support the future of IoT, machine economy, edge, mobile, and other next-gen devices:

  • Consumers and adopters of the Oniro OS will have a more seamless experience than they have with the current generation of operating systems.
  • Devices will be able to directly connect to one another and share data, enabling a much higher degree of interoperability than is possible today.
  • Data exchanged between devices can flow directly between them rather than always being shared via the cloud, enabling low-latency architectures that are also inherently more secure and private. 

We expect the initial use cases for Oniro will be in the IoT and industrial IoT domains with applications for mobile devices coming later as the community evolves, grows, and establishes its roadmap.

Enabling the Global Ecosystem for OpenHarmony

Oniro is an independent open source implementation of OpenAtom’s OpenHarmony. To deliver on the promise of Oniro, the community will deliver an independent but compatible implementation of the OpenHarmony specifications, tailored for the global market. OpenHarmony is based on HarmonyOS, a multi-kernel OS that was developed by Huawei and contributed to the OpenAtom Foundation last year. In the future, Oniro will also deliver additional specifications to help drive global adoption.

By creating a compatible implementation of OpenHarmony, the Oniro community can ensure that applications built for Oniro will run on OpenHarmony and vice versa. This interoperability will allow the Oniro community to create a global ecosystem and marketplace for applications and services that can be used across both operating systems, anywhere in the world. 

Join an Innovative Open Source Community

I truly believe that Oniro is open source done right. It’s a huge opportunity to build an operating system that rethinks how devices across many different device classes can interoperate in a secure and privacy-preserving way. 

Because Oniro’s evolution is being guided by an open and vendor-neutral community using the Eclipse Development Process, openness and transparency are a given. This will go a long way towards building the engagement and stakeholder trust necessary to create the global ecosystem.

The founding members of the Oniro Working Group include telecom giant Huawei, Arm software experts Linaro, and industrial IoT specialists Seco. As more organizations become aware of Oniro, we expect the community to encompass organizations of all sizes and from all industries. 

I strongly encourage everyone with an interest in next-gen devices — corporations, academics, individuals — to take the opportunity to get involved in Oniro in its earliest stages. To get started, join the Oniro conversation by subscribing to the Oniro working group list.


Open Source Leader the Eclipse Foundation Launches Vendor-Neutral Operating System for Next-Generation Device Interoperability

by Jacob Harris at October 26, 2021 07:00 AM


Oniro will provide a true open source solution to make multi-device hardware and software integration easier

Brussels, October 26, 2021 – The Eclipse Foundation, a European open source foundation, furthering the recently announced cooperation with the OpenAtom Foundation, announced today the launch of the Oniro project and working group. 

Oniro aspires to become a transparent, vendor-neutral, and independent alternative to established IoT and edge operating systems. To achieve this goal and ensure Oniro has a global reach, the Eclipse Foundation and its members will deliver a compatible independent implementation of OpenHarmony, an open source operating system specified and hosted by the OpenAtom Foundation.

“Oniro is open source done right,” said Mike Milinkovich, executive director of the Eclipse Foundation. “It represents a unique opportunity to develop and host a next-generation operating system to support the future of mobile, IoT, machine economy, edge and many other markets.”

With the creation of the Oniro top-level project, the Eclipse Foundation aims to strengthen the global technology ecosystem, while bringing a vendor-neutral, open source OS to the global market.

To facilitate the governance for the Oniro device ecosystem, the Eclipse Foundation is also launching a new dedicated working group. The Eclipse Foundation’s working group structure provides the vendor neutrality and legal framework that enables transparent and equal collaboration between companies.

“We’re very proud to be hosting a major European open source project with worldwide contribution aiming to develop an independent OS,” says Gaël Blondelle, vice president of European ecosystem development. “To achieve this, we want to welcome developers and companies from Europe and the rest of the world to join our working group at the Eclipse Foundation and bring this groundbreaking project to life together.”

Quotes from Supporters

“We have been working hard with Linaro, Seco, Array, NOITechPark, Synesthesia to prepare Oniro’s initial code contribution and public cloud CI/CD infrastructure, and it is so exciting to see everything moving under the expert governance of the Eclipse Foundation,” said Davide Ricci, Director of Huawei’s Consumer Business Group European Open Source Technology Center. “Under the Eclipse Foundation the project will have its greatest chance at onboarding new contributing members and bringing real products on the shelves of consumer electronics stores around the world. We reckon Oniro is not a sprint, rather a marathon, and we are thrilled and committed to this world changing journey.”

“Over the past year, Linaro has worked closely with Huawei and other Oniro members on preparing the OS foundations of Oniro, leveraging the work Linaro is already doing on open source projects such as MCUboot, the Yocto project, Trusted Substrate and multiple RTOSs,” said Andrea Gallo, VP of Business Development. “Formalizing the governance of this project through the Eclipse Foundation is the natural next step in delivering a truly vendor-neutral and independent operating system.” 

“Oniro will be the future of the open source OS, it will mark a new trend for its deeply innovative nature and defining it only as an operating system would be extremely reductive. In fact, it focuses on the end-user with an incredible user-experience, but it is also oriented to the content creators and OEMs at the same time, bringing to all of them certainty, choice and convenience,” said Gianluca Venere, Chief Innovation Officer, SECO. “It is born for device collaboration at the edge, to be hardware architecture independent, to create a swarm intelligence, and to enable ambient computing. For more than 40 years SECO has been designing and manufacturing innovative products and services for OEMs and we strongly believe that Oniro is a game changer in supporting our customers to the digital transformation.”

About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit

Media Contacts:

Schwartz Public Relations for Eclipse Foundation
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for Eclipse Foundation in North America
Jay Nichols
+1 408-772-1551

PR Paradigm for Eclipse Foundation in France
Oscar Barthe
(+33) 06 73 51 78 91 

MSL Group for Eclipse Foundation in Italy
Rosa Parente
+39 340 8893581


The Eclipse IoT Working Group Celebrates its 10th Anniversary

by Jacob Harris at October 25, 2021 11:00 AM

The world’s largest open source community for edge and IoT continues to drive innovation that benefits a broad range of industries and applications 


BRUSSELS – October 25, 2021 – The Eclipse Foundation, one of the world’s largest open source software foundations, today celebrated the 10th Anniversary of the Eclipse IoT Working Group. Eclipse IoT is the largest open source IoT community in the world with 47 working group members, 47 projects, 360 contributors, and more than 32 million lines of code.

“It would be challenging to measure the industry impact of the Eclipse IoT Working Group over the past 10 years,” said Mike Milinkovich, executive director of the Eclipse Foundation. “From day one, this working group had a vision focused on developing actionable code as opposed to blueprints or standards, which has enabled it to stand apart from other organizations. This focus, along with the broad and diverse mix of Eclipse IoT ecosystem participants, has led to an extremely vibrant community that has helped drive commercial innovation and adoption at scale.”

In addition to the original founding members, IBM and Eurotech, the current Eclipse IoT ecosystem now includes globally recognized players such as Bosch.IO, Red Hat, Huawei, Intel, SAP, and Siemens. The community is further enriched with Industrial IoT (IIoT) specialists like Aloxy, Cedalo, itemis, and Kynetics, along with edge IoT innovators that include ADLINK Technology and Edgeworx.

Eclipse IoT is home to open source innovation that has delivered some of the industry’s most popular IoT protocols. CoAP (Eclipse Californium), DDS (Eclipse Cyclone DDS), LwM2M (Eclipse Leshan), MQTT (Eclipse Paho, Eclipse Mosquitto, and Eclipse Amlen), and OPC UA (Eclipse Milo) are all built around Eclipse IoT projects. Other popular production-ready Eclipse IoT platforms cover use cases such as digital twins (Eclipse Ditto), energy management (Eclipse VOLTTRON), contactless payments (Eclipse Keyple), and smart cities (Eclipse Kura), in addition to Eclipse Kapua — a modular IoT cloud platform that manages data, devices, and much more.

To learn more about how to get involved with Eclipse IoT, Edge Native, Sparkplug, or other working groups at the Eclipse Foundation, visit the Foundation’s membership page. Working group members benefit from a broad range of services, including exclusive access to detailed industry research findings, marketing assistance, and expert open source governance.

For further IoT & edge related information, please reach us at:

Quotes from Eclipse IoT Working Group pioneers

Andy Stanford-Clark, IBM UK CTO & Co-Inventor of MQTT
“Our original vision for the IoT Working Group was to create and curate a software stack which would enable developers to write ‘applications for platforms’, rather than ‘custom code for specific devices.’ Over the 10 years, I think we’ve made that vision a reality. I’m immensely proud of what we’ve achieved together.”

Andy Piper, Developer Advocate & Founding Project Lead, Eclipse Paho 
“It is inspiring to see the range and scope of projects that make up the Eclipse IoT Working Group, 10 years on - we knew that the keys to success would be open source, interoperability, and open standards. I’m hugely proud of the success of MQTT and Mosquitto, and the wider ecosystem in this space.” 

Marco Carrer, CTO, Eurotech
“Eclipse IoT WG has shattered the silos of monolithic M2M applications and proprietary connectivity by promoting open standards and open architectures while creating a vibrant community of interoperable projects. Eurotech is proud of having been part of this journey and we wish Eclipse IoT WG 10 more successful years”.

Deb Bryant, Senior Director, Open Source Program Office, Red Hat
“The 10th anniversary of the Eclipse Foundation IoT Working Group is a significant milestone not only for its members and partners, but for the technology and open source communities. Many solutions to challenges within global IoT ecosystems are the result of the Eclipse Foundation IoT Working Group’s dedication over the last decade to creating a vendor-neutral community of open source projects. Red Hat is proud to be a member of the Eclipse Foundation and looks forward to continuing our support for the IoT Working Group and helping to foster open source IoT achievements.”

Benjamin Cabé, Principal Program Manager, Microsoft:
“It is both exciting and humbling to see how our initial vision of enabling an Internet of Things based on open source and open standards has effectively turned into a reality, ten years down the road. The Eclipse IoT Working Group and its community of passionate individuals have been a catalyst for IoT innovation, and I am looking forward to ten more years of success!”

About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit
Third-party trademarks mentioned are the property of their respective owners.


Media contacts 

Schwartz Public Relations for the Eclipse Foundation, AISBL
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
+1 408-772-1551


What Cloud Developers Want

by Mike Milinkovich at October 22, 2021 12:30 PM

The results of our first-ever Cloud Developer Survey are in, providing important insight into the development tools being used today, the role of open source, and the capabilities developers are looking for in next generation cloud-based tools and IDEs.  

The Cloud Developer Survey was conducted April 22 to May 1, 2021, interviewing more than 300 software developers, DevOps specialists, architects, and IT leaders in the US, UK, France, and Germany. It’s important to point out that this survey was fielded by an independent team of analysts with the express purpose of minimizing bias and providing a clear market perspective to our member community. 

In commissioning this research project, our primary objective was to gain a better understanding of cloud-based developer trends by identifying the requirements, priorities, and challenges faced by organizations that deploy and use cloud-based development solutions, including those based on open source technologies. Our expectation is that through these findings, we can better ensure developers have the tools and technologies they need for cloud native application development.

An interesting finding is that more than 40 percent of survey respondents indicated that their company’s most important applications are now cloud native. And only three percent said their company has no cloud migration plans for important on-premise applications. This bodes well for the growth in cloud-based tools to help accelerate this trend and migration.

Developers Expect Open Source Tools and Technologies

One of the most significant trends revealed by the survey is the extremely high value developers place on open source. In a result rarely seen in surveys, 100 percent of participating organizations said they allow their developers to use open source technologies for software development, though 62 percent do place at least some restrictions on usage.

Looking ahead, developers expect open source to continue to grow in popularity, with more than 80 percent saying they consider open source to be important both now and in the future. With the focus on cloud native applications and a growing reliance on open source, it’s safe to say that open source and cloud development go hand-in-hand, and are here to stay.

Flexibility, Better Integrations, and Innovation are Attractive 

The Cloud Developer Survey also revealed that while developers use a variety of tools, they prefer using those with which they’re already familiar. This is reflected by the fact that 57 percent of survey respondents are still using desktop IDEs, including the Eclipse IDE. What this means is that there remains a huge developer community that has yet to benefit from open source cloud IDE technologies like Eclipse Theia, Eclipse Che, and Open VSX Registry, along with the ecosystem and products built around them.

Developers that do use cloud-based tools aren’t necessarily tied to using what their cloud provider recommends. Instead, they prefer open source options that offer opportunities for customization and innovation. No matter which technologies developers opt to use, increasing productivity is crucial. Developers are looking for better integrations of APIs and other features and tools that help save them time and effort.

Developers also want the flexibility to choose best-of-breed products and tools as needed to work more efficiently and to support the next wave of innovation in artificial intelligence, machine learning, and edge technologies. Open source drives innovation in these technologies, and flexible, open source tools will be key to attracting top talent to these cutting-edge development opportunities.

Read the Full Report and Recommendations

To review the complete Cloud Developer Survey results and the associated recommendations, download the survey report.

For more information about the Eclipse Cloud DevTools ecosystem and its benefits for members, visit the website.


Eclipse IDE 2021-09 Supports Java 17

by Karsten Silz at October 20, 2021 05:30 AM

The Eclipse Foundation released Eclipse IDE 2021-09, a quarterly update of its flagship project, on September 15, 2021. It supports Java 17 through a plugin and improves Java refactoring, code assist, Git history navigation, and the IDE's dark mode. The recently established Working Group has not reversed the decline in sub-project activities.

By Karsten Silz


OSGi Services with gRPC - Let's be reactive

by Scott Lewis at October 20, 2021 02:54 AM

ECF has just introduced an upgrade to the gRPC distribution provider. Previously, this distribution provider used ReactiveX Java version 2 only. With this release, ReactiveX Java version 3 is also supported.

As many know, gRPC allows services (both traditional call/response [aka unary] and streaming services) to be defined by a 'proto3' file. For example, here is a simple service with four methods: one unary (Check) and three streaming (server streaming, client streaming, and bidirectional streaming):
syntax = "proto3";


option java_multiple_files = true;
option java_outer_classname = "HealthProto";
option java_package = "";

message HealthCheckRequest {
  string message = 1;
}

message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
    SERVICE_UNKNOWN = 3; // Used only by the Watch method.
  }
  ServingStatus status = 1;
}

service HealthCheck {
  // Unary method
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  // Server streaming method
  rpc WatchServer(HealthCheckRequest) returns (stream HealthCheckResponse);
  // Client streaming method
  rpc WatchClient(stream HealthCheckRequest) returns (HealthCheckResponse);
  // Bidirectional streaming method
  rpc WatchBidi(stream HealthCheckRequest) returns (stream HealthCheckResponse);
}
The gRPC project provides a plugin so that when protoc is run, Java code (or code for another language) is generated that can then be used on the server and/or clients.

With some additional plugins, the classes generated by protoc can use the ReactiveX API. So, for example, here is the Java code generated by running the protoc, grpc, reactive-grpc, and osgi-generator plugins on the above HealthCheck service definition.

Note in particular the HealthCheckService interface generated by the osgi-generator protoc plugin:

import io.reactivex.rxjava3.core.Single;
import io.reactivex.rxjava3.core.Flowable;

@javax.annotation.Generated(
    value = "by grpc-osgi-generator (REACTIVEX) - A protoc plugin for ECF's grpc remote services distribution provider at ",
    comments = "Source: health.proto. ")
public interface HealthCheckService {
    /**
     * <pre>
     * Unary method
     * </pre>
     */
    default Single<HealthCheckResponse> check(Single<HealthCheckRequest> requests) {
        return null;
    }

    /**
     * <pre>
     * Server streaming method
     * </pre>
     */
    default Flowable<HealthCheckResponse> watchServer(Single<HealthCheckRequest> requests) {
        return null;
    }

    /**
     * <pre>
     * Client streaming method
     * </pre>
     */
    default Single<HealthCheckResponse> watchClient(Flowable<HealthCheckRequest> requests) {
        return null;
    }

    /**
     * <pre>
     * bidi streaming method
     * </pre>
     */
    default Flowable<HealthCheckResponse> watchBidi(Flowable<HealthCheckRequest> requests) {
        return null;
    }
}

Note that it uses two ReactiveX 3 classes: io.reactivex.rxjava3.core.Single and io.reactivex.rxjava3.core.Flowable. These classes provide the API for event-driven/reactive sending and receiving of unary (Single) and streaming (Flowable) arguments and return values.

The ReactiveX API, particularly Flowable, makes it very easy to implement both consumers and implementers of the streaming API while maintaining ordered delivery and non-blocking communication.

For example, a simple implementation of the HealthCheckService can express its logic directly through Single and Flowable operator methods, and a consumer of the service can be written in the same reactive style.
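The implementation and consumer code embedded in the original post is not reproduced here. As a rough, self-contained sketch of the same server-streaming pattern, the following uses only the JDK's built-in java.util.concurrent.Flow API rather than RxJava's Flowable; the class and method names (HealthCheckFlowDemo, watchServer, collect) are illustrative and not part of ECF or the generated code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Illustrative sketch only: uses the JDK Flow API instead of RxJava's
// Flowable, and plain Strings instead of the generated HealthCheckResponse.
public class HealthCheckFlowDemo {

    // Server-streaming method: emits a stream of status updates for one request.
    static Flow.Publisher<String> watchServer(String requestMessage) {
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        new Thread(() -> {
            // Wait for a subscriber so no items are dropped before subscription.
            while (!publisher.hasSubscribers()) Thread.onSpinWait();
            for (int i = 1; i <= 3; i++) {
                publisher.submit("SERVING " + requestMessage + " #" + i);
            }
            publisher.close(); // delivers onComplete to the subscriber
        }).start();
        return publisher;
    }

    // Consumer: requests one item at a time, i.e. explicit backpressure.
    static List<String> collect(String requestMessage) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        watchServer(requestMessage).subscribe(new Flow.Subscriber<String>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) { subscription = s; s.request(1); }
            public void onNext(String item) { received.add(item); subscription.request(1); }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete() { done.countDown(); }
        });
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        collect("ping").forEach(System.out::println);
    }
}
```

The request-one-at-a-time pattern in the subscriber is the same backpressure idea that Flowable provides; RxJava's Flowable implements the equivalent Reactive Streams contract.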

The use of the ReactiveX API simplifies both the implementation and the consumer use of both unary and streaming services. As an added bonus: the reactive-grpc library used in the ECF Distribution provider provides *flow-control* using backpressure.

In the next article I'll describe how OSGi Remote Services can easily be used to export, publish, discover, and import remote services with full support for service versioning, security, and dynamics. I'll also describe how one can use tools like Maven or Bndtools+Eclipse to generate source code (as above) from a proto3 file and easily run a generated service as an OSGi Remote Service.


Alice’s adventures in Sirius Web Land

October 11, 2021 10:00 AM

Since my early childhood I have loved stories: listening to books read by my mum, then reading comics or classical literature for school by myself, and now, though I dedicate less time to reading, mostly blog posts and news on the internet. One of my favorite novels remains “Alice’s Adventures in Wonderland”.

A young girl named Alice falls through a rabbit hole into a fantastic world of weird creatures. She meets people, she experiences, she tastes, she has to make decisions, sometimes she’s scared, and the minute after she’s happy. This book is like a roller coaster full of events and emotions.

When I think about our job at Obeo, when we have to create a tool dedicated to a specific domain for one of our customers, I feel like we are all small Alices experiencing the Sirius Land. We start by meeting people, trying to understand their jobs, their needs, we make decisions about what concepts we will specify, how they will be represented… Sometimes it works “Yes! I did it!” and sometimes the user’s feedback is not so good and we rework the tool “Oh, no, try again ;(“…

One year ago, we at Obeo released a first version of the Sirius Web project and I had this little Alice in mind…

Alice was beginning to get very tired of creating DSL graphical editors and of having too many things to do: start Eclipse, describe her domain with Ecore, generate the EMF code, launch another Eclipse runtime, specify her graphical mappings with Sirius Desktop, test with another Eclipse runtime, package everything to an update site, send it to Bob so that he can install it, help Bob who can’t find how to install the modeler, reiterate from the beginning to update the tool according to Bob’s feedback and needs…

“Oh dear! Would you tell me, please, which way I ought to go from here?” she asked,

“That depends a good deal on where you want to get to,” said the Cat.

Alice prayed for a framework to easily create and deploy her studios to the web!

Curiouser and curiouser! This exists!

I will be giving a talk at EclipseCon 2021 to tell you about Alice in Sirius Web Land! In this session, I will introduce and demonstrate:

  • how to describe your domain
  • how to specify your graphical editor
  • how to deploy your studio to your end-users

… everything from your browser, thanks to Sirius Web!

This is not a dream, this is really happening!

Sirius Web Domain and View definitions

I will demonstrate all examples using 100% open-source software. Come and join me! You have no excuse, register at EclipseCon. As it is a virtual event you can attend from anywhere, even from Wonderland!


A hands-on tutorial for Eclipse GLSP

by Jonas Helming, Maximilian Koegel and Philip Langer at October 05, 2021 06:54 AM

Do you want to learn how to implement diagram editors using Eclipse GLSP? Then please read on. We have just published...

The post A hands-on tutorial for Eclipse GLSP appeared first on EclipseSource.


RHAMT Eclipse Plugin 4.0.0.Final has been released!

by josteele at October 05, 2021 05:54 AM

We are happy to announce the latest release of the Red Hat Application Migration Toolkit (RHAMT) Eclipse Plugin.

Getting Started

It is now available through JBoss Central, and from the update site here.

What is RHAMT?

RHAMT is an automated application migration and assessment tool.

Example ways to RHAMT up your code:

  • Moving your application from WebLogic to EAP, or WebSphere to EAP

  • Version upgrade from Hibernate 3 to Hibernate 4, or EAP 6 to EAP 7

  • Change UI technologies from Seam 2 to pure JSF 2.

An example of how to run the RHAMT CLI:

$ ./rhamt-cli --input /path/to/jee-example-app-1.0.0.ear --output /path/to/output --source weblogic --target eap:7

The output is a report used to assess and prioritize migration and modernization efforts.

The RHAMT Eclipse Plugin - What does it do?

Consider an application migration comprised of thousands of files, with a myriad of small changes, not to mention the tediousness of switching between the report and your IDE. Who wants to be the engineer assigned to that task? :) Instead, this tooling marks the source files containing issues, making it easy to organize, search, and in many cases automatically fix issues using quick fixes.

Let me give you a quick walkthrough.

Ruleset Wizard

We now have quickstart template code generators.

Ruleset Wizard

Rule Creation From Code

We have also added rule generators for selected snippets of code.

Rule Generation From Source

Ruleset Graphical Editor

Ruleset navigation and editing is faster and more intuitive thanks to the new graphical editor.

Graphical Editor

Ruleset View

We have created a view dedicated to the management of rulesets. Default rulesets shipped with RHAMT can now be opened, edited, and referenced while authoring your own custom rulesets.

Ruleset View

Run Configuration

The Eclipse plugin interacts with the RHAMT CLI process, thereby making it possible to specify command line options and custom rulesets.

Run Configuration

Ruleset Submission

Lastly, contribute your custom rulesets back to the community from within the IDE.

Ruleset Submission

You can find more detailed information here.

Our goal is to make the RHAMT tooling easy to use. We look forward to your feedback and comments!

Have fun!
John Steele


We are hiring

by jeffmaury at October 05, 2021 05:54 AM

The Developer Experience and Tooling group, of which the JBoss Tools team is a part, is looking for an awesome developer. We are looking to continue improving usability for developers across various IDEs, including Eclipse, VS Code, and IntelliJ, and across the Red Hat product line, including JBoss Middleware.

Topics range from Java to JavaScript, application servers to containers, source code tinkering to full blown CI/CD setups.

If you are into making developers’ lives easier, and would like to get involved with many different technologies and make them work great together, then do apply.

You can also ping me with questions.

The current list of openings is:

Note: the job postings do list a specific location, but for the right candidate we are happy to consider many locations worldwide (anywhere there is a Red Hat office), as well as working from home.

Have fun!
Jeff Maury
@jeffmaury @jbosstools


The Thrill of Conquest

by Donald Raab at September 30, 2021 03:50 PM

A poem

Cadillac Mountain, Acadia National Park, Maine — Photo by Donald Raab
Cadillac Mountain, Acadia National Park, Maine — Photo by Donald Raab


I wrote this poem in 1988 and it was published in my high school literary magazine.

The Thrill of Conquest

Snowflakes drop upon our brows,
slush beneath our feet.
The air around us freezes the tips of our gloves;
Still, we are determined to conquer this last great mountain.
Just one foot in front of the other,
thinking thoughts of hot cocoa brewing on a sizzling stove.
Keep moving, because surely if we stop,
the end will consume us.
Beneath our necks, winter’s chill makes its home.
We cannot go on much longer.
At long last! The apogee is in sight!
Thank the Lord for small miracles.
We reach the top, and slump to the ground in exhaustion.
Our flag is set in the ground, claiming this mountain ours.
All of a sudden,
the wind carries the sound of a ghostly voice,
“Children, dinner is ready.”
Oh well, so much for another adventure.
We jump on our sleds,
and slide down our hill into the backyard.

— Donald Raab

Thank you for reading! I took the pictures this past weekend on a trip to Maine. I hope you enjoy them.

Sunset, Cadillac Mountain, Acadia National Park, Maine — Photo by Donald Raab

by Donald Raab at September 30, 2021 03:50 PM

Completed Kafka Connectivity

September 29, 2021 12:00 AM

Consuming messages from Apache Kafka in Eclipse Ditto

Eclipse Ditto has supported publishing events and messages to Apache Kafka for quite some time.
The time has come to support consuming as well.

A Kafka connection behaves slightly differently from other consuming connections in Ditto.
The following aspects are special:


Kafka’s way of horizontal scaling is to use partitions: the higher the load, the more partitions should be configured.
On the consumer side this means that a so-called consumer group can have as many consuming clients as there are partitions;
each partition is then consumed by exactly one client.

This matches Ditto's connection scaling perfectly: each Ditto connection forms such a consumer group.
For a connection there are two ways of scaling:

  1. clientCount on connection level
  2. consumerCount on source level

A connection client bundles all consumers for all sources and all publishers for all targets. It is guaranteed that for a single connection only one client can be instantiated per instance of the connectivity microservice.
This way Ditto provides horizontal scaling.

Therefore, the clientCount should never be configured higher than the number of available connectivity instances.

If the connectivity instance is not fully used by a single connection client, the consumerCount can be used to scale a connection’s consumers vertically. The consumerCount of a source indicates how many consumers should be started for a single connection client for this source. Each consumer is a separate consuming client in the consumer group of the connection.

This means that the number of partitions should be greater than or equal to clientCount multiplied by the highest consumerCount of any source.
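As a sketch, a Ditto Kafka connection combining both scaling options could look as follows. Only clientCount and consumerCount come straight from the text above; all other field names and values are illustrative placeholders, not a verified Ditto connection document. With clientCount 2 and consumerCount 2, the topic should have at least 2 × 2 = 4 partitions:

```json
{
  "id": "example-kafka-connection",
  "connectionType": "kafka",
  "connectionStatus": "open",
  "uri": "tcp://my-kafka-broker:9092",
  "clientCount": 2,
  "sources": [
    {
      "addresses": ["my-telemetry-topic"],
      "consumerCount": 2,
      "authorizationContext": ["example:subject"]
    }
  ]
}
```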

Backpressure and Quality of Service

Usually there is an application connected to Ditto which is consuming either messages or events of devices connected to Ditto.
These messages and events can now be issued by devices via Kafka.
What happens when the connected application temporarily can't process the messages emitted by Ditto at the rate at which the devices publish their messages via Kafka into Ditto?
The answer is: “It depends.”

There are two steps to increase the delivery guarantees for messages to the connected application.

  1. Make use of acknowledgements
  2. Configure the qos for the source to 1

The first will introduce backpressure from the consuming application to the Kafka consumer in Ditto.
This means that the consumer will automatically slow down consuming messages when the performance of the connected application slows down. This way the application has time to scale up, while the messages are buffered in Kafka.

The second step can be used when it’s necessary to ensure that the application not just received but successfully processed the message. If the message could not be processed successfully or if the acknowledgement didn’t arrive in time, the Kafka consumer will restart consuming messages from the last successfully committed offset.
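As an illustrative fragment, the second step corresponds to setting qos to 1 in the source definition. Only the qos setting is taken from the text; the other field names and values are placeholders of my own:

```json
{
  "addresses": ["my-telemetry-topic"],
  "consumerCount": 1,
  "qos": 1,
  "authorizationContext": ["example:subject"]
}
```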


Now that we know about backpressure, we also know that messages could remain in Kafka for some time.
The time can be limited by Kafka’s retention time, but this would be applied to all messages in the same way. What if some messages become invalid after some time, but others won’t?

Ditto provides message expiry on a per-message level. That way, Ditto filters out expired messages but still processes all others.

We embrace your feedback

Did you recognize a possible match of Ditto for some of your use cases? Are you missing something in this new feature?
We would love to get your feedback.


The Eclipse Ditto team

September 29, 2021 12:00 AM

Life in a Beautiful Day

by Donald Raab at September 28, 2021 04:54 PM

What my cousin Chris taught me about living

My cousin Chris on the London Eye, 2004

It’s a Beautiful Day

I hope this story reminds you of one positive thing, every single day.

My cousin Chris passed away on September 28, 2012. This is the first time I am writing about him. I don’t really know what to write to be honest. Chris was less a cousin, and more of a brother to me. A brother from another mother he would say.

Chris died a year before my wife was diagnosed with Leukemia. He was only 42 years old. My memories of him and the life he lived brought me comfort and strength in the hardest times, as my wife fought her war against AML.

Chris loved the U2 song “Beautiful Day.” Every time I saw him, he would happily and emphatically say these words to me.

It’s a Beautiful Day

I love the song, and think fondly of Chris every time I hear it. It will forever be his song. I enjoy listening and singing along to it and smiling as I think of him.

Surprisingly, I had never seen the video of the song until today. I just watched the official music video for the song for the very first time. I did not know there might be more to the song than the amazing melody and motivational lyrics. I think there will be a permanent palm print on my forehead after today.

The reason I say this is because Chris was an airline attendant for most of his career.

After watching this video, the song has an even stronger bond to Chris for me. I know Chris is smiling down at me as I learned this today.

Live a Beautiful Life every single Beautiful Day

Chris would always make me smile. Even during the darkest times of his short life, through all the battles he fought, Chris lived filled with happiness, love and with his motto ready to be shared with all.

Chris saw more of the world than most of us probably ever will. He came to visit my family when we lived in London in 2004, which is when I took the two pictures I have included in this post. Chris knew how to make the most out of a one or two day layover in a city. He knew how to live a beautiful life in a single day.

The last time I saw Chris was in NYC, the summer that he passed away. I have the last conversation I had with Chris saved on my phone from nine years ago. I was hoping to arrange a visit with him in Houston where he lived. He passed away before I got the chance. Our last words are a constant reminder to me, to live my life each day as a Beautiful Day.

Chris: Thank you buddy… I promise I will let u know!!! Love you
Me: Love U2… The band is great as well. ;)

If there is a heaven… I’m certain Chris is there enjoying every beautiful day. Chris was an angel on earth, possibly just dropping by for a quick layover to make sure we all learn how to live life in the moments we have. His life was a gift, and I am lucky to have been a part of it.

Wherever you are Chris, I love you, and I miss you.

It’s a Beautiful Day
My cousin Chris, enjoying a Beautiful Day

by Donald Raab at September 28, 2021 04:54 PM

Eclipse Theia Blueprint Beta 2 is released

by Jonas Helming, Maximilian Koegel and Philip Langer at September 27, 2021 10:59 AM

We are happy to announce the beta 2 release of Eclipse Theia Blueprint. Theia Blueprint is a template application allowing you...

The post Eclipse Theia Blueprint Beta 2 is released appeared first on EclipseSource.

by Jonas Helming, Maximilian Koegel and Philip Langer at September 27, 2021 10:59 AM

Announcing Eclipse Ditto Release 2.1.0

September 27, 2021 12:00 AM

The Eclipse Ditto teams announces availability of Eclipse Ditto 2.1.0.

As the first minor release of the 2.x series, it adds a lot of new features, the highlight surely being the full integration of Apache Kafka as a Ditto-managed connection.


Companies are willing to show their adoption of Eclipse Ditto publicly:

From our various feedback channels, however, we know of more adoption.
If you are making use of Eclipse Ditto, it would be great to show this by adding your company name to that list of known adopters.
In the end, that’s one main way of measuring the success of the project.


The main improvements and additions of Ditto 2.1.0 are:

  • Support for consuming messages from Apache Kafka, completing the Apache Kafka integration as a fully supported Ditto-managed connection type
  • Conditional requests (updates + retrievals)
  • Enrichment of extra fields for ThingDeleted events
  • Support for using (HTTP) URLs in Thing and Feature “definition” fields, e.g. linking to WoT (Web of Things) Thing Models
  • HMAC based authentication for Ditto managed connections
  • SASL authentication for Azure IoT Hub
  • Publishing of connection opened/closed announcements
  • Addition of a new “misconfigured” status category for managed connections, indicating e.g. that credentials are wrong or that the connection to the endpoint could not be established due to configuration problems
  • Support “at least once” delivery for policy subject expiry announcements

The following notable fixes are included:

  • Fix “search-persisted” acknowledgement not working for thing deletion
  • Fix reconnect loop to MQTT brokers when using separate MQTT publisher client

The following non-functional work is also included:

  • Support for tracing, reporting traces to an “OpenTelemetry” endpoint
  • Improving cluster failover and coordinated shutdown + rolling updates
  • Logging improvements, e.g. the ability to configure a Logstash server to send logs to, or more options to configure a logging file appender
  • Improving background deletion of dangling DB journal entries / snapshots based on the current MongoDB load
  • Improving search update by applying “delta updates” saving lots of bandwidth to MongoDB
  • Reducing cluster communication for search updates using a smart cache

Please have a look at the 2.1.0 release notes for more detailed information on the release.


The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Ditto JavaScript client release was published on

The Docker images have been pushed to Docker Hub:


The Eclipse Ditto team

September 27, 2021 12:00 AM

Diagram Editors for Web-based Tools with Eclipse GLSP

by Brian King at September 15, 2021 05:12 PM

In this article, we introduce the Eclipse Graphical Language Server Platform (GLSP), a technology to efficiently build diagram editors for web- and cloud-based tools. These diagram editors can run inside an IDE, such as Eclipse Theia or VS Code, or be used stand-alone in any web application. Eclipse GLSP fills an important gap in the implementation of graphical editors for web-based domain-specific tools. It is an ideal next-generation solution for replacing traditional desktop technologies such as GEF and GMF. Eclipse GLSP is a very active open source project within the Eclipse Cloud Development Tools ecosystem.

Diagram Editors in the Web/Cloud

There is now a big push to migrate tools and IDEs to web technologies and run them in the cloud or via Electron on the desktop. Eclipse Theia and VS Code offer two powerful frameworks for supporting such an endeavour (see this comparison). In recent years, we have also seen significant innovation around enabling web-based tools, such as the language server protocol (LSP) and the debug adapter protocol (DAP). The focus of early adopters has been very clearly on enabling textual programming. However, domain-specific use cases and tools often use graphical representations and diagram editors for better comprehension and more efficient development. Eclipse GLSP fills this gap. You can consider it to be like LSP and DAP, but for diagram editors. It provides a framework and a standardized way to efficiently create diagram editors that can be flexibly embedded into tools and applications.

A Feature Rich Framework

Eclipse GLSP began in 2018 and has been very actively developed since then. Due to many industrial adopters, the framework is very feature-rich. This includes standard diagram features such as nodes and edges, a palette, moving/resizing, zooming, inline editing and compartments (see screenshot above). GLSP is targeted at diagram editors rather than “drawing boards”, so it also provides classic tool features such as undo/redo, validation and navigation between textual artifacts and the diagram. Last but not least, GLSP allows the integration of powerful layout and routing algorithms (such as ELK) to enable auto-layouting or advanced routing (see example below). With so many features, Eclipse GLSP is more than ready to be adopted for industrial diagram editors. To learn more, please refer to this detailed feature overview of Eclipse GLSP.

©  logi.cals GmbH

One very important benefit of using a web technology stack for rendering is that there are almost no limitations on what you can actually draw. Eclipse GLSP supports adding custom shapes via SVG and CSS. As you can see in the screenshot below, you have complete freedom to design your diagram elements, including animations.

Now that we have talked about the feature set, let’s take a look under the hood and provide an overview of how GLSP actually works.

How Does it Work?

Implementing a diagram editor based on GLSP consists of two main parts. (1) The rendering component is responsible for drawing things on screen and enables user interaction. (2) The business logic component implements the actual behavior of a diagram, e.g. which nodes can be created, what connections are allowed or how to manipulate domain data on diagram changes. Eclipse GLSP cleanly encapsulates both parts using a defined protocol (the Graphical Language Server Protocol).

Source: GLSP Homepage

The server manages the diagram state and manipulations. It also connects the diagram to surrounding features. As an example, the server could update a domain model or a database to represent the data of the diagram. As another example, the server can apply layout algorithms to efficiently auto-layout the diagram (see screenshot below). When implementing a custom diagram editor, you mostly need to implement a GLSP server. GLSP provides a helper framework for more efficiency. However, due to the defined protocol, you can actually use any language for that.

The default GLSP client is implemented using TypeScript, SVG and CSS. It interprets the protocol messages from the server and draws the result. Performance-critical operations, such as drag and drop, are handled directly by the client. In most scenarios, the default client already covers most requirements. So, when implementing a custom diagram editor, you usually only need to define how certain elements are rendered.

As you can see, the architecture of GLSP is similar to the language server protocol and the debug adapter protocol. These approaches are highly successful, as the defined split between server and client provides a lot of flexibility. It also requires much less effort to implement new diagrams, as the client is already provided by the framework. With very few lines of code you get full fledged diagrams, integrated with your custom tool! Also see this detailed introduction to Eclipse GLSP and a minimal example diagram editor to learn more about how GLSP works.
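To make the protocol idea concrete: client and server exchange action messages. The following JSON is an illustrative example of a client requesting the diagram model, loosely following the action/kind convention of such protocols; the exact field names are not taken from the GLSP specification and should be treated as an assumption:

```json
{
  "clientId": "example-client",
  "action": {
    "kind": "requestModel",
    "options": {
      "sourceUri": "file:///workspace/example.diagram"
    }
  }
}
```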

Integration with Tools and IDEs

Eclipse GLSP is based on standard web technologies and is easy to integrate into any web application. A common scenario for adopters of GLSP is to integrate it into a tool or an IDE. For Eclipse Theia, VS Code and the Eclipse desktop IDE, GLSP provides out-of-the-box integration (see screenshot below). Integration in this context means features such as an editor component that manages the dirty state, or the ability to double-click files to open diagrams. The integrations are generic and independent of the actual diagram editor implementation. As a consequence, you can provide the same diagram editor in different contexts, e.g. as part of a tool and as part of a regular web page. Please see this article about GLSP diagram editors in VS Code, in Theia and an overview of the available integrations.


Eclipse GLSP allows you to efficiently implement web-based diagram editors and either run them stand-alone or embed them into Eclipse Theia or VS Code. By adopting the same architectural pattern as LSP and DAP, it provides a clean separation between the visual concerns (rendering on the client) and the business logic (GLSP server). This reduces the amount of effort required, as the rendering client is already provided. It also provides the flexibility to use the server language of choice and to integrate the diagram with other components, such as a layout algorithm or any data source.

Eclipse GLSP is an active open source project within the Eclipse Cloud Development Tools ecosystem. It fills the important role of a next-generation diagram editor framework for web-based tools. GLSP is built upon Eclipse Sprotty and integrates well with Eclipse Theia and VS Code. There are several commercial adoptions of GLSP. If you are interested in trying an open example, check out the coffee editor provided by

If you want to learn more about Eclipse GLSP, check out this recent Eclipse Cloud Tool Time talk. The GLSP website provides more articles and videos and there are also professional services available for GLSP.

Finally, there will be a talk about GLSP at EclipseCon 2021, so be sure to get registered!

by Brian King at September 15, 2021 05:12 PM

WTP 3.23 Released!

September 15, 2021 03:01 PM

The Eclipse Web Tools Platform 3.23 has been released! Installation and updates can be performed using the Eclipse IDE 2021-09 Update Site or through any of the related Eclipse Marketplace entries. Release 3.23 is included in the 2021-09 Eclipse IDE for Enterprise Java and Web Developers, with selected portions also included in several other packages. Adopters can download the R3.23 p2 repository directly and combine it with the necessary dependencies.

More news

September 15, 2021 03:01 PM

Bye Bye 'build' : the end of an era

by Denis Roy at September 10, 2021 01:34 PM

build:~ # halt

That's the last command anyone will ever type on the venerable "build" server. Born in 2005, it was used as a general-purpose machine for running builds and jobs for our committers. Some folks ran ant from cron jobs, some ran Cruise Control, and in 2007, we installed Hudson - a single instance CI for any project that wanted to create a job and use it.

From there, we added worker nodes, but as usage increased, stability decreased.

Afterwards, we invented HIPP (a Hudson Instance Per Project) which, over the years, evolved into the current Jenkins+k8s-based Jiro (Jenkins Instance Running on OpenShift) offering we have at

The Build server went through numerous OS refreshes and a couple of hardware refreshes over the years, and just wasn't being used anymore. The current unit is an Intel SR1600 series from 2009 (you have to give credit to Intel, they know how to build them!), so after 12 years, it's time to turn it off -- or perhaps give it new life?

With some added RAM and a shiny new SSD, it will likely be repurposed towards the k8s build cluster, where it will relive its glory days and produce, once again, the binary output from the projects we all love.

Thanks, build, see you in your next life.

by Denis Roy at September 10, 2021 01:34 PM

Eclipse p2 site references

by Lorenzo Bettini at September 02, 2021 11:29 AM

Say you publish a p2 repository for your Eclipse bundles and features. Typically your bundles and features will depend on something external (other Eclipse bundles and features). The users of your p2 repository will have to also use the p2 repositories of the dependencies of your software otherwise they won’t be able to install your software. If your software only relies on standard Eclipse bundles and features, that is, something that can be found in the standard Eclipse central update site, you should have no problem: your users will typically have the Eclipse central update site already configured in their Eclipse installations. So, unless your software requires a specific version of an Eclipse dependency, you should be fine.

What happens instead if your software relies on external dependencies that are available only in other p2 sites? Or, to put it another way, you rely on an Eclipse project that is not part of the simultaneous release, or you need a version different from the one provided by a specific Eclipse release.

You should tell your users to use those specific p2 sites as well. This, however, will decrease the user experience at least from the installation point of view. One would like to use a p2 site and install from it without further configurations.

To overcome this issue, you should make your p2 repository somehow self-contained. I can think of 3 alternative ways to do that:

  • If you build with Tycho (which is probably the case if you don’t do releng stuff manually), you could use <includeAllDependencies> of the tycho-p2-repository plugin to “aggregate all transitive dependencies, making the resulting p2 repository self-contained.” Please keep in mind that your p2 repository itself will become pretty huge (likely a few hundred MB), so this might not be feasible in every situation.
  • You can put the required p2 repositories as children of your composite update site. This might require some more work and will force you to introduce composite update sites just for this. I’ve written about p2 composite update sites many times in this blog in the past, so I will not consider this solution further.
  • You can use p2 site references, which are meant just for the task mentioned so far and have been part of the category.xml specification for some time now. The idea is that you put references to the p2 sites of your software dependencies, and the corresponding content metadata of the generated p2 repository will contain links to the p2 sites of the dependencies. Then, p2 will automatically contact those sites when installing software (at least from Eclipse; from the command line we’ll have to use specific arguments, as we’ll see later). Please keep in mind that this mechanism works only if you use recent versions of Eclipse (if I remember correctly, this was added a couple of years ago).

In this blog post, I’ll describe such a mechanism, in particular, how this can be employed during the Tycho build.

The simple project used in this blog post can be found here: You should be able to easily reuse most of the POM stuff in your own projects.

IMPORTANT: To benefit from this, you’ll have to use at least Tycho 2.4.0. In fact, Tycho started to support site references only a few versions ago, but only in version 2.4.0 has this been implemented correctly (I personally fixed this). If you use a (not so much) older version, e.g., 2.3.0, there’s a branch in the above GitHub repository, tycho-2.3.0, where some additional hacks have to be performed to make it work (rewrite metadata contents and re-compress the XML files, just to mention a few), but I’d suggest you use Tycho 2.4.0.

There’s also another important aspect to consider: if your software switches to a different version of a dependency that is available on a different p2 repository, you have to update such information consistently. In this blog post, we’ll deal with this issue as well, keeping it as automatic (i.e., less error-prone) as possible.

The example project

The example project is very simple:

  • parent project with the parent POM;
  • a plugin project created with the Eclipse wizard with a simple handler (so it depends on org.eclipse.ui and org.eclipse.core.runtime);
  • a feature project including the plugin project. To make the example more interesting this feature also requires, i.e., NOT includes, the external feature org.eclipse.xtext.xbase. We don’t actually use such an Xtext feature, but it’s useful to recreate an example where we need a specific p2 site containing that feature;
  • a site project with category.xml that is used to generate during the Tycho build our p2 repository.

To make the example interesting the dependency on the Xbase feature is as follows

   <import feature="org.eclipse.xtext.xbase" version="2.25.0" match="compatible"/>

So we require version 2.25.0.

The target platform is defined directly in the parent POM as follows (again, to keep things simple):
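The original POM snippet was not captured here; as a rough sketch (the repository ids are my own, and the URLs follow the standard Eclipse/Xtext download site layout, so double-check them), such a target platform definition in the parent POM could look like:

```xml
<repositories>
  <repository>
    <id>eclipse-2020-12</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/releases/2020-12</url>
  </repository>
  <repository>
    <id>xtext-2.25.0</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0</url>
  </repository>
</repositories>
```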


Note that I explicitly added the Xtext 2.25.0 site repository because in the 2020-12 Eclipse site Xtext is available with a lower version 2.24.0.

This defines the target platform against which we built (and, in a real example, hopefully tested) our bundle and feature.

Initially, the category.xml is defined as follows

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="org.example.feature" version="0.0.0">
      <category name="org.example.category"/>
   </feature>
   <category-def name="org.example.category" label="P2 Example Composite Repository">
      <description>
         P2 Example Repository
      </description>
   </category-def>
</site>

The problem

If you generate the p2 repository with the Maven/Tycho build, you will not be able to install the example feature unless Xtext 2.25.0 and its dependencies can be found (actually, also the standard Eclipse dependencies have to be found, but as said above, the Eclipse update site is already part of the Eclipse distributions). You then need to tell your users to first add the Xtext 2.25.0 update site. In the following, we’ll handle this.

A manual, and thus cumbersome, way to verify that is to try to install the example feature in an Eclipse installation pointing to the p2 repository generated during the build. Of course, we’ll keep also this verification mechanism automatic and easy. So, before going on, following a Test-Driven approach (which I always love), let’s first reproduce the problem in the Tycho build, by adding this configuration to the site project (plug-in versions are configured in the pluginManagement section of the parent POM):
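The plugin configuration itself is not shown here; the following is a hedged sketch of what it might look like, based on the description below. The property names (${site.repository.url}, ${install.destination}) and the chosen dependencies are placeholders of my own, not the article's exact configuration:

```xml
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <configuration>
    <!-- the Eclipse repository used to resolve the director application itself -->
    <repositories>
      <repository>
        <id>eclipse-2020-12</id>
        <layout>p2</layout>
        <url>https://download.eclipse.org/releases/2020-12</url>
      </repository>
    </repositories>
    <!-- bundles/features needed to run the p2 director application (assumed set) -->
    <dependencies>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.director.app</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
    </dependencies>
    <appArgLine>
      -application org.eclipse.equinox.p2.director
      -repository ${site.repository.url}
      -installIU org.example.feature.feature.group
      -destination ${install.destination}
      -followReferences
    </appArgLine>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals><goal>eclipse-run</goal></goals>
    </execution>
  </executions>
</plugin>
```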


The idea is to run the standard Eclipse p2 director application through the tycho-eclipserun-plugin. The dependency configuration is standard for running such an Eclipse application. We try to install our example feature from our p2 repository into a temporary output directory (these values are defined as properties so that you can copy this plugin configuration in your projects and simply adjust the values of the properties). Also, the arguments passed to the p2 director are standard and should be easy to understand. The only non-standard argument is -followReferences that will be crucial later (for this first run it would not be needed).

Running mvn clean verify should now highlight the problem:

!ENTRY org.eclipse.equinox.p2.director ...
!MESSAGE Cannot complete the install because one or more required items could not be found.
!SUBENTRY 1 org.eclipse.equinox.p2.director...
!MESSAGE Software being installed: Feature 2.0.0.v20210827-1002 ( 2.0.0.v20210827-1002)
!SUBENTRY 1 org.eclipse.equinox.p2.director ...
!MESSAGE Missing requirement: Feature 2.0.0.v20210827-1002
   ( 2.0.0.v20210827-1002)
     'org.eclipse.equinox.p2.iu; [2.25.0,3.0.0)'
   but it could not be found

This would mimic the situation your users might experience.

The solution

Let’s fix this: we add to the category.xml the references to the same p2 repositories we used in our target platform. We can do that manually (or by using the Eclipse Category editor, in the tab Repository Properties):

The category.xml is now defined as follows

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="org.example.feature" version="0.0.0">
      <category name="org.example.category"/>
   </feature>
   <category-def name="org.example.category" label="P2 Example Composite Repository">
      <description>
         P2 Example Repository
      </description>
   </category-def>
   <repository-reference location="" enabled="true" />
   <repository-reference location="" enabled="true" />
</site>

Now, when we create the p2 repository during the Tycho build, the content.xml metadata file will contain the references to the p2 repositories (with a slightly different syntax, but that’s not important; it will contain a reference to the metadata repository and to the artifact repository, which are usually the same). Now our users can simply use our p2 repository without worrying about dependencies! Our p2 repository will be self-contained.

Let’s verify that by running mvn clean verify; now everything is fine:

!ENTRY org.eclipse.equinox.p2.director ...
!MESSAGE Overall install request is satisfiable
!SUBENTRY 1 org.eclipse.equinox.p2.director ...
!MESSAGE Add request for Feature 2.0.0.v20210827-1009
  ( 2.0.0.v20210827-1009) is satisfiable

Note that this requires much more time: now the p2 director has to contact all the p2 sites defined as references and has to also download the requirements during the installation. We’ll see how to optimize this part as well.

In the corresponding output directory, you can find the installed plugins; you can’t do much with such installed bundles, but that’s not important. We just want to verify that our users can install our feature simply by using our p2 repository, that’s all!

You might not want to run this verification on every build, but, for instance, only during the build where you deploy the p2 repository to some remote directory (of course, before the actual deployment step). You can easily do that by appropriately configuring your POM(s).

Some optimizations

As we saw above, each time we run a clean build, the verification step has to access remote sites and download all the dependencies. Even though this is a very simple example, the dependencies downloaded during the installation amount to almost 100MB, every time you run the verification. (It might be the right moment to stress that the p2 director knows nothing about the Maven/Tycho cache.)

We can employ some caching mechanisms by using the standard mechanism of p2: bundle pool! This way, dependencies will have to be downloaded only the very first time, and then the cached versions will be used.

We simply introduce another property for the bundle pool directory (I’m using by default a hidden directory in the home folder) and the corresponding argument for the p2 director application:
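The snippet is missing above; roughly, and with a property name of my own invention (bundle-pool), the addition amounts to a new property plus one extra p2 director argument:

```xml
<properties>
  <!-- hypothetical property name; a hidden directory in the home folder -->
  <bundle-pool>${user.home}/.bundlepool</bundle-pool>
</properties>

<!-- added to the p2 director arguments in the tycho-eclipserun-plugin configuration -->
<appArgLine>
  ...
  -bundlepool ${bundle-pool}
  ...
</appArgLine>
```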


Note that now the plug-ins during the verification step will NOT be installed in the specified output directory (which will store only some p2 properties and caches): they will be installed in the bundle pool directory. Again, as said above, you don’t need to interact with such installed plug-ins, you only need to make sure that they can be installed.

In a CI server, you should cache the bundle pool directory as well if you want to benefit from some speed. E.g., this example comes with a GitHub Actions workflow that stores also the bundle pool in the cache, besides the .m2 directory.

This will also allow you to easily experiment with different configurations of the site references in your p2 repository. For example, up to now, we put the same sites used for the target platform. Referring to the whole Eclipse releases p2 site might be too much since it contains all the features and bundles of all the projects participating in Eclipse Simrel. In the target platform, this might be OK since we might want to use some dependencies only for testing. For our p2 repository, we could tweak references so that they refer only to the minimal sites containing all our features’ requirements.

For this example we can replace the 2 sites with 4 small sites covering all the requirements (actually the Xtext 2.25.0 site is just the same as before):

<repository-reference location="" enabled="true" />
<repository-reference location="" enabled="true" />
<repository-reference location="" enabled="true" />
<repository-reference location="" enabled="true" />

You can verify that removing any of them will lead to installation failures.

The first time this tweaking might require some time, but you now have an easy way to test this!

Keeping things consistent

When you update your target platform, i.e., your dependency versions, you must make sure to update the site references in the category.xml accordingly. It would instead be nice to modify this information in a single place so that everything else is kept consistent!

We can again use properties in the parent POM:


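A minimal sketch of such a properties section (a hedged example: the property names match those used in the templated category.xml shown later, and the values are assumptions based on the Eclipse 2020-12 release, i.e. platform 4.18, and the Xtext 2.25.0 site mentioned in this post):

```xml
<properties>
  <!-- Hypothetical values: adjust to your actual target platform versions -->
  <eclipse-version>2020-12</eclipse-version>
  <eclipse-version-number>4.18</eclipse-version-number>
  <xtext-version>2.25.0</xtext-version>
</properties>
```
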
We want to use such properties also in the category.xml, relying on the standard Maven mechanism of copying resources with filtering.

We create another category.xml in the subdirectory templates of the site project using the above properties in the site references (at least in the ones where we want to have control on a specific version):

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="org.example.feature" version="0.0.0">
      <category name="org.example.category"/>
   </feature>
   <category-def name="org.example.category" label="P2 Example Composite Repository">
      <description>
         P2 Example Repository
      </description>
   </category-def>
   <repository-reference location="${eclipse-version-number}" enabled="true" />
   <repository-reference location="${xtext-version}" enabled="true" />
   <repository-reference location="${eclipse-version}" enabled="true" />
   <repository-reference location="" enabled="true" />
</site>

and in the site project we configure the Maven resources plugin appropriately:


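A hedged sketch of that configuration (the execution id and the chosen phase are assumptions; the point is to run the standard copy-resources goal with filtering enabled, so that ${...} properties in templates/category.xml are replaced and the result overwrites the category.xml in the project root):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>filter-category</id>
      <!-- any phase that comes BEFORE the p2 repository is generated -->
      <phase>generate-resources</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <!-- overwrite the category.xml in the root of the site project -->
        <outputDirectory>${basedir}</outputDirectory>
        <resources>
          <resource>
            <directory>templates</directory>
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
```
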
Of course, we execute that in a phase that comes BEFORE the phase when the p2 repository is generated. This will overwrite the standard category.xml file (in the root of the site project) by replacing properties with the corresponding values!

By the way, you could use the property eclipse-version also in the configuration of the Tycho Eclipserun plugin seen above, instead of hardcoding 2020-12.

Happy releasing! 🙂

by Lorenzo Bettini at September 02, 2021 11:29 AM

Dependency Cycles During Load Time

by n4js dev at August 20, 2021 06:10 AM

When programming in large code bases it can happen inadvertently that cycles of imports are created. In JavaScript, import statements trigger loading and initialization of the specified file directly. In case there is a dependency cycle, files might only be initialized partially and hence errors might occur later during runtime. In this post we present how N4JS detects and avoids these cases by showing validation errors in the source code.


Let's start with the most simple example in JavaScript to illustrate the essential problem.

console.log(s); // prints 'test'?
export const s = "test";

Executing the two-liner above results in: 

ReferenceError: Cannot access 's' before initialization.

This is quite obvious and wouldn't surprise anyone. It is obvious because the read access is stated right before the definition of the constant s in the same file. However, it wouldn't be very obvious anymore when both the read access and the definition of s happen in separate files. Let's split up the example into the files F1.mjs and F2.mjs.


F1.mjs:

import * as F2 from "./F2.mjs";
export const s = "test";
console.log(F2.s); // prints undefined?


F2.mjs:

import * as F1 from "./F1.mjs";
export const s = F1.s;

Executing this example results in a similar error:

ReferenceError: s is not defined.

And again the cause for the error is an access to a not yet initialized variable. As a side note: modifying the variable to be a 'var' instead of a 'const' would fix the error and print "undefined". This is due to hoisting of var symbols, but it is still not the intended result, which would be the print-out "test".
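
The hoisting behavior from the side note can be reproduced in a single file (a minimal sketch, independent of any module cycle):

```javascript
// 'var' declarations are hoisted to the top of the scope, but their
// assignments are not, so reading the variable before the assignment
// yields undefined instead of throwing a ReferenceError:
console.log(v); // prints: undefined
var v = "test";
console.log(v); // prints: test
```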

So far, both of the examples either give an unintended result or a runtime error. Errors like these can only be identified after they have actually happened, during tests or in production. While the two-liner example seems way too obvious to occur often in practice, the second case can easily hide in projects with many files and imports. A further difference is that the runtime error of the first example can usually be identified and fixed easily, whereas the second example can span many files as the cycle of import statements grows, and is therefore hard to find and fix.

Two important properties of the execution semantics of JavaScript in Node.js can be witnessed here:

(1) In case a file m is started or imported that imports another file m', a subsequent import back to file m will be skipped. As a result, file m' might be only initialized partially when accessing not yet initialized elements from m.

(2) There is an exception to (1) regarding functions. Since functions are hoisted, they do not have to be reached by the control flow to get initialized. Hoisting will initialize them immediately so that they can be called from any location.
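
Property (2) can be observed in a single file, contrasting hoisted function declarations with const bindings (a minimal sketch):

```javascript
// Function declarations are hoisted AND initialized immediately, so a
// call placed before the textual definition works:
console.log(f()); // prints: ok

// const/let bindings, in contrast, are not initialized until their
// declaration is reached; accessing them earlier throws a ReferenceError:
try {
  console.log(s);
} catch (e) {
  console.log(e instanceof ReferenceError); // prints: true
}

function f() { return "ok"; }
const s = "test";
```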

Let's look at a third example which reveals a similar case of reference errors. This time the error occurs depending on the entry point of the program. Have a look at the two files below, which either result in the print-out "test" or "undefined" depending on which file was the entry point for Node.js. Starting with file G1.mjs causes the execution to follow the green indicators and yields "test", whereas starting with file G2.mjs follows the red indicators and yields "undefined".

These kinds of errors might not be of interest when implementing a stand-alone application since these programs usually have a single and well known entry point only. Yet, cycles can occur also in parts of the program and then the entry point is determined by the order of import statements. Moreover, when writing libraries and exposing an API that spans across several files, the entry point can differ a lot and is defined by the library's user. Hence, in case of an unfortunate setup of files and import statements, a library might suffer from unexpected behavior depending on which part of its API was called first.

Also note that all the examples stated their imports at the top and all other statements below. When mixing import statements or dynamic imports with other code, it is even easier to create reference errors.

Validations in N4JS

One of the goals of N4JS is to provide many handy and powerful language constructs along with type safety and strong validations. The reason behind the latter is to prevent especially those errors from happening at runtime that are hard to find and hard to reproduce. Migrated to N4JS, the second example would show validation errors at the references to F1.s and F2.s due to the dependency cycle. The approach to detect these cases is explained in the following paragraphs: we first lay out the terminology, then reason about the general problem, and finally define the error cases in N4JS.


Terminology

Top level elements are those AST elements of a JavaScript file that are direct children of the root element such as import statements, const or class declarations, and others. Some top level elements can contain expressions or statements such as initializers of consts or extends clauses of classes. These initializers are executed when loading a file. A reference located in such initializers to a top level element (imported or not) is called load time reference.

In addition to compile time and runtime, the term load time is used to refer to the first phase of runtime during which all import statements and top level elements of the started JavaScript file are executed. In this regard we assume that initialization is performed during load time. In a separate step later, some specific calls to the API of imported files would perform the actual requested functionality.

A dependency between two files consists of an import statement and may have imported elements that may be used in the same file. Dependencies with unused imported elements are called unused imports, and those without imported elements are called bare imports. The target of a dependency is the imported file and also the imported element (except for bare imports). There exists at least one dependency for each import statement, and for each code reference to an imported (top level) element. Dependencies are differentiated into three kinds:

Compile time dependencies arise from all non-unused import statements. Runtime dependencies are the subset of compile time dependencies that is necessary at runtime only, i.e. it does not include unused imports or imports used for type information. (We assume that unused imports do not have intended side effects like bare imports have.) Load time dependencies are the subset of runtime dependencies with load time references.

A dependency cycle exists when traversing import statements of one file to the imported files will eventually lead to one of the already visited files. Note that the term dependency cycle refers to files and not necessarily to imported elements. Dependency cycles are differentiated as follows: Compile time dependency cycles are those relying on compile time dependencies. Runtime dependency cycles rely on runtime dependencies and are of special interest later. Load time dependency cycles rely on load time dependencies and are evaluated to errors in N4JS.


To get a clearer understanding, it is important to know the impact of dependency cycles in a program. An inherent property of dependency cycles is that at least one of the import statements during load time gets skipped since it would load a file that is already processed. In a cycle free program, all import statements of all files can be understood as a directed graph of files connected by import statements that define a partial load order. It is usually harmless that the total order of loading files depends on the entry point, i.e. which file is imported first or started the program, since it complies with that partial order. However, in case of dependency cycles the graph contains a cycle which will be broken up at load time to re-establish a directed graph and partial order. That means that the loading of at least one file of each cycle will be skipped because it is already being loaded. Other files that depend on that skipped import might be initialized only partially. Consequently, the entry point, e.g. the order of import statements, impacts whether a file is initialized partially or completely after its import statement was executed. Sorting import statements is a very common IDE feature and usually deemed to be innocent of causing runtime errors. Yet this assumption does not necessarily hold if the program contains dependency cycles.

We learned that partial initialization occurs if a load time initializer accesses a reference to a not yet initialized element of a skipped file. Probably that not yet initialized element will be initialized later during load time, but harm was already done since the current file had read the wrong value. Where exactly did the problem occur? References to not yet initialized elements can be located not only directly in load time initializers but can also be at locations reachable transitively, e.g. by calling other functions starting from the initializer. Determining all reachable references from load time initializers which potentially access not yet initialized values can only be done by an expensive analysis that is usually imprecise due to over-approximation. In many cases it is even impossible due to reflective calls, dynamic loading etc. However, a simpler way to rule out accesses to partially initialized elements is to make a clear cut and forbid any expressions or statements in load time initializers that cannot be evaluated at compile time, e.g. function calls. On the downside, this strictness also reduces some programming freedom and even rules out legal load time references that would not cause runtime errors.

To summarize the approach: Either runtime dependency cycles need to be removed or - if that is not possible - load time initializers need to be restricted to not reference potentially skipped files.

A very interesting situation is when a runtime dependency cycle C contains a file m that has a load time initializer with a dependency d to file m'. This means that the cycle becomes a cycle that has a correct and an incorrect way of loading its files: Due to load time dependency d file m' must be loaded without being skipped. Still, at least one other import must be skipped to break the cycle. To make sure that loading of m' is not skipped, m' must not be the entry point of the cycle C. Choosing another entry point e.g. file m will result in partially loading m first, loading the rest of the cycle C including m' completely until another import to m is skipped. In other words: A load time dependency to a file m' within a cycle C constrains m' to never be the entry point into C. This situation is illustrated in the figure below.

The figure above shows the third example with additional information about its dependencies and cycles. As you can see there exists a runtime dependency cycle (indicated in blue), since the two files reference each other in runtime import statements. Also indicated in orange there exists a load time dependency because the reference to G2.s is located in an expression of a top level element that is evaluated during load time. Hence, this dependency imposes the constraint that the entry point to the third example must be G1.

In contrast note that the second example has a load time dependency cycle due to the two load time dependencies created by the accesses to F1.s and F2.s.

Error cases

Four types of errors are indicated in different situations regarding load time dependencies. Based on a source code analysis, runtime dependency cycles and references located in top level elements are detected first.

(1) Given this information, load time dependency cycles can be identified and be evaluated to errors. These errors are attached to the references of the load time dependencies.

Three other types of errors occur if and only if there exists a runtime dependency cycle C of modules m and m' (and maybe including others).

(2) Any load time reference in C that references a top level element in C (imported ones or in the same file) is marked with an error. This includes all load time dependencies. The reason to forbid any references to e.g. local or imported functions from C is that these may reach and access partially initialized variables. In N4JS there is one exception to that rule: extends clauses of classes. Load time references are still allowed here and not causing problems because extends clauses in N4JS are already restricted to references to other classes only (and not arbitrary expressions like in JavaScript). Note that ordinary dependencies (i.e. that do not have references in load time code) are still allowed, e.g. within the body of methods.

(3) Any dependency d to a module m' is marked with an error if and only if there exists a load time dependency to m' already. In other words: There may be no other dependency in C to m' if d is a load time dependency. In case an importing module m* is not in C a dependency to m' is allowed.

(4) However, when importing m' from m*, it is mandatory to also import another module m of the cycle prior to importing m'. Otherwise, an error is shown. The import of m prior to the import of m' ensures that loading of m' is not skipped.

When programming with N4JS and errors like that occur, there are two ways to solve them. First and best solution is to remove the dependency cycle, which in many cases is a code smell already. This can be done by breaking the cycle or merging two or more files or file parts that mutually depend on each other. In case that is not possible, removing some load time dependencies is necessary. However, keep in mind that any load time dependency in a dependency cycle will impose a runtime execution order on the importing file to be loaded always prior to the imported file.


H1.n4js:

import * as H2 from "H2";
class C extends H2.C {} // no error (3) here


H2.n4js:

import "H1";
export public class C {}

The last example shows a case similar to the third example: There are two files that have a runtime dependency cycle. Additionally, there is a load time dependency created by the extends clause that references H2.C. Note that the third example produces the validation error (3) at the load time reference G2.s because we disallow all non-compile time expressions or statements in load time initializers. This shows where simplifications of our approach might be improved in the future. Since we make an exception to error (3) in case the load time dependency is an extends clause, the last example shows no errors in N4JS.


Conclusion

The core problem is read accesses to variables that are not yet initialized. While these kinds of problems are relatively obvious and easy to find when they happen in a single file, it is much harder to detect them when they are caused by dependency cycles of two or more files. For the single file case, several IDEs and languages already provide validations and put error markers on read accesses of undefined symbols, such as VSCode for TypeScript. By introducing the validations described in this blog post, N4JS can also rule out initialization errors caused by dependency cycles. Unfortunately, in some cases this approach is too strict, but we hope to relax some of the restrictions to improve the compromise between program safety and programming freedom.

by Marcus Mews

by n4js dev at August 20, 2021 06:10 AM

gRPC Remote Services Development with Bndtools - video tutorials

by Scott Lewis at August 16, 2021 09:11 PM

Here are four new videos that show how to define, implement and run/debug gRPC-based remote services using bndtools, eclipse, and ECF remote services.

Part 1 - API Generation - The generation of an OSGi remote service API using bndtools code generation and the protoc/gRPC compiler. The example service API has both unary and streaming gRPC method types, supported by the reactivex API.

Part 2 - Implementation and Part 3 - Consumer - bndtools-project-template-based creation of remote service impl and consumer projects

Part 4 - Debugging - Eclipse/bndtools-based running/debugging of the remote service created in Parts 1-3.

by Scott Lewis at August 16, 2021 09:11 PM

Eclipse JKube 1.4.0 is now available!

July 27, 2021 05:00 PM

On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.4.0 has been released and is now available from Maven Central.

Thanks to all of you who have contributed with issue reports, pull requests, feedback, spreading the word with blogs, videos, comments, etc. We really appreciate your help, keep it up!

What's new?

Without further ado, let's have a look at the most significant updates:

Multi-layer support for Container Images

Until now, JKube pre-assembled everything needed to generate the container image in a temporary directory that was then added to the image with a single COPY statement. This meant that any single change to the application code would change this layer, which is especially inefficient for the Jib build strategy.

Since this release, we can define our image build model with several layer assemblies and avoid this inefficiency by packaging different layers (dependencies, application slim jars, etc.). We've also updated the Quarkus Generator to take advantage of this new feature. Check the following demo for more details:

Support DockerImage as output for OpenShift builds

OpenShift Container Platform comes with an integrated container image registry. By default, when you build your image using OpenShift Maven Plugin and S2I strategy, the build configuration is set up to push into this internal registry.

JKube now provides the possibility to push the image to an external registry by leveraging OpenShift's build output configuration.

The following property will enable this configuration. Check the embedded video for more details.


Using this release

If your project is based on Maven, you just need to add the kubernetes maven plugin or the openshift maven plugin to your plugin dependencies:


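For reference, the plugin declaration looks roughly like this (a sketch using the standard Eclipse JKube coordinates and this release's version):

```xml
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.4.0</version>
</plugin>

<!-- or, when targeting OpenShift: -->
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.4.0</version>
</plugin>
```
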
How can you help?

If you're interested in helping out and are a first time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.

If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag. We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.

Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

Eclipse JKube Logo

July 27, 2021 05:00 PM

5 Reasons to Adopt Eclipse Theia

by Brian King at July 13, 2021 12:02 PM

Recently I wrote about the momentum happening in the Eclipse Theia project. In this post, I want to highlight some good reasons to adopt Theia as your IDE solution. The core use case for Theia is as a base upon which to build a custom IDE or tool. However, if you are a developer looking for a great tool to use, you will find some motivation here as well. The inspiration for this post comes from Theia project lead Marc Dumais’ ‘Why Use Theia?’ talk that he gave at the recent Cloud DevTools Community Call. So, you could say this is Marc’s post!

1. Modern Technology Stack

Theia is Web-first. It’s built on modern Web technologies, and if we compare it with traditional IDEs such as the Eclipse Desktop IDE or IntelliJ it is a big departure in terms of technologies used.

These best of breed web-based technologies include Node.js, HTML5 and CSS, TypeScript, and npm. Theia runs in all modern browsers and also on the desktop via Electron. So from a UI perspective, you can finally say goodbye to SWT or Swing and benefit from the modern rendering capabilities of HTML5. This will dramatically improve the look and feel of any tool built on Theia compared to previous platforms. Even better, you can use modern UI frameworks, such as React, Vue.js or Angular, within Theia!

The use of npm connects Theia to a huge ecosystem of available frameworks for almost any purpose. However, it is worth mentioning that it is also very easy to integrate other technologies, e.g. Java, Python or C++ on the backend due to the very flexible architecture.

It’s important to note that these technologies are not only state-of-the-art for modern tools, they also heavily overlap with how business applications are being built today, allowing Theia to benefit from the ongoing evolution of a large ecosystem. This also makes recruiting easier. As an example, compare how many developers know how to develop in React vs SWT these days.

In a nutshell, the technology stack of Theia is powerful, modern and, last but not least, very common.

2. Cloud and Desktop

Eclipse Theia is designed to be used on the web as well as on the desktop. While other tools and platforms are typically created for either desktop or web use, supporting both use cases is in the core DNA of Eclipse Theia. And we have adopters in both camps, as well as those that take full advantage of the power of Theia to provide both options at the same time, based on the same code. Having both options enables adopters to implement a long-term evolution strategy. Many companies start with a desktop tool and move to a full cloud-based solution later. Having this flexibility with minimal overhead is a unique and powerful benefit of Theia!

3. Extensible Framework

Eclipse Theia is much more configurable and extensible than other tools like VS Code. While a VS Code extension can add behavior to the IDE at runtime, there are limitations. For example, an extension can register support for searching for symbol references in a new language. The VS Code API covers many of the “standard” use cases when adding support for new programming languages. However, you cannot change the behavior of the IDE in many respects or leave out parts of the IDE that are not needed.

Eclipse Theia, on the other hand, is designed in a way that almost any part of the IDE can be omitted, replaced or customized without changing the Theia source code. You can create your own Theia build and add your own modules to override or extend most parts of the IDE through dependency injection.  

Eclipse Theia supports the same extension API as VS Code. This means extensions created for VS Code are also usable in Theia. Most popular extensions can be obtained from the public Open VSX Registry.

It’s also easy to make your Theia-based application your own. Name/brand it, make it look different, customize views and user interface elements. You can adapt and customize almost anything, and therefore, build tools that fulfil your domain-specific and custom requirements.

To learn more, please see this article about VS Code extensions vs. Theia extensions and this comparison between Eclipse Theia and VS Code.


Source: VS Code extensions vs. Theia extensions

4. Multi-Language Support Through LSP and DAP

Traditionally, language support was implemented independently in each editor, meaning there was little or no consistency in features between them. To solve this, Microsoft specified the Language Server Protocol (LSP), a way to standardize the communication between language tooling and a code editor. This architecture allows the development of the actual code editor (e.g. Monaco) to be separated from the language support (the language server). This invention has boosted the development of support for all kinds of languages.

Similarly for debugging, another crucial function of an IDE, the Debug Adapter Protocol (DAP) was created to define a way for IDEs to work with debuggers.

These technologies originated in VS Code and are appearing in more and more places. Eclipse Theia has provided full support for LSP and DAP since its inception. You can therefore benefit from the ever-growing ecosystem of available language servers. The ecosystem around Theia goes even further, for example with the Graphical Language Server Protocol (GLSP), which works similarly to LSP but for diagram editors.

If you want to provide support for your own custom language, you can simply develop a language server for it. This makes your language available in Theia and also in any other tool that supports LSP, DAP or GLSP.

5. Truly Open Source and Vendor Neutral

Many tool technologies are open source. However, there are some details and attributes of an open source project that make a huge difference for adopters of a technology. This is especially true for tools, as the maintenance cycle is typically rather long, sometimes decades. Adopters of a platform or framework should therefore focus on the strategic consequences. Let us look at the criteria in more detail.

Fully Open Source

Eclipse Theia and all its components are fully open source. There are no proprietary parts (as there are in VS Code, for example; see this comparison).


Eclipse Theia is licensed under the Eclipse Public License (EPL). The EPL allows for commercial use, meaning you can build commercial products based on Theia without license issues. The EPL has a great track record of being commercially adopted, so many details, such as “derivative work” are well defined.

Intellectual Property Management

Defining a license for a project is a first step, but if developers use copied code or dependencies that are incompatible with the EPL, an adopter of the project might become guilty of a copyright violation. Theia is an Eclipse project and therefore its code and dependencies are vetted by the Eclipse Foundation. There are defined agreements for contributors and regular reviews (including dependencies) to ensure the IP cleanliness of the code base. This significantly lowers the risk for adopters of running into license issues.


Open Governance

Many open source projects are almost exclusively driven and controlled by a single vendor. Eclipse Theia follows the Eclipse Foundation development process. It governs the collaboration and decision making in the project and ensures a level playing field for all members of the community. For adopters, the two most obvious benefits are: (1) No single party can drive the decisions, no single party can change the rules, meaning that it is a safe long-term option. (2) The rules ensure you can gain influence and be part of the decision making by participating in the community. This way you can make sure that the project evolves in a direction that suits your requirements.

Vendor Neutral

Not only does the governance model of Eclipse Theia ensure vendor neutrality, the project is also very diverse in terms of contributors. If you look at the contributing companies below (a select list only), you can clearly see that Theia enjoys the broad support that is so important for innovation, maintenance and the long-term availability of a project.

In a nutshell, Eclipse Theia benefits from a diverse base of contributors and follows a proven license, IP and governance model that has enabled and preserved strategic investments for more than two decades.

Bonus: A Vibrant Ecosystem

Last but not least, Theia is built around a vibrant ecosystem. There are several commercial adopters that have built their solutions with Theia, including Arm Mbed Studio, Arduino Pro IDE, Red Hat CodeReady Workspaces, and Google Cloud Shell.

Many adopters, service providers and contributors are organized and participate in the Eclipse Cloud DevTools Working Group. Current members include Arm, Broadcom, EclipseSource, Ericsson, IBM, Intel, RedHat, SAP, STMicroelectronics, and TypeFox. The working group structure allows these companies to coordinate their efforts, use cases and strategies. It brings together parties with a common goal, e.g. there is a special interest group for building tools for embedded programming. This set-up allows for great initiatives that serve a common goal and are developed in collaboration. As an example, the ecosystem provides Open VSX, a free and open alternative to the VS Code marketplace. As another example, Eclipse Theia blueprint provides a template for building Theia applications.

In addition to Theia as a core platform, there is a robust ecosystem of supporting projects and technologies. Eclipse has always been a great place for frameworks around building tools to solve all kinds of requirements. For example, there is a framework for building web-based diagram editors called Eclipse GLSP. As another example, there are projects that transfer a lot of concepts from the EMF ecosystem to the cloud, e.g. model management, model comparison or model validation. Finally, quite a few existing technologies have targeted Theia to make the transition to the web, including Xtext, TraceCompass and many more. So when building on Theia, you do not just get a framework for building tools and IDEs, you can also benefit from the larger ecosystem being built around it!


As you can see, there are many reasons to adopt Eclipse Theia. We listed several important ones in this blog, but there are many more to discover. As you research options, you might find other solutions that are on par with Theia in specific categories. However, the combination of advantages Theia offers is unique. That is not by accident: Theia was explicitly created as an open, flexible and extensible platform to “develop and deliver multi-language Cloud & Desktop IDEs and tools with modern, state-of-the-art web technologies.”   

To see what is coming next, check out the roadmap which is updated quarterly. The roadmap is a moving snapshot that shows priorities of contributing organizations. Common goals are discussed weekly at the Theia Dev Meeting and additional capabilities and features identified there will make it onto the roadmap. Take a look at the project to evaluate how to get involved. The best places to look first are the GitHub project and the community forum.

by Brian King at July 13, 2021 12:02 PM

Choosing servers for Kubernetes

by Denis Roy at July 07, 2021 08:14 PM

The Eclipse Foundation jumped on the Kubernetes bandwagon a few years ago, for the same reasons as everyone else. Our need at the time was for a scalable/fault tolerant solution for our Jenkins-based CI system. We started small by repurposing older hardware, and with early successes, the cluster grew out of a combination of new iron and more repurposed servers.

As cluster usage grew, we started receiving feedback about slow and fluctuating build times. You see, we'd typically purchase hardware for low I/O, parallel operations - lots of CPU cores, lots of RAM, for those hundreds of web requests per second we typically handle. As it turns out, although these machines can handle thousands of simultaneous short-lived connections, they suck at single-threaded operations that run for 20 minutes.

For new hardware, we moved away from the "big iron" model, choosing instead smaller units with fewer but much faster CPU cores, and faster memory buses. To save money up-front, we'd equip them with inexpensive HDDs, with the understanding that local I/O wasn't much of a thing. Now, I know what you're thinking: HDDs? Duh, get SSDs, it's a no-brainer. Our release engineers Mikaël and Fred have been pleading the case for SSDs in build machines for years. I had simply underestimated the impact of local disk I/O -- either when multiple disk-intensive pods were scheduled at the same time, or when images were pulled to the local node for spin-up. An 8-minute build would later take 27 minutes, for no obvious reason.


We've since been retrofitting all our worker nodes with SSDs with, obviously, much success. The fast machines are now fast -- consistently, and the older iron performs adequately. And with some benchmarking (thanks for the data and image, Mikaël Barbero), we're able to identify worker nodes that are simply outclassed, such as third-from-the-left "okdnode-12", which is headed for a permanent retirement.

With the targeted use of labels, we can reserve older hardware for typical website applications where slower core speed is appropriate for those short-lived connections.
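That labeling approach can be sketched as follows (the label key, value and node name here are hypothetical illustrations, not the Foundation's actual configuration):

```shell
# Mark an older, many-core worker as suitable for short-lived web requests
kubectl label node okdnode-03 hardware-class=web

# A web application's pod spec can then opt into that hardware class:
#   spec:
#     nodeSelector:
#       hardware-class: web
```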

We're doing our best to provide Eclipse projects with a reliable, performant, expandable and consistent platform for running builds, without breaking the bank. It's a learning curve for sure, and we're getting there.

by Denis Roy at July 07, 2021 08:14 PM

Quarkus 2 + Kubernetes Maven Plugin + GraalVM integration

July 06, 2021 05:30 AM


In this tutorial, we'll see how to develop and integrate a very simple Quarkus 2 application with Kubernetes Maven Plugin (Eclipse JKube) to publish a native GraalVM image into Docker Hub and deploy it on Kubernetes.

This is a remake of my original article Quarkus + Fabric8 Maven Plugin + GraalVM integration, since the Fabric8 Maven Plugin is now deprecated.

In the first part, I describe how to build a very simple Quarkus application. Next, I describe how to build a Quarkus native executable with GraalVM. Finally, I show how to integrate the project with Kubernetes Maven Plugin and how to publish the application container images into Docker Hub and deploy them to Kubernetes.

Quarkus 2 example application

In this section, I'll describe how to build a simple application that will return a random quote each time you perform a request to the /quotes/random endpoint.

Project bootstrapping

Since you probably already have Maven installed in your system, the easiest way to bootstrap the project is by running the following command:

mvn io.quarkus:quarkus-maven-plugin:2.1.1.Final:create \
    -DprojectGroupId=com.marcnuri.demo \
    -DprojectArtifactId=kubernetes-maven-plugin-quarkus \
    -DclassName="com.marcnuri.demo.kmp.quote.QuoteResource" \
    -Dpath="/quotes"

If the command completes successfully, you will see a new kubernetes-maven-plugin-quarkus directory containing an initial Maven project with Maven wrapper support.

However, if you don't have Maven installed, or if, on the other hand, you prefer an interactive graphical user interface, you can navigate to code.quarkus.io to customize and download a bootstrapped project with your specific requirements.

Quarkus project bootstrap (code.quarkus.io)

Project resources

As I already explained, the application will serve a random quote each time a user performs a request to an endpoint. The application will load these quotes from a JSON file located in the project resources folder. For this purpose, you'll add the file quotes.json to the src/main/resources/quotes/ directory.
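For illustration, quotes.json holds an array of objects with content and author fields. The entries below are made-up examples, not the file from the original project:

```json
[
  {
    "content": "Simplicity is the soul of efficiency.",
    "author": "Austin Freeman"
  },
  {
    "content": "Make it work, make it right, make it fast.",
    "author": "Kent Beck"
  }
]
```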

Random quote endpoint

Once we've got the resources set up, we can start with the code implementation. The first step is to create a Quote POJO that will be used to map the quotes defined in the JSON file when it's deserialized.

public class Quote implements Serializable {
  /** ... **/
  private String content;
  private String author;
  /** ... **/
}

Next, we'll create a QuoteService class to provide the service that reads the quotes from the resources directory and selects a random quote.

public class QuoteService {

  private static final Logger log = LoggerFactory.getLogger(QuoteService.class);

  private static final String QUOTES_RESOURCE = "/quotes/quotes.json";

  private final List<Quote> quotes;

  public QuoteService() {
    quotes = new ArrayList<>();
  }

  @PostConstruct
  protected final void initialize() {
    final var objectMapper = new ObjectMapper();
    try (final InputStream quotesStream = QuoteService.class.getResourceAsStream(QUOTES_RESOURCE)) {
      quotes.addAll(objectMapper.readValue(quotesStream,
        objectMapper.getTypeFactory().constructCollectionType(List.class, Quote.class)));
    } catch (IOException e) {
      log.error("Error loading quotes", e);
    }
  }

  Quote getRandomQuote() {
    return quotes.get(ThreadLocalRandom.current().nextInt(quotes.size()));
  }
}

The initialize method uses Jackson to read and deserialize the quotes.json file into a member ArrayList variable that will be used later on to fetch a random quote.

The getRandomQuote method returns a random Quote entry from the ArrayList for each invocation.

To complete the application, we need to modify the bootstrapped REST endpoint to use the service we implemented. For this purpose, we'll modify the QuoteResource class.

public class QuoteResource {

  private static final String HEADER_QUOTE_AUTHOR = "Quote-Author";

  private QuoteService quoteService;

  @GET
  @Path("/random")
  @Produces(MediaType.TEXT_PLAIN)
  public Response getRandomQuote() {
    final var randomQuote = quoteService.getRandomQuote();
    return Response
      .ok(randomQuote.getContent(), MediaType.TEXT_PLAIN_TYPE)
      .header(HEADER_QUOTE_AUTHOR, randomQuote.getAuthor())
      .build();
  }

  @Inject
  public void setQuoteService(QuoteService quoteService) {
    this.quoteService = quoteService;
  }
}

The method getRandomQuote uses an instance of the previously described QuoteService class to get a random quote and return its content in the HTTP response body. In addition, the author of the quote is also added as a Response header.

Once we complete all the steps, we can start the application in development mode using the following command:

./mvnw clean compile quarkus:dev
[INFO] --- quarkus-maven-plugin:2.1.1.Final:dev (default-cli) @ kubernetes-maven-plugin-quarkus ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory D:\00-MN\projects\marcnuri-demo\kubernetes-maven-plugin-quarkus\src\test\resources
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to D:\00-MN\projects\marcnuri-demo\kubernetes-maven-plugin-quarkus\target\test-classes
Listening for transport dt_socket at address: 5005
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2021-07-04 07:49:33,690 INFO  [io.quarkus] (Quarkus Main Thread) kubernetes-maven-plugin-quarkus 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.1.1.Final) started in 3.774s. Listening on: http://localhost:8080
2021-07-04 07:49:33,693 INFO  [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
2021-07-04 07:49:33,693 INFO  [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, resteasy, resteasy-jackson, smallrye-context-propagation]

Tests paused, press [r] to resume, [h] for more options>

The endpoint will now be accessible at http://localhost:8080/quotes/random.

Screenshot of the result of executing curl localhost:8080/quotes/random -v

Building a native executable with GraalVM

Now it's time to make our application supersonic. For this purpose, we are going to use GraalVM to create a native binary of the application.

We are now going to adapt the application to make it fully compatible with GraalVM.

Include resources

By default, GraalVM won’t include any of the resources available on the classpath during image creation using native-image. Resources that must be available at runtime must be specifically included during image creation.

To configure GraalVM to account for our quotes.json resource file, we need to modify the project's application.properties file and include the following line:

quarkus.native.additional-build-args=-H:IncludeResources=.*\.json$

With this line, we tell Quarkus to add the -H:IncludeResources command-line flag to the native-image command. In this specific case, we want to include any file that ends with the .json extension.

Native image reflection

Jackson JSON deserialization uses reflection to create instances of the target classes when performing reads. A GraalVM native image build requires knowing ahead of time which elements are reflectively accessed by the program.

Quarkus eases this by providing a @RegisterForReflection annotation that automates the registration. For our example application, we'll need to annotate the Quote class.
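Applied to our example, the annotated class would look like this (a sketch reusing the Quote class shown earlier):

```java
import java.io.Serializable;

import io.quarkus.runtime.annotations.RegisterForReflection;

// Registers Quote (fields, constructors, methods) for reflection in the
// native image, so that Jackson can instantiate and populate it at runtime.
@RegisterForReflection
public class Quote implements Serializable {
  /** ... **/
}
```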

Building the native application

Now that we've adapted the application to be fully GraalVM compatible, we can perform the build in native mode. If GraalVM with native-image support is available in our system, we can simply run the following command:

./mvnw clean package -Pnative
[INFO] [io.quarkus.deployment.pkg.steps.NativeImageBuildRunner] D:\00-MN\bin\graalvm-ce-java11-21.1.0\bin\native-image.cmd
  -J-Dvertx.disableDnsResolver=true -J-Dio.netty.leakDetection.level=DISABLED -J-Dio.netty.allocator.maxOrder=3
  -J-Duser.language=en -J-Dfile.encoding=UTF-8 -H:IncludeResources=.*.json\$
  --initialize-at-build-time=\$BySpaceAndTime -H:+JNI
  -H:+AllowFoldMethods -jar kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner.jar -H:FallbackThreshold=0
  -H:+ReportExceptionStackTraces -H:-AddAllCharsets -H:EnableURLProtocols=http -H:-UseServiceLoaderFeature
  -H:+StackTrace kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]    classlist:   2,473.21 ms,  0.96 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]        (cap):   3,921.54 ms,  0.96 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]        setup:   6,147.99 ms,  0.96 GB
08:52:27,268 INFO  [org.jbo.threads] JBoss Threads version 3.4.0.Final
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]     (clinit):     664.62 ms,  4.67 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]   (typeflow):  16,865.10 ms,  4.67 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]    (objects):  20,902.79 ms,  4.67 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]   (features):     945.86 ms,  4.67 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]     analysis:  40,710.23 ms,  4.67 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]     universe:   1,702.71 ms,  4.67 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]      (parse):   4,527.99 ms,  4.67 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]     (inline):   8,267.27 ms,  5.83 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]    (compile):  24,709.00 ms,  5.69 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]      compile:  39,765.47 ms,  5.69 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]        image:   4,563.42 ms,  5.69 GB
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]        write:   2,281.61 ms,  5.69 GB
# Printing build artifacts to: kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner.build_artifacts.txt
[kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner:2772]      [total]:  98,272.86 ms,  5.69 GB
[WARNING] [io.quarkus.deployment.pkg.steps.NativeImageBuildRunner] objcopy executable not found in PATH. Debug symbols will not be separated from executable.
[WARNING] [io.quarkus.deployment.pkg.steps.NativeImageBuildRunner] That will result in a larger native image with debug symbols embedded in it.
[INFO] [io.quarkus.deployment.QuarkusAugmentor] Quarkus augmentation completed in 106997ms
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  02:04 min
[INFO] Finished at: 2021-07-04T08:53:44+02:00
[INFO] -----------------------------------------------------------------------

If the command executes successfully, a new kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner executable will be available in the target directory.

We can now run the application natively by executing the following command:

./target/kubernetes-maven-plugin-quarkus-1.0.0-SNAPSHOT-runner

Same as when we ran the application in JVM mode, the endpoint is now available at http://localhost:8080/quotes/random.

If GraalVM is not available in your system, but Docker is, the same command can be run inside a Docker container to build a Linux native binary:

./mvnw clean package -Pnative -Dquarkus.native.container-build=true

Kubernetes Maven Plugin (Eclipse JKube) integration

This is the final step in this tutorial. In this section, I'm going to show you how to integrate the project with Kubernetes Maven Plugin (Eclipse JKube).

The process is as simple as adding Eclipse JKube's Kubernetes Maven Plugin to our project's pom.xml:

<properties>
  <!-- ... -->
  <jkube.version>1.4.0</jkube.version>
</properties>
<!-- ... -->
<build>
  <!-- ... -->
  <plugins>
    <!-- ... -->
    <plugin>
      <groupId>org.eclipse.jkube</groupId>
      <artifactId>kubernetes-maven-plugin</artifactId>
      <version>${jkube.version}</version>
    </plugin>
  </plugins>
</build>

The configuration is as straightforward as adding a <plugin> entry with groupId, artifactId & version to indicate that we want to use the Kubernetes Maven Plugin. In many cases, this may be enough, as the plugin has a Zero-Config mode that takes care of defining most of the settings for us by analyzing the project's configuration and inferring the recommended values.

Build Container (Docker) Image (k8s:build)

First, we need to remove the boiler-plate Dockerfiles that Quarkus provides for us in the src/main/docker directory. Kubernetes Maven Plugin infers all of this configuration from the project, so there's no need to maintain these files and we can safely remove them.

Eclipse JKube's Zero-Config mode uses Generators and Enrichers which provide opinionated defaults. For this project, the only common setting we need to take care of is the one related to the authorization for the Docker Hub registry. In this case, we are configuring JKube to read the push credentials from the DOCKER_HUB_USER and DOCKER_HUB_PASSWORD environment variables.

For this purpose, you need to add the following to the plugin's configuration:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>${jkube.version}</version>
  <configuration>
    <authConfig>
      <push>
        <username>${env.DOCKER_HUB_USER}</username>
        <password>${env.DOCKER_HUB_PASSWORD}</password>
      </push>
    </authConfig>
  </configuration>
</plugin>

Since we developed the project with support both for Quarkus JVM and Native modes, we are going to generate 2 different container (Docker) images depending on the Maven profile we select.

Docker image running application with JVM

In Quarkus 2, fast-jar is the default packaging for the JVM mode. This means that, unless stated otherwise (e.g. via a Maven profile), the mvn package command will output the necessary files for this mode. Since I'm going to publish these images to Docker Hub, I will name this image marcnuri/kubernetes-maven-plugin-quarkus:jvm.

This will create an image for the marcnuri repository with the name kubernetes-maven-plugin-quarkus and jvm tag.

In order to tweak the opinionated default image name of JKube's Quarkus generator, we need to set the jkube.generator.name property. We can achieve this by adding the following entry to the pom.xml global properties section:

<properties>
  <!-- ... -->
  <jkube.version>1.4.0</jkube.version>
  <jkube.generator.name>marcnuri/kubernetes-maven-plugin-quarkus:jvm</jkube.generator.name>
</properties>

We can now run the following command to build the Docker image:

./mvnw clean package k8s:build
[INFO] --- kubernetes-maven-plugin:1.4.0:build (default-cli) @ kubernetes-maven-plugin-quarkus ---
[INFO] k8s: Running in Kubernetes mode
[INFO] k8s: Building Docker image in Kubernetes mode
[INFO] k8s: Running generator quarkus
[INFO] k8s: quarkus: Using Docker image as base / builder
[INFO] k8s: [marcnuri/kubernetes-maven-plugin-quarkus:jvm] "quarkus": Created docker-build.tar in 7 seconds 
[INFO] k8s: [marcnuri/kubernetes-maven-plugin-quarkus:jvm] "quarkus": Built image sha256:4947d
[INFO] k8s: [marcnuri/kubernetes-maven-plugin-quarkus:jvm] "quarkus": Tag with latest
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  16.113 s
[INFO] Finished at: 2021-07-04T16:22:27+02:00
[INFO] ------------------------------------------------------------------------

Docker image running application in native mode

The procedure for the native mode is very similar to the one we did earlier for JVM. In this case, I want to create an image with the following name: marcnuri/kubernetes-maven-plugin-quarkus:native.

Since the project already contains a Maven profile for native, achieving this is as simple as overriding the jkube.generator.name property in the native profile.

<profile>
  <id>native</id>
  <!-- ... -->
  <properties>
    <quarkus.package.type>native</quarkus.package.type>
    <jkube.generator.name>marcnuri/kubernetes-maven-plugin-quarkus:native</jkube.generator.name>
  </properties>
</profile>

We can now run the following command to build the native Docker image:

./mvnw clean package k8s:build -Pnative
[INFO] --- kubernetes-maven-plugin:1.4.0:build (default-cli) @ kubernetes-maven-plugin-quarkus ---
[INFO] k8s: Running in Kubernetes mode
[INFO] k8s: Building Docker image in Kubernetes mode
[INFO] k8s: Running generator quarkus
[INFO] k8s: quarkus: Using Docker image as base / builder
[INFO] k8s: Pulling from ubi8/ubi-minimal
[INFO] k8s: Digest: sha256:df6f9e5d689e4a0b295ff12abc6e2ae2932a1f3e479ae1124ab76cf40c3a8cdd
[INFO] k8s: Status: Downloaded newer image for
[INFO] k8s: Pulled in 3 seconds 
[INFO] k8s: [marcnuri/kubernetes-maven-plugin-quarkus:native] "quarkus": Created docker-build.tar in 498 milliseconds
[INFO] k8s: [marcnuri/kubernetes-maven-plugin-quarkus:native] "quarkus": Built image sha256:5a1d5
[INFO] k8s: [marcnuri/kubernetes-maven-plugin-quarkus:native] "quarkus": Tag with latest
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  25.851 s
[INFO] Finished at: 2021-07-04T15:10:31Z
[INFO] ------------------------------------------------------------------------

Push image to Docker Hub (k8s:push)

Regardless of the packaging mode we choose (JVM or native), pushing the image into Docker Hub's registry is as easy as running the following command (provided that we have the required environment variables available):

./mvnw k8s:push
# or if a native image was built
./mvnw k8s:push -Pnative

Of course, reading credentials from environment variables makes sense from a CI pipeline perspective. If you are running the command from your own secure local system, you can override the credentials this way:

./mvnw k8s:push -Djkube.docker.push.username=$username -Djkube.docker.push.password=$password
# or if a native image was built
./mvnw k8s:push -Pnative -Djkube.docker.push.username=$username -Djkube.docker.push.password=$password

In my case, I'm running this from a GitHub Actions Workflow:

A screenshot of GitHub Actions Workflow - mvn kubernetes:push

Deploying the application to Kubernetes (k8s:apply)

Now that I've published the images to Docker Hub, I can safely deploy the application to Kubernetes.

The main advantage of JKube is that you don't need to deal with YAML and configuration files yourself. The plugin takes care of generating everything for you, so you only need to run the following command:

./mvnw k8s:resource k8s:apply
# or if a native image was built
./mvnw k8s:resource k8s:apply -Pnative
[INFO] --- kubernetes-maven-plugin:1.4.0:apply (default-cli) @ kubernetes-maven-plugin-quarkus ---
[INFO] k8s: Using Kubernetes at in namespace default with manifest D:\00-MN\projects\marcnuri-demo\kubernetes-maven-plugin-quarkus\target\classes\META-INF\jkube\kubernetes.yml 
[INFO] k8s: Creating a Service from kubernetes.yml namespace default name kubernetes-maven-plugin-quarkus
[INFO] k8s: Created Service: target\jkube\applyJson\default\service-kubernetes-maven-plugin-quarkus-4.json
[INFO] k8s: Creating a Deployment from kubernetes.yml namespace default name kubernetes-maven-plugin-quarkus
[INFO] k8s: Created Deployment: target\jkube\applyJson\default\deployment-kubernetes-maven-plugin-quarkus-4.json
[INFO] k8s: HINT: Use the command `kubectl get pods -w` to watch your pods start up
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  8.678 s
[INFO] Finished at: 2021-07-06T06:13:35+02:00
[INFO] ------------------------------------------------------------------------

This is a very simple project, so I didn't create an Ingress to expose the service, which means the application remains inaccessible from outside the cluster. I'll cover the process of creating an Ingress in an additional post. However, if you are running the application in minikube and want to access it, with JKube it's as easy as running:

./mvnw k8s:resource k8s:apply -Djkube.enricher.jkube-service.type=NodePort
# or if a native image was built
./mvnw k8s:resource k8s:apply -Djkube.enricher.jkube-service.type=NodePort -Pnative
minikube service kubernetes-maven-plugin-quarkus

If everything goes well, a browser window will be opened and you'll be able to see a page like the following:

A screenshot of a browser showing the application running on Kubernetes


In this post, I've shown you how to develop and integrate a very simple Quarkus 2 application with GraalVM native image support and the Kubernetes Maven Plugin. In the first section, I demonstrated how to bootstrap the application and create a very simple REST endpoint that will return a random quote for each request. Next, I showed you how to configure the application to be fully compatible with GraalVM to be able to generate a native binary. Finally, I showed you how to configure Kubernetes Maven Plugin to be able to build container images and push them into Docker Hub's registry. In addition, I showed you how simple it is to deploy your application to Kubernetes using Eclipse JKube.

The full source code for this post can be found at GitHub.

Quarkus 2 + GraalVM + Kubernetes Maven Plugin

July 06, 2021 05:30 AM

Retrospective of an Old Man (2)

by Stephan Herrmann at June 23, 2021 08:44 AM

(continued from here).

Today I want to speak about four engineers who have influenced Eclipse JDT in the past: Srikanth, Markus Keller, Till Brychcy and Yours Truly. All of them have left the team over the course of the years. None has left because they didn’t love Eclipse. It would come closer to the truth to say, they left because they loved Eclipse too much. I am not authorized to publicly speak about the personal motives of these people. For that reason I will generalize as if all four situations were identical. Obviously this isn’t true, but still I sense a common gist, some overarching topics that might still be relevant today.

What is said below is meant as observations about these four engineers only. Whatever quality I ascribe to them, may well be shared by others not mentioned here. It’s not about drawing boundaries, but about perceived commonality.


Every healthy community has members with a very focused area of expertise, in which their knowledge is thorough and deep. Other members may have a wide range of interest and involvement, connecting the dots between different focus areas. Some members excel by way of sharp analytical capabilities, others contribute the experience from years of working on a component. The personae in this story drew their strength from combining (at different degrees) all four qualifications: knowledge both deep and wide, understanding both analytically sharp and founded on a body of experience.


While every contributor takes responsibility for their contribution, some people feel responsible for an entire component or even product. In this text I don’t speak of responsibility from a managerial perspective, but of a code-centric point of view that cares about the code in all aspects of quality.

In one of the most complex endeavors that I got involved in, one of our four engineers remarked in retrospect: “failure was not an option”. Feeling responsible for a component implies closely tracking the bugzilla inbox, and making sure that every incoming bug report is handled with due attention. It also implies making sure that every proposed code change is of the highest possible quality, not only in terms of functionality but also in being in harmony with the existing design, to prevent architectural decay. Such responsibility finally implies caring about completeness of functionality. In particular, when a new Java version changes the game, JDT must support users working in that new environment. As an example of the latter aspect, think of the conflict between unit testing and the Java Platform Module System. In this context, treating test sources differently from main sources is a must, which is why Till added this functionality to JDT (a huge project of its own).


Working with the attitude described above requires a lot of energy. In a healthy community this energy flows in both directions, and everything may well be sustainable. In the cases of our four engineers, however, too much energy was burned, obstacles appeared and caused frustration. Some may get frustrated when they see code changes that are made with insufficient understanding, unaware of the maintenance costs incurred. Sometimes it’s the tone of discussion that kills any pleasure of contributing. There’s also a structural conflict between meritocracy and unlimited openness of a community: Think of our engineer who wants to take responsibility; he will have to make some decisions. He feels that the rules of meritocracy grant him permission to make such decisions, and furthermore he is convinced that they are necessary to fulfill the assumed responsibility. If the community doesn’t accept such decisions, our engineer has to fight an extra (seemingly unnecessary) battle just to get permission to enact a decision he deems necessary for the greater goal.

In all four cases the community failed to balance the flow of energy, to get obstacles out of the way, to empower those who feel responsible to enact that responsibility. Not all four experienced a full-blown burn-out, but that’s more a question of whether or not they pulled the plug in good time.

Two of our engineers actually had to quit their job to leave Eclipse, the other two didn’t have to ask anybody, because they never received a penny for their contributions.


Is it just normal that engineers risk burn-out? Is it something that happens to everybody? Is it acceptable? Does it make a difference to the community who it is that gets frustrated? More generally speaking: is the community willing to make any distinction between newbies and potential “meritocrats”? Does the community actually support the goal of excellent code quality, or is speed of change, e.g., considered more important?

In my previous post I spoke of the end of an era. Does the new era have use for engineers like these? It may not be an extinct species, but I’m afraid their number is dwindling…


No, in this post I’m not begging for any “presents” that would lure me back into my previous position. I had my share of responsibility. Quite likely I had bitten off more than I could possibly chew, when I felt responsible not only for all of the compiler (ecj) per-se, but also to drive Oracle towards improving the JLS in ways that would enable the teams behind ecj and javac to finally converge on the same language. I felt responsible at other levels, too, but I will not enumerate my merits. I made a final decision to shed all this responsibility – in good time before it would affect my health.

by Stephan Herrmann at June 23, 2021 08:44 AM

ConPTY Performance in Eclipse Terminal

by Jonah Graham at June 21, 2021 02:02 PM


For many years the Eclipse IDE has provided an integrated terminal (called Eclipse TM Terminal), now maintained by the Eclipse CDT team. On Windows the terminal uses the amazing WinPTY library to provide a PTY, as Windows did not come with one. For the last several years, Windows 10 has had a native equivalent called the Windows Pseudo Console (ConPTY), which programs such as VSCode and Eclipse Theia have converted to using, in part because of fundamental bugs that can’t be fixed in WinPTY. The WinPTY version in Eclipse is also quite out of date, and hard to develop as it is interfaced to via JNI.

For Eclipse 2021-06 the Eclipse CDT team will be releasing a preview version of the terminal that uses ConPTY. For interfacing to ConPTY we are using JNA, which is much easier to develop with because all the interfacing is done in Java code.

One of the open questions I had was whether there would be a performance issue because of the change to ConPTY. In particular, while JNA is slower for some things the ease of use of JNA normally far outweighs the performance hit. But I wanted to make sure our use case wasn’t a problem and that there wasn’t anything else getting in the way of the terminal’s performance.

Shell to Eclipse Terminal Performance

I have analyzed the performance of a process running in the shell writing to stdout as fast as possible to compare various different terminal options on my Windows machine. The Java program creates a byte[] of configurable size and writes that all to System.out.write() in one go, with some simple wall clock timing around it. See the SpeedTest attachment for the source.
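The benchmark can be sketched roughly as follows (a reconstruction, not the attached SpeedTest source — the default size and output format are my assumptions): a byte[] of configurable size is written to System.out in one call, with simple wall clock timing around it.

```java
public class SpeedTest {
    // Convert a byte count and elapsed nanoseconds into MiB/s.
    static double mibPerSecond(long bytes, long elapsedNs) {
        return (bytes / (1024.0 * 1024.0)) / (elapsedNs / 1e9);
    }

    public static void main(String[] args) throws java.io.IOException {
        int size = args.length > 0 ? Integer.parseInt(args[0]) : 10 * 1024 * 1024;
        byte[] data = new byte[size];
        java.util.Arrays.fill(data, (byte) 'x');
        long start = System.nanoTime();
        System.out.write(data);   // one single large write, as described above
        System.out.flush();
        long elapsed = System.nanoTime() - start;
        // Report throughput on stderr so it doesn't mix with the payload
        System.err.printf("%.1f MiB/s%n", mibPerSecond(size, elapsed));
    }
}
```

Running it inside each terminal/shell combination and comparing the reported MiB/s gives numbers like those in the table below.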

I used 5 terminal programs to test the performance:

  • Windows Command – the classic terminal when you run cmd.exe for example
  • Eclipse with WinPTY
  • Eclipse with ConPTY
  • Windows Terminal (from the people who wrote conpty)
  • VSCode’s integrated terminal using ConPTY

And in each of them I ran the same Java program in 3 different shells:

  • cmd.exe
  • WSL2 running Ubuntu bash
  • git bash

Short summary is that WinPTY and Windows Command are much faster than the rest. ConPTY is quite a bit slower, whether used in Eclipse or Windows Terminal. VSCode is dramatically slower than the rest.

Full table of results, based on a 10 MiB write, reported in MiB/second, rounded to the nearest 0.1 MiB/s:

                        cmd.exe   WSL2   git bash
  Windows Command           8.3    3.5        4.2
  Eclipse with WinPTY      12.5    1.6        7.7
  Eclipse with ConPTY       1.8    1.7        2.0
  Windows Terminal          2.2    2.1        2.4

As a comparison, on the same machine dual-booted into Xubuntu 18.04 I ran the following 5 terminals:

  • Eclipse – 23.1 MiB/s
  • VSCode – 3.0 MiB/s
  • xterm – 6.3 MiB/s
  • xfce4-terminal – 10.7 MiB/s
  • gnome-terminal – 10.2 MiB/s

The above shows that the raw speed of Eclipse Terminal is very good; it simply requires the best possible PTY layer to achieve the best speeds.

Eclipse Terminal to Shell Performance

I was going to run an Eclipse -> Shell test to make sure writes to the terminal hadn't regressed. However, the terminal has an artificial throttle in this path that limits performance to around 0.01 MiB/s -- plenty fast for typing, but much slower than a performant system could be. The code could probably be revisited, because presumably the new ConPTY does not suffer from these buffering issues, and the throttling probably should not be there for non-Windows platforms at all.


I am pleased that the performance of ConPTY with JNA is close to the new dedicated Microsoft Terminal and much faster than VSCode. Therefore I plan to focus my time on other areas of the terminal, like WSL integration and bug fixes with larger impact. I am grateful for the community’s contributions and I will happily support/test/integrate any improvements, such as the upcoming Ctrl-Click functionality that was contributed by Fabrizio Iannetti and will be available in Eclipse IDE 2021-06.

Because much of the slowdown comes from ConPTY itself, which is actively being developed at Microsoft, I hope that Eclipse will benefit from those performance improvements over time. There is no plan to remove the WinPTY implementation anytime soon, so if you feel impacted by the slowdown, I encourage you to reach out to the community (cdt-dev mailing list, tweet me, comment on this bug, or create a bug report).

by Jonah Graham at June 21, 2021 02:02 PM

WTP 3.22 Released!

June 16, 2021 12:01 PM

The Eclipse Web Tools Platform 3.22 has been released! Installation and updates can be performed using the Eclipse IDE 2021-06 Update Site or through the Eclipse Marketplace. Release 3.22 is included in the 2021-06 Eclipse IDE for Enterprise Java and Web Developers, with selected portions also included in several other packages. Adopters can download the R3.22 p2 repository directly and combine it with the necessary dependencies.

More news

June 16, 2021 12:01 PM

Jakarta EE Community Update May 2021

by Tanja Obradovic at June 08, 2021 03:21 PM

The month of May was very busy with activities related to Release 9.1! Jakarta EE development and innovation is definitely taking off at full speed. I would like to take this opportunity to invite you all to join the momentum: please get involved and help contribute to the future success of Jakarta EE.

The highlights in May 2021 are as follows:

Jakarta EE 9.1 released May 25th! 

Only six months after the Jakarta EE 9.0 release, the Jakarta EE Working Group has released Jakarta EE 9.1. As requested by the community, the main driver for this release is Java SE 11 support. This release is also significant because, for the very first time, we have multiple compatible products at the time of release.

Please visit the Jakarta EE 9.1 release page, review all the available compatible products, and proceed to the compatible products download page.

For more information please refer to the press release: The Jakarta EE Working Group Releases Jakarta EE 9.1 as Industry Continues to Embrace Open Source Enterprise Java


The Jakarta EE community and Working Group are growing

 It is great to see new members continuously joining the Working Group.

This month I am very happy to welcome Apache Software Foundation (ASF) as a guest member of the Jakarta EE Working Group! I strongly believe that ASF does not need any introduction. However, I want to put emphasis on the importance of Apache Tomcat and Apache TomEE to the Jakarta EE ecosystem. Note that Apache TomEE 9.0.0-M7 is now a Web Profile Jakarta EE 9.1 Compatible Product.

I am also very excited to see another member in China, Beijing Baolande Software Corporation, joining the Jakarta EE Working Group! Beijing Baolande Software Corporation develops and sells middleware, cloud management platform, and application performance management software, including application server and transaction middleware products, and provides related technical services.

SouJava, a Brazilian Java User Group, is well known for its involvement in Jakarta EE, as its members are actively involved in Jakarta EE projects and community events.

SouJava's members are heavily involved with Jakarta EE specifications and participate in the Adopt-A-Spec program for the following specifications:

 - Jakarta MVC

 - Jakarta NoSQL

 - Jakarta RESTful Web Services

 - Jakarta Persistence

SouJava was involved in organizing JakartaOne Livestream Brazil 2020 event and is now involved in organizing JakartaOne Livestream Portuguese 2021.

This is a call to other JUGs to explore the possibility of joining Jakarta EE Working Group. Approach us and let us know if membership is something you would be interested in.


JakartaOne Livestream events for the rest of the year!

Our popular JakartaOne Livestream virtual conference series for the rest of the year is scheduled. We are having language specific events as well as our annual JakartaOne Livestream 2021 in English.

Please save these dates:

  • August 21st, 2021 if you speak Turkish, here is an event for you: JakartaOne Livestream - Turkish

  • September 29th, 2021 if you speak Portuguese, this one's for you: JakartaOne Livestream - Portuguese

  • October 1st, 2021 if you speak Spanish, keep an eye on the website for JakartaOne Livestream - Spanish

  • December 7th, 2021 is reserved for our annual event in English! JakartaOne Livestream 2021

Jakarta EE 10 is taking shape!

I am beyond excited to see all the progress related to Jakarta EE 10 in GitHub (label EE10). The creation/plan review for Jakarta EE Core Profile 10 was approved by the Jakarta EE Specification Committee. Jakarta EE Web Profile 10 and Jakarta EE Platform 10 issues are in discussion, and plan reviews are expected soon. Please join the discussion and the Jakarta EE Platform call to provide your input; refer to the Jakarta EE Specifications Calendar (public url, iCal) for details on all technical calls.


Jakarta EE Individual Specifications and project teams 

We have organized a public calendar, the Jakarta EE Specifications Calendar (public url, iCal), to display all Jakarta EE Specification project team meetings. Everyone interested is welcome to join the calls. Do note that the Jakarta EE Platform team is extremely busy and productive. The call is public and open to anyone who would like to contribute to technical discussions.

Individual specifications are planning their next releases. You can review all the plans submitted for review here; some are still open and quite a few are closed. I would like to draw your attention to a new specification, Jakarta Config:

“Jakarta Config is a Java API for working with configurations. It supports externalized configuration allowing applications to use different configurations for different environments (dev, test, prod), and allows reading data from different layered configuration sources such as property files, environment variables, etc.”

Select the one that you are interested in and help out. Each specification team is eager to welcome you! 


Want to learn how to use Jakarta EE?  

The Eclipse Cargo Tracker is a fantastic example of an end-to-end Jakarta EE application that showcases core Jakarta EE technologies. Thanks to Scaleforce and Jelastic for providing resources to deploy the demo application to the cloud.

Give the Cargo Tracker a try and consider contributing to the project at Cargo Tracker GitHub repository.


Hibernate as compatible Jakarta Persistence implementation

More exciting news about compatible implementations of individual specifications! Hibernate, the well-known object-relational mapping tool, is implementing the Jakarta Persistence specification!

The latest stable version, Hibernate 5.5, is a compatible implementation of Jakarta Persistence 3.0 and Jakarta Persistence 2.2.


EclipseCon 2021 CFP is open till June 15!

Mark your calendars: EclipseCon 2021 is taking place October 25th - 27th, 2021! The call for papers is open for another week! We are looking forward to your submission. You can see accepted talks here.

Book your 2021 Jakarta EE Virtual Tour and Adopt-A-Spec

We are looking for the opportunity to virtually visit you, so don’t hesitate to get in touch if you’d like to hear about Jakarta EE 9.1 and beyond.

We need help from the community! All JUGs out there, please choose the specification of your interest and adopt it. Here is the information about the Adopt-A-Spec program. 



Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:

·  Social media: Twitter, Facebook, LinkedIn Group, LinkedIn Page

·  Mailing lists: project mailing lists, Slack workspace

·  Calendars: Jakarta EE Community Calendar, Jakarta EE Specification Meetings Calendar 

·  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, Hashtag Jakarta EE

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, and Eclipse Foundation events and conferences

You can find the complete list of channels here.

To help shape the future of open source, cloud native Java, get involved in the Jakarta EE Working Group.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.

We always welcome your feedback!

by Tanja Obradovic at June 08, 2021 03:21 PM

The Apache Software Foundation has joined Jakarta EE Working Group

by Tanja Obradovic at May 20, 2021 04:28 PM

I am extremely happy to let you know that The Apache Software Foundation (Apache, ASF) has joined Jakarta EE Working Group! 

Apache needs no introduction, but let me remind everyone that their involvement with the Jakarta EE / Java EE community goes way back, with the Apache TomEE and Apache Tomcat implementations. We are looking forward to this now even tighter collaboration and to contributions across all Jakarta EE related projects and initiatives. Our Jakarta EE members page is now showcasing Apache as well!

Please join me in welcoming the Apache Software Foundation to Jakarta EE Working Group!

by Tanja Obradovic at May 20, 2021 04:28 PM

Make This Person We Hired a Committer

May 20, 2021 12:00 AM

Here’s a scenario: you work for an organization that contributes to open source and you’ve hired a developer to work on your favourite open source project. You need to make them a committer. How do you do that? I get this sort of request every so often: “we’ve hired so-and-so and we need you to make them a committer, m’kay?”. The short answer is: no. The Eclipse Foundation Development Process, and all Eclipse open source projects by extension, work on three principles that we refer to as the Open Source Rules of Engagement, which state that Eclipse open source projects must operate in an open, transparent, and meritocratic manner.

May 20, 2021 12:00 AM

Running a Successful Open Source Project

May 14, 2021 12:00 AM

Originally posted on October 26/2017. This post is based on a talk that Gunnar Wagenknecht and I delivered at the Open Source Leadership Summit 2017 and Devoxx US 2017. This content was published in the All Eyes on Open Source issue of JAX Magazine. Running an open source project is easy. All you have to do is make your source code available and you’re open source, right? Well, maybe. Ultimately, whether or not an open source project is successful depends on your definition of success.

May 14, 2021 12:00 AM

Bringing Chromium to Eclipse RCP

by Patrick Paulin at May 12, 2021 10:55 PM

One of the most common questions I’m asked by my clients is whether it’s possible to utilize web-based UI frameworks (Angular, React, Vue, etc.) in an Eclipse RCP application. Until now, my answer has been no, largely because of the limitations of the SWT Browser control.

Well I’m happy to say that things are starting to change in this area, though there is still much work to do.

New SWT support for Microsoft Edge Chromium

In 2018, Microsoft made the surprising decision to base its future Edge browser on Chromium. Support for Edge Chromium is now available in one of two forms: the Edge browser itself and the new WebView2 control.

The WebView2 control is of particular interest because it allows for the embedding of web-based UI elements into native applications. And I’m happy to say that as of the 2021-03 Eclipse release, we can now embed this control in Eclipse RCP applications. While of course limited to the Win32 platform, this support for Chromium in Eclipse RCP makes it possible to fully leverage your web-based UI framework of choice.

To try this out, you’ll need to do two things:

  1. Install the WebView2 runtime (the Edge browser is not required). There are a variety of options for installing the runtime, and I think this situation will continue to evolve. Long-term, Microsoft is going to rely heavily on this control in their application suite and is now deploying the control along with it.
  2. In your SWT Browser control, pass the SWT.EDGE flag in the constructor. Alternatively, you can pass this argument on the command line: -Dorg.eclipse.swt.browser.DefaultType=edge
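A minimal sketch of option 2, assuming the SWT library (2021-03 or later) is on the classpath and the WebView2 runtime is installed; this needs a display, so it won't run headlessly:

```java
// Minimal sketch: open an SWT Browser widget backed by the Edge
// (Chromium/WebView2) engine by passing the SWT.EDGE style flag.
// Requires the SWT library and the WebView2 runtime.
import org.eclipse.swt.SWT;
import org.eclipse.swt.browser.Browser;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class EdgeBrowserExample {
    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setLayout(new FillLayout());

        // SWT.EDGE requests the Edge/WebView2 backend; without it, SWT
        // falls back to the platform's default browser type.
        Browser browser = new Browser(shell, SWT.EDGE);
        browser.setUrl("https://www.eclipse.org");

        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) {
                display.sleep();
            }
        }
        display.dispose();
    }
}
```

The command-line property mentioned above achieves the same thing without code changes, which is handy for trying Edge in an existing application.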

Here’s a simple Eclipse RCP application using the Edge Chromium browser.

The work left to do

Support for the WebView2 control in SWT is still experimental, and there’s a short but growing list of bugs/enhancement requests. Also, here is a list of the known limitations of the SWT control. Some of these limitations need to be addressed by Microsoft (in particular support for getting/setting cookies) and they are making good progress.

Of course, the biggest issue is that this is not a cross-platform solution. So what about macOS and Linux?

Cross-platform Chromium

There was an attempt made over the past few years to create cross-platform Chromium support in SWT. This support was based on using Rust to wrap the Chromium framework with platform-specific controls that would be accessible to SWT.

Unfortunately, this effort has recently been abandoned. My take is that the problem was ultimately a lack of developer support. Without strong developer interest and engagement, the effort was not going to succeed. The Eclipse Platform PMC is still open to the idea of a cross-platform Chromium control, perhaps in the form of a Nebula contribution. So if you’re interested in picking up this work and running with it, let them know.

Another possible solution is that Microsoft releases WebView2 controls for macOS and Linux. Then the SWT support for this control could be made available for all Eclipse RCP deployment platforms.

Wrapping up

This is definitely early days for Chromium support in Eclipse RCP applications, but I see a clear path forward. My hope is that in the near future Eclipse RCP developers will have access to robust cross-platform Chromium support.

And once that happens, the opportunities for utilizing Eclipse RCP become very interesting. Eclipse RCP applications would be well-suited to host modular microservices in the UI (sometimes called Micro Frontends). And if these microservices were written using web-frameworks running on Chromium, they could easily be migrated to or co-hosted by other Chromium-based frameworks such as Electron.

But that’s a story for another day 🙂

by Patrick Paulin at May 12, 2021 10:55 PM

Behind the Scene #4

April 29, 2021 10:00 AM

I am happy to present you the new Sirius Web “Behind the scene” session. Here and now, Guillaume Coutable, Consultant at Obeo, gives a demonstration of the list container support he is working on.

We are thankful to all our customers for their support to Sirius Web! See you next month for another “Behind the scene” session!

April 29, 2021 10:00 AM

Building a cmd-line application to notarize Apple applications using Quarkus and GraalVM Native Image support

by Tom Schindl at April 23, 2021 06:39 PM

So I’ve been searching for a long time for a small side project where I could give Quarkus’ Native-Image support a spin. While we are using Quarkus in JDK mode in almost all of our server applications there was no need yet to compile a native binary.

This week, though, I found the perfect use case: I’ve been banging my head against codesigning and notarizing an e(fx)clipse application (shipped with a JRE) the whole week.

Doing that requires executing a bunch of command-line utilities one by one. So I came up with the plan to write a Quarkus command-line application, compiled to a native executable, to automate this process a bit more. Yes, there are Go and Python solutions, and I could have simply written a shell script, but why not try something cooler.

The result of this work is a native OS-X executable allowing me to codesign, create a dmg/pkg, notarize, and finally staple the result, as you can see from the screenshot below.


As of now this does not include anything special for Java applications, so it can be used for any application (I currently have an artificial restriction that you can only use .app).

All sources are available at and I added a pre-release of the native executable. Miss a feature, found a bug? Feel free to file a ticket and provide a PR.

by Tom Schindl at April 23, 2021 06:39 PM

Retrospective of an Old Man

by Stephan Herrmann at April 05, 2021 05:45 PM

Last summer I dropped my pen concerning contributions for Eclipse JDT. I never made a public announcement about this, but half a year later I started to think: doesn’t it look weird to receive a Lifetime Achievement Award and then run off without even saying thanks for all the fish? Shouldn’t I at least try to explain what happened? I soon realized that writing a final post to balance accounts with Eclipse would neither be easy nor desirable. Hence the idea, to step back more than a couple of steps, and put my observations on the table in smaller chunks. Hopefully this will allow me to describe things calmly, perhaps there’s even an interesting conclusion to be drawn, but I’ll try to leave that to readers as much as I can.


While I’m not yet preparing for retirement, let me illustrate the long road that led me where I am today: I have always had a strong interest in software tools, and while still in academia (during the 1990s) my first significant development task was providing a specialized development environment (for a “hybrid specification language” if you will). That environment was based on what I felt to be modern at that time: XEmacs (remember: “Emacs Makes A Computer Slow”, the root cause being: “Eight Megabytes And Continuously Swapping”). I vaguely remember a little time later I was adventurous and installed an early version of NetBeans. Even though the memory of my machine was upped (was it already 128 MB?), that encounter is remembered as surpassing the bad experience of Emacs. I never got anything done with it.

A central part of my academic activity was in programming language development in the wider area of Aspect Oriented Software Development. For pragmatic reasons (and against relevant advice by Gilad Bracha) I chose Java as the base language to be extended to become ObjectTeams/Java, later rebranded as OT/J. I owe much to two students, whose final projects (“Diplomarbeit”) were devoted to two successive iterations of the OT/J compiler. One student modified javac version 1.3 (the implementation by Martin Odersky, adopted by Sun just shortly before). It was this student who first mentioned Eclipse to me, and in fact he was using Eclipse for his work. I still have a scribbled note from July 2002 “I finally installed Eclipse” – what would have been some version 2.0.x.

One of the reasons for moving forward after the javac-based compiler was: licensing. Once it dawned on me that Eclipse contains an open-source Java compiler with no legal restrictions regarding our modifications, I started to dream about more: not just a compiler but an entire IDE for Object Teams! As a first step, another student was assigned the task to “port” our compiler modifications from javac to ecj. Some joint debugging sessions (late in 2002?) with him were my first encounters with the code base of Eclipse JDT.

First Encounters with Eclipse

For a long period Eclipse to me was (a) a development environment I was eager to learn, and (b) a huge code base in CVS to slowly wrap our heads around and coerce into what we wanted it to be. Surely, we were overwhelmed at first, but once we had funding for our project, a nice little crowd of researchers and students, we gradually munched our way through the big pile.

I was quite excited, when in 2004 I spotted a bug in the compiler, reported it in bugzilla (only to learn, that it had already been fixed 🙂 ). It took two more years until the first relevant encounter: I had spotted another compiler bug, which was then tagged as a greatbug. This earned me my first Eclipse T-shirt (“I helped make Callisto a better place“). It’s quite washed out, but I still highly value it (and I know exactly one more person owning the same T-shirt, hi Ed 🙂 ).

Soon after, I met some of my role models in person: at ECOOP 2006 in Nantes, the Eclipse foundation held a special workshop called “eTX – Eclipse Technology Exchange“, which actually was a superb opportunity for people from academia to connect with folks at Eclipse. I specifically recall inspiring chats with Jerome Lanneluc (an author of JDT’s Java Model) and Martin Aeschlimann (JDT/UI). That’s when I learned how welcoming the Eclipse community is.

During the following years, I attended my first Eclipse summits / conferences and such. IIRC I met Philippe Mulet twice. I admired him immensely. Not only was he lead developer of JDT/Core, responsible for building much of the great stuff in the first place. Also he had just gone through the exercise of moving JDT from Java 1.4 to Java 5, a task that cannot be overestimated. Having spoken to Philippe is one of the reasons why I consider myself a member of a second generation at Eclipse: a generation that still connects to the initial era, though not having been part of it.

End of an Era

For me, no other person represents the initial era of Eclipse as much as Dani did. That era has come to an end (silence).

Still a few people from the first generation are around.

Tom Watson is as firm as a rock in maintaining Equinox, with no sign of fatigue. I think he really is up for an award.

Olivier Thomann (first commit 2002) still responsibly handles issues in a few weird areas of JDT (notably: computation of StackMaps, and unicode handling).

John Arthorne has been seen occasionally. He was the one who long, long time ago explained to me the joke behind package org.eclipse.core.internal.watson (it’s elementary).

What was it like to join the community?

It wasn’t until 2010 that I became a committer for JDT/Core. I was the first JDT committer who was not paid by IBM. I was immensely flattered by the offer. So getting into the inner circle took time: six years from first bug report to committer status, while all the time I was more or less actively hacking on our fork of JDT. This is to say: I was engaged with the code all the time. Did I expect things to move faster? No.

Even after getting committer status, for several years mutual reviews of patches among the team were the norm. The leads (first Olivier, then Srikanth) were quite strict in this. Admittedly, I had to get used to that – my patches waiting for reviews, my own development time split between the “real work” and “boring” reviews. In retrospect the safety net of peer reviews was a life saver – plus of course a great opportunity to improve my coding and communication skills. Let me emphasize: this was one committer reviewing the patches of another committer.

I had much respect for the code base I worked with. While working in academia, I had never seen such a big and complex code base before. And yet, there was no part that could not be learned, as all the code showed a clear and principled design. I don’t know how big the impact of Erich Gamma on details of the code was, but clearly the code spoke with the same clarity as the GoF book on design patterns.

As such, I soon learned a fundamental principle for newcomers: “Monkey see, monkey do“. I appreciated this principle because it held the promise that my own code might share the same high quality as the examples I found out there. In later days, I heard a similar attitude framed as “When in Rome, do as Romans do“.

My perspective on JDT has always been determined by entering through the compiler door. For myself this worked out extremely well, since nothing helps you understand Java in more depth and detail, than fixing compiler bugs. And understanding Java better than average I consider a prerequisite for successfully working on JDT.

to be continued

by Stephan Herrmann at April 05, 2021 05:45 PM

Java monolith to microservice refactoring with Eclipse tooling

by Patrick Paulin at March 30, 2021 06:43 PM

As a developer working heavily with OSGi and Eclipse RCP, I’ve spent a lot of time breaking monolithic applications into modules. What I’ve found is that OSGi and its associated Eclipse tooling (primarily the Plug-in Development Environment, or PDE) are very good at enabling the kind of fine-grained refactoring moves that allow such projects to succeed.

This got me thinking that these technologies and tooling might be useful to anyone trying to refactor a Java monolith into microservices. And it turns out you can do this even if you don’t want to build or deploy OSGi bundles. The tooling can stand on its own and enable a much more powerful and intuitive refactoring workflow.

If you’re interested in learning more about this, I’ve written an article that describes both why refactoring is a good approach to microservice extraction and how Eclipse tooling can help.

Or if you’re interested in meeting with me to find out how this approach could be applied in your projects, why not schedule a free remote consultation and demo?

by Patrick Paulin at March 30, 2021 06:43 PM

Xtext vs. MPS: Decision Criteria

by Niko Stotz at March 19, 2021 08:37 PM

tl;dr If we started a new domain-specific language tomorrow, we could choose between different language workbenches or, more generally, between textual and structural / projectional systems. We should decide case-by-case, guided by these criteria: targeted user group, tool environment, language properties, input type, environment, model-to-model and model-to-text transformations, extensibility, theory, model evolution, language test support, and longevity.

This post is based on a presentation and discussion we had at the Strumenta Community. You can download the slides, although reading on might be a bit more clear on the details. Special thanks to Eelco Visser for his contributions regarding language workbenches besides Xtext and MPS.


This whole post wants to answer the question:

Tomorrow I want to start a new domain-specific language.
Which criteria shall I think about to decide on a language workbench?

The most important, and most useless answer to this question is: “It depends.” Every language workbench has its own strengths and weaknesses, and we should assess them anew for each language or project. All criteria mentioned below are worth consideration, and should be balanced towards the needs of the language or project at hand.

Almost every aspect described below can be realized in any language workbench — if we really wanted to torture ourselves, we could write an ASCII-art text DSL to “draw” diagrams, or force a really complex piece of procedural logic into lines and boxes. On the other hand, an existing text-based processing chain integrates rather well with a textual DSL, and tables work nicely in a structured environment.

I personally know only Xtext and MPS well enough to offer an educated opinion; thankfully, during the presentation several others chimed in to offer additional insights. Thus, we can extend this post’s content (to some degree) to “Textual vs. Structural: Decision Criteria”.

What do we mean with textual and structural language workbenches?

As a loose distinction, we’re using the rule of thumb “If you directly edit what’s written on disk, it’s textual.”

Structural describes both projectional and graphical systems. In projectional systems, the user has no influence on how things are shown; with structural systems, the user may have some influence — think of manually layouting a diagram (thanks to Jos Warmer for this clarification).

Examples of textual systems include

  • MontiCore
  • Racket
  • Rascal
  • Spoofax
  • Xtext

Examples of structural systems are

  • MetaEdit+
  • MPS
  • Sirius

Targeted User Group

If our DSL targeted developers, we might go for a textual system. Developers are used to the powerful tools provided by a good editor or an IDE, and expect this kind of support for handling their “source code” — or, in this case, model. Textual systems might integrate better with their other tools.

If we targeted business users, they might prefer a structural system. The main competitor in this field is Excel with hand-crafted validation rules and obscure VBA-scripts attached. Typically, business users can profit more from projectional features like mixing text, tables and diagrams.

Tool Environment

If our client had an existing infrastructure to deploy Eclipse-based tooling, we probably wanted to leverage that. This implies using an Eclipse-based language workbench like Rascal, Sirius or Xtext. If we wanted model integration with existing tools, EMF would be our best bet, pointing towards Eclipse.

If our client already leaned towards IntelliJ or similar systems, MPS would be more familiar to them. Spoofax supports both Eclipse and IntelliJ.

Language Properties

If (parts of) our DSL had an established text-based language, we wanted to reuse this existing knowledge in our users and provide a similar textual language. Textual syntax often provides aids to parsers that are difficult to reproduce fluently in structural systems.

As an example, think of a C-style if-statement. In text, the user types i, f, maybe a space, and ( without even thinking about it. In a projectional editor, she still types i and f, but the parenthesis is probably automatically added by the projection.

// | denotes cursor position
if (|«condition») {

If she typed (, we would have two bad choices: either we add the parenthesis inside the condition, which is probably not what the user wanted in 95 % of the cases; or we ignore the parenthesis, making the other 5 % really hard to enter.

One important language property is whether we can parse it with reasonable effort and accuracy. For more traditional systems like ANTLR and Xtext, we reach the threshold of unparsable input rather quickly. More advanced systems like Spoofax and Rascal can handle ambiguities well. However, as an extreme example, I doubt we could ever have a parser that reconstructs the semantics of an ASCII-art UML diagram. More realistically, it might be pretty hard for a parser to distinguish mixed free text with unmarked references — think of a free text with some syntactically unmarked references to a user-defined ontology sprinkled in the text: This is free text, with Ornithopters or other Dune references.

Other structures might be parsable, but are very cumbersome to enter — I have yet to see a textual language where writing tables is less than annoying.

Related to parseability is language integration. Almost all technical languages use traditional parser systems, leading to the joy of escaping: <span onclick="if(myVar.substr(\"\\'\") &lt; 5) = \'.header &gt; ul { font-weight: bold; } \'">. More modern languages aren’t that pedantic, but try to write the previous sentence in markdown …​

If we wanted to integrate non-textual content or languages in a textual system, it gets tricky pretty soon. In fact, we had to solve a lot of the problems projectional editors face. As an example, think of the parameter info many IDEs can project in the source code: The Java file contains myObj.myFunc("Niko", false), but the IDE displays myObj.myFunc(name: "Niko", authorized: false). If the cursor was just right to the opening parenthesis, and we pressed right arrow, would we move to the left or right of the double quotes? What if the user could interact with the projected part, e.g. a color selector? These examples are projected mix-ins, but it doesn’t get better at all if we imagined the file contents <img src="data:image/png;base64,iVBORw …​"/>, and wanted to display an inline pixel editor. The aforementioned table embedded into some text is another example.

Structural systems really shine if we wanted to have different editors for the same content, or different viewpoints on the content. To illustrate different editors for the same content, think of a state machine. If we wanted to discuss it with our colleagues, it should be presented in the well-known lines-and-boxes form. We might still want to retarget a transition or add a state graphically. However, if we had to write it from scratch and had a good structure in mind, or just wanted to refactor an existing one, a text-like representation would be much more efficient.

Different viewpoints can be as simple as “more or less detail”: in a component model, we might want to see only the connections between components, or also their internal wiring. Textual editors can also hide parts of the content — most IDEs, by default, fold the legal header comment in a source code file.
As an example of richer viewpoints, imagine a complex model of a machine that integrates mechanical, electrical, and cost aspects. All of these are interconnected, so the integrated model is very valuable. Hardly anybody would like to see all the details, though: the safety engineer needs to know about currents and moving parts, while the production planner wants to look at costs and parts that are hard to source. In a textual system, we could create reports with such contents, but would have to accept serious limitations if we wanted all the viewpoints to be editable (e.g., a complex distribution over different files plus projection into a different file).

Input Type

A blank slate can be unsuitable for some types of users and input. If we wanted the user to provide very specific data, we would offer them a form or a wizard; these are very simple structured systems. A state machine DSL provides the user with much more flexibility, but still enforces some structure — we can't point a transition to another transition, only to a state. In a structured implementation of this DSL, the user would simply be unable to create such an invalid transition; a textual DSL would allow us to write it, but mark it as erroneous. If our users were developers, they would be used to starting with an empty window, entering the right syntax, and handling error messages. If we targeted people mostly dealing with forms, they might be scared by the empty window, or would not know how to fix the error reported by the system. (“Scared” might sound funny, but there's quite some anecdotal evidence.) In a structural system, developers might be really annoyed that they have 15 very similar states with only one transition each, but still have to write them as separate multi-line blocks; they would feel limited by the rigid structure. For the other group, we could project explanatory texts, and visually separate scaffolding from the places where they should enter something; they would feel guided by the pre-existing structure.

To some degree, we can adjust our language design to the appropriate level of flexibility. If we implemented an OO-class-like system, we could either allow class content in arbitrary order, or (by grammar / language definition) enforce writing constructors first, then attributes, then public methods, and private methods only at the end.
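Only as an illustration, the two variants could look roughly like this in an Xtext-style grammar (all rule and keyword names are hypothetical):

```
// Variant 1: class members in arbitrary order
ClassFlexible:
    'class' name=ID '{' members+=Member* '}';

// Variant 2: enforced ordering: constructors first, then attributes,
// then public methods, and private methods only at the end
ClassOrdered:
    'class' name=ID '{'
        constructors+=Constructor*
        attributes+=Attribute*
        publicMethods+=PublicMethod*
        privateMethods+=PrivateMethod*
    '}';
```

In the second variant, the ordering is enforced by the parser itself; in the first, it would have to be checked by a validation rule, if at all.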


Integration

Textual systems have been around for a long time, so we know how to integrate them with other systems. Any workflow system can move text files around, and every versioning system can store, merge, and diff such files. We understand perfectly how to handle them as build artifacts, and can inspect them on any system with a simple text editor. The Language Server Protocol provides an established technology for using textual languages in a web context.

Any such integration is more complicated with structural systems. A structural system might store its contents in XML or binary formats, so we need specific support for version control. As of now (March 2021), I'm not aware of a production-quality structural language workbench based on web technology. I hope this will change within the next year.

On the other hand, if our project does not require tight external integration and targets a desktop environment, a system like MPS provides lots of tooling out of the box whose parts are well integrated with each other.

Transformations: Model-to-Model

The main distinction for this criterion is between EMF-enabled systems and others. Our chances to leverage existing transformation technologies, or to re-use existing transformations, are pretty good in an EMF ecosystem. EMF provides a very powerful common platform, and a plethora of tooling (both industrial and academic) is available.

Two very strong suits of MPS are intermediate languages and extensible transformations. EMF provides frameworks to chain several model-to-model transformations, but it still requires quite some manual work and plumbing. In MPS, this approach is used extensively both by MPS itself and by most of the more complex custom languages I know of. The tool support is excellent; for example, it takes literally one click to inspect all intermediate models of a transformation chain.

Every model-to-model transformation in MPS can be extended by other transformations. How feasible a specific extension is in practice depends on the language and transformation design, but this mechanism is used a lot in real-world systems.

Transformations: Model-to-Text

Tightly controlling the output of a model-to-text transformation tends to be easier in textual systems. On the one hand, it's doable to maintain the formatting (i.e. white space, indentation, newlines) of some part of the input. On the other hand, the system is usually designed to output arbitrary text, so we can tweak it as required. Xtend integrates very nicely with Xtext (or any other EMF-based system), and provides superior support for model-to-text transformation: It natively supports polymorphic dispatch, and allows indenting generation templates according to both the template and the output structure, with a clear way to tell the two apart.
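As an illustration of those template features, here is a hedged Xtend sketch (the Statemachine and State types are hypothetical); dispatch methods select the right template per metamodel type, and the «…» template expressions are indented by output structure:

```
// One dispatch method per metamodel type; Xtend picks the most
// specific one at runtime (polymorphic dispatch).
def dispatch CharSequence generate(Statemachine it) '''
    statemachine «name» {
        «FOR s : states»
            «s.generate»
        «ENDFOR»
    }
'''

def dispatch CharSequence generate(State it) '''
    state «name»
'''
```

The indentation inside the template is interpreted relative to the guillemet markers, so the generated text stays readable in both the template and the output.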

If we didn’t need, or even wanted to prevent, customization of the output, structural systems could be helpful. The final text is structured by the transformation, or post-processed by a pretty printer.

For MPS, we need to consider whether the output format is available as a language. If it is, we use a chain of model-to-model transformations and have the final model take care of the text output, which usually is very close to the model. Java and XML languages ship with MPS; C, JSON, partial C++, partial C#, and others are available from the community.


Extensibility

Xtext assumes a closed world, whereas MPS assumes an open world. Thus, if we wanted to tightly control our DSL environment, it takes very little effort with Xtext; using MPS in a controlled environment requires a lot of work.

On the other hand, if our DSL served as an open platform, MPS inherently offers any kind of extensibility we could wish for, whereas in Xtext we would have to explicitly design each required extension point.

Conceptual Framework / Theory

Parsers and related text-processing tools have been well-researched since the 1970s, and the field continues to move forward. Computer science has built up a solid theoretical understanding of the problem and the available solutions. We can find several comparable, stable, and usable implementations for any major approach.

Structural systems are a niche topic in computer science; Eelco provided some pointers. We don't understand structural editors well enough to come up with sensible, objective ways to compare them. All usable implementations I know of are proprietary to a single vendor (although often open source).


Scalability

As parsers have been around for a long time, we understand pretty well how they can be tuned. They are widely used, so there's a lot of experience available on how to design a language to be efficiently parsable. Xtext has been used in production with gigabyte-sized models. The same experience provides us with very performant editors. I'd also expect a textual system to fail more gracefully as we close in on its limits: loading, purely displaying the content, syntax highlighting, folding, navigation, validation, and generation should scale differently, and the system should remain partially useful with whatever subset of these aspects is still operational. If a model became too big for our tooling, we could always fall back to plain text editors; they can edit files of almost any size. We also know how to generate from very big models: C++ compilers build up completely inlined files of several hundred megabytes, and the aforementioned gigabyte-sized Xtext models are processed by generators.

Practical experience with MPS shows scalability issues in several aspects. The default serialization format stores one model, with all its root nodes, in one XML file; performance degrades seriously for larger models. Using any of the other default serialization formats (XML per root node; binary) helps a lot. The editor is always rendered completely. Depending on the editor implementation, it might be re-rendered on every model change, or even on every cursor movement. I'm not aware of any comprehensive guide on how to tackle editor performance issues (in my experience, we should avoid the flow layout for bigger parts of the editor). The biggest performance issue with possibly any structural system is the missing fallback: once we have a model too big for the system (e.g., via import), it's very hard to do something about the model's size, as we would need the system itself to edit the model. Thankfully, we can still edit the model programmatically in most cases. Both validation and generation performance in MPS depend highly on the language implementation. The model-to-model transformation approach tends to use quite some memory; I'd assume model-to-model transformations (with free model navigation) are harder to optimize for memory usage than model-to-text transformations.

Model Evolution

Xtext does not provide any specific support for model evolution. As a conceptual advantage of textual systems, we can migrate models with text processing tools; search / replace or sed can be sufficient for smaller changes to model instances. As a drawback, we cannot store any meta-information in the model outside the sight (and manipulation) of the user. Thus, we have to put version information directly into our language content in some way.
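For example (the keyword rename is invented for this sketch): suppose a new DSL version renames the keyword machine to statemachine. Plain text substitution migrates a model instance:

```shell
# A model instance of a hypothetical state machine DSL, migrated by
# plain text substitution: the keyword 'machine' becomes 'statemachine'.
echo 'machine Door { state Open state Closed }' \
  | sed 's/^machine /statemachine /'
# prints: statemachine Door { state Open state Closed }
```

With sed -i over all model files, the same one-liner migrates a whole repository of instances.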

MPS stores the used language version with every model instance. It detects if a newer version is available, and can run migration scripts on the instance.

Language Test

Most aspects of Xtext-based languages are implemented in Java (or another JVM language), enabling regular JUnit-based tests. Xtext ships with some utilities to simplify such tests, and to ease tests for parsing errors. Xpect, an auxiliary language to Xtext, allows embedding language-specific tests for validation, auto-complete, and scoping in comments of example model instances. In practice, most transformation tests compare the generated output to some reference by text comparison.

Naturally, MPS does not support (or need) parsing tests. It provides specific tests for editors, generators, and other language aspects. The editor tests support checking interaction schemes like cursor movement, intentions, or auto-complete. Generator tests are hardly usable in practice, as they require the generated output model to be identical to a reference model, and don't allow checking intermediate models. The tests for other language aspects use language extensibility to annotate regular models with checks for validation, scoping, type calculation, etc. MPS provides technically separated language aspects, and specific DSLs, for e.g. scoping or validation. They are efficient, but make it hard to test the contained logic with regular JUnit tests.


Longevity

We can safely assume we will always be able to open text files once we can read the storage media. Text could even be printed. It's a bit less clear whether parsing technology in 50 years' time will easily cope with the structures of today's languages. Today's (traditional, as described above) parsers would have a hard time parsing something like PL/1, where any keyword can be used as an identifier in an unambiguous context.

If we stored structured models in binary, it might be very hard to retrieve the contents if the system itself was lost. If we used an XML dialect, we could probably recover the basic structures (containment + type, reference + type, metatype, property) of the model.

Let’s assume we lost the DSL system itself, and only know the model instances, or cannot modify the DSL system. (This scenario is not extremely unlikely — there are a lot of productive mainframe programs without available source code.) I don’t have a clear opinion whether it would be easier to filter out all the “noise” from a parsed text file to recover the underlying concepts, or to reassemble the basic structures from an XML file.

In the more probable case, our DSL system is outdated, but we can still run and modify it, e.g. in a virtual environment. Then we can write an exporter that uses the original retrieval logic (irrespective of parsing or structured model loading) and exports the model contents to a suitable format.

by Niko Stotz at March 19, 2021 08:37 PM

Publishing an Eclipse p2 composite repository on GitHub Pages

by Lorenzo Bettini at March 15, 2021 02:47 PM

I had already described the process of publishing an Eclipse p2 composite update site in previous posts:

Well, now that Bintray is shutting down, and Sourceforge is quite slow in serving an Eclipse update site, I decided to publish my Eclipse p2 composite update sites on GitHub Pages.

GitHub Pages might not be ideal for serving binaries, and it has a few limitations. However, such limitations (e.g., published sites may be no larger than 1 GB, sites have a soft bandwidth limit of 100GB per month and sites have a soft limit of 10 builds per hour) are not that crucial for an Eclipse update site, whose artifacts are not that huge. Moreover, at least my projects are not going to serve more than 100GB per month, unfortunately, I might say 😉

In this tutorial, I’ll show how to do that, so that you can easily apply this procedure also to your projects!

The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted). The idea is that you can run this Maven build (basically, “mvn deploy”) on any CI server (as long as you have write-access to the GitHub repository hosting the update site – more on that later). Thus, you will not depend on the pipeline syntax of a specific CI server (Travis, GitHub Actions, Jenkins, etc.), though, depending on the specific CI server you might have to adjust a few minimal things.

These are the main points:

The p2 children repositories and the p2 composite repositories will be published with standard Git operations since we publish them in a GitHub repository.

Let’s recap what p2 composite update sites are. Quoting from the Eclipse wiki:

As repositories continually grow in size they become harder to manage. The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must remain available, each one with its own p2 metadata that should never be overwritten. What we will overwrite instead is the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.
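As a sketch of what that composite metadata looks like, a compositeContent.xml with two children could be roughly the following (child locations and timestamp are made up for this example; compositeArtifacts.xml has the same shape with the CompositeArtifactRepository type):

```xml
<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='Composite Site Example'
    type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository'
    version='1.0.0'>
  <properties size='1'>
    <!-- made-up timestamp; p2 updates it on every modification -->
    <property name='p2.timestamp' value='1615221420000'/>
  </properties>
  <children size='2'>
    <!-- each child points to a complete, self-contained p2 repository -->
    <child location='releases/1.0.0.v20210307-2037'/>
    <child location='releases/1.1.0.v20210307-2104'/>
  </children>
</repository>
```

Releasing thus boils down to adding one `<child>` entry per release, which is exactly what the p2 Ant tasks used later in the build do for us.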

Directory Structure

I want to be able to serve these composite update sites:

  • the main one collects all the versions
  • a composite update site for each major version (e.g., 1.x, 2.x, etc.)
  • a composite update site for each major.minor version (e.g., 1.0.x, 1.1.x, 2.0.x, etc.)

What I aim at is to have the following paths:

  • releases: in this directory, all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20210307-2037, 1.1.0.v20210307-2104, etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory, the metadata for major and major.minor composite sites will be uploaded.
  • root: the main composite update site collecting all versions.

To summarize, we’ll end up with a remote directory structure like the following one

├── compositeArtifacts.xml
├── compositeContent.xml
├── p2.index
├── releases
│   ├── 1.0.0.v20210307-2037
│   │   ├── artifacts.jar
│   │   ├── ...
│   │   ├── features ...
│   │   └── plugins ...
│   ├── 1.0.0.v20210307-2046 ...
│   ├── 1.1.0.v20210307-2104 ...
│   └── 2.0.0.v20210308-1304 ...
└── updates
    ├── 1.x
    │   ├── 1.0.x
    │   │   ├── compositeArtifacts.xml
    │   │   ├── compositeContent.xml
    │   │   └── p2.index
    │   ├── 1.1.x
    │   │   ├── compositeArtifacts.xml
    │   │   ├── compositeContent.xml
    │   │   └── p2.index
    │   ├── compositeArtifacts.xml
    │   ├── compositeContent.xml
    │   └── p2.index
    └── 2.x
        ├── 2.0.x
        │   ├── compositeArtifacts.xml
        │   ├── compositeContent.xml
        │   └── p2.index
        ├── compositeArtifacts.xml
        ├── compositeContent.xml
        └── p2.index

Thus, if you want, you can provide these sites to your users (I’m using the URLs that correspond to my example):

  • for the main global update site: every new version will be available when using this site;
  • for all the releases with major version 1: for example, the user won’t see new releases with major version 2;
  • for all the releases with major version 1 and minor version 0: the user will only see new releases of the shape 1.0.0, 1.0.1, 1.0.2, etc., but NOT 1.1.0, 1.2.3, 2.0.0, etc.

If you want to change this structure, you have to carefully tweak the Ant file we'll see in a minute.

Building Steps

During the build, before the actual deployment, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build are:

  • Clone the repository hosting the composite update site;
  • Create the p2 repository (with Tycho, as usual);
  • Copy the p2 repository into a subdirectory of the releases directory of the cloned repository (the name of the subdirectory is the qualified version of the project, e.g., 1.0.0.v20210307-2037);
  • Update the composite update site information in the cloned repository (using the p2 tools);
  • Commit and push the updated clone to the remote GitHub repository (the one hosting the composite update site).

First of all, in the parent POM, we define the following properties, which of course you need to tweak for your own projects:

<!-- Required properties for releasing -->
<!-- The label for the Composite sites -->
<site.label>Composite Site Example</site.label>

It should be clear which properties you need to modify for your project. In particular, the github-update-repo is the URL (with authentication information) of the GitHub repository hosting the composite update site, and the site.label is the label that will be put in the composite metadata.

Then, in the parent POM, we configure in the pluginManagement section the versions of all the plugins we are going to use (see the sources of the example on GitHub).

The most interesting configuration is the one for the tycho-packaging-plugin, where we specify the format of the qualified version:

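A sketch of such a configuration follows; the exact format string is an assumption, chosen to match qualifiers like v20210307-2037:

```xml
<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-packaging-plugin</artifactId>
  <configuration>
    <!-- produces build qualifiers of the form v20210307-2037 -->
    <format>'v'yyyyMMdd-HHmm</format>
  </configuration>
</plugin>
```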

Moreover, we create a profile release-composite (which we’ll also use later in the POM of the site project), where we disable the standard Maven plugins for install and deploy. Since we are going to release our Eclipse p2 composite update site during the deploy phase, but we are not interested in installing and deploying the Maven artifacts, we skip the standard Maven plugins bound to those phases:

  <!-- Activate this profile to perform the release to GitHub Pages -->

The interesting steps are in the site project, the one with <packaging>eclipse-repository</packaging>. Here we also define the profile release-composite and we use a few plugins to perform the steps involving the Git repository described above (remember that these configurations are inside the profile release-composite, of course in the build plugins section):

    <!-- sets the following properties that we use in our Ant scripts;
         bound by default to the validate phase -->
    ...
          <argument>Release ${qualifiedVersion}</argument>
    ...
    <!-- add our new child repository -->

Let’s see these configurations in detail. In particular, it is important to understand how the goals of the plugins are bound to the phases of the default lifecycle; remember that on the phase package, Tycho will automatically create the p2 repository and it will do that before any other goals bound to the phase package in the above configurations:

  • with the build-helper-maven-plugin we parse the current version of the project, in particular, we set the properties holding the major and minor versions that we need later to create the composite metadata directory structure; its goal is automatically bound to one of the first phases (validate) of the lifecycle;
  • with the exec-maven-plugin we configure the execution of the Git commands:
    • we clone the Git repository of the update site (with --depth=1 we only get the latest commit in the history; the previous commits are not interesting for our task); this is done in the phase pre-package, that is, before the p2 repository is created by Tycho; the Git repository is cloned into the output directory target/checkout
    • in the phase verify (that is, after the phase package), we commit the changes (which will be done during the phase package as shown in the following points)
    • in the phase deploy (that is, the last phase that we’ll run on the command line), we push the changes to the Git repository of the update site
  • with the maven-resources-plugin we copy the p2 repository generated by Tycho into the target/checkout/releases directory in a subdirectory with the name of the qualified version of the project (e.g., 1.0.0.v20210307-2037);
  • with the tycho-eclipserun-plugin we create the composite metadata; we rely on the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant task for managing composite repositories (p2.composite.repository). The Ant tasks are defined in the Ant file packaging-p2composite.ant, stored in the site project. In this file, there are also a few properties that describe the layout of the directories described before. Note that we need to pass a few properties, including the site.label, the directory of the local Git clone, and the major and minor versions that we computed before.
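The heart of that Ant file is the p2.composite.repository task; a minimal sketch could look like the following (the property names are illustrative, the real file derives the paths from the properties described above):

```xml
<!-- Add a newly released child repository to the composite metadata;
     executed via the org.eclipse.ant.core.antRunner application. -->
<p2.composite.repository>
  <!-- the composite repository to create or update -->
  <repository location="file:${composite.dir}" name="${site.label}" />
  <add>
    <!-- the child entry, relative to the composite repository -->
    <repository location="releases/${qualifiedVersion}" />
  </add>
</p2.composite.repository>
```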

Keep in mind that in all the above steps, non-existing directories will be automatically created on-demand (e.g., by the maven-resources-plugin and by the p2 Ant tasks). This means that the described process will work seamlessly the very first time when we start with an empty Git repository.

Now, from the parent POM on your computer, it’s enough to run

mvn deploy -Prelease-composite

and the release will be performed. When cloning you’ll be asked for the password of the GitHub repository, and, if not using an SSH agent or a keyring, also when pushing. Again, this depends on the URL of the GitHub repository; you might use an HTTPS URL that relies on the GitHub token, for example.

If you want to make a few local tests before actually releasing, you might stop at the phase verify and inspect the target/checkout to see whether the directories and the composite metadata are as expected.

You might also want to add another execution to the tycho-eclipserun-plugin to add a reference to another Eclipse update site that is required to install your software. The Ant file provides a task for that, p2.composite.add.external, which will store the reference into the innermost composite child (e.g., into 1.2.x); here's an example that adds a reference to the Eclipse main update site:

  <!-- Add composite of required software update sites... 
    (if already present they won't be added again) -->

For example, in my Xtext projects, I use this technique to add a reference to the Xtext update site corresponding to the Xtext version I’m using in that specific release of my project. This way, my update site will be “self-contained” for my users: when using my update site for installing my software, p2 will be automatically able to install also the required Xtext bundles!

Releasing from GitHub Actions

The Maven command shown above can be used to perform a release from your computer. If you want to release your Eclipse update site directly from GitHub Actions, there are a few more things to do.

First of all, we are talking about a GitHub Actions workflow stored and executed in the GitHub repository of your project, NOT in the GitHub repository of the update site.

In such a workflow, we need to push to another GitHub repository. To do that:

  • create a GitHub personal access token (selecting the repo scope);
  • create a secret in the GitHub repository of the project (where we run the GitHub Actions workflow), in this example it is called ACTIONS_TOKEN, with the value of that token;
  • when running the Maven deploy command, we need to override the property github-update-repo by specifying a URL for the GitHub repository with the update site using the HTTPS syntax and the encrypted ACTIONS_TOKEN; in this example, it is https://x-access-token:${{ secrets.ACTIONS_TOKEN }};
  • we also need to configure the Git user and email in advance with some values; otherwise Git will complain when creating the commit.

To summarize, these are the interesting parts of the release.yml workflow (see the full version in the example sources):

name: Release with Maven

on:
  push:
    branches:
      - release

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 11
      uses: actions/setup-java@v1
      with:
        java-version: 11
    - name: Configure Git
      run: |
        git config --global user.name 'GitHub Actions'
        git config --global user.email ''
    - name: Build with Maven
      run: >
        mvn deploy
        -Prelease-composite
        -Dgithub-update-repo=https://x-access-token:${{ secrets.ACTIONS_TOKEN }}
      working-directory: p2composite.example.parent

The workflow is configured to be executed only when you push to the release branch.

Remember that we are talking about the Git repository hosting your project, not the one hosting your update site.

Final thoughts

With the procedure described in this post, you publish your update sites and the composite metadata during the Maven build, so you never deal manually with the GitHub repository of your update site. However, you can always do that! For example, you might want to remove a release. It's just a matter of cloning that repository, making your changes (i.e., removing a subdirectory of releases and updating the composite metadata accordingly), committing, and pushing. Now and then you might also clean up the history of such a Git repository (the history is not important in this context), by pushing with --force after resetting the Git history. By the way, by tweaking the configurations above you could also do that every time you do a release: just commit with --amend and push with --force!
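A sketch of such a history reset, demonstrated here in a throwaway local repository standing in for a clone of the update-site repository (branch names and the final push are indicated in comments, since they depend on your setup):

```shell
# Throwaway repository with two "release" commits.
mkdir -p demo-site && cd demo-site
git init -q
git config user.name 'Demo'
git config user.email 'demo@example.com'
echo 'v1' > p2.index && git add -A && git commit -q -m 'release 1'
echo 'v2' > p2.index && git add -A && git commit -q -m 'release 2'

# Flatten the history: recreate the current contents as a single
# commit on a fresh orphan branch.
git checkout -q --orphan fresh
git add -A
git commit -q -m 'flatten history'
git rev-list --count HEAD
# prints: 1
# In the real repository you would now replace the old branch and force-push:
# git branch -M fresh <branch> && git push --force origin <branch>
```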

Finally, you could also create an additional GitHub repository for snapshot releases of your update sites, or for milestones or release candidates.

Happy releasing! 🙂


An open source policy language for Attribute-Stream Based Access Control (ASBAC)

by Prof. Dr. Dominic Heutelbeck at March 09, 2021 03:00 PM

This article discusses how the Streaming Attribute Policy Language (SAPL) can be applied to realize complex authorization scenarios by formulating access control rules in an easy-to-use policy language implemented with Xtext.

This article covers:

  1. Basic concepts and motivation for Attribute-based Access Control (ABAC) in general and Attribute-Stream Based Access Control (ASBAC) in particular
  2. How to express access rights policies using SAPL
  3. How SAPL can be used in Spring Boot applications integrating the access policies

Externalized dynamic policy-driven access control

Often, applications allow different degrees of access control following role-based concepts (RBAC). Different roles are assigned to individuals within the identity management system. This may include hierarchical role concepts.

The number of applications to be managed within an organization typically increases over time. Managing access to the different resources correctly results in an explosion of roles, and in a matching increase in the complexity of maintaining the integrity of access control, as it becomes increasingly harder to define the right roles and the correct assignments to individuals or groups.

In addition, role-based mechanisms are often not capable of expressing all access control requirements of a given application domain.

One way to overcome these problems is to employ so-called attribute-based access control (ABAC). Instead of assigning specific permissions directly to the role attribute of a subject (i.e., the entity requesting access to a resource), a set of policies defines the conditions under which access is granted.

Those rules are based on properties or attributes of the subject, resource, action (i.e., what does the subject want to do with the resource) and environmental conditions.

Chart showing the attribute-based access control mechanism

Fundamentally, in ABAC, the code path where resources are to be protected (the Policy Enforcement Point, or PEP) sends the question “May SUBJECT do ACTION with RESOURCE in ENVIRONMENT?” to a so-called Policy Decision Point (PDP), which then calculates the decision based on the rules and attributes.

Attributes may be directly attached to the objects in the question or may be retrieved from external sources if specified by the policies in question.

Following such a model has several advantages:

  • It allows decoupling most of the access control logic from the domain logic, for a clear separation of concerns
  • The access control rules can be changed through configuration/administration, independently of and during deployment of the applications
  • The model can express more complex access control rules than RBAC or access control lists (ACLs), while still allowing these well-established models to be implemented where applicable, and it offers an upgrade path to other rules, often without touching code

These benefits come at the cost of a certain degree of complexity introduced through the required infrastructure which should be considered on a case-to-case basis.

The primary way of implementing ABAC has been to use the XACML standard, which specifies an architecture, a protocol, and an XML schema for specifying policies. Both open source and proprietary XACML implementations exist. However, XACML by itself is a relatively verbose XML-based standard which is difficult for a person to author and read.

While ALFA provides a standard for a more readable DSL dialect of XACML, only one proprietary implementation of it exists.

Streaming Attribute Policy Language (SAPL)

While XACML, as the de-facto standard, has its problems in syntax and expressiveness (e.g., the parametrization of attribute access), it is also still rooted in a traditional request-response design. Thus, in cases where the conditions implying access rights are expected to change regularly, applications must poll the policy decision point to keep up, resulting in latency and scalability issues in access control.

The Streaming Attribute Policy Language (SAPL) introduces an extension to the ABAC model, allowing for data stream-based attributes and publish-subscribe driven access control design patterns, the so-called Attribute-Stream Based Access Control (ASBAC).


The data model of SAPL is based on JSON and JSON Path. A policy enforcement point formulates authorization subscriptions as JSON objects containing arbitrary JSON values for subject, action, and resource. Policies are expressed in an intuitive syntax. Here are a few examples of SAPL policies.
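Since the policy listings in this article were originally images, here is a rough, purely illustrative sketch of what a simple SAPL policy can look like (the policy name, role, and attribute paths are invented for this example; see the SAPL documentation for the authoritative syntax):

```
policy "doctors may read patient records"
permit action == "read" & resource.type == "patientRecord"
where
    "DOCTOR" in subject..authority;
```

The `permit` line states the target of the policy and the `where` block adds further conditions; with a typical combining algorithm, the overall decision defaults to deny when no policy applies.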

The full source of working example projects can be found on GitHub:

Also, this article will not go into the details of the SAPL syntax but rather explain what the policies do. Full documentation is available under

First, let us look at policies used in a traditional request-response driven system, integrated into a Spring Boot application using the SAPL Spring Boot Starter, which provides a deep Spring Security integration. Given the following repository, the SAPL integration automatically generates policy enforcement points for methods annotated with @PreEnforce or @PostEnforce, controlling entry to or exit from the method. The SAPL subscriptions are derived via reflection and by accessing the principal objects of the runtime.

SAPL code example - policies

The next policies are excerpts of a more complete policy set expressed in SAPL.

SAPL code example - more complete policy set

This policy is a simple implementation of role-based access control for the findById method.

SAPL code example - RBAC implementation for findById method

As can be seen in the repository definition, the method findById is annotated with @PostEnforce, so the resource JSON object is a serialization of the method's return value. This policy transforms the data, blackening substrings for administrators, who should not have access to medical data.

This kind of filtering and transformation is not possible with XACML. Here it is combined with traditional role-based access control using the authority attribute of the subject. If access is granted, the return value of the method is replaced by the transformed object.

SAPL code example

Finally, this policy uses an external data source to fetch additional attributes. In line 16, the policy accesses the patient repository itself and retrieves a list of relatives. If the user is a relative, access is granted, but with certain fields of the dataset removed.

So far, the policies have all adhered to the request-response pattern. The following example uses stream-based attributes.

SAPL code example

With two custom attribute implementations, this simple policy integrates an IoT physical access control system and a smart contract on the Ethereum blockchain, allowing quasi-real-time enforcement of the rule that only persons who are on the company premises and hold a certification for the resource may access it. The angled-bracket syntax denotes that the authorization subscription results in a subscription to the matching external data stream source.


This article can only scratch the surface of the possibilities for SAPL, its engine and tools.

SAPL was implemented using Xtext, which made it easy to concentrate on designing a user-friendly syntax and runtime instead of reinventing the wheel for DSL processing. It also allowed for developing web-based policy editors for the server applications and significantly reduced the time needed to get from zero to the first running prototype.

SAPL itself is an open source project licensed under the Apache 2.0 license.
Learn more about it on

by Prof. Dr. Dominic Heutelbeck at March 09, 2021 03:00 PM

OSGi and javax.inject

January 26, 2021 11:00 PM

An OSGi bundle that exports the javax.inject package.

For a couple of years now, the Eclipse platform jars have been published on Maven Central with metadata that allows their consumption in a traditional Maven project (no Eclipse Tycho required).

This article is my feedback after having experimented with PDE (the Plug-in Development Environment project).

The problem

The goal is to compile and execute code that requires this OSGi bundle from maven-central:


I am using a regular Maven project (with the bnd plugins to manage the OSGi-related tasks). I do not have Eclipse Tycho, so Maven does not have access to any P2 update site.

Amongst all the dependencies of PDE, there is org.eclipse.e4.core.contexts and Those two bundles require:

Import-Package: javax.inject;version="1.0.0",

Source: here and here.

So we need a bundle exporting this package, otherwise the requirements are not fulfilled and I get this error:

[ERROR] Resolution failed. Capabilities satisfying the following requirements could not be found:
      ⇒ osgi.identity: (osgi.identity=org.eclipse.pde.core)
          ⇒ [org.eclipse.pde.core version=3.13.200.v20191202-2135]
              ⇒ osgi.wiring.bundle: (&(>=2.0.0)(!(bundle-version>=3.0.0)))
                  ⇒ [ version=2.2.100.v20191122-2104]
                      ⇒ osgi.wiring.package: (&(osgi.wiring.package=javax.inject)(version>=1.0.0))
    [org.eclipse.e4.core.contexts version=1.8.300.v20191017-1404]
      ⇒ osgi.wiring.package: (&(osgi.wiring.package=javax.inject)(version>=1.0.0))
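That is, the resolution can only succeed if some bundle on the resolution path exports the package with a suitable version, i.e. carries a manifest header along these lines (illustrative):

```
Export-Package: javax.inject;version="1.0.0"
```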

In the P2 world

The bundle javax.inject version 1.0.0 is available in the Eclipse Orbit repositories.

In the maven world

The official dependency

The dependency used by most of the other libraries:


This library does not contain any OSGi metadata in the published MANIFEST.MF.

See the corresponding open issue.

Tom Schindl’s solution

The jar from Eclipse Orbit is available at:


But this is not on Maven Central. You will need to add the following repository to your pom.xml:


On maven central

This question on stackoverflow gives some inputs and suggests:

From the Apache ServiceMix project:


From the GlassFish project.


After analyzing other candidates in the list where artifactId == "javax.inject", there is also this one from the Lucee project:


And on twitter Raymond Augé suggested the Apache geronimo project.


Make your choice.

January 26, 2021 11:00 PM

Cloud Native Predictions for 2021 and Beyond

by Chris Aniszczyk at January 19, 2021 04:08 PM

I hope everyone had a wonderful holiday break, as the first couple weeks of January 2021 have been pretty wild, from insurrections to new COVID strains. In cloud native land, the CNCF recently released its annual report on all the work we accomplished last year. I recommend everyone take the opportunity to go through the report; we had a solid year given the wild pandemic circumstances.

As part of my job, I have a unique and privileged vantage point on cloud native trends, given all the member companies and developers I work with, so I figured I'd share my thoughts on where things will be going in 2021 and beyond:

Cloud Native IDEs

As a person who has spent a decent portion of his career working on developer tools inside the Eclipse Foundation, I am nothing but thrilled with the recent progress of the state of the art. The future will hold that the development lifecycle (code, build, debug) will happen mostly in the cloud versus your local Emacs or VSCode setup. You will end up getting a full dev environment setup for every pull request, pre-configured and connected to their own deployment to aid your development and debugging needs. A concrete example of this technology today is enabled via GitHub Codespaces and GitPod. While GitHub Codespaces is still in beta, you can try this experience live today with GitPod, using Prometheus as an example. In a minute or so, you have a completely live development environment with an editor and preview environment. The wild thing is that this development environment (workspace) is described in code and shareable with other developers on your team like any other code artifact.

In the end, I expect to see incredible innovation in the cloud native IDE space over the next year, especially as GitHub Codespaces exits beta and becomes more widely available, so developers can experience this new concept and fall in love.

Kubernetes on the Edge

Kubernetes was born through usage across massive data centers, but it will evolve for new environments just like Linux did. With Linux, end users eventually stretched the kernel to support a variety of new deployment scenarios, from mobile to embedded and more. I strongly believe Kubernetes will go through a similar evolution, and we are already witnessing telcos (and startups) explore Kubernetes as an edge platform by transforming VNFs into Cloud Native Network Functions (CNFs), along with open source projects like k3s, KubeEdge, k0s, LFEdge, Eclipse ioFog and more. The forces driving hyperscaler clouds to support telcos and the edge, combined with the ability to reuse cloud native software and build upon an already large ecosystem, will cement Kubernetes as a dominant platform in edge computing over the next few years.

Cloud Native + Wasm

WebAssembly (Wasm) is a nascent technology, but I expect it to become a growing utility and workload in the cloud native ecosystem, especially as WASI matures and as Kubernetes is used more as an edge orchestrator, as described previously. One use case is powering an extension mechanism, like what Envoy does with filters and LuaJIT. Instead of dealing with Lua directly, you can work with a smaller optimized runtime that supports a variety of programming languages. The Envoy project is currently on its journey of adopting Wasm, and I expect a similar pattern to follow in any environment where scripting languages are a popular extension mechanism: they may be wholesale replaced by Wasm in the future.

On the Kubernetes front, there are projects like Krustlet from Microsoft that are exploring how a WASI-based runtime could be supported in Kubernetes. This shouldn’t be too surprising as Kubernetes is already being extended via CRDs and other mechanisms to run different types of workloads like VMs (KubeVirt) and more.

Also, if you’re new to Wasm, I recommend this new intro course from the Linux Foundation that goes over the space, along with the excellection documentation 

Rise of FinOps (CFM)

The coronavirus outbreak has accelerated the shift to cloud native. At least half of companies are accelerating their cloud plans amid the crisis… nearly 60% of respondents said cloud usage would exceed prior plans owing to the COVID-19 pandemic (State of the Cloud Report 2020). On top of that, Cloud Financial Management (or FinOps) is a growing issue and concern for many companies; honestly, it has come up in about half of my discussions over the last six months with companies navigating their cloud native journey. You can also argue that cloud providers aren't incentivized to make cloud financial management easier, as that would make it easier for customers to spend less. However, the true pain, in my opinion, is the lack of open source innovation and standardization around cloud financial management (all the clouds do cost management differently). In the CNCF context, there aren't many open source projects trying to make FinOps easier; there is the KubeCost project, but it's fairly early days.

Also, the Linux Foundation recently launched the "FinOps Foundation" to help drive innovation in this space, and they have some great introductory materials. I expect to see a lot more open source projects and specifications in the FinOps space in the coming years.

More Rust in Cloud Native

Rust is still a young and niche programming language, especially if you look at programming language rankings from Redmonk as an example. However, my feeling is that you will see Rust in more cloud native projects over the coming year, given that there are already a handful of CNCF projects taking advantage of Rust, and it is popping up in interesting infrastructure projects like the Firecracker microVM. While the CNCF currently has a supermajority of projects written in Golang, I expect Rust-based projects to be on par with Go-based ones in a couple of years as the Rust community matures.

GitOps + CD/PD Grows Significantly

GitOps is an operating model for cloud native technologies, providing a set of best practices that unify deployment, management and monitoring for applications (originally coined by Alexis Richardson of Weaveworks fame). The most important aspect of GitOps is describing the desired system state, versioned in Git, in a declarative fashion; that essentially enables a complex set of system changes to be applied correctly and then verified (via a nice audit log enabled by Git and other tools). From a pragmatic standpoint, GitOps improves developer experience, and with the growth of projects like Argo, GitLab, Flux and so on, I expect GitOps tools to hit the enterprise more this year. If you look at the data from, say, GitLab, GitOps is still a nascent practice where the majority of companies haven't explored it yet, but as more companies move to adopt cloud native software at scale, GitOps will naturally follow in my opinion. If you're interested in learning more about this space, I recommend checking out the newly formed GitOps Working Group in CNCF.

Service Catalogs 2.0: Cloud Native Developer Dashboards

The concept of a service catalog isn't new; some of us older folks who grew up in the ITIL era may remember things such as CMDBs (the horror). However, with the rise of microservices and cloud native development, the ability to catalog services and index a variety of real-time service metadata is paramount to driving developer automation. This can include using a service catalog to understand ownership for incident management, to manage SLOs, and more. 

In the future, you will see a trend towards developer dashboards that are not only a service catalog but also provide the ability to extend the dashboard through a variety of automation features, all in one place. The canonical open source examples of this are Backstage and Clutch from Lyft; however, any company with a fairly modern cloud native deployment tends to have a platform infrastructure team that has tried to build something similar. As the open source developer dashboards mature with a large plug-in ecosystem, you'll see accelerated adoption by platform engineering teams everywhere.

Cross Cloud Becomes More Real

Kubernetes and the cloud native movement have demonstrated that cloud native and multi-cloud approaches are possible in production environments; the data is clear that "93% of enterprises have a strategy to use multiple providers like Microsoft Azure, Amazon Web Services, and Google Cloud" (State of the Cloud Report 2020). The fact that Kubernetes has matured over the years, along with the cloud market, will hopefully unlock programmatic cross-cloud managed services. A concrete example of this approach is embodied in the Crossplane project, which provides an open source cross-cloud control plane taking advantage of the Kubernetes API's extensibility to enable cross-cloud workload management (see "GitLab Deploys the Crossplane Control Plane to Offer Multicloud Deployments").

Mainstream eBPF

eBPF allows you to run programs in the Linux kernel without changing the kernel code or loading a module; you can think of it as a sandboxed extension mechanism. eBPF has enabled a new generation of software that extends the behavior of the Linux kernel to support a variety of things, from improved networking to monitoring and security. The downside of eBPF historically is that it requires a modern kernel version, and for a long time that just wasn't a realistic option for many companies. However, things are changing, and even newer versions of RHEL finally support eBPF, so you will see more projects take advantage of it. If you look at the latest container report from Sysdig, you can see the adoption of Falco rising recently; although the report may be a bit biased coming from Sysdig, the trend is reflected in production usage. So stay tuned and look for more eBPF-based projects in the future!

Finally, Happy 2021!

I have a few more predictions and trends to share especially around end user driven open source, service mesh cannibalization/standardization, Prometheus+OTel, KYC for securing the software supply chain and more but I’ll save that for more detailed posts, nine predictions are enough to kick off the new year! Anyways, thanks for reading and I hope to see everyone at KubeCon + CloudNativeCon EU in May 2021, registration is open!

by Chris Aniszczyk at January 19, 2021 04:08 PM

DSL Forge, dead or (still) alive?

by alajmi at January 19, 2021 02:22 PM

It has been a long time since the last post I published on the DSL Forge blog. Since the initial release back in 2014 and the "hot" context of that time, a lot of water has flowed under the bridge. Over the last couple of years, a lot of effort has been spent on the Coding Park platform, a commercial product based on DSL Forge. Unfortunately, not all of the developments made since then have been integrated into the open-source repository.

Anyway, I’ve finally managed to find some time to clean up the repository and fix some bugs, so it is up-to-date now and still available under the EPL licence on GitHub.

There are several reasons why the project has not progressed the way we wanted at the beginning; let’s take a step back and think about what happened.

Lack of ambition

One of the reasons why the adoption of cloud-based tools has not taken off is the standstill, and sometimes the lack of ambition, of top managers in big industry corporations who traditionally use Eclipse technologies to build their internal products. Many companies have huge legacy desktop applications built on top of Eclipse RCP. Despite the push over the last 5 years to encourage organizations to move to the web/cloud, very few have eventually taken action.

No standard cloud IDE

Another reason is the absence of a "standard" platform that is unanimously supported for building new tools on top of. Of course, there are some nice cloud IDEs flourishing under the Eclipse Foundation umbrella, such as Dirigible (SAP), Theia (TypeFox), or Che (Codenvy, then Red Hat), but it’s still unclear to customers which of these is the winning horse. Today, Theia seems better positioned than its competitors if you judge by the number of contributors and the big tech companies pushing the technology forward, such as IBM, SAP, and Red Hat, to name a few. However, the frontier between these cloud IDEs is still confusing: Theia uses the workspace component of Che; later, Theia became the official UI of Che. Theia is somehow based on VS Code, but then has its own extension mechanism, etc.


In the meantime, there have been attempts to standardize the client/server exchange protocol for text editing with Microsoft’s Language Server Protocol (LSP), and later with a variant of LSP to support graphical editing (GLSP). Pushing standards is a common strategy to make stakeholders in a given market collaborate in order to optimize their investments; however, as in any other standards-focused community, there is a difference between theory and practice. Achieving complete interoperability is quite unrealistic, because developing the editor front-end already requires a lot of effort, and even with the LSP in mind, it is common to end up developing the same functionality specifically for each editor, which is not always the top priority of commercial projects or startups trying to reduce their time-to-market.

The cost of migration

As said earlier, there is a large amount of legacy source code built on Eclipse RCP. The sustainability of this code is of strategic importance for many corporations, and unfortunately, most of it is written in Java and relies on SWT. Migrating this code is expensive, as it implies rewriting a big part of it in JavaScript with a particular technical stack/framework in mind. It’s a long journey, architects have a lot of technical decisions to make along the way, and there is no guarantee that they will have made the right decisions in the long run.

The decline of the Eclipse IDE

Friends of Eclipse, don’t be upset! Having worked with a lot of junior developers over the last 5 years, I have noticed that the Eclipse IDE is no longer of interest to many of them. A few years ago, Eclipse was best known for being a good Java IDE, back in the times when IBM was a driving force in the community. Today, the situation is different: Microsoft’s VS Code has established itself as the code editor of choice. It is still incomprehensible to see the poor performance of the Eclipse IDE, especially at startup. It is urgent that one of the cloud IDEs mentioned above take over.

The high volatility of web technologies

We see new frameworks and new trends in web development technologies every day. For instance, the RIA frameworks that appeared in the early 2010s ultimately had a short life, especially with the rise of newer frameworks such as React and Angular. Server-side rendering is now part of history. One consequence of this was the slowdown of investments in RIA-based frameworks, including the Eclipse Remote Application Platform (RAP). Today, RAP is still under maintenance; however, its scalability is questionable and its rendering capabilities look outdated compared to newer web frameworks. The incredible pace at which web technologies evolve is one of the factors that make decision makers hesitate to invest in cloud-based modeling tools.

The end of a cycle

As a large part of legacy code must be rewritten in JavaScript or one of its variants (TypeScript, JSX, …), many historical developers (today’s senior developers) with a background in Java have found themselves overwhelmed by the rise of new paradigms coming from the culture of web development. In legacy desktop applications, it is common to see the UI code, be it SWT or Swing, melded with the business logic. Of course, architects have always tried to separate the concerns as much as possible, but the same paradigms, structures, and programming language are used everywhere. With the new web frameworks, the learning curve is so steep that senior developers struggle to get their hands on the new paradigms and coding style.


Over the last 10 years, EMF has become an industry-proven standard for model persistence; however, it is quite unknown in the web development community. The most widely used format for data exchange on the web is JSON, and even though the facilities that come with EMF are advanced compared to the tooling support for JSON, the reality is that complete bidirectionality between EMF and JSON is not always guaranteed. That being said, EclipseSource are doing a great job in this area thanks to their work on the framework.

Where is DSL Forge in all of this?

The DSL Forge project will continue to exist as long as it serves users. First, because the tool is still used in academic research. With a variety of legacy R&D prototypes built on RCP, it is easy to quickly get a web-based client thanks to the port of the SWT library to the web, which does almost 90% of the job. Moreover, the framework is still used in commercial products, particularly in the fields of cybersecurity and education. For example, the Coding Park platform, initially developed on Eclipse RAP, is still marketed under this technology stack.

Originally, DSL Forge was seen as a port of Xtext to the web relying on the ACE editor; this is only half true, as it also has a nice ANTLR/ACE integration. The tool released in 2014 was ahead of its time. Companies were not ready to make the leap (many are still in this situation now, even with all the progress made), the demand was not mature enough, and the small number of contributors was a barrier to adoption. Given all of that, we made our own path outside the software development tools market. Meanwhile, the former colleagues from Itemis (now at TypeFox) did a really good job: not only have they built a flawless cloud IDE, but they have also managed to forge strategic partnerships which are contributing to the success of Theia. Best of luck to Theia and the incredible team at TypeFox!

To conclude

Today, Plugbee is still supporting the maintenance of DSL Forge to guarantee the sustainability of customer products.

For now, if you are looking to support a more modern technical stack, your best bet is to start with the Xtext servlet. For example, we have integrated the servlet into a Spring Boot/React application, and it works like a charm. The only effort needed to achieve the integration was to properly bind the Xtext services to the ACE editor. This work was done as part of the new release of Coding Park. The code will be extracted and made publicly available on the DSL Forge repository soon. If you are interested in this kind of integration, feel free to get in touch.

Finally, if you are interested in using Eclipse to build custom modeling tools or to migrate existing products to the web, please have a look at our training offer or feel free to contact us.

by alajmi at January 19, 2021 02:22 PM

LiClipse 7.1.0 released (improved Dark theme, LiClipseText and PyDev updates)

by Fabio Zadrozny at December 08, 2020 07:52 PM

I'm happy to announce that LiClipse 7.1.0 is now available for download.

LiClipse is now based on Eclipse 4.17 (2020-09); one really nice feature is that this now enables dark scrollbars for trees on Windows.

I think an image may be worth a thousand words here, so below is a screenshot showing what the LiClipse Dark theme looks like (on Windows) with the changes!

This release also updates PyDev to 8.1.0, which provides support for Python 3.9 as well as quick-fixes to convert strings to f-strings, among many other things (see: for more details).

Another upgraded dependency is LiClipseText 2.2.0, which now provides grammars to support TypeScript, RobotFramework and JSON by default.

by Fabio Zadrozny at December 08, 2020 07:52 PM

ECA Validation Update for Gerrit

December 08, 2020 05:45 PM

We are planning to install a new version of our Gerrit ECA validation plugin this week in an effort to reduce errors when a contribution is validated.

With this update, we are moving our validation logic to our new ECA Validation API that we created for our new Gitlab instance.

We are planning to push these changes live on Wednesday, December 9 at 16:00 GMT, though there is no planned downtime associated with this update.

Our plan is to revert back to a previous version of the plugin if we detect any anomalies after deploying this change.

Please note that we are also planning to apply these changes to our GitHub ECA validation app in Q1 of 2021. You can expect more news about this in the new year!

For those interested, the code for the API and the plugin are open-source and can be seen at git-eca-rest-api and gerrit-eca-plugin.

Please use our GitHub issue to discuss any concerns you might have with this change.

December 08, 2020 05:45 PM

Become an Eclipse Technology Adopter

December 04, 2020 05:50 PM

Did you know that organizations — whether they are members of the Eclipse Foundation or not — can be listed as Eclipse technology adopters?

In November 2019, the Eclipse IoT working group launched a campaign to promote adopters of Eclipse IoT technologies. Since then, more than 60 organizations have shown their support for various Eclipse IoT projects.

With that success in mind, we decided to build a new API service responsible for managing adopters for all our projects.

If needed, this new service will allow us to create an Adopters page for each of our working groups. This is something that we are currently working on for Eclipse Cloud Development Tools. Organizations that wish to be listed on this new page can submit their request today by following our instructions.

On top of that, every Eclipse project can now leverage our JavaScript plugin to display the logos of adopters without committing them to their website's git repository.

As an example, you can check out the Eclipse Ditto website.

What Is Changing?

We are migrating logos and related metadata to a new repository. This means that adopters of Eclipse IoT technologies will be asked to submit their request to this new repository. This change is expected to occur on December 10, 2020.

We plan on updating our documentation to point new users to this new repository. If an issue is created in the wrong repository, we will simply move them to the right location.

The process with this new repository is very similar, but we did make some improvements:

  1. The path where we store logos is changing
  2. The file format is changing from .yml to .json to reduce user errors.
  3. The structure of the file was modified to make it easier for an organization to adopt multiple projects.

We expect this change to be seamless for our users. The content of the Eclipse IoT Adopters page won’t change, and the JavaScript widget hosted on will continue to work as is.

Please create an issue if you have any questions or concerns regarding this migration.

How Can My Organization Be Listed as an Adopter of Eclipse Technology?

The preferred way to become an adopter is with a pull-request:

  1. Add a colored and a white organization logo to static/assets/images/adoptors. We expect logos to be submitted as .svg files, and they must be transparent. The file size should be less than 20kb since we are planning to use them on the web!
  2. Update the adopter JSON file: config/adopters.json. Organizations can be easily marked as having multiple adopted projects across different working groups, no need to create separate entries for different projects or working groups!
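As a purely hypothetical illustration of step 2 (the actual field names and schema are defined in the repository's documentation, not here), an entry in config/adopters.json could look something like:

```json
{
  "name": "Example Corp",
  "homepage_url": "https://www.example.com",
  "logo": "example-corp.svg",
  "logo_white": "example-corp-white.svg",
  "projects": ["iot.ditto"]
}
```

The projects array is what makes it possible to mark one organization as an adopter of multiple projects across working groups without duplicating the entry.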

The alternative way to become an adopter is to submit an issue with your logo and the project name that your organization has adopted.

How Can We List Adopters on Our Project Website?

We built a JavaScript plugin to make this process easier.


Include our plugin in your page:

<script src="//"></script>

Load the plugin:

project_id: "[project_id]"

Create an HTML element containing the chosen selector:

<div class="eclipsefdn-adopters"></div>
  • By default, the selector’s value is eclipsefdn-adopters.


project_id: "[project_id]",
selector: ".eclipsefdn-adopters",
ul_classes: "list-inline",
logo_white: false
Attribute  | Type    | Default              | Description
-----------|---------|----------------------|----------------------------------------------------------------
project_id | String  |                      | Required: Select adopters from a specific project ID.
selector   | String  | .eclipsefdn-adopters | Define the selector that the plugin will insert adopters into.
ul_classes | String  |                      | Define classes that will be assigned to the ul element.
logo_white | Boolean | false                | Whether or not we use the white version of the logo.

For more information, please refer to our

A huge thank you to Martin Lowe for all his contributions to this project! His hard work and dedication were crucial for getting this project done on time!

December 04, 2020 05:50 PM

Add Checkstyle support to Eclipse, Maven, and Jenkins

by Christian Pontesegger at December 02, 2020 08:52 AM

After PMD and SpotBugs, we will have a look at Checkstyle integration into the IDE and our Maven builds. Parts of this tutorial are already covered by Lars' tutorial on Using the Checkstyle Eclipse plug-in.

Step 1: Add Eclipse IDE Support

First, install the Checkstyle Plugin via the Eclipse Marketplace. Before we enable the checker, we need to define a ruleset to run against. As in the previous tutorials, we will set up project-specific rules backed by one ruleset that can also be used by Maven later on.

Create a new file for your rules in <yourProject>.releng/checkstyle/checkstyle_rules.xml. If you are familiar with writing rules, just add them. If you are new to this, you might want to start with one of the default rulesets of Checkstyle.
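As a minimal starting point (the two checks chosen here are just illustrative examples), checkstyle_rules.xml could look like this:

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
    <!-- TreeWalker parses Java sources and hosts most per-file checks -->
    <module name="TreeWalker">
        <module name="UnusedImports"/>
        <module name="EqualsHashCode"/>
    </module>
</module>
```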

Once we have some rules, we need to add them to our projects. To do so, right-click on a project and select Checkstyle/Activate Checkstyle. This will add the project nature and a builder. To make use of our common ruleset, create a file <project>/.checkstyle with the following content.

<?xml version="1.0" encoding="UTF-8"?>

<fileset-config file-format-version="1.2.0" simple-config="false" sync-formatter="false">
    <local-check-config name="Skills Checkstyle" location="/yourProject.releng/checkstyle/checkstyle_rules.xml" type="project" description="">
        <additional-data name="protect-config-file" value="false"/>
    </local-check-config>
    <fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
        <file-match-pattern match-pattern=".java$" include-pattern="true"/>
    </fileset>
</fileset-config>

Make sure to adapt the name and location attributes of local-check-config according to your project structure.

Checkstyle will now run automatically on builds or can be triggered manually via the context menu: Checkstyle/Check Code with Checkstyle.

Step 2: Modifying Rules

While we had to do our setup manually, we can now use the UI integration to adapt our rules. Select the Properties context entry from a project and navigate to Checkstyle, page Local Check Configurations. There select your ruleset and click Configure... The following dialog allows you to add/remove rules and to change rule properties. All your changes are backed by the checkstyle_rules.xml file we created earlier.

Step 3: Maven Integration

We need to add the Maven Checkstyle Plugin to our build. To do so, add the following section to your master pom:


<!-- enable checkstyle code analysis -->


In the configuration we reference the same ruleset we use for the IDE plugin. Make sure that the relative path fits your project setup. In the provided setup, execution is bound to the verify phase.
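A minimal sketch of such a plugin section, assuming the standard org.apache.maven.plugins:maven-checkstyle-plugin coordinates (the version and the relative configLocation path are placeholders to adapt to your setup):

```xml
<!-- enable checkstyle code analysis -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <version>3.1.1</version>
    <configuration>
        <!-- the same ruleset used by the IDE plugin; adapt the relative path -->
        <configLocation>../yourProject.releng/checkstyle/checkstyle_rules.xml</configLocation>
    </configuration>
    <executions>
        <execution>
            <id>check-style</id>
            <!-- bind the check goal to the verify phase -->
            <phase>verify</phase>
            <goals>
                <goal>check</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```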

Step 4: File Exclusions

Excluding files has to be handled differently for the IDE and Maven. The Eclipse plugin allows you to define inclusions and exclusions via file-match-pattern entries in the .checkstyle configuration file. To exclude a certain package use:

  <fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
    <file-match-pattern match-pattern=".java$" include-pattern="true"/>
    <file-match-pattern match-pattern="org.yourproject.generated.package.*$" include-pattern="false"/>
  </fileset>

In maven we need to add exclusions via the plugin configuration section. Typically such exclusions would go into the pom of a specific project, not the master pom:

<!-- remove generated resources from checkstyle code analysis -->
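A sketch of what that configuration section could look like, assuming the plugin's excludes parameter (the package path is illustrative):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-checkstyle-plugin</artifactId>
    <configuration>
        <!-- remove generated resources from checkstyle code analysis -->
        <excludes>**/org/yourproject/generated/package/**</excludes>
    </configuration>
</plugin>
```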


Step 5: Jenkins Integration

If you followed my previous tutorials on code checkers, then this is business as usual: use the warnings-ng plugin on Jenkins to track our findings:

	recordIssues tools: [checkStyle()]

Try out the live chart on the skills project.

by Christian Pontesegger at December 02, 2020 08:52 AM

Add SpotBugs support to Eclipse, Maven, and Jenkins

by Christian Pontesegger at November 24, 2020 06:01 PM

SpotBugs (the successor of FindBugs) is a tool for static code analysis, similar to PMD. Both tools help to detect bad code constructs which might need improvement. As they partly detect different issues, they may well be combined and used simultaneously.

Step 1: Add Eclipse IDE Support

The SpotBugs Eclipse Plugin can be installed directly via the Eclipse Marketplace.

After installation, projects can be configured to use it from the project's Properties context menu. Navigate to the SpotBugs category and enable all checkboxes on the main page. Further, set Minimum rank to report to 20 and Minimum confidence to report to Low.

Once done, SpotBugs immediately scans the project for problems. Found issues are displayed as custom markers in editors. They are also visible in the Bug Explorer view as well as in the Problems view.

SpotBugs also comes with label decorations on elements in the Package Explorer. If you do not like these, disable all Bug count decorator entries in Preferences/General/Appearance/Label Decorations.

Step 2: Maven Integration

Integration is done via the SpotBugs Maven Plugin. To enable it, add the following section to your master pom:


<!-- enable spotbugs code analysis -->



The execution entry takes care that the spotbugs goal is automatically executed during the verify phase. If you remove the execution section you would have to call the spotbugs goal separately:

mvn spotbugs:spotbugs
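The pom section itself might look like the sketch below, assuming the com.github.spotbugs:spotbugs-maven-plugin coordinates (the version is a placeholder to adapt):

```xml
<!-- enable spotbugs code analysis -->
<plugin>
    <groupId>com.github.spotbugs</groupId>
    <artifactId>spotbugs-maven-plugin</artifactId>
    <version>4.1.3</version>
    <executions>
        <execution>
            <id>spotbugs-check</id>
            <!-- run the spotbugs goal automatically during verify -->
            <phase>verify</phase>
            <goals>
                <goal>spotbugs</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```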

Step 3: File Exclusions

You might have code that you do not want to get checked (e.g. generated files). Exclusions need to be defined in an XML file. A simple filter on package level looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<FindBugsFilter>
	<!-- skip EMF generated packages -->
	<Match>
		<Package name="~org\.eclipse\.skills\.model.*" />
	</Match>
</FindBugsFilter>

See the documentation for a full description of filter definitions.

Once defined, this file can be used from the SpotBugs Eclipse plugin as well as from the maven setup.

To simplify the maven configuration we can add the following profile to our master pom:

<!-- apply filter when filter file exists -->

<!-- enable spotbugs exclude filter -->


It gets automatically enabled when a file .settings/spotbugs-exclude.xml exists in the current project.
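Putting the pieces together, such a profile might look like this sketch, assuming Maven's file-based profile activation and the plugin's excludeFilterFile parameter:

```xml
<profiles>
    <!-- apply filter when filter file exists -->
    <profile>
        <id>spotbugs-exclude</id>
        <activation>
            <file>
                <exists>${basedir}/.settings/spotbugs-exclude.xml</exists>
            </file>
        </activation>
        <build>
            <plugins>
                <plugin>
                    <groupId>com.github.spotbugs</groupId>
                    <artifactId>spotbugs-maven-plugin</artifactId>
                    <configuration>
                        <!-- enable spotbugs exclude filter -->
                        <excludeFilterFile>${basedir}/.settings/spotbugs-exclude.xml</excludeFilterFile>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>
```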

Step 4: Jenkins Integration

Like with PMD, we again use the warnings-ng plugin on Jenkins to track our findings:

	recordIssues tools: [spotBugs(useRankAsPriority: true)]

Try out the live chart on the skills project.

Final Thoughts

PMD is smoother on integration as it stores its rulesets in a common file which can be shared by maven and the Eclipse plugin. SpotBugs currently requires managing rulesets separately. Still, both can be implemented in a way that users automatically get the same warnings in maven and the IDE.

by Christian Pontesegger at November 24, 2020 06:01 PM

My main update site moved

by Andrey Loskutov at November 23, 2020 08:51 AM

My host provider GMX decided that the free hosting they offered for over a decade does not fit their portfolio anymore (for some security reasons) and simply switched my domain off.


... for security reasons, we regularly modernize our product portfolio.
As part of this, we would like to inform you that we are terminating your webspace with your subdomain name as of 19.11.2020.

Because of that, the Eclipse update site for all my plugins has now moved: 



In the same way, my "home" has moved to

(Github obviously has no issues with free hosting).

That means anyone who used my main update site in scripts / Oomph setups has to change them to point to the new location instead.

I'm sorry for that, but it is nothing I could change.

by Andrey Loskutov at November 23, 2020 08:51 AM

e(fx)clipse 3.7.0 is released

by Tom Schindl at October 12, 2020 06:50 PM

We are happy to announce that e(fx)clipse 3.7.0 has been released. This release contains the following repositories/subprojects:

There are almost no new features (e.g. the new boxshadow), mostly bugfixes which are very important if you use OpenJFX in an OSGi environment.

For those of you who already use our pom-first approach, the new bits have been pushed to and the Sample application at has been updated to use the latest release.

by Tom Schindl at October 12, 2020 06:50 PM

Getting started with Eclipse GEF – the Mindmap Tutorial

by Tamas Miklossy at October 12, 2020 06:00 AM

The Eclipse Graphical Editing Framework is a toolkit to create graphical Java applications either integrated into Eclipse or standalone. The most common use of the framework is to develop diagram editors, like the simple Mindmap editor we will create in the GEF Mindmap Tutorial series. Currently, the tutorial consists of 6 parts and altogether 19 steps. They are structured as follows:


Part I – The Foundations

  • Step 1: Preparing the development environment
  • Step 2: Creating the model
  • Step 3: Defining the visuals


  • Step 4: Creating the GEF parts
  • Step 5: Models, policies and behaviors
  • Step 6: Moving and resizing a node

Part III – Adding nodes and connections

  • Step 7: Undo and redo operations
  • Step 8: Creating new nodes
  • Step 9: Creating connections

Part IV – Modifying and removing nodes

  • Step 10: Deleting nodes (1)
  • Step 11: Modifying nodes
  • Step 12: Creating feedback
  • Step 13: Deleting nodes (2)

Part V – Creating an Eclipse editor

  • Step 14: Creating an Eclipse editor
  • Step 15: Undo, redo, select all and delete in Eclipse
  • Step 16: Contributing toolbar actions

Part VI – Automatic layouting

  • Step 17: Automatic layouting via GEF layout
  • Step 18: Automatic layouting via Graphviz DOT
  • Step 19: Automatic layouting via the Eclipse Layout Kernel

You can register for the tutorial series using the link below. The article How to set up Eclipse tool development with OpenJDK, GEF, and OpenJFX describes the necessary steps to properly set up your development environment.

Your feedback regarding the Mindmap Tutorial (and the Eclipse GEF project in general) is highly appreciated. If you have any questions or suggestions, please let us know via the Eclipse GEF forum, or create an issue on Eclipse Bugzilla.

For further information, we recommend taking a look at the Eclipse GEF blog articles and watching the Eclipse GEF session at EclipseCon Europe 2018.


Register for the GEF Tutorials

by Tamas Miklossy at October 12, 2020 06:00 AM

Eclipse Collections 10.4.0 Released

by Nikhil Nanivadekar at October 09, 2020 08:36 PM

View of the Grinnell Glacier from overlook point after a grueling 9 mile hike

This is a release which we had not planned for, but we released it nonetheless.

This must be the first time since we open sourced Eclipse Collections that we performed two releases within the same month.

Changes in Eclipse Collections 10.4.0

There are only 2 changes in the 10.4.0 release compared to the feature-rich 10.3.0 release:

  • Added CharAdapter.isEmpty(), CodePointAdapter.isEmpty(), CodePointList.isEmpty(), as JDK-15 introduced CharSequence.isEmpty().
  • Fixed Javadoc errors.

Why was release 10.4.0 necessary?

In today’s rapid deployment world, it should not be a novel aspect that a project performs multiple releases. However, the Eclipse Collections maintainer team performs releases when one or more of the below criteria are satisfied:

  1. A bulk of features are ready to be released
  2. A user requests a release for their use case
  3. JDK-EA compatibility is breaking
  4. It has been more than 6 months since the last version was released

The Eclipse Collections 10.4.0 release was necessary due to point #3. Eclipse Collections participates in the Quality Outreach program of OpenJDK. As a part of this program the library is expected to test the Early Access (EA) versions of Java and identify potential issues in the library or the JDK. I had missed setting up the JDK-15-EA builds until after Eclipse Collections 10.3.0 was released. After setting up the JDK-15-EA builds on 16 August 2020, I found compiler issues in the library due to isEmpty() being added as a default method on CharSequence. Stuart Marks has written an in-depth blog on why this new default method broke compatibility. So, we had 2 options: let the library be incompatible with JDK-15, or release a new version with the fix. The Eclipse Collections team believes in supporting Java versions from Java 8 to Java-EA. After release 10.3.0, we had opened a new major version target (11.0.0), but the changes required did not warrant a new major version. So, we decided to release 10.4.0 with the fixes to support JDK-15. The Eclipse Collections 10.4.0 release is compatible with JDK-15 and JDK-16-EA.

Thank you

To the vibrant and supportive Eclipse Collections community on behalf of contributors, committers, and maintainers for using Eclipse Collections. We hope you enjoy Eclipse Collections 10.4.0.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions.

Show your support, star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set and Map. It also has additional data structures like Multimap, Bag and an entire Primitive Collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

Photo of the blog: I took the photo after hiking to the Grinnell Glacier overlook point. It was a strenuous hike, but the view from up top made it worth it. I picked this photo to convey the sense of accomplishment after completing a release in a short amount of time.

Eclipse Collections 10.4.0 Released was originally published in Oracle Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.

by Nikhil Nanivadekar at October 09, 2020 08:36 PM

Obeo's Chronicles, Autumn 2020

by Cédric Brun at October 06, 2020 12:00 AM

I can’t believe we are already looking at Q4. I have so much news to share with you!

Eclipse Sirius, Obeo Cloud Platform and Sirius Web:

This last summer we had the pleasure of organizing SiriusCon. This one-day event is each year an opportunity for the modeling community to share their experience, and for the development team to provide visibility on what is currently being worked on and how we see the future of the technology. SiriusCon reached 450 attendees from 53 different countries thanks to 13 fabulous speakers!

The latest edition was special to us: it used to be organized at the end of each year, but we decided to postpone it for a few months to be ready for an announcement very close to our heart. We’ve been working on bringing to the Web what we love about Sirius for quite a few years already and reached a point where we have a promising product. Now is the time to accelerate, and Mélanie Bats announced it during the conference: we are releasing “Sirius Web” as Open-Source and have officially started the countdown!

The announcement at SiriusCon 2020

The reactions to this announcement were fantastic with a lot of excitement within the community.

I am myself very excited for several reasons:

Firstly, I expect this decision will be, just like the release of Sirius Desktop as Open-Source in 2013, a key factor leading to the creation of hundreds of graphical modelers, in the way currently demonstrated by the Sirius Gallery, but now easily accessible through the Web and leveraging all the capabilities this platform brings.

Our vision is to empower the tool specifier from the data structure and tool definition up to the deployment and exploitation of a modeling tool, directly from the browser, end to end and in an integrated and seamless way.

We are not there yet, though as you’ll see the technology is already quite promising.

Obeo Cloud Platform Modeler

Secondly, for Obeo this decision strengthens our product-based business model while being faithful to our “open core” approach. We will offer, through Obeo Cloud Platform, a Sirius Web build extended with Enterprise features, to deploy on public or private clouds, or on premise, including support and upgrade guarantees.

Obeo Cloud Platform Offer

Since the announcement, the team has been working on Sirius Web to publish it as an Open-Source product so that you can start experimenting as early as EclipseCon 2020. Mélanie will present this in detail during her talk: “Sirius Web: 100% open source cloud modeling platform”.

EclipseCon 2020

Hint: there’s still time to register for EclipseCon 2020, but do it quickly! The program committee did an excellent job in setting up an exciting program thanks to your many submissions; don’t miss it!

Capella Days Online is coming up!

That’s not it! Each day we see Eclipse Capella get more and more adoption across the globe, and this Open-Source product has its own 4-day event: Capella Days Online 2020!

A unique occasion to get many experience reports from multiple domains: Space systems (CNES and GMV), Rail and transportation (Virgin Hyperloop, Nextrail and Vitesco technologies), healthcare (Siemens and Still AB), waste collecting with The SeaCleaners and all of that in addition to aerospace, defence and security with Thales Group. The program is packed with high-quality content: 12 sessions over 4 days from October 12th to 15th, more than 500 attendees already registered, join us and register!

Capella Days
Capella Days Program

SmartEA 6.0 supports Archimate 3.1 and keeps rising!

We use those open-source technologies, like Eclipse Sirius, Acceleo, EMF Compare, M2doc and many more in our “off the shelf” software solution for Enterprise Architecture: Obeo SmartEA.

SmartEA 6.0

This spring we released SmartEA 6.0, which got the Archimate 3.1 certification and brought among many other improvements: new modeling capabilities, extended user management, enhanced BPMN modeling and streamlined user experience.

Our solution is a challenger on the market and convinces more and more customers. Stay tuned, I should be able to share a thrilling announcement soon!

World Clean Up Day and The SeaCleaners

In a nutshell: an excellent dynamic on many fronts and exciting challenges ahead! This is all made possible thanks to the energy and cohesion of the Obeo team in this weird, complex and unusual time. We are committed to the environment and to reducing plastic waste; as such, we took part in the World Clean Up Day in partnership with The SeaCleaners. Beyond the impact of this action, which means so much to us, it was also a fun moment of sharing!

#WeAreObeo at the World Cleanup Day

Obeo's Chronicles, Autumn 2020 was originally published by Cédric Brun at CEO @ Obeo on October 06, 2020.

by Cédric Brun at October 06, 2020 12:00 AM

MapIterable.getOrDefault() : New but not so new API

by Nikhil Nanivadekar at September 23, 2020 02:30 AM

MapIterable.getOrDefault() : New but not so new API

Sunset at Port Hardy (June 2019)

Eclipse Collections comes with its own List, Set, and Map implementations. These implementations extend the JDK List, Set, and Map implementations for easy interoperability. In Eclipse Collections 10.3.0, I introduced a new API: MapIterable.getOrDefault(). In Java 8, Map.getOrDefault() was introduced, so what makes it a new API for Eclipse Collections 10.3.0? Technically, it is a new but not so new API! Consider the code snippets below, prior to Eclipse Collections 10.3.0:

MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() does not compile

As you can see in the code, MutableMap has getOrDefault() available, however ImmutableMap does not have it. But there is no reason why ImmutableMap should not have this read-only API. I found that MapIterable already had getIfAbsentValue() which has the same behavior. Then why did I still add getOrDefault() to MapIterable?

I added MapIterable.getOrDefault() mainly for easy interoperability. Firstly, most Java developers will be aware of the getOrDefault() method; only Eclipse Collections users would be aware of getIfAbsentValue(). By providing the same API as the JDK, it reduces the necessity to learn a new API. Secondly, even though getOrDefault() is available on MutableMap, it is not available on the highest Map interface of Eclipse Collections. Thirdly, I got to learn about a Java compiler check which I had not experienced before. I will elaborate on this check in a bit more detail because I find it interesting.

After I added getOrDefault() to MapIterable, various Map interfaces in Eclipse Collections started giving compiler errors with messages like: inherits unrelated defaults for getOrDefault(Object, V) from types MapIterable and java.util.Map. This I thought was cool, because at compile time, the Java compiler is ensuring that if there is an API with a default implementation in more than one interface in a multi-interface scenario, then Java will not decide which implementation to pick but rather throw compiler errors. Hence, Java ensures at compile time that there is no ambiguity regarding which implementation will be used at runtime. How awesome is that?!? In order to fix the compile time errors, I had to add default implementations on the interfaces which gave the errors. I always believe that Compiler Errors are better than Runtime Exceptions.
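To illustrate the compiler check, here is a minimal, self-contained sketch with hypothetical interfaces (not the actual Eclipse Collections types):

```java
// Two unrelated interfaces that both provide a default getOrDefault-style method.
interface MapLike<V> {
    V get(Object key);

    default V getOrDefault(Object key, V defaultValue) {
        V value = get(key);
        return value != null ? value : defaultValue;
    }
}

interface OtherMapLike<V> {
    V get(Object key);

    default V getOrDefault(Object key, V defaultValue) {
        V value = get(key);
        return value != null ? value : defaultValue;
    }
}

// Without the override below, this class fails to compile with:
// "inherits unrelated defaults for getOrDefault(Object, V)
//  from types MapLike and OtherMapLike"
class SingleEntryMap<V> implements MapLike<V>, OtherMapLike<V> {
    private final Object key;
    private final V value;

    SingleEntryMap(Object key, V value) {
        this.key = key;
        this.value = value;
    }

    @Override
    public V get(Object key) {
        return this.key.equals(key) ? this.value : null;
    }

    // Resolve the ambiguity explicitly, analogous to adding default
    // implementations on the conflicting Eclipse Collections interfaces.
    @Override
    public V getOrDefault(Object key, V defaultValue) {
        return MapLike.super.getOrDefault(key, defaultValue);
    }
}

class UnrelatedDefaultsDemo {
    public static void main(String[] args) {
        SingleEntryMap<Integer> map = new SingleEntryMap<>("a", 1);
        System.out.println(map.getOrDefault("a", 0)); // 1
        System.out.println(map.getOrDefault("b", 0)); // 0
    }
}
```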

Stuart Marks has put together an awesome blog which covers the specifics of such scenarios. I suggest reading that for in-depth understanding of how and why this behavior is observed.

Post Eclipse Collections 10.3.0 the below code samples will work:

MapIterable.getOrDefault() compiles and works fine
MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() compiles and works fine

Eclipse Collections 10.3.0 was released on 08/08/2020 and is one of our most feature-packed releases. The release constitutes numerous contributions from the Java community.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions.

Show your support, star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set and Map. It also has additional data structures like Multimap, Bag and an entire Primitive Collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

by Nikhil Nanivadekar at September 23, 2020 02:30 AM

N4JS goes LSP

by n4js dev at September 08, 2020 11:00 AM

A few weeks ago we started to publish a VSCode extension for N4JS to the VSCode Marketplace. This was one of the last steps on our road to supporting LSP-based development tools. We chose to make this major change for several reasons that affected both users and developers of N4JS.

An N4JS project in VSCode with the N4JS language extension

Our language extension for N4JS is hosted at the Microsoft VSCode Marketplace and will be updated regularly by our Jenkins jobs. Versions will be kept in sync with the language version, compiler version and version of the N4JS libraries to avoid incompatible setups. At the moment, the LSP server supports all main features of the language server protocol (LSP) such as validation, content assist, outline view, jump to definition and implementation, open symbol, the rename refactoring and many more. In addition, it will also generate output files whenever a source change is detected. We therefore heavily improved the incremental LSP builder of the Xtext framework and plan to migrate back those changes to the Xtext repository. For the near future we plan to work on stability, performance and also to support some of the less frequently used LSP features.

When looking back, development of N4JS has been based on the Xtext framework from the start and thus it was straightforward to build an Eclipse-based IDE as our main development tool. Later on, we also implemented a headless compiler used for manual and automated testing from the command line. The development of the compiler already indicated some problems stemming from the tight integration of the Eclipse and the Xtext frameworks together with our language specific implementations. To name an example, we had two separate builder implementations: one for the IDE and the other for the headless compiler. Since the Eclipse IDE is using a specific workspace and project model, we also had two implementations for this abstraction. Another important problem we faced with developing an Eclipse-based IDE was that at some points we had to implement UI tests using the SWTBot framework. For us, SWTBot tests turned out to be very hard to develop, to maintain, and to keep from becoming flaky. Shifting to LSP-based development tools, i.e. the headless compiler and an LSP server, allows us to overcome the aforementioned problems.

Users of N4JS now have the option to either use our extension for VSCode or integrate our LSP server into their favorite IDE themselves, even into the Eclipse IDE. They also benefit from more lightweight tools regarding disk size and start-up performance, as well as a better integration into well-known tools from the JavaScript development ecosystem.

by n4js dev at September 08, 2020 11:00 AM

No Java? No Problem!

by Ed Merks at August 18, 2020 07:50 AM

For the 2020-09 Eclipse Simultaneous Release, the Eclipse IDE will require Java 11 or higher to run.  If the user doesn't have that installed, Eclipse simply won't start, instead popping up this dialog: 

That of course raises the question: what should I do now? The Eclipse Installer itself is an Eclipse application, so it too will fail to start for the same reason. At least on Windows, the Eclipse Installer is distributed as a native executable, so it will open a semi-helpful page in the browser to direct the user to find a suitable JRE or JDK to install, rather than popping up the above dialog.

Of course we are concerned that many users will update from 2020-06 to 2020-09 only to find that Eclipse fails to start afterwards because they are currently running with Java 8. But Mickael Istria has planned ahead for this as part of the 2020-06 release, adding a validation check during the update process to determine if the current JVM is suitable for the update, thereby helping prevent this particular problem.

Now that JustJ is available for building Eclipse products with an embedded JRE, we can do even better.  Several of the Eclipse Packaging Project's products will include a JustJ JRE in the packages for 2020-09, i.e., the C/C++, Rust, and JavaScript packages.  Also the Eclipse Installer for 2020-09 will provide product variants that include a JustJ JRE.  So they all will simply run out of the box regardless of which version of Java is installed and of course even when Java is not installed at all.

Even better, the Eclipse Installer will provide JustJ JREs as choices in the dialogs.  A user who does not have Java installed will be offered JustJ JRE 14.02 as the default JRE.

Choices of JustJ JREs will always be available in the Eclipse Installer; it will be the default only if no suitable version of Java is currently installed on the machine.

Eclipse Installers with an embedded JustJ JRE will be available starting with 2020-09 M3 for all supported platforms.  For a sneak preview, you can find them in the nightly builds folder.  The ones with "-jre" in the name contain an embedded JRE (and the ones with "-restricted" in the name will only install 2020-09 versions of the products).

It was a lot of work getting this all in place, both building the JREs and updating Oomph's build to consume them.  Not only that, just this week I had to rework EMF's build so that it functions with the latest platform where some of the JDT bundles have incremented their BREEs to Java 11.  There's always something disruptive that creates a lot of work.  I should point out that no one funds this work, so I often question how this is all actually sustainable in the long term (not to mention questioning my personal sanity).

I did found a small GmbH here in Switzerland.  It's very pretty here!

If you need help, consider that help is available. If no one pays for anything, at some point you will only get what you pay for, i.e., nothing. But that's a topic for another blog...

by Ed Merks at August 18, 2020 07:50 AM

Dogfooding the Eclipse Dash License Tool

by waynebeaton at July 22, 2020 03:43 PM

There’s background information about this post in my previous post. I’ve been using the Eclipse Dash License Tool on itself.

$ mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 7 items.
Found 6 items.
Querying ClearlyDefined for license data for 1 items.
Found 1 items.
Vetted license information was found for all content. No further investigation is required.
$ _

Note that in this example, I’ve removed the paths to try and reduce at least some of the clutter. I also tend to add a filter to sort the dependencies and remove duplicates (| sort | uniq), but that’s not required here so I’ve left it out.

The message that “[v]etted license information was found for all content”, means that the tool figures that all of my project’s dependencies have been fully vetted and that I’m good to go. I could, for example, create a release with this content and be fully aligned with the Eclipse Foundation’s Intellectual Property Policy.

The tool is, however, only as good as the information that it’s provided with. Checking only the Maven build completely misses the third party content that was introduced by Jonah’s helpful contribution, which helps us obtain dependency information from a yarn.lock file.

$ cd yarn
$ node index.js | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 1 items.
Found 0 items.
Querying ClearlyDefined for license data for 1 items.
Found 0 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.

$ _

So… oops. Missed one.

Note that the updates to the IP Policy include a change that allows project teams to leverage third-party content (that they believe to be license compatible) in their project code during development. All content must be vetted by the IP due diligence process before it may be leveraged by any release. So the project in its current state is completely onside, but the license of that identified bit of content needs to be resolved before it can be declared as proper release as defined by the Eclipse Foundation Development Process.

This actually demonstrates why I opted to create the tool as a CLI that takes a flat list of dependencies as input: we use all sorts of different technologies, and I wanted to focus the tool on providing license information for arbitrary lists of dependencies.

I’m sure that Denis will be able to rewrite my bash one-liner in seven keystrokes, but here’s how I’ve combined the two so that I can get complete picture with a “single” command:

$ { mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" ; cd yarn && node index.js; } | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 8 items.
Found 6 items.
Querying ClearlyDefined for license data for 2 items.
Found 1 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.
$ _

I have some work to do before I can release. I’ll need to engage with the Eclipse Foundation’s IP Team to have that one bit of content vetted.

As a side effect, the tool generates a DEPENDENCIES file. The dependency file lists all of the dependencies provided in the input in ClearlyDefined coordinates along with license information, whether or not the content is approved for use or is restricted (meaning that further investigation is required), and the authority that determined the status.

maven/mavencentral/org.glassfish/jakarta.json/1.1.6, EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0, approved, emo_ip_team
maven/mavencentral/commons-codec/commons-codec/1.11, Apache-2.0, approved, CQ15971
maven/mavencentral/org.apache.httpcomponents/httpcore/4.4.13, Apache-2.0, approved, CQ18704
maven/mavencentral/commons-cli/commons-cli/1.4, Apache-2.0, approved, CQ13132
maven/mavencentral/org.apache.httpcomponents/httpclient/4.5.12, Apache-2.0, approved, CQ18703
maven/mavencentral/commons-logging/commons-logging/1.2, Apache-2.0, approved, CQ10162
maven/mavencentral/org.apache.commons/commons-csv/1.8, Apache-2.0, approved, clearlydefined
npm/npmjs/@yarnpkg/lockfile/1.1.0, unknown, restricted, none

Most of the content was vetted by the Eclipse Foundation’s IP Team (the entries marked “CQ*” have corresponding entries in IPZilla), one was found in ClearlyDefined, and one requires further investigation.

The tool produces good results. But, as I stated earlier, it’s only as good as the input that it’s provided with and it only does what it is designed to do (it doesn’t, for example, distinguish between prerequisite dependencies and dependencies of “works with” dependencies; more on this later). The output of the tool is obviously a little rough and could benefit from the use of a proper configurable logging framework. There’s a handful of other open issues for your consideration.

by waynebeaton at July 22, 2020 03:43 PM

Why ServiceCaller is better (than ServiceTracker)

July 07, 2020 07:00 PM

My previous post spurred a reasonable amount of discussion, and I promised to also talk about the new ServiceCaller, which simplifies a number of these issues. I also thought it was worth looking at the criticisms because they made valid points.

The first observation is that it’s possible to use both DS and ServiceTracker to track ServiceReferences instead. In this mode, the services aren’t triggered by default; instead, they only get accessed upon resolving the ServiceTracker using the getService() call. This isn’t the default out of the box, because you have to write a ServiceTrackerCustomizer adapter that intercepts the addingService() call to wrap the ServiceReference for future use. In other words, if you change:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);

to the slightly more verbose:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class,
  new ServiceTrackerCustomizer<Runnable, Wrapped<Runnable>>() {
    public Wrapped<Runnable> addingService(ServiceReference<Runnable> ref) {
      return new Wrapped<>(ref, bundleContext);
    }
    ...
  });

static class Wrapped<T> {
  private ServiceReference<T> ref;
  private BundleContext context;
  public Wrapped(ServiceReference<T> ref, BundleContext context) {
    this.ref = ref;
    this.context = context;
  }
  public T getService() {
    try {
      return context.getService(ref);
    } finally {
      context.ungetService(ref);
    }
  }
}

Obviously, no practical code uses this approach because it’s too verbose, and if you’re in an environment where DS services aren’t widely used, the benefits of the deferred approach are outweighed by the quantity of additional code that needs to be written in order to implement this pattern.

(The code above is also slightly buggy; we’re getting the service, returning it, then ungetting it afterwards. We should really just be using it during that call instead of returning it in that case.)

Introducing ServiceCaller

This is where ServiceCaller comes in.

The approach of the ServiceCaller is to optimise out the over-eager dereferencing of the ServiceTracker approach, and apply a functional approach to calling the service when required. It also has a mechanism to do single-shot lookups and calling of services; helpful, for example, when logging an obscure error condition or other rarely used code path.

This allows us to elegantly call functional interfaces in a single line of code:

Class<?> callerClass = getClass();
ServiceCaller.callOnce(callerClass, Runnable.class, Runnable::run);

This call looks for Runnable service types, as visible from the caller class, and then invokes the service via the given function. We can use a method reference (as in the above case) or supply a Consumer<T>, which will be passed the instance resolved from the lookup.
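ServiceCaller itself needs an OSGi runtime, but the functional shape of that last argument can be shown in plain Java: a method reference and an explicit Consumer<T> are interchangeable. The demo class below is illustrative only, not part of any Eclipse API:

```java
import java.util.function.Consumer;

public class MethodRefDemo {
    public static void main(String[] args) {
        // Two equivalent ways to express "invoke the Runnable you are given":
        // a method reference, or an explicit lambda wrapped as a Consumer.
        Consumer<Runnable> byMethodRef = Runnable::run;
        Consumer<Runnable> byLambda = r -> r.run();

        final StringBuilder log = new StringBuilder();
        byMethodRef.accept(() -> log.append("A"));
        byLambda.accept(() -> log.append("B"));
        System.out.println(log); // AB
    }
}
```

Either form can be handed to a call site expecting a Consumer<Runnable>; the method reference is simply the terser spelling.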

Importantly, this call doesn’t acquire the service until the callOnce call is made. So, if you have an expensive logging factory, you don’t have to initialise it until the first time it’s needed – and even better, if the error condition never occurs, you never need to look it up. This is in direct contrast to the ServiceTracker approach (which actually needs more characters to type) that accesses the services eagerly, and is an order of magnitude better than having to write a ServiceTrackerCustomiser for the purposes of working around a broken API.

However, note that such one-shot calls are not the most efficient way of doing this, especially if it is to be called frequently. So the ServiceCaller has another mode of operation; you can create a ServiceCaller instance, and hang onto it for further use. Like its single-shot counterpart, this will defer the resolution of the service until needed. Furthermore, once resolved, it will cache that instance so you can repeatedly re-use it, in the same way that you could do with the service returned from the ServiceTracker.

private ServiceCaller<Runnable> service;
public void start(BundleContext context) {
  this.service = new ServiceCaller<>(getClass(), Runnable.class);
}
public void stop(BundleContext context) {
  this.service.unget();
}
public void doSomething() {
  this.service.call(Runnable::run);
}

This doesn’t involve significantly more effort than using the ServiceTracker that’s widely in use in Eclipse Activators at the moment, yet will defer the lookup of the service until it’s actually needed. It’s obviously better than writing many lines of ServiceTrackerCustomiser and performs better as a result, and is in most cases a type of drop-in replacement. However, unlike ServiceTracker (which returns you a service that you can then do something with afterwards), this call provides a functional consumer interface that allows you to pass in the action to take.

Wrapping up

We’ve looked at why ServiceTracker has problems with eager instantiation of services, and the complexity of the code required to do it the right way. A scan of the Eclipse codebase suggests that outside of Equinox there are very few uses of ServiceTrackerCustomiser, and there are several hundred calls to ServiceTracker(xxx,yyy,null) – so there are a lot of improvements that can be made fairly easily.

This pattern can also be used to push down the acquisition of the service from a generic Plugin/Activator level call to where it needs to be used. Instead of standing this up in the BundleActivator, the ServiceCaller can be used anywhere in the bundle’s code. This is where the real benefit comes in; by packaging it up into a simple, functional consumer, we can use it to incrementally rid ourselves of the various BundleActivators that take up the majority of Eclipse’s start-up.

A final note on the ServiceCaller – it’s possible that when you run the callOnce method (or the call method if you’re holding on to it) a service instance won’t be available. If that’s the case, the call method returns false; if a service is found and invoked, it returns true. For some operations, a no-op is fine behaviour if the service isn’t present – for example, if there’s no LogService then you’re probably going to drop the log event anyway – but the return value allows you to take whatever corrective action you need.

It does mean that if you want to capture return state from the method call then you’ll need an alternative approach. The easiest way is to declare a final Object[] result = new Object[1]; before the call; the lambda can then assign the return value to the array. That’s because local state captured by lambdas needs to be a final reference, but a final reference to a mutable single-element array allows us to poke a single value back. You could of course use a different type for the array, depending on your requirements.
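Here is that array trick in isolation. The callOnce method below is a hypothetical stand-in for the real ServiceCaller call, just enough to show the lambda writing its result back through the captured array:

```java
import java.util.function.Consumer;

public class CaptureReturnState {
    // Hypothetical stand-in for ServiceCaller.callOnce: invokes the
    // consumer with a service instance, the way the real call would.
    static void callOnce(Consumer<Runnable> consumer) {
        consumer.accept(() -> {});
    }

    public static void main(String[] args) {
        // Locals captured by a lambda must be effectively final, so we
        // capture a final reference to a mutable one-element array and
        // let the lambda poke its result back through slot 0.
        final Object[] result = new Object[1];
        callOnce(service -> {
            service.run();
            result[0] = "completed";
        });
        System.out.println(result[0]); // completed
    }
}
```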

So, we have seen that ServiceCaller is better than ServiceTracker, but can we do even better than that? We certainly can, and that’s the purpose of the next post.

July 07, 2020 07:00 PM

Why ServiceTracker is Bad (for DS)

July 02, 2020 07:00 PM

In a presentation I gave at EclipseCon Europe in 2016, I noted that there were problems when using ServiceTracker. Slide 37 of my presentation noted that ServiceTracker.open():

  • is a blocking call
  • results in DS activating services

Unfortunately, not everyone agrees because it seems insane that ServiceTracker should do this.

Unfortunately, ServiceTracker is insane.

The advantage of Declarative Services (aka SCR, although no-one calls it that) is that you can register services declaratively, but more importantly, the DS runtime will present the existence of the service but defer instantiation of the component until it’s first requested.

The great thing about this is that you can have a service which performs many class loads or time-consuming actions and defer that work until the service is actually needed. If your service isn’t required, then you don’t pay the cost of instantiating it. I don’t think there’s any debate that this is a Good Thing and everyone, so far, is happy.
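DS itself needs an OSGi runtime, but the JVM’s own lazy class initialization gives a plain-Java feel for the same pay-on-first-use idea (the class names here are illustrative only):

```java
public class DeferredInit {
    // The JVM initializes a class only on first use, which mirrors the
    // pay-on-first-use behaviour DS gives you for service components.
    static class ExpensiveService {
        static {
            System.out.println("ExpensiveService initialized");
        }
        static void serve() {
            System.out.println("serving");
        }
    }

    public static void main(String[] args) {
        System.out.println("application started");
        // ExpensiveService has not been initialized yet...
        ExpensiveService.serve(); // ...until this first call triggers it
    }
}
```

If serve() is never called, the static initializer never runs, and you never pay its cost.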


The problem, specifically when using ServiceTracker, is that you have to follow a multi-step process to use it:

  1. You create a ServiceTracker for your particular service class
  2. You call open() on it to start looking for services
  3. Time passes
  4. You acquire the service from the ServiceTracker to do something with it

There is a generally held mistaken belief that the DS component is not instantiated until you hit step 4 in the above. After all, if you’re calling the service from another component – or even looking up the ServiceReference yourself – that’s what would happen.

What actually happens is that the DS component is instantiated in step 2 above. That’s because the open() call – which is nicely thread-safe by the way, in the way that getService() isn’t – starts looking for services, and then caches the InitialTracked service, which causes DS to instantiate the component for you. Since most DS components often have a default, no-arg constructor, this generally misses most people’s attention.

If your component’s constructor – or, more importantly, the fields therein – causes many classes to be loaded or performs substantial work or calculation, the fact that you’re hitting a synchronized call can take a non-trivial amount of time. And since this is typically in an Activator.start() method, it means that your nicely delay-until-it’s-needed component is now on the critical path of the bundle’s start-up, despite the service not actually being needed right now.

This is one of the main problems in Eclipse’s start-up; many, many thousands of classes are loaded too eagerly. I’ve been working over the years to try and reduce the problem but it’s an uphill struggle and bad patterns (particularly the use of Activator) are endemic in a non-trivial subset of the Eclipse ecosystem. Of course, there are many fine and historical reasons why this is the case, not the least of which is that we didn’t start shipping DS in the Eclipse runtime until fairly recently.

Repo repro

Of course, when you point this out, not everyone is aware of this subtle behaviour. And while opinions may differ, code does not. I have put together a sample project which has two bundles:

  • Client, which has an Activator (yeah I know, I’m using it to make a point) that uses a ServiceTracker to look for Runnable instances
  • Runner, which has a DS component that provides a Runnable interface

When launched together, as soon as the open() method is called, you can see the console printing a "Component has been instantiated" message. This is despite the Client bundle never actually using the service that the ServiceTracker causes to be obtained.

If you run it with the system property -DdisableOpen=true, the open() call is not made, and the component is not instantiated.

This is a non-trivial reason as to why Eclipse startup can be slow. There are many, many uses of ServiceTracker to reach out to other parts of the system, and regardless of whether these are lazy DS components or have been actively instantiated, the use of open() causes them all to be eagerly activated, even before they’re needed. We can migrate Eclipse’s services to DS (and in fact, I’m working on doing just that) but until we eliminate the ServiceTracker from various Activators, we won’t see the benefit.

The code in the github repository essentially boils down to:

public void start(BundleContext bundleContext) throws Exception {
  serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);
  if (!Boolean.getBoolean("disableOpen")) {
    serviceTracker.open(); // This will cause a DS component to be instantiated even though we don't use it
  }
}

Unfortunately, there’s no way to use ServiceTracker to listen to lazily activated services without triggering their activation, and as an OSGi standard, the behaviour is baked into it.

Fortunately, there’s a lighter-weight tracker you can use called ServiceCaller – but that’s a topic for another blog post.


Using ServiceTracker.open() will cause lazily instantiated DS components to be activated eagerly, before the service is used. Instead of using ServiceTracker, try moving your service out to a DS component, and then DS will do the right thing.

July 02, 2020 07:00 PM

How to install RDi in the latest version of Eclipse

by Wim at June 30, 2020 03:57 PM

Monday, June 29, 2020
In this blog, I am going to show you how to install IBM RDi into the latest and the greatest version of Eclipse. If you prefer to watch a video then scroll down to the end. **EDIT** DOES NOT WORK WITH ECLIPSE 2020/09 AND HIGHER.

Read more

by Wim at June 30, 2020 03:57 PM

Quarkus – Supersonic Subatomic IoT

by Jens Reimann at June 30, 2020 03:22 PM

Quarkus is advertised as a “Kubernetes Native Java stack, …”, so we put it to the test and checked what benefits we can get by replacing an existing service from the IoT components of EnMasse, the cloud-native, self-service messaging system.

The context

For quite a while, I wanted to try out Quarkus. I wanted to see what benefits it brings us in the context of EnMasse. The IoT functionality of EnMasse is provided by Eclipse Hono™, which is a micro-service based IoT connectivity platform. Hono is written in Java, makes heavy use of Vert.x, and the application startup and configuration are orchestrated by Spring Boot.

EnMasse provides the scalable messaging back-end, based on AMQP 1.0. It also takes care of the Eclipse Hono deployment alongside EnMasse, wiring up the different services based on an infrastructure custom resource. In a nutshell, you create a snippet of YAML, and EnMasse takes care of the rest and deploys a messaging system for you, with first-class support for IoT.

Architectural overview – showing the Tenant Service

This system requires a service called the “tenant service”. That service is responsible for looking up an IoT tenant, whenever the system needs to validate that a tenant exists or when its configuration is required. Like all the other services in Hono, this service is implemented using the default stack, based on Java, Vert.x, and Spring Boot. Most of the implementation is based on Vert.x alone, using its reactive and asynchronous programming model. Spring Boot is only used for wiring up the application, using dependency injection and configuration management. So this isn’t a typical Spring Boot application, it is neither using Spring Web or any of the Spring Messaging components. And the reason for choosing Vert.x over Spring in the past was performance. Vert.x provides an excellent performance, which we tested a while back in our IoT scale test with Hono.

The goal

The goal was simple: make it use fewer resources while keeping the same functionality. We didn’t want to re-implement the whole service from scratch. And while the tenant service is specific to EnMasse, it still uses quite a lot of the base functionality coming from Hono. And we wanted to re-use all of that, as we did with Spring Boot. So this wasn’t one of those nice “greenfield” projects where you can start from scratch with a nice and clean “Hello World”. This code is embedded in two bigger projects, passes system tests, and has a history of its own.

So, change as little as possible and get out as much as we can. What else could it be?! And just to understand from where we started, here is a screenshot of the metrics of the tenant service instance on my test cluster:

Metrics for the original Spring Boot application

Around 200MiB of RAM, a little bit of CPU, and not much to do. As mentioned before, the tenant service only gets queries to verify the existence of a tenant, and the system will cache this information for a bit.

Step #1 – Migrate to Quarkus

To use Quarkus, we started to tweak our existing project, to adopt the different APIs that Quarkus uses for dependency injection and configuration. And to be fair, that mostly meant saying good-bye to Spring Boot specific APIs, going for something more open. Dependency Injection in Quarkus comes in the form of CDI. And Quarkus’ configuration is based on Eclipse MicroProfile Config. In a way, we didn’t migrate to Quarkus, but away from Spring Boot specific APIs.

First steps

We started by adding the Quarkus Maven plugin and some basic dependencies to our Maven build, and off we went.

And while replacing dependency injection was a rather smooth process, the configuration part was a bit more tricky. Both Hono and MicroProfile Config have a rather opinionated view of configuration, which made it problematic to enhance the Hono configuration in a way that kept MicroProfile happy. So for the first iteration, we ended up wrapping the Hono configuration classes to make them play nicely with MicroProfile. However, this is something that we intend to improve in Hono in the future.

Packaging the JAR into a container was no different than with the existing version. We only had to adapt the EnMasse operator to provide application arguments in the form Quarkus expected them.

First results

From a user perspective, nothing has changed. The tenant service still works the way it is expected to work and provides all the APIs as it did before. Just running with the Quarkus runtime, and the same JVM as before:

Metrics after the conversion to Quarkus, in JVM mode

We can directly see a drop of 50MiB, from 200MiB to 150MiB of RAM; that isn’t bad. CPU isn’t really different, though. There is also a slight improvement in startup time, from ~2.5 seconds down to ~2 seconds. But that isn’t a real game-changer, I would say. And ~2.5 seconds of startup time for a Spring Boot application is actually not too bad; other services take much longer.

Step #2 – The native image

Everyone wants to do Java “native compilation”. I guess the expectation is that native compilation makes everything go much faster. There are different tests by different people, comparing native compilation and JVM mode, and the outcomes vary a lot. I don’t think that “native images” are a silver bullet to performance issues, but still, we have been curious to give it a try and see what happens.

Native image with Quarkus

Enabling native image mode in Quarkus is trivial. You need to add a Maven profile, set a few properties and you have native image generation enabled. With setting a single property in the Maven POM file, you can also instruct the Quarkus plugin to perform the native compilation step in a container. With that, you don’t need to worry about the GraalVM installation on your local machine.

Native image generation can be tricky, we knew that. However, we didn’t expect this to be as complex as being “Step #2”. In a nutshell, creating a native image compiles your code to CPU instructions rather than JVM bytecode. In order to do that, it traces the call graph, and it fails to do so when it comes to reflection in Java. GraalVM supports reflection, but you need to provide the information about the types, classes, and methods that participate in the reflection system from the outside. Luckily, Quarkus provides tooling to generate this information during the build. Quarkus knows about constructs like de-serialization in Jackson and can generate the required information for GraalVM to compile this correctly.

However, the magic only works in areas that Quarkus is aware of. So we did run into some weird issues, strange behavior that was hard to track down. Things that worked in JVM mode all of a sudden were broken in native image mode. Not all the hints are in the documentation. And we also didn’t read (or understand) all of the hints that are there. It takes a bit of time to learn, and with a lot of help from some colleagues (many thanks to Georgios, Martin, and of course Dejan for all the support), we got it running.

What is the benefit?

After all the struggle, what did it give us?

Metrics when running as a native image Quarkus application

So, we are down another 50MiB of RAM. Starting from ~200MiB, down to ~100MiB. That is only half the RAM! Also, this time, we see a reduction in CPU load. While in JVM mode (both Quarkus and Spring Boot), the CPU load was around 2 millicores, now the CPU is always below that, even during application startup. Startup time is down from ~2.5 seconds with Spring Boot, to ~2 seconds with Quarkus in JVM mode, to ~0.4 seconds for Quarkus in native image mode. Definitely an improvement, but still, neither of those times is really bad.

Pros and cons of Quarkus

Switching to Quarkus was no problem at all. We found a few areas in the Hono configuration classes to improve. But in the end, we can keep the original Spring Boot setup and have Quarkus at the same time. Possibly other Microprofile compatible frameworks as well, though we didn’t test that. Everything worked as before, just using less memory. And except for the configuration classes, we could pretty much keep the whole application as it was.

Native image generation was more complex than expected. However, we also saw some real benefits. And while we didn’t do any performance tests on that, here is a thought: if the service has the same performance as before, the fact that it requires only half the memory and half the CPU cycles allows us to run twice the number of instances now – doubling throughput, as we can scale horizontally. I am really looking forward to another scale test, since we did all other kinds of optimizations as well.

You should also consider that the process of building a native image takes quite an amount of time. For this rather simple service, it takes around 3 minutes on an above-average machine just to build the native image. I did notice some decent improvement when trying out GraalVM 20.0 over 19.3, so I would expect more improvements in the toolchain over time. Things like hot code replacement while debugging are not possible with the native image profile, though. It is a different workflow, and that may take a bit of adapting. However, you don’t need to commit to either way. You can have both at the same time: work with JVM mode and the Quarkus development mode, and then enable the native image profile whenever you are ready.

Taking a look at the size of the container images, I noticed that the native image isn’t smaller (~85 MiB), compared to the uber-JAR file (~45 MiB). Then again, our “java base” image alone is around ~435 MiB. And it only adds the JVM on top of the Fedora minimal image. As you don’t need the JVM when you have the native image, you can go directly with the Fedora minimal image, which is around ~165 MiB, and end up with a much smaller overall image.


Switching our existing Java project to Quarkus wasn’t a big deal. It required some changes, yes. But those changes also mean using more open APIs, governed by the Eclipse Foundation’s development process, instead of Spring Boot specific APIs. And while you can still use Spring Boot, changing the configuration to Eclipse MicroProfile opens up other possibilities as well – not only Quarkus.

Just by taking a quick look at the numbers, comparing the figures from Spring Boot to Quarkus with native image compilation: RAM consumption was down to 50% of the original, CPU usage also was down to at least 50% of original usage, and the container image shrank to ~50% of the original size. And as mentioned in the beginning, we have been using Vert.x for all the core processing. Users that make use of the other Spring components should see more considerable improvement.

Going forward, I hope we can bring the changes we made to the next versions of EnMasse and Eclipse Hono. There is a real benefit here, and it provides you with some awesome additional choices. And in case you don’t like to choose, the EnMasse operator has some reasonable defaults for you 😉

Also see

This work is based on the work of others. Many thanks to:

The post Quarkus – Supersonic Subatomic IoT appeared first on ctron's blog.

by Jens Reimann at June 30, 2020 03:22 PM

Updates to the Eclipse IP Due Diligence Process

by waynebeaton at June 25, 2020 07:23 PM

In October 2019, The Eclipse Foundation’s Board of Directors approved an update to the IP Policy that introduces several significant changes in our IP due diligence process. I’ve just pushed out an update to the Intellectual Property section in the Eclipse Foundation Project Handbook.

I’ll apologize in advance that the updates are still a little rough and require some refinements. Like the rest of the handbook, we continually revise and rework the content based on your feedback.

Here’s a quick summary of the most significant changes.

License certification only for third-party content. This change removes the requirement to perform deep copyright and provenance review, or scanning for anomalies, for third-party content unless it is being modified and/or there are special considerations regarding the content. Instead, the focus for third-party content is on license compatibility only, which had previously been referred to as Type A due diligence.

Leverage other sources of license information for third-party content. With this change to license certification only for third-party content, we are able to leverage existing sources of license information. That is, the requirement that the Eclipse IP Team personally review every bit of third-party content has been removed, and we can now leverage other trusted sources.

ClearlyDefined is a trusted source of license information. We currently have two trusted sources of license information: The Eclipse Foundation’s IPZilla and ClearlyDefined. The IPZilla database has been painstakingly built over most of the lifespan of the Eclipse Foundation; it contains a vast wealth of deeply vetted information about many versions of many third-party libraries. ClearlyDefined is an OSI project that combines automated harvesting of software repositories and curation by trusted members of the community to produce a massive database of license (and other) information about content.

Piggyback CQs are no longer required. CQs had previously been used for tracking both the vetting process and the use of third-party content. With the changes, we are no longer required track the use of third-party content using CQs, so piggyback CQs are no longer necessary.

Parallel IP is used in all cases. Previously, our so-called Parallel IP process, the means by which project teams could leverage content during development while the IP Team completed their due diligence review, was available only to projects in the incubation phase and only for content meeting specific conditions. This is no longer the case: full vetting is now always applied in parallel, in all cases.

CQs are not required for third-party content in all cases. In the case of third-party content due diligence, CQs are now only used to track the vetting process.

CQs are no longer required before third-party content is introduced. Previously, the IP Policy required that all third-party content must be vetted by the Eclipse IP Team before it can be used by an Eclipse Project. The IP Policy updates turn this around. Eclipse project teams may now introduce new third-party content during a development cycle without first checking with the IP Team. That is, a project team may commit build scripts, code references, etc. to third-party content to their source code repository without first creating a CQ to request IP Team review and approval of the third-party content. At least during the development period between releases, the onus is on the project team to ensure, with reasonable confidence, that any third-party content they introduce is license compatible with the project’s license. Before any content may be included in any formal release, the project team must engage in the due diligence process to validate that the third-party content licenses are compatible with the project license.

History may be retained when an existing project moves to the Eclipse Foundation. We had previously required that the commit history for a project moving to the Eclipse Foundation be squashed and that the initial contribution be the very first commit in the repository. This is no longer the case; existing projects are now encouraged (but not required) to retain their commit history. The initial contribution must still be provided to the IP Team via CQ as a snapshot of the HEAD state of the existing repository (if any).

The due diligence process for project content is unchanged.

If you notice anything that looks particularly wrong or troubling, please either open a bug report, or send a note to EMO.

by waynebeaton at June 25, 2020 07:23 PM

Eclipse JustJ

by Ed Merks at June 25, 2020 08:18 AM

I've recently completed the initial support for provisioning the new Eclipse JustJ project, complete with a logo for it.

I've learned several new technologies and honed existing technology skills to make this happen. For example, I've previously used Inkscape to create nicer images for Oomph; a *.png with alpha is much better than a *.gif with a transparent pixel, particularly with the vogue, dark-theme fashion trend, which for old people like me feels more like the old days of CRT monitors than something modern, but hey, to each their own. In any case, a *.svg is cool, definitely looks great at every resolution, and can easily be rendered to a *.png.

By the way, did you know that artwork derivative of  Eclipse artwork requires special approval? Previously the Eclipse Board of Directors had to review and approve such logos, but now our beloved, supreme leader, Mike Milinkovich, is empowered to do that personally.

Getting to the point where we can redistribute JREs at Eclipse has been a long and winding road. This of course required Board approval, and your elected Committer Representatives helped push that to fruition last year. Speaking of which, there is now an exciting late-breaking development: the move of AdoptOpenJDK to Eclipse Adoptium. This will be an important source of JREs for JustJ!

One of the primary goals of JustJ is to provide JREs via p2 update sites such that a product build can easily incorporate a JRE into the product. With that in place, the product runs out-of-the-box regardless of the JRE installed on the end-user's computer, which is particularly useful for products that are not Java-centric where the end-user doesn't care about the fact that Eclipse is implemented using Java.  This will also enable the Eclipse Installer to run out-of-the-box and will enable the installer to create an installation that, at the user's discretion, uses a JRE provided by Eclipse. In all cases, this includes the ability to update the installation's embedded JRE as new ones are released.

The first stage is to build a JRE from a JDK using jlink.  This must run natively on the JDK's actual supported operating system and hardware architecture.  Of course we want to automate this step, and all the steps involved in producing a p2 repository populated with JREs.  This is where I had to learn about Jenkins pipeline scripts.  I'm particularly grateful to Mikaël Barbero for helping me get started with a simple example.  Now I am a pipeline junkie, and of course I had to learn Groovy as well.

In the initial stage, we generate the JREs themselves, and that involves using shell scripts effectively. I'm not a big fan of shell scripts, but they're a necessary evil. I authored a single script that produces JREs on all the supported operating systems; one that I can run locally on Windows and on my two virtual boxes as well. The pipeline itself needs to run certain stages on specific agents such that their steps are performed on the appropriate operating system and hardware. I'm grateful to Robert Hilbrich of DLR for supporting JustJ's builds with their organization's resource packs! He's also been kind enough to be one of our first guinea pigs, building a product with a JustJ JRE. The initial stage produces a set of JREs.

In the next stage, JREs need to be wrapped into plugins and features to produce a p2 repository via a Maven/Tycho build.  This is a huge amount of boilerplate scaffolding that is error-prone to author and challenging to maintain, especially when providing multiple JRE flavors.  So of course we want to automate the generation of this scaffolding as well.  Naturally if we're going to generate something, we need a model to capture the boiled-down essence of what needs to be generated.  So I whipped together an EMF model and used JET templates to sketch out the scaffolding. With the super cool JET Editor, these are really easy to author and maintain.  This stage is described in the documentation and produces a p2 update site.  The sites are automatically maintained and the index pages are automatically generated.

To author nice documentation I had to learn PHP much better.  It's really quite cool and very powerful, particularly for producing pages with dynamic content.  For example, I used it to implement more flexible browsing support so that one can really see all the files present, even when there is an index.html or index.php in the folder.  In any case, there is now lots of documentation for JustJ to describe everything in detail, and it was authored with the help of PHP scaffolding.

Last but not least, there is an Oomph setup to automate the provisioning of a full development environment along with a tutorial to describe in detail everything in that workspace.  There's no excuse not to contribute.  While authoring this tutorial, I found that creating nice, appropriately-clipped screen captures is super annoying and very time-consuming, so I dropped a little goodie into Oomph to make that easier.  You might want to try it. Just add "-Dorg.eclipse.oomph.ui.screenshot=<some-folder-location>" to your eclipse.ini to enable it.  Then, if you hit Ctrl twice quickly, screen captures will be produced immediately based on where your application currently has focus.  If you hit Shift twice quickly, screen captures will be produced after a short delay.  This allows you to bring up a menu from the menu bar, from a toolbar button, or a context menu, and capture that menu.  In all cases, the captures include the "simulated" mouse cursor and start with the "focus", expanding outward to the full enclosing window.
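Enabling the property amounts to appending one line to eclipse.ini. A small sketch (the temp file below just stands in for your real eclipse.ini, and the screenshot folder is an example; in a real eclipse.ini the -D line must appear below the -vmargs line):

```shell
# Stand-in for a real eclipse.ini; in practice append the -D line
# after the -vmargs line of the actual file.
INI=$(mktemp)
SHOT_DIR="$HOME/screenshots"
printf -- '-Dorg.eclipse.oomph.ui.screenshot=%s\n' "$SHOT_DIR" >> "$INI"
cat "$INI"
```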

The bottom line: JustJ generates everything given just a set of URLs to JDKs as input, and it maintains everything automatically.  It even provides an example of how to build a product with an embedded JRE to get you started quickly.  And thanks to some test guinea pigs, we know it really works as advertised.

On the personal front, during this time period, I finished my move to Switzerland.  Getting up early here is a feast for the eyes! The movers were scurrying around my apartment the same day as the 2020-06 release, which was also the same day as one of the Eclipse Board meetings.  That was a little too much to juggle at once!

At this point, I can make anything work and I can make anything that already works work even better. Need help with something?  I'm easy to find...

by Ed Merks ( at June 25, 2020 08:18 AM

Clean Sheet Service Update (0.8)

by Frank Appel at May 23, 2020 09:25 AM

Written by Frank Appel

Thanks to a community contribution we’re able to announce another Clean Sheet Service Update (0.8).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge): Eclipse IDE Look and Feel: Clean Sheet screenshot. For more information please refer to the feature's landing page, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.


Clean Sheet Service Update (0.8)

This service update fixes a rendering issue of ruler numbers. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to the issue #87 for more details.

Clean Sheet Installation

Drag the 'Install' link below to your running Eclipse instance



Select Help > Install New Software.../Check for Updates.
P2 repository software site: @
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet


On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.8) appeared first on Code Affine.

by Frank Appel at May 23, 2020 09:25 AM

Clean Sheet Service Update (0.7)

by Frank Appel at April 24, 2020 08:49 AM

Written by Frank Appel

It’s been a while, but today we’re happy to announce a Clean Sheet Service Update (0.7).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge): Eclipse IDE Look and Feel: Clean Sheet screenshot. For more information please refer to the feature's landing page, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.


Clean Sheet Service Update (0.7)

This service update provides the long overdue JRE 11 compatibility on Windows platforms. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to the issues #88 and #90 for more details.

Clean Sheet Installation

Drag the 'Install' link below to your running Eclipse instance



Select Help > Install New Software.../Check for Updates.
P2 repository software site: @
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet


On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.7) appeared first on Code Affine.

by Frank Appel at April 24, 2020 08:49 AM

Using the remote OSGi console with Equinox

by Mat Booth at April 23, 2020 02:00 PM

You may be familiar with the OSGi shell you get when you pass the "-console" option to Equinox on the command line. Did you know you can also use this console over Telnet sessions or SSH sessions? This article shows you the bare minimum needed to do so.
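For instance, passing a port number to -console makes Equinox listen for telnet connections instead of reading from stdin. A hedged sketch (the jar name and port are examples, and the org.eclipse.equinox.console bundles must be installed for remote access; details vary by Equinox version):

```shell
# Example only: adjust the jar name to your Equinox framework jar, and make
# sure the console bundles needed for remote access are on the bundle path.
EQUINOX_JAR="org.eclipse.osgi.jar"
PORT=2501
CMD="java -jar $EQUINOX_JAR -console $PORT"
echo "$CMD"

# Only launch when the jar is actually present; afterwards connect with:
#   telnet localhost 2501
if [ -f "$EQUINOX_JAR" ]; then
  $CMD &
fi
```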

by Mat Booth at April 23, 2020 02:00 PM

Eclipse Oomph: Suppress Welcome Page

by kthoms at March 19, 2020 04:37 PM

I am frequently spawning Eclipse workspaces with Oomph setups, and the first action I take when a new workspace is provisioned is closing Eclipse's welcome page. I wanted to suppress that for a current project setup, so I started searching for where Eclipse stores the preference that disables the intro page. The location of that preference is within the workspace directory at


The content of the preference file is


So to make Oomph create the preference file before the workspace is started the first time, use a Resource Creation task and set the Target URL


Then put the above mentioned preference content as Content value.
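The concrete path and file content did not survive above. As a hedged reconstruction (the path and keys below are the commonly used ones, not verbatim from the post; verify against your own workspace and Eclipse version), a script creating the equivalent file would look like:

```shell
# Hedged reconstruction: path and keys are assumptions, not taken from the
# post. $WS stands in for the real workspace directory.
WS=$(mktemp -d)
PREFS_DIR="$WS/.metadata/.plugins/org.eclipse.core.runtime/.settings"
mkdir -p "$PREFS_DIR"
cat > "$PREFS_DIR/org.eclipse.ui.prefs" <<'EOF'
eclipse.preferences.version=1
showIntro=false
EOF
cat "$PREFS_DIR/org.eclipse.ui.prefs"
```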

by kthoms at March 19, 2020 04:37 PM

MPS’ Quest of the Holy GraalVM of Interpreters

by Niko Stotz at March 11, 2020 11:19 PM

A vision how to combine MPS and GraalVM

Way too long ago, I prototyped a way to use GraalVM and Truffle inside JetBrains MPS. I hope to pick up this work soon. In this article, I describe the grand picture of what might be possible with this combination.

Part I: Get it Working

Step 0: Teach Annotation Processors to MPS

Truffle uses Java Annotation Processors heavily. Unfortunately, MPS doesn’t support them during its internal Java compilation. The feature request doesn’t show any activity.

So, we have to do it ourselves. Somewhat more recently, I started on an alternative Java Facet to include Annotation Processors. I just pushed my work-in-progress state from 2018. As far as I remember, there were no fundamental problems with the approach.

Optional Step 1: Teach Truffle Structured Sources

For Truffle, all executed programs stem from a Source. However, this Source can only provide Bytes or Characters. In our case, we want to provide the input model. The prototype just put the Node id of the input model as a String into the Source; later steps resolved the id against the MPS API. This approach works and is acceptable; directly passing the input node as an object would be much nicer.

Step 2: Implement Truffle Annotations as MPS Language

We have to provide all additional hints as Annotations to Truffle. They are complex enough, so we want to leverage MPS’ language features to directly represent all Truffle concepts.

This might be a simple one-to-one representation of Java Annotations as MPS Concepts, but I’d guess we can add some more semantics and checks. Such feedback within MPS should simplify the next steps: Annotation Processors (and thus, Truffle) have only limited options to report issues back to us.

We use this MPS language to implement the interpreter for our DSL. This results in a TruffleLanguage for our DSL.

Step 3: Start Truffle within MPS

At the time when I wrote the proof-of-concept, a TruffleLanguage had to be loaded at JVM startup. To my understanding, Truffle overcame this limitation. I haven’t looked into the current possibilities in detail yet.

I can imagine two ways to provide our DSL interpreter to the Truffle runtime:

  1. Always register MpsTruffleLanguage1, MpsTruffleLanguage2, etc. as placeholders. This would also work at JVM startup. If required, we can register additional placeholders with one JVM restart.
    All non-colliding DSL interpreters would be MpsTruffleLanguage1 from Truffle’s point of view. This works, as we know the MPS language for each input model, and can make sure Truffle uses the right evaluation for the node at hand. We might suffer a performance loss, as Truffle would have to manage more evaluations.

    What are non-colliding interpreters? Assume we have a state machine DSL, an expression DSL, and a test DSL. The expression DSL is used within the state machines; we provide an interpreter for both of them.
    We provide two interpreters for the test DSL: One executes the test and checks the assertions, the other one only marks model nodes that are covered by the test.
    The state machine interpreter, the expression interpreter, and the first test interpreter are non-colliding, as they never want to execute on the same model node. All of them go to MpsTruffleLanguage1.
    The second test interpreter does collide, as it wants to do something with a node also covered by the other interpreters. We put it to MpsTruffleLanguage2.

  2. We register every DSL interpreter as a separate TruffleLanguage. Nice and clean one-to-one relation. In this scenario, we would probably have to get Truffle Language Interop right. I have not yet investigated this topic.

Step 4: Translate Input Model to Truffle Nodes

A lot of Truffle’s magic stems from its AST representation. Thus, we need to translate our input model (a.k.a. DSL instance, a.k.a. program to execute) from MPS nodes into Truffle Nodes.

Ideally, the Truffle AST would dynamically pick up any changes of the input model, like hot code replacement in a debugger, except we don’t want to stop the running program. From Truffle’s point of view this shouldn’t be a problem: It rewrites the AST all the time anyway.

DclareForMPS seems a fitting technology. We define mapping rules from MPS node to Truffle Node. Dclare makes sure they are in sync, and input changes are propagated optimally. These rules could either be generic, or be generated from the interpreter definition.

We need to take care that Dclare doesn’t try to adapt the MPS nodes to Truffle’s optimizing AST changes (no back-propagation).

We require special handling for edge cases of MPS → Truffle change propagation, e.g. the user deletes the currently executed part of the program.

For memory optimization, we might translate only the entry nodes of our input model immediately. Instead of the actual child Truffle Nodes, we’d add special nodes that translate the next part of the AST.
Unloading the not required parts might be an issue. Also, on-demand processing seems to conflict with Dclare’s rule-based approach.

Part II: Adapt to MPS

Step 5: Re-create Interpreter Language

The MPS interpreter framework removes even more boilerplate from writing interpreters than Truffle. The same language concepts should be built again, as an abstraction on top of the Truffle Annotation DSL. This would be a new language aspect.

Step 6: Migrate MPS Interpreter Framework

Once we have the Truffle-based interpreter language, we want to use it! Also, we don’t want to rewrite all our nice interpreters.

I think it’s feasible to automatically migrate at least large parts of the existing MPS interpreter framework to the new language. I would expect some manual adjustment, though. That’s the price we’d have to pay for two orders of magnitude performance improvement.

Step 7: Provide Plumbing for BaseLanguage, Checking Rules, Editors, and Tests

Using the interpreter should be as easy as possible. Thus, we have to provide the appropriate utilities:

  • Call the interpreter from any BaseLanguage code.
    We have to make sure we get language / model loading and dependencies right. This should be easier with Truffle than with the current interpreter, as most language dependencies are only required at interpreter build time.
  • Report interpreter results in Checking Rules.
    Creating warnings or errors based on the interpreter’s results is a standard use-case, and should be supported by dedicated language constructs.
  • Show interpreter results in an editor.
    As another standard use-case, we might want to show the interpreter’s results (or a derivative) inside an MPS editor. Especially for long-running or asynchronous calculations, getting this right is tricky. Dedicated editor extensions should take care of the details.
  • Run tests that involve the interpreter.
    Yet another standard use-case: our DSL defines both calculation rules and examples. We want to assure they are in sync, meaning executing the rules in our DSL interpreter and comparing the results with the examples. This must work both inside MPS, and in a headless build / CI test environment.

Step 8: Support Asynchronous Interpretation and/or Caching

The simple implementation of interpreter support accepts a language, parameters, and a program (a.k.a. input model), and blocks until the interpretation is complete.

This working mode is useful in various situations. However, we might want to run long-running interpretations in the background, and notify a callback once the computation is finished.

Example: An MPS editor uses an interpreter to color a rule red if it is not in accordance with a provided example. This interpretation result is very useful, even if it takes several seconds to calculate. However, we don’t want to block the editor (or even whole MPS) for that long.

Extending the example, we might also want to show an error on such a rule. The typesystem runs asynchronously anyway, so blocking is not an issue. However, we now run the same expensive interpretation twice. The interpreter support should provide configurable caching mechanisms to avoid such waste.

Both asynchronous interpretation and caching benefit from proper language extensions.

Step 9: Integrate with MPS Typesystem and Scoping

Truffle needs to know about our DSL’s types, e.g. for resolving overloaded functions or type casting. We already provide this information to the MPS typesystem. I didn’t look into the details yet; I’d expect we could generate at least part of the Truffle input from MPS’ type aspect.

Truffle requires scoping knowledge to store variables in the right stack frame (and possibly other things I don’t understand yet). I’d expect we could use the resolved references in our model as input to Truffle. I’m less optimistic to re-use MPS’ actual scoping system.

For both aspects, we can amend the missing information in the Interpreter Language, similar to the existing one.

Step 10: Support Interpreter Development

As DSL developers, we want to make sure we implemented our interpreter correctly. Thus, we write tests; they are similar to other tests involving the interpreter.

However, if they fail, we don’t want to debug the program expressed in our DSL, but our interpreter. For example, we might implement the interpreter for a switch-like construct, and had forgotten to handle an implicit default case.

Using a regular Java debugger (attached to our running MPS instance) has only limited use, as we would have to debug through the highly optimized Truffle code. We cannot use Truffle’s debugging capabilities, as they work on the DSL.
There might be ways to attach a regular Java debugger running inside MPS in a different thread to its own JVM. Combining the direct debugger access with our knowledge of the interpreter’s structure, we might be able to provide sensible stepping through the interpreter to the DSL developer.

Simpler ways to support the developers might be providing traces through the interpreter, or ship test support where the DSL developer can assure specific evaluators were (not) executed.

Step 11: Create Language for Interop

Truffle provides a framework to describe any runtime in-memory data structure as Shape, and to convert them between languages. This should be a nice extension of MPS’ multi-language support into the runtime space, supported by an appropriate Meta-DSL (a.k.a. language aspect).

Part III: Leverage Programming Language Tooling

Step 12: Connect Truffle to MPS’ Debugger

MPS contains the standard interactive debugger inherited from IntelliJ platform.

Truffle exposes a standard interface for interactive debuggers of the interpreted input. It takes care of the heavy lifting from Truffle AST to MPS input node.

If we ran Truffle in a different thread than the MPS debugger, we should manage to connect both parts.

Step 13: Integrate Instrumentation

Truffle also exposes an instrumentation interface. We could provide standard instrumentation applications like “code” coverage (in our case: DSL node coverage) and tracing out-of-the-box.

One might think of nice visualizations:

  • Color node background based on coverage
  • Mark the currently executed part of the model
  • Project runtime values inline
  • Show traces in trace explorer

Other possible applications:

  • Snapshot mechanism for current interpreter state
  • Provide traces for offline debugging, and play them back

Part IV: Beyond MPS

Step 14: Serialize Truffle Nodes

If we could serialize Truffle Nodes (before any run-time optimization), we would have an MPS-independent representation of the executable DSL. Depending on the serialization format (implement Serializable, custom binary format, JSON, etc.), we could optimize for use-case, size, loading time, or other priorities.

Step 15: Execute DSL stand-alone without Generator

Assume an insurance calculation DSL.
Usually, we would implement

  • an interpreter to execute test cases within MPS,
  • a Generator to C to execute on the production server,
  • and a Generator to Java to provide a preview for the insurance agent.

With serialized Truffle Nodes, we would need only one interpreter.

Part V: Crazy Ideas

Step 16: Step Back Debugger

By combining Instrumentation and debugger, it might be feasible to provide step-back debugging.

In the interpreter, we know the complete global state of the program, and can store deltas (to reduce memory usage). For quite some DSLs, this might be sufficient to store every intermediate state and thus arbitrary debug movement.

Step 17: Side Step Debugger

By stepping back through our execution and following different execution paths, we could explore alternate outcomes. The different execution path might stem from other input values, or hot code replacement.

Step 18: Explorative Simulations

If we had a side step debugger, nice support to project interpretation results, and a really fast interpreter, we could run explorative simulations on lots of different execution paths. This might enable legendary interactive development.

by Niko Stotz at March 11, 2020 11:19 PM

Eclipse and Handling Content Types on Linux

by Mat Booth at February 06, 2020 03:00 PM

Getting deep desktop integration on Linux.

by Mat Booth at February 06, 2020 03:00 PM

JDT without Eclipse

January 16, 2020 11:00 PM

The JDT (Java Development Tools) is an important part of Eclipse IDE but it can also be used without Eclipse.

For example the Spring Tools 4, which is nowadays a cross-platform tool (Visual Studio Code, Eclipse IDE, …), relies heavily on the JDT behind the scenes. If you would like to know more, I recommend this podcast episode: Spring Tools lead Martin Lippert

A second known example is the Java Formatter that is also part of the JDT. For a long time there have been Maven and Gradle plugins that perform the same formatting as Eclipse IDE but as part of the build (often with the possibility to break the build when the code is wrongly formatted).

Reusing the JDT has been made easier since 2017, when it was decided to publish each release and its dependencies on Maven Central (with the following groupIds: org.eclipse.jdt, org.eclipse.platform). Stephan Herrmann did a lot of work to achieve this goal. I blogged about this: Use the Eclipse Java Development Tools in a Java SE application, and I have pushed a simple example where the Java Formatter is used in a plain main(String[]) method built by a classic minimal Maven project: java-formatter.

Workspace or not?

When using the JDT in a headless application, two cases need to be distinguished:

  1. Some features (the parser, the formatter…) can be used in a simple Java main method.

  2. Other features (search index, AST rewriter…) require a workspace. This implies that the code runs inside an OSGi runtime.

To illustrate this aspect, I took some of the examples provided by the site in the blog post series Eclipse JDT Tutorials and I adapted them so that each code snippet can be executed inside a JUnit test. This is the Programcreek examples project.

I have split the unit-tests into two projects:

  • programcreek-standalone for the ones that do not require OSGi. The Maven project is really simple (using the default conventions everywhere)

  • programcreek-osgi for the ones that must run inside an OSGi runtime. The bnd Maven plugins are configured in the pom.xml to take care of the OSGi stuff.

If you run the tests with Maven, it will work out of the box.

If you would like to run them inside an IDE, you should use one that starts OSGi when executing the tests (in the same way the maven build is doing it). To get a bnd aware IDE, you can use Eclipse IDE for Java Developers with the additional plugin Bndtools installed, but there are other possibilities.

Source code can be found on GitHub: programcreek-examples

January 16, 2020 11:00 PM

4 Years at The Linux Foundation

by Chris Aniszczyk at January 03, 2020 09:54 AM

Late last year marked the 4th year anniversary of the formation of the CNCF and me joining The Linux Foundation:

As we enter 2020, it’s amusing for me to reflect on my decision to join The Linux Foundation a little over 4 years ago when I was looking for something new to focus on. I spent about 5 years at Twitter which felt like an eternity (the average tenure for a silicon valley employee is under 2 years), focused on open source and enjoyed the startup life of going from a hundred or so engineers to a couple of thousand. I truly enjoyed the ride, it was a high impact experience where we were able to open source projects that changed the industry for the better: Bootstrap (changed front end development for the better), Twemoji (made emojis more open source friendly and embeddable), Mesos (pushed the state of art for open source infrastructure), co-founded TODO Group (pushed the state of corporate open source programs forward) and more!

When I was looking for change, I wanted to find an opportunity where I could have more impact than at just one company. I had some offers from FAANG companies and amazing startups but eventually settled on the nonprofit Linux Foundation because I wanted to build an open source foundation from scratch, teach other companies about open source best practices and assumed non profit life would be a bit more relaxing than diving into a new company (I was wrong). Also, I was thoroughly convinced that an openly governed foundation pushing Kubernetes, container specifications and adjacent independent cloud native technologies would be the right model to move open infrastructure forward.

As we enter 2020, I realize that I’ve been with one organization for a long time and that puts me on edge as I enjoy challenges, chaos and dread anything that makes me comfortable or complacent. Also, I have a strong desire to focus on efforts that involve improving the state of security and privacy in a connected world, participatory democracy, climate change; also anything that pushes open source to new industries and geographies.

While I’m always happy to entertain opportunities that align to my goals, the one thing that I do enjoy at the LF is that I’ve had the ability to build a variety of new open source foundations improving industries and communities: CDF, GraphQL Foundation, Open Container Initiative (OCI), Presto Foundation, TODO Group, Urban Computing Foundation and more.

Anyways, thanks for reading and I look forward to another year of bringing open source practices to new industries and places, the world is better when we are collaborating openly.

by Chris Aniszczyk at January 03, 2020 09:54 AM

An update on Eclipse IoT Packages

by Jens Reimann at December 19, 2019 12:17 PM

A lot has happened since I wrote last about the Eclipse IoT Packages project. We had some great discussions at EclipseCon Europe, and started to work together online, having new ideas in the process. Right before the end of the year, I think it is a good time to give an update, and peek a bit into the future.


One of the first things we wanted to get started was a home for the content we plan on creating. An important piece of the puzzle is to explain to people what we have in mind. Not only for people who want to try out the various Eclipse IoT projects, but also for possible contributors. And in the end, an important goal of the project is to attract interested parties, whether for consuming our ideas or growing them even further.

Eclipse IoT Packages logo

So we now have a logo and a homepage, built using templates in a continuous build system. We are in a position to start focusing on the actual content, and on the more tricky tasks and questions ahead. And should you want to create a PR for the homepage, you are more than welcome. There is also already some content explaining the main goals, the way we want to move forward, and a demo of a first package: “Package Zero”.


While the homepage is a good entry point for people to learn about Eclipse IoT and packages, our GitHub repository is the home for the community. And having some great discussions on GitHub quickly brought up the need for a community call and a more direct communication channel.

If you are interested in the project, come and join our bi-weekly community call. It is a quick, 30-minute call at 16:00 CET, open to everyone, repeating every two weeks, starting 2019-12-02.

The URL to the call is: You can also subscribe to the community calendar to get a reminder.

In between calls, we have a chat room eclipse/packages on Gitter.

Eclipse IoT Helm Chart Repository

One of the earliest discussions we had was around the question of how and where we want to host the Helm charts. We would prefer not to author them ourselves, but let the projects contribute them. After all, the IoT Packages project has the goal of enabling you to install a whole set of Eclipse IoT projects with only a few commands. So the focus is on the integration, and the expert knowledge required for creating a project's Helm chart is in the actual projects.

On the other hand, having a one-stop shop for getting your Eclipse IoT Helm charts sounds pretty convenient. So why not host our own Helm chart repository?

Thanks to a company called Kiwigrid, who contributed a CI pipeline for validating charts, we could easily extend our existing homepage publishing job to also publish Helm charts. As a first chart, we published the Eclipse Ditto chart. And, as expected with Helm, installing it is as easy as:
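The command itself was lost here; with an assumed repository URL and release name (both unverified, purely for illustration), a Helm install along these lines was probably meant:

```shell
# Assumed repo URL and names -- the post's actual command was not preserved.
REPO_URL="https://eclipse.org/packages/charts"
CMD="helm repo add eclipse-iot $REPO_URL && helm install ditto eclipse-iot/ditto"
echo "$CMD"

# Only attempt the repo add when helm is actually installed.
if command -v helm >/dev/null 2>&1; then
  helm repo add eclipse-iot "$REPO_URL" || true
fi
```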

Of course having a single chart is only the first step, and publishing a single Helm chart isn't that impressive. But getting agreement in the community, getting the validation and publishing pipeline set up, and attracting new contributors: that is definitely a big step in the right direction.


I think that we now have a good foundation for moving forward. We have a place called “home” for documentation, code and community. And it looks like we have also been able to attract more people to the project.

While our first package, “Package Zero”, still isn't complete, it should be pretty close. Creating a first, joint deployment of Hono and Ditto is our immediate focus, and we will continue to work towards a first release of “Package Zero”. Finding a better name is still an item on the list.

Having this foundation in place also means that the time is right for you to think about contributing your own Eclipse IoT Package. Contributions are always welcome.

The post An update on Eclipse IoT Packages appeared first on ctron's blog.

by Jens Reimann at December 19, 2019 12:17 PM

Eclipse m2e: How to use a WORKSPACE Maven installation

by kthoms at November 27, 2019 09:39 AM

Today a colleague of mine asked me about the Maven Installations preference page in Eclipse. There is an entry WORKSPACE there, which is disabled and shows NOT AVAILABLE. He wanted to know how to enable a workspace installation of Maven.

Since neither of us could find documentation of the feature, I dug into the m2e sources and found the class MavenWorkspaceRuntime. The relevant snippets are the method getMavenDistribution() and the MAVEN_DISTRIBUTION constant:

private static final ArtifactKey MAVEN_DISTRIBUTION = new ArtifactKey(
      "org.apache.maven", "apache-maven", "[3.0,)", null); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$


protected IMavenProjectFacade getMavenDistribution() {
  try {
    VersionRange range = VersionRange.createFromVersionSpec(getDistributionArtifactKey().getVersion());
    for(IMavenProjectFacade facade : projectManager.getProjects()) {
      ArtifactKey artifactKey = facade.getArtifactKey();
      if(getDistributionArtifactKey().getGroupId().equals(artifactKey.getGroupId()) //
          && getDistributionArtifactKey().getArtifactId().equals(artifactKey.getArtifactId())//
          && range.containsVersion(new DefaultArtifactVersion(artifactKey.getVersion()))) {
        return facade;
      }
    }
  } catch(InvalidVersionSpecificationException e) {
    // can't happen
  }
  return null;
}

From here you can see that m2e looks through the workspace (Maven) projects, trying to find one that has the coordinates org.apache.maven:apache-maven:[3.0,).

So the answer to how to enable a WORKSPACE Maven installation is: import the project apache-maven into the workspace. And here is how to do it:

  1. Clone Apache Maven from
  2. Optionally: check out a release tag
    git checkout maven-3.6.3
  3. Perform File / Import / Existing Maven Projects
  4. As Root Directory select the apache-maven subfolder in your Maven clone location

Now you will have the project that m2e searches for in your workspace:

And the Maven Installations preference page lets you now select this distribution:

by kthoms at November 27, 2019 09:39 AM

Eclipse startup up time improved

November 05, 2019 12:00 AM

I’m happy to report that the Eclipse SDK integration build starts in less than 5 seconds (~4900 ms) on my machine into an empty workspace. IIRC this used to be around 9 seconds two years ago. 4.13 (which was already quite a bit improved) used around 5800 ms (6887 ms with EGit and Marketplace). Thanks to everyone who contributed.

November 05, 2019 12:00 AM

Setup a Github Triggered Build Machine for an Eclipse Project

by Jens v.P. at October 29, 2019 12:55 PM

Disclaimer 1: This blog post literally is a "web log", i.e., it is my log of setting up a Jenkins machine with a job that is triggered on a GitHub pull request. Many of the parts have been described elsewhere, and I link to the sources I used here. I also know that nowadays (e.g., on the new Eclipse build infrastructure) you usually do that via Docker -- but then you need to configure Docker, in which

by Jens v.P. at October 29, 2019 12:55 PM

LiClipse 6.0.0 released

by Fabio Zadrozny at October 25, 2019 06:59 PM

LiClipse 6.0.0 is now out.

The main change is that many dependencies have been updated:

- it's now based on Eclipse 4.13 (2019-09), which is a pretty nice upgrade (in my day-to-day use I find it appears smoother than previous versions, although I know this sounds pretty subjective).

- PyDev was updated to 7.4.0, so Python 3.8 (which was just released) is now already supported.


by Fabio Zadrozny at October 25, 2019 06:59 PM

Qt World Summit 2019 Berlin – Secrets of Successful Mobile Business Apps

by ekkescorner at October 22, 2019 12:39 PM

Qt World Summit 2019

Meet me at Qt World Summit 2019 in Berlin


I’ll speak about development of mobile business apps with

  • Qt 5.13.1+ (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10


Qt World Summit 2019 Conference App

As a little appetizer I developed a conference app. For how to download it from the Google Play Store or Apple, and some more screenshots, see here.


sources at GitHub

cu in Berlin

by ekkescorner at October 22, 2019 12:39 PM

A nicer icon for Quick Access / Find Actions

October 20, 2019 12:00 AM

Finally we use a decent icon for Quick Access / Find Actions. This is now a button in the toolbar that allows you to trigger arbitrary commands in the Eclipse IDE.

October 20, 2019 12:00 AM
