
On Patents and Specifications

by Mike Milinkovich at May 13, 2021 07:20 PM

We’ve been fielding a number of questions lately about the intersection of our spec process and patents. A couple of these community discussions have gone off in directions that are off target, factually incorrect, or both. Therefore, the purpose of this short FAQ is to explain the patent license options provided by the Eclipse Foundation Intellectual Property Policy for use by specifications developed by specification projects under the Eclipse Foundation Specification Process (EFSP). 

Disclaimer: This is not legal advice. I am not a lawyer. It has not been reviewed by counsel. Consult your own attorney. In addition, this note does not form part of any official Eclipse Foundation policy or process, but rather is provided for informational purposes only to aid those involved in our specification projects to better understand the EFSP and the choices available. I’ll update the content as needed.

One important point to keep in mind when reading this: we believe that the EFSP fully complies with the Open Standards Requirement for Software established by the Open Source Initiative. In other words, the EFSP is designed specifically to be open source friendly.  

Why do specifications require patent licenses?

The purpose of every specification is to stimulate the development of implementations. These implementations may be derived from open source code maintained at the Eclipse Foundation or elsewhere, or they may be independently developed. They may be made available under open source licenses or proprietary. In order to facilitate and encourage these implementations, all specification processes provide some notion of patent licenses from the parties involved in developing the specifications.

What types of patent licenses are used by various specification organizations?

There are a wide variety of specification patent license options available from various sources. 

Some terms that you may hear are:

  • FRAND means fair, reasonable, and non-discriminatory licenses. This means that before you can implement the specification you are required to obtain a license from the patent holders who developed the specification. FRAND is generally considered to be antithetical to open source development, as it requires permission and money to implement a specification or potentially even to use an implementation of such a specification.
  • FRAND-Z is FRAND where the cost of the license is set to zero. Note that although this removes the cost concerns of FRAND, permission may still be required for use and/or implementation. 
  • RF or royalty-free provides a priori royalty-free licenses from the participants developing the specifications to downstream users and implementers. This is considered a best practice for enabling open source implementations of a specification. All Eclipse Foundation specifications are developed on a royalty-free basis. 
  • Non-assert is another legal mechanism which provides a result effectively similar to royalty-free. A non-assert says that a patent holder will not assert their patent rights against an implementer or user. 

Do these licenses mean that an implementer or user can never be sued for patent infringement?

No. The patent licenses are intended to ensure that an implementer or user doesn’t need to worry about being sued by the parties involved in developing the specifications. They do not provide protection from uninvolved third parties who may believe they have intellectual property rights applicable to the specification. 

Note that the above implies that it is in the interests of the entire community and ecosystem that many participants (particularly patent-owning participants) be involved in developing the specifications. It also explains why it is in the best interest of the community that all participants in the specification process have signed agreements in place documenting their commitment to the patent licensing under the EFSP. 

What patent licenses are granted by the EFSP?

The patent licenses provided via the EFSP apply to all downstream implementations of Final Specifications, including independent implementations. They cover all patents owned by each Participant in the specification project that are essential claims needed by any implementer or user of the specification. Note that the licenses cover the entire specification, not just the parts of the specification that a participant may have contributed to. We provide our specification projects with two options for patent licenses: the Compatible Patent License and the Implementation Patent License. The differences between the two are explained below.

But my open source license already has a patent license in it. Why do I need more than that?

The patent licenses provided in open source licenses such as Apache-2.0 grant a license to contributor-owned patents that apply to their contribution, either alone or as combined with the work. That patent license extends only to that program/implementation. Note that the Apache-2.0 patent license “…applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work…”. Relative to the EFSP, such grants are deficient in both scope (they apply only to a contributor’s own contributions) and target (they apply only to that implementation). 

What is the difference between the two patent license options provided by the EFSP?

The only difference between the Compatible Patent License and the Implementation Patent License is the timing of when the patent license grant comes into effect. Under the Compatible Patent License, the license grant happens only when the implementation has demonstrated that it is fully compatible with the specification by passing the relevant TCK. The Implementation Patent License provides immediate patent licenses to all implementers, even for partial or work-in-progress implementations. The former emphasizes the importance of compatibility; the latter emphasizes the importance of open development. Both are valuable options available to Eclipse specification projects. 

I’ve read the EFSP and I don’t see anything about patent licenses. WUWT?

The patent licenses are provided in the Eclipse Foundation Intellectual Property Policy. A future version of the EFSP will make this clearer.

Is the Eclipse Foundation itself granted any licenses to patents? 

No. The Eclipse Foundation itself does not acquire any patent rights in the specifications. The patent licenses are granted from the participating patent owners directly to implementers and users of those specifications. More specifically, the patent license grants are “… to everyone to make, have made, use, sell, offer to sell, and import…” implementations of the specifications.



Eclipse ioFog 2.0 Takes Us to the Next Era of EdgeOps

by Mike Milinkovich at May 13, 2021 02:43 PM

On Tuesday, the Eclipse ioFog project team announced the release of Eclipse ioFog 2.0, and I invite everyone to join me in congratulating them and the Eclipse Edge Native Working Group for the achievement. Thanks to the community’s efforts, the release delivers new EdgeOps functionality that makes it faster and easier to develop production-grade edge applications and manage edge AI deployments.

The ioFog platform brings cloud native architectures such as Kubernetes to the edge, allowing developers to deploy, orchestrate, and manage microservices on any edge device. It’s unique in the industry because it abstracts the complexities of networking from the edge applications.

The Eclipse ioFog platform is already widely used and proven in the field. Organizations ranging from major service providers to FORTUNE 500 companies all rely on the software in production environments today. The ioFog platform has even been used as the backbone of an edge AI application that monitors children’s temperatures and use of masks in schools to help prevent the spread of COVID-19. It’s a great example of the value and potential the ioFog platform delivers.

EdgeOps Targets Edge Computing Challenges

The new EdgeOps capabilities in Eclipse ioFog 2.0 take that value and potential to the next level.

EdgeOps technologies and tools are built from the ground up to address the unique challenges that arise when developing edge computing applications. They’re needed because traditional DevOps technologies and tools are not well-suited to edge computing. The EdgeOps concept was defined by the Eclipse Edge Native Working Group, and the Eclipse ioFog platform is a key enabling technology.

Here’s a brief look at what the new EdgeOps features in Eclipse ioFog 2.0 mean for developers.

Red Hat’s Skupper Project Simplifies Application Connections

The Eclipse ioFog architecture is based on the concept of an Edge Compute Network (ECN) that is composed of a Controller, Agents (more on this later), and a Service Mesh.

In Eclipse ioFog 2.0, the Service Mesh component was replaced with Red Hat’s Skupper project.

Skupper uses the Apache Qpid Dispatch Router for application connectivity between data centers and any type of edge, with no need for virtual private networks (VPNs) or special firewall rules. As a result, developers can now set up application connections using a simple YAML configuration file.

New Features Enable More Fluid EdgeOps

Eclipse ioFog 2.0 also includes new features that give developers more flexible and adaptable EdgeOps capabilities.

For example, developers can now add, remove, and move ioFog Agents between ECNs at runtime. Agents are lightweight, universal container runtimes that are installed on edge devices to manage the life cycle of microservices, data volumes, edge resources, and other assets. With Eclipse ioFog 2.0, developers have new agility to take advantage of live orchestration, and to migrate and drain microservices and applications.

Developers also have new abilities to:

  • Prune images based on policies
  • Manage multiple image registries
  • Mount data volumes in microservices containers

Together, these features showcase the Edge Native community’s ability to deliver the production-ready features edge application developers need to support real-world use cases.

Learn More and Get Started With Eclipse ioFog 2.0

Be sure to check out the following for additional insight:

  • For more information on the contents of the Eclipse ioFog 2.0 release, click here.
  • To learn more about EdgeOps, read our recent newsletter article and new white paper.
  • To learn more about the Eclipse Edge Native Working Group, review the working group charter and participation agreement, and visit the website.



Bringing Chromium to Eclipse RCP

by Patrick Paulin at May 12, 2021 10:55 PM

One of the most common questions I’m asked by my clients is whether it’s possible to utilize web-based UI frameworks (Angular, React, Vue, etc.) in an Eclipse RCP application. Until now, my answer has been no, largely because of the limitations of the SWT Browser control.

Well I’m happy to say that things are starting to change in this area, though there is still much work to do.

New SWT support for Microsoft Edge Chromium

In 2018 Microsoft made the surprising decision to base its future Edge browser on Chromium. Support for Edge Chromium is now available in one of two forms – the Edge browser itself and the new WebView2 control.

The WebView2 control is of particular interest because it allows for the embedding of web-based UI elements into native applications. And I’m happy to say that as of the 2021-03 Eclipse release, we can now embed this control in Eclipse RCP applications. While of course limited to the Win32 platform, this support for Chromium in Eclipse RCP makes it possible to fully leverage your web-based UI framework of choice.

To try this out, you’ll need to do two things:

  1. Install the WebView2 runtime (the Edge browser is not required). There are a variety of options for installing the runtime, and I think this situation will continue to evolve. Long-term, Microsoft is going to rely heavily on this control in their application suite and is now deploying the control along with it.
  2. In your SWT Browser control, pass the SWT.EDGE flag in the constructor. Alternatively, you can pass this argument on the command line: -Dorg.eclipse.swt.browser.DefaultType=edge
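As a minimal sketch of the two steps above (the class name is made up for illustration; the SWT calls appear only in comments because they require the SWT library on the classpath and a running display):

```java
public class EdgeFlagDemo {
    public static void main(String[] args) {
        // Step 2, system-property form: select the Edge renderer via the
        // property, equivalent to passing
        // -Dorg.eclipse.swt.browser.DefaultType=edge on the command line.
        System.setProperty("org.eclipse.swt.browser.DefaultType", "edge");

        // Step 2, constructor form (requires SWT and a Shell):
        // Browser browser = new Browser(shell, SWT.EDGE);
        // browser.setUrl("https://www.eclipse.org");

        System.out.println(System.getProperty("org.eclipse.swt.browser.DefaultType"));
    }
}
```

Either mechanism tells SWT to back the Browser control with the WebView2 runtime installed in step 1.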

Here’s a simple Eclipse RCP application using the Edge Chromium browser.

The work left to do

Support for the WebView2 control in SWT is still experimental, and there’s a short but growing list of bugs/enhancement requests. Also, here is a list of the known limitations of the SWT control. Some of these limitations need to be addressed by Microsoft (in particular support for getting/setting cookies) and they are making good progress.

Of course the biggest issue is that this is not a cross-platform solution. So what about macOS and Linux?

Cross-platform Chromium

There was an attempt made over the past few years to create cross-platform Chromium support in SWT. This support was based on using Rust to wrap the Chromium framework with platform specific controls that would be accessible to SWT.

Unfortunately, this effort has recently been abandoned. My take is that the problem was ultimately a lack of developer support. Without strong developer interest and engagement, the effort was not going to succeed. The Eclipse Platform PMC is still open to the idea of a cross-platform Chromium control, perhaps in the form of a Nebula contribution. So if you’re interested in picking up this work and running with it, let them know.

Another possible solution is that Microsoft releases WebView2 controls for macOS and Linux. Then the SWT support for this control could be made available for all Eclipse RCP deployment platforms.

Wrapping up

This is definitely early days for Chromium support in Eclipse RCP applications, but I see a clear path forward. My hope is that in the near future Eclipse RCP developers will have access to robust cross-platform Chromium support.

And once that happens, the opportunities for utilizing Eclipse RCP become very interesting. Eclipse RCP applications would be well-suited to host modular microservices in the UI (sometimes called Micro Frontends). And if these microservices were written using web-frameworks running on Chromium, they could easily be migrated to or co-hosted by other Chromium-based frameworks such as Electron.

But that’s a story for another day 🙂



Welcome to the Entrepreneurial Open Source Podcast

by Thabang Mashologu at May 12, 2021 01:28 PM

We’re excited to introduce you to the Entrepreneurial Open Source Podcast.

This show aims to uncover valuable practical insights about building business through open source participation and commercial innovation. By speaking to people who are giving back to open source communities while running successful businesses, we hope to inform, engage, and inspire other entrepreneurs who are on the same journey. 

In the coming months, you’ll hear my co-host Gael Blondelle and me speaking with open source entrepreneurs in industrial IoT (IIoT), edge computing, artificial intelligence, developer tools, and more. We are lucky to be able to tap into a global community of tech and business leaders at the Eclipse Foundation, and proud to share their stories and experiences of leadership in industrial open source with you. 

Disrupting Industrial Automation Through Open Source

Our first guest is Don Pearson, Inductive Automation’s Chief Strategy Officer. Inductive Automation is a leading industrial automation software vendor, and a member of the Eclipse IoT and Sparkplug working groups. Don and I discussed how the company started, and how a lack of transparency from some software companies pushed them to take a new approach with their technology, business, and licensing models. 

Inductive Automation has leveraged their commitment to open source, interoperability, and open standards to deliver the Ignition application platform for SCADA and IIoT solutions. In sectors ranging from food and beverage to oil and gas, Inductive Automation’s partners and end customers are able to build on the Ignition platform and trusted standards like MQTT and OPC UA to implement industrial applications and achieve their business objectives.

Don says the flexibility and freedom that open source creates is key to the success of their business.

“There’s no way we could do what we do with our technology without our connection to open source and the model that it puts out there.” — Don Pearson, Inductive Automation

We hope you enjoy the episode! Listen to it on Apple Podcasts, Spotify or Google.

You can also find episodes on our website and follow the podcast on Twitter at https://twitter.com/OSS4Biz for updates. To join us as a guest on the podcast, please complete this form.



Progress and Release Reviews

May 12, 2021 12:00 AM

The Eclipse Foundation Development Process (EDP) describes a release as anything that is distributed for adoption outside of the committers of a project (effectively, if you stamp something with a version number and tell the general audience to download it, it’s a release). Appropriately labeled nightly, integration, and milestone builds are not considered releases. Further, the EDP describes a requirement for all Eclipse projects to engage in a review prior to creating a release.


Eclipse Theia Builds Momentum

by Brian King at May 11, 2021 08:10 PM

Theia Today 

The Eclipse Theia project was born just 4 years ago. Kudos and thanks to all the organizations that have supported and built the project into the success that it is today. In addition to these original contributors, there continues to be significant growth in the number of companies contributing to and adopting Theia as the basis for their custom IDEs and domain-specific tools. Because of our active and vibrant community, Theia is now a thriving and diverse project — ready to move to the next level of growth and adoption.

Theia is an open source, extensible, and adaptable platform that provides very similar capabilities to Visual Studio (VS) Code, while offering more flexibility to be tailored to specific use cases. This means you can build tools that are very similar to VS Code, but you can also completely white-label and customize the workbench. All of this is, of course, available in the browser and/or as a desktop application.

Theia is also compatible with VS Code, so Microsoft VS Code extensions can be leveraged through the Open VSX Registry — enabling the use of the biggest ecosystem of additional development features available today. This means that with Theia, organizations and vendors building cloud and desktop IDEs have a production-ready, vendor-neutral, and open source framework for creating customized development environments.

Let’s now take a look at the current project state, starting with some notable examples of project adoption, as well as the companies behind those initiatives.

Arduino IDE 2.0

[Screenshot: Arduino IDE 2.0]

Arduino IDE 2.0 is an evolution of the popular IDE used to program any Arduino board. In addition to a modern editor and a more responsive interface, the IDE 2.0 also includes some advanced features for professional users: autocompletion, code navigation, and live debugger. The Arduino IDE is a good example of what’s possible with Theia as a platform. As you can see in the screenshot above, some parts of the UI are fully tailored, such as the toolbar. Other parts, such as the code editor (Monaco) are similar to what you see with VS Code.

Arm Mbed Studio

[Screenshot: Arm Mbed Studio]

Mbed Studio is a local development environment for Mbed OS, giving developers all the tools and dependencies needed to create C/C++ IoT applications for Arm Cortex-M based hardware. Mbed Studio is one of several examples that show how well Theia fits the needs of tooling for embedded programming. Mbed Studio leverages Theia’s compatibility with VS Code extensions to reuse some existing language features for C/C++, while also heavily customizing the workbench in some areas.

EclipseSource

[Screenshot: EclipseSource coffee editor]

EclipseSource is a service provider for tool development and provides consulting and implementation services as well as training for Eclipse Theia. EclipseSource does not provide their own Theia-based tools, but they do support various customers in implementing their own solutions.

EclipseSource also provides a comprehensive open source example tool based on Eclipse Theia called the coffee editor. This tool, shown in the screenshot above, combines Theia with other Eclipse technologies such as EMF.cloud, Eclipse GLSP, Xtext, JSON Forms and many more. The tool is provided as an online demo, so if you want to see what is possible with Theia + the Eclipse Cloud DevTools ecosystem, check it out.

Logi.cloud by logi.cals

[Screenshot: Logi.cloud]

logi.cals develops industrial automation engineering software to empower application developers in creating smart factories of tomorrow. Based on Theia, logi.cloud provides a full-featured cloud-based IDE for developing, deploying and commissioning automation projects in the IEC 61131-3 languages, C and C++.

Red Hat CodeReady Workspaces

[Screenshot: CodeReady Workspaces]

Built on the open source Eclipse Che project, Red Hat CodeReady Workspaces uses Kubernetes and containers to provide development and IT teams with a consistent, secure, and zero-configuration development environment. The experience is as fast and familiar as an integrated development environment on your laptop.

Eclipse Che and Red Hat CodeReady Workspaces offer Kubernetes-native development environments. Eclipse Theia means developers need only a web browser to access a feature-rich IDE, and benefit from a similar look-and-feel and shared extensions with VS Code. As an open source community project, Che developers have been able to collaborate and contribute to Eclipse Theia, and together help build a better experience for users of cloud development tools. - David Harris | Principal Product Manager, Developer Tooling at Red Hat

SAP Business Application Studio

[Screenshot: SAP Business Application Studio]

SAP Business Application Studio, the next generation of the iconic SAP Web IDE, is a modern cloud-based development environment, tailored for efficient development of business applications such as SAP Fiori, SAP S/4 HANA extensions, Workflow, Mobile and more. SAP Business Application Studio showcases how the online capabilities of Eclipse Theia truly support an exceptional cloud-based developer experience.

TypeFox

TypeFox specializes in building domain-specific tools and offers development and consulting services for Theia, Sprotty, Xtext, and other related technologies. The TypeFox team laid the groundwork for Theia in order to provide a platform on which custom applications can be built.

...and More

This is just a sampling of some of the great tools built on Theia along with the services available in a flourishing ecosystem. Noteworthy shoutouts for other products based on Theia go to Google Cloud Shell and Acquia’s Drupal Cloud IDE. This is just the visible tip of the proverbial iceberg. As is often the case for open source projects, most adopters stay hidden. So, if there is a project you know of that uses Theia and is not listed on our Adopters page, please let us know by opening an issue.

Noteworthy Initiatives

Because the Theia project has such an extensive reach, it would go beyond the scope of this post to mention all ongoing developments. With this in mind, I would like to focus on three initiatives that I find rather interesting: VS Code extension compatibility; the Open VSX marketplace; and Theia Blueprint.

VS Code Compatibility

There are a number of interesting pieces in the Theia tech stack (which would warrant a separate post). One of the crucial pieces is the work being done around APIs for VS Code extensions, which provides compatibility with thousands of available FOSS extensions. Theia supports VS Code extensions through its Plugin system, which is regularly updated to the most recent VS Code extension API. Related to this, the Monaco Editor is a key component of Theia, and adapting it and keeping it up to date is a huge achievement by the project. Having these core pieces (with more to come) makes Theia a modern application framework for building innovative developer tools.

Open VSX

There is strong demand for a fully open alternative to the Visual Studio Marketplace, and the Open VSX Registry meets that demand. It offers a community-driven, fully open platform for publishing VS Code extensions. The Registry is built on Eclipse Open VSX, which is an open source project hosted at the Eclipse Foundation.

Open VSX is integrated into multiple software applications that support VS Code extensions, including many that are Theia-based. If you need to have your own extension gallery, for example in a protected corporate environment, you can do this with Open VSX. Having VS Code extensions available ensures that your Theia-based product is a platform for innovation and customization.

Theia Blueprint

We’ve recently seen the alpha release of Theia Blueprint, a template to help potential adopters jumpstart their Theia-based projects. It provides installers for all major operating systems so you can easily download, try and build on Theia locally. More details can be found in my recent blog post.

What’s Next for Theia? 

Theia continues to grow, both in terms of adoption and in terms of the underlying ecosystem. In just the past three months, there have been more than 50 individual committers from eight different companies. Theia is clearly not a one-person or one-company show; it is based on a very diverse ecosystem of contributors. If you visit the Theia community forum, you’ll regularly see posts from developers asking questions related to work they are doing on their Theia-based tools. This gives a hint of the huge number of unknown adopters, the so-called “dark matter”. It’s exciting to see a vibrant ecosystem flourish.

Other projects of the Eclipse Cloud DevTools ecosystem augment the Theia ecosystem, e.g. the Open VSX Registry, Eclipse Che for workspace hosting, Eclipse GLSP for diagrams or EMF.cloud for domain-specific tools.

To see what is coming next, check out the roadmap which is updated quarterly. The roadmap is a moving snapshot that shows priorities of contributing organizations. Common goals are discussed weekly at the Theia Dev Meeting and additional capabilities and features identified there will make it onto the roadmap.

Sound interesting? Take a look at the project to evaluate how to get involved. The best places to look first are the GitHub project and the community forum.



Java News Roundup - Week of May 3rd, 2021

by Michael Redlich at May 11, 2021 05:30 AM

This week’s Java news roundup features news from OpenJDK, the GA release of Kotlin 1.5, point releases of Eclipse projects, Micronaut Coherence 1.0.0-M1, Quarkus 2.0.0.Alpha2, updates on Spring projects, and 2021 developer surveys from Jakarta EE and Payara Platform.



Eclipse Foundation Specification Process Step-by-Step

May 07, 2021 12:00 AM

“Scientific progress goes ‘boink’?” – Hobbes

The Eclipse Foundation Specification Process (EFSP) provides a framework and governance model for developers engaged in the process of developing specifications. A specification is a collection of related artifacts. The EFSP defines a specification as a “collection of Application Programming Interface (API) definitions, descriptions of semantic behavior, data formats, protocols, and/or other referenced specifications, along with its TCK, intended to enable the development and testing of independent Compatible Implementations.”


Diagram editors in Theia and VS Code with Eclipse GLSP

by Jonas Helming and Maximilian Koegel at May 06, 2021 08:18 AM

Do you want to implement a diagram editor to be embedded into VS Code, Eclipse Theia, a website or any other...

The post Diagram editors in Theia and VS Code with Eclipse GLSP appeared first on EclipseSource.



Announcing Eclipse Ditto Release 2.0.0

May 06, 2021 12:00 AM

Today, ~1.5 years after release 1.0.0, the Eclipse Ditto team is happy to announce the availability of Eclipse Ditto 2.0.0.

With the major version 2.0.0, the Ditto team removed technical debt and ended support for APIs that were deprecated long ago, in order to have a more maintainable codebase. However, some awesome new features are included as well.

Adoption

Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto

From our various feedback channels, however, we know of more adoption. If you are making use of Eclipse Ditto, it would be great to show this by adding your company name to that list of known adopters. In the end, that’s one main way of measuring the success of the project.

Changelog

The main improvements and additions of Ditto 2.0.0 are:

  • Merge/PATCH updates of digital twins
  • Configurable OpenID Connect / OAuth2.0 claim extraction to be used for authorization
  • Establishing connections to endpoints (via AMQP, MQTT, HTTP) utilizing a Ditto managed SSH tunnel
  • Addition of a DevOps API in order to retrieve all known connections
  • Expiring policy subjects + publishing of announcement message prior to expiry
  • Addition of policy actions in order to inject a policy subject based on a provided JWT
  • Built-in acknowledgement for search updates to have the option of twin updates with strong consistency of the search index
  • Restoring active connections faster after a hard restart of the Ditto cluster via automatic prioritization of connections
  • Support for Last Will and Testament + the retain flag for MQTT connections
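To illustrate the first item: Ditto 2.0 exposes merge updates over its HTTP API as a JSON merge patch (RFC 7396) sent with HTTP PATCH. The sketch below only builds such a request using Java 11’s java.net.http types; the host, thing ID, and payload are made-up examples, not part of the release notes:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class DittoMergePatch {
    // Build (but do not send) a merge-patch request for a digital twin.
    // The base URL and thing ID passed in below are hypothetical.
    static HttpRequest mergePatch(String baseUrl, String thingId, String jsonMergePatch) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/2/things/" + thingId))
                .header("Content-Type", "application/merge-patch+json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(jsonMergePatch))
                .build();
    }

    public static void main(String[] args) {
        // Only the fields present in the patch are updated; the rest of the
        // twin is left untouched.
        HttpRequest request = mergePatch("http://localhost:8080", "org.example:my-twin",
                "{\"attributes\":{\"location\":\"kitchen\"}}");
        System.out.println(request.method() + " " + request.uri());
    }
}
```

Sending the request against a running Ditto instance would additionally require an HttpClient and credentials, which are omitted here.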

The step to a major version was done because of the following breaking API changes:

  • Removal of “API version 1” (deprecated in Ditto 1.1.0) from Ditto’s Java APIs + HTTP API
  • Removal of code in Java APIs marked as @Deprecated
  • Binary incompatible changes to Java APIs
  • Restructuring of Ditto’s Maven modules in order to simplify/ease further development

The following non-functional enhancements are also included:

  • Improvement of stability during rolling updates
  • Addition of a sharding concept for Ditto-internal pub/sub, enabling the connection of e.g. tens of thousands of WebSocket sessions
  • Background cleanup improvements in order to have less impact on DB roundtrip times
  • Update of third party libraries (e.g. Akka)
  • Documentation of deployment via K3S

Please have a look at the 2.0.0 release notes for more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Ditto JavaScript client release was published on npmjs.com.

The Docker images have been pushed to Docker Hub.





The Eclipse Ditto team



Jakarta EE Community Update for March and April 2021

by Tanja Obradovic at May 04, 2021 08:58 PM

Our update this month is jam-packed with highlights, as March and April were busy with many events and new developments in the Jakarta EE Community! 

The highlights for the past two months are as follows:

The Jakarta EE 2021 Developer Survey is now open!

It is that time of the year: the Jakarta EE 2021 Developer Survey is open until May 31st. If you haven't already provided your response, I encourage you to do so now. Help us gather input from the wider Java enterprise community and shape the future of Jakarta EE!

Here are the insights from the 2020 Jakarta EE Developer Survey.

 

The Jakarta EE community and Working Group is growing

We continue to see steady membership growth and more interest from JUGs in being closely involved in various Jakarta EE projects.

The Jakarta EE Working Group welcomes iJUG as a participant member and JUG Istanbul as a guest member!

iJUG's involvement in Jakarta EE is well known, as its members are actively involved in Jakarta EE projects and community events. They have now officially become members of not just the Jakarta EE Working Group, but the MicroProfile and Adoptium Working Groups as well.

JUG Istanbul has also joined, as a guest member. JUG Istanbul is a driving force behind Jakarta One Livestream - Turkish, and its members are interested in helping advance the following individual specifications: Jakarta Contexts and Dependency Injection and Jakarta Concurrency.

This is a call to other JUGs to explore the possibility of joining the Jakarta EE Working Group. Approach us and let us know if membership is something you would be interested in.

 

Jakarta EE 9.1 release

I hope you are all already familiar with the Jakarta EE 9.1 Release Plan! The main driver for this release is Java SE 11 support.

Eclipse GlassFish 6.1.0-RC1 Web Profile (https://download.eclipse.org/ee4j/glassfish/web-6.1.0-RC1.zip) and Full Profile (https://download.eclipse.org/ee4j/glassfish/glassfish-6.1.0-RC1.zip) are now available on Eclipse downloads. Please download and take a look!

Multiple Compatible Products on the release date of the Jakarta EE 9.1

The compatibility certification requests for the Jakarta EE 9.1 release keep coming, and we are super proud and happy about it.

The Jakarta EE 9.1 release will be the first release that will have more than one compatible implementation used for the ratification of the final specification. 

 

Jakarta EE Individual Specifications and project teams 

We have organized a public Jakarta EE Specifications Calendar (public URL, iCal) to display all Jakarta EE specification project team meetings. Everyone interested is welcome to join the calls.

The Jakarta EE Platform team (at its meeting on Feb 9th, 2021) invited all specification project team members and leads to submit their release plans for review by April 15th, so that planning for the release after 9.1 can start. The response was great! The 25 individual specifications that have provided a plan review can be viewed here.

Work towards the next release is emerging, and I would like to draw your attention to the Jakarta EE Core Profile Creation and Plan Review. Please review it and join the Jakarta EE Platform team meetings to provide your input.

 

A statement of direction for Jakarta EE 10 is on its way!

The Jakarta EE Steering Committee has asked the Jakarta EE Platform team to formulate a statement of direction for release 10. You can join the discussion in this GitHub issue, but here are the main points:


A traditional roadmap with deliverables is still premature, so we need to highlight areas of focus that will build into an eventual roadmap. Areas of focus that need progress before a proper roadmap can be defined include:

  • Addressing the lack of standalone specification TCKs that can be composed into new platforms

  • Defining the makeup of the core profile

  • Defining how profile specifications can be released independently and what versioning would look like under this approach

  • Promoting individual specification releases

  • Addressing the integration of MicroProfile Config as Jakarta Config

  • Updating core profile specifications to enable build-time-capable implementations

  • Handling of optional specification features

  • Removing the current circularity between specifications at the TCK level

  • Defining the Java SE version and JPMS strategy

  • Improving architecture guidelines and addressing specification cohesion


Compatible Products and Compatible Implementations

Our list of Compatible Products, implementations of the Jakarta EE Platform and Web Profile, is growing, and not just for Jakarta EE 8 but also for Jakarta EE 9.

The current Jakarta EE 8 list is impressive

 

And the Jakarta EE 9 list is growing

 

Jakarta EE White Paper: Why Jakarta EE Is the Right Choice for Today's Java Applications

In collaboration with community leaders, we have published the Jakarta EE white paper “Why Jakarta EE Is the Right Choice for Today's Java Applications”. Please promote and share this white paper in your community; we appreciate your support in sharing it with others.

 

EclipseCon 2021 CFP is now open!

Please mark your calendars: EclipseCon 2021 takes place October 25th - 27th, 2021! The call for papers is now open, so do not miss this chance to showcase your work! We already have quite a few submitted talks related to Jakarta EE and cloud native technologies, and you can review them here.


Book your 2021 Jakarta EE Virtual Tour and Adopt-A-Spec

We are looking for the opportunity to virtually visit you, so don’t hesitate to get in touch (tanja.obradovic@eclipse-foundation.org) if you’d like to hear about Jakarta EE 9 and beyond.

We need help from the community! All JUGs out there, please choose the specification of your interest and adopt it. Here is the information about the Adopt-A-Spec program.

 

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:

·  Social media: Twitter, Facebook, LinkedIn Group

·  Mailing lists: jakarta.ee-community@eclipse.org, jakarta.ee-wg@eclipse.org, project mailing lists, slack workspace

·  Calendars: Jakarta EE Community Calendar, Jakarta EE Specification Meetings Calendar 

·  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, Hashtag Jakarta EE

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, and Eclipse Foundation events and conferences

 

You can find the complete list of channels here.

 

To help shape the future of open source, cloud native Java, get involved in the Jakarta EE Working Group.

 

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.

 


by Tanja Obradovic at May 04, 2021 08:58 PM

JUG Istanbul joins Jakarta EE Working Group!

by Tanja Obradovic at May 04, 2021 06:53 PM

We are happy to share the news with you: Java User Group Istanbul has been invited by the Jakarta EE Steering Committee to join the Working Group as a guest member. JUG Istanbul is a driving force behind the Jakarta One Livestream - Turkish event, and its members are interested in helping advance the following individual specifications: Jakarta Contexts and Dependency Injection and Jakarta Concurrency. Please share the news and give a warm welcome to JUG Istanbul!


by Tanja Obradovic at May 04, 2021 06:53 PM

Behind the Scene #4

April 29, 2021 10:00 AM

I am happy to present the new Sirius Web “Behind the scene” session. Here and now, Guillaume Coutable, consultant at Obeo, gives a demonstration of the list container support he is working on.

We are thankful to all our customers for their support of Sirius Web! See you next month for another “Behind the scene” session!


April 29, 2021 10:00 AM

My Pet Python Kata

by Donald Raab at April 26, 2021 04:39 PM

Learning a little Python using the Eclipse Collections Pet Kata

Eclipse Collections Pet Kata on GitHub (left) : Python implementation of Person class from Pet Kata

A time to learn Python

I’ve programmed in more than twenty programming languages over the past four decades. I’ve coded in Java for 20+ years, and am the creator of a Java collections framework called Eclipse Collections. I have never tried to learn Python before. That is, until now.

Why learn Python?

I am a Java Champion, and regularly teach and advocate for the Java programming language. My learning Python might seem a bit odd to some. In fact, after I posted several tweets about solving the Pet Kata using Python, some of my friends started pinging me on Twitter wondering if I was having a mid-life crisis or if I had been abducted. I tried to alleviate their concerns with a tweet.

Python is an older programming language than Java, which might surprise some folks. I often recommend that developers look at older languages to learn new techniques and approaches, and I try to practice what I preach.

I recently shared my experience returning to my nostalgic Smalltalk days and implementing the Deck of Cards Kata using the Pharo Smalltalk IDE. Programming in Smalltalk using unit tests was a tremendous amount of fun. You can see the results in the following blog.

A little Smalltalk for the soul

I thought that I might as well make the most of my recent visit to a dynamic language I knew well and see how quickly I would be able to code productively in Python. Just as I did with Smalltalk, I chose a code kata to implement. This time I went with the Pet Kata. I’ve often told folks they can use the Eclipse Collections Katas to learn any programming language, as long as there is a way to specify unit tests and implement a domain in the language.

I made the job of learning Python a lot easier by using the quite capable PyCharm IDE from JetBrains. PyCharm felt very familiar to me, because I have programmed in the IntelliJ IDE from JetBrains for almost twenty years.

The Domain

To start things off, I implemented the domain classes. I knew Python supported object-oriented programming, so I created three classes to represent Person, Pet, and PetType. A Person has zero or more Pets and each Pet has a PetType, which I implemented via an Enum, which is a type both Java and Python support.

This is how I implemented PetType.

PetType Enum

This is how I implemented the Pet class.

Pet Class

This is how I implemented the Person class.

Person class
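The code in those screenshots isn't reproduced here, but based on the description that follows, a minimal sketch of the three domain classes might look like this (method names such as add_pet and get_pet_types are illustrative choices, not necessarily the kata's):

```python
from enum import Enum


class PetType(Enum):
    # extend Enum by putting it in the parentheses after the class name
    CAT = 1
    DOG = 2
    HAMSTER = 3


class Pet:
    def __init__(self, name, pet_type, age):
        # instance variables come into existence here in __init__
        self.name = name
        self.pet_type = pet_type
        self.age = age


class Person:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name
        self.pets = []  # [] is the literal form of an empty list

    def add_pet(self, pet_type, name, age):
        self.pets.append(Pet(name, pet_type, age))
        return self

    def is_pet_person(self):
        return len(self.pets) > 0

    def get_pet_types(self):
        return list(map(lambda pet: pet.pet_type, self.pets))
```

Note how self appears as the first parameter of every instance method.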

You can compare the solutions for the domain classes in Python, to the solutions in Java using Eclipse Collections here.

What I learned in the domain classes

Whitespace matters in Python, and so do individual spaces. There are no curly braces like in Java, and Python is less forgiving than Smalltalk in terms of how code is formatted and edited.

You don’t define instance variables as you would in Java or Smalltalk, with a special declaration area. The variables come into existence as you define them in the __init__ method. Since Python is dynamic, there is no type to specify.

The self keyword is the same name Smalltalk uses for the instance, and is the equivalent of this in Java.

The [] is a literal form of an empty list. You can add things to a list using the append method.

You define a function using def, followed by the function name, and a parameter list in parentheses and then ending the definition line with a :. When you define an instance method, the first parameter in every method needs to be self.

To implement an Enum, you need to import the Enum type from the enum module or package (not too sure on the terminology here yet). You then have your type extend the Enum type, by putting the Enum type inside the parentheses after the class name, which is PetType in this case.

The lambda keyword is used to create a lambda, or anonymous function. The parameters are specified after the keyword, and separated from the expression with a :. I used lambdas with both map and filter functions.

Functions I used in the domain were any, len and map. This is one place where I think Python is missing some of the object-oriented goodness of encapsulated methods. I think any, len and map should be methods on the list type, not functions that accept an iterable type. Making them methods instead of functions would make them easier to discover: the IDE can help you with code assist if you type . after a type and show you a list of available methods. As it is, it was harder to discover these functions without asking Google questions like “How do I find the size of a list in Python?”
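A quick illustration of the function-versus-method difference (the data here is made up):

```python
pet_ages = [2, 5, 1]

# len, any, map and filter are free functions that accept an iterable,
# rather than methods on the list type
assert len(pet_ages) == 3
assert any(map(lambda age: age > 4, pet_ages))

doubled = list(map(lambda age: age * 2, pet_ages))   # map returns an iterator
assert doubled == [4, 10, 2]

young = list(filter(lambda age: age < 3, pet_ages))  # and so does filter
assert young == [2, 1]
```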

The Tests in the Kata

There are five separate exercises in the Pet Kata. I decided I would only implement the first four exercises, because that would be enough to get started and develop a feel for what it is like to program in Python.

I had to learn how to develop a unit test in Python, so I started Googling for the info. What I discovered is that unit testing in Python is somewhat similar to the unit testing I was familiar with in JUnit 3.x versions. Each test class has to extend unittest.TestCase, and each test method must be prefixed with the word test.

The assertions are available directly on the instance of the class using the self keyword. So to assert equality, you would use the method self.assertEqual().
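A minimal example of that shape (the class and method names here are placeholders):

```python
import unittest


class ExampleTest(unittest.TestCase):  # every test class extends TestCase
    def test_equality(self):           # test methods are prefixed with "test"
        self.assertEqual(1 + 1, 2)     # assertions live on self
```

Running the file with python -m unittest discovers and executes every test method.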

PetDomainForKata

Before I started writing the tests, I created the test data used in the tests. In Java, I put this code in an abstract class, but in Python I had trouble figuring out how to define an instance variable in a parent class and then extend the class and access the variable. So instead I decided to just create a function called get_people(), which I used in each test case to get the list of people with their pets.

Test data is a list of people with their pets and ages

You can compare the solutions for the test data setup in Python, to the solutions in Java using Eclipse Collections here.

Exercise 1 Test

The tests for exercise 1 in the Pet Kata are fairly straightforward. You need to learn how to transform and filter a list in this exercise.

Transforming and filtering lists using map and filter

In this test class, I learned how to use the functions map, filter, list, and len. Both map and filter return iterators, which then need to be converted to lists using the list function.

You can compare the solutions for exercise 1 in Python, to the solutions in Java using Eclipse Collections here.

Exercise 2 Test

Most of the tests for exercise 2 in the Pet Kata are also fairly straightforward.

Using any, all, sum, list and filter to complete tests in exercise 2

In this test class, I learned how to use the any, all and sum functions in addition to what I learned in exercise 1. I was surprised there was no count function, but I found out how to use sum with map and filter to accomplish what was needed.
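The sum-as-count idiom looks roughly like this (sample data invented for illustration):

```python
pet_ages = [1, 3, 2, 3]

assert any(age == 2 for age in pet_ages)  # at least one element matches
assert all(age > 0 for age in pet_ages)   # every element matches

# there is no count() function; mapping matches to 1 and summing counts them
count_of_threes = sum(map(lambda age: 1, filter(lambda age: age == 3, pet_ages)))
assert count_of_threes == 2
```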

The one test that was not straightforward was the one where I needed to use flatMap. Python does not have a function called flatMap; instead you need to use a function named chain from the itertools library along with map.

Learning how to flatten a person’s pet types using chain and map
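In sketch form (the nested lists stand in for each person's pet types):

```python
from itertools import chain

# the inner lists would come from mapping each person to their pet types
pet_types_per_person = [["cat", "dog"], ["hamster"], ["cat"]]

# chain concatenates the inner iterables, giving the effect of flatMap
flattened = list(chain(*pet_types_per_person))
assert flattened == ["cat", "dog", "hamster", "cat"]
```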

You can compare the solutions for exercise 2 in Python, to the solutions in Java using Eclipse Collections here.

Exercise 3 Test

The tests in exercise 3 are much harder to complete. This is because they do more complex things like counting and grouping things by other things.

Implementing countByEach, groupBy and groupByEach in Python

I had to use both the collections.Counter class and itertools.chain function in order to implement the method known in Eclipse Collections as countByEach. The method countByEach is a flatCollect followed by a countBy. In Eclipse Collections, countBy will return a Bag type. I learned that the equivalent type in Python is called Counter.

There is a groupBy function in itertools, but it didn’t quite behave as I had hoped. I could not find a simple way to transform the result from groupBy to a dictionary, so I followed the recommended imperative way of using a for loop and converting each group value to a list using the list function. It was interesting to see how to iterate over key and group together in the for loop.

Finally, I had to switch to a completely imperative approach with nested loops to convert the list of people into a dictionary of pet types mapped to sets of people.
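Both techniques can be sketched as follows (the data is made up; countByEach and groupBy name the Eclipse Collections methods being emulated):

```python
from collections import Counter
from itertools import chain, groupby

pet_types_per_person = [["cat", "dog"], ["cat"], ["dog", "cat"]]

# countByEach: flatten with chain, then count with Counter (Python's Bag)
counts = Counter(chain(*pet_types_per_person))
assert counts["cat"] == 3 and counts["dog"] == 2

# itertools.groupby expects its input sorted by the grouping key;
# converting each group value to a list requires an imperative for loop
words = ["ant", "apple", "bee"]  # already sorted by first letter
groups = {}
for letter, group in groupby(words, key=lambda word: word[0]):
    groups[letter] = list(group)
assert groups == {"a": ["ant", "apple"], "b": ["bee"]}
```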

You can compare the solutions for exercise 3 in Python, to the solutions in Java using Eclipse Collections here.

Exercise 4 Test

After exercises 2 and 3, I was worried exercise 4 was going to be even more difficult. It turned out not to be so hard.

Calculating the age statistics for Pets with min, max, sum and median

I couldn’t find a way to calculate min, max, sum and mean in Python using a single iteration like we do in Java using IntSummaryStatistics, but there are functions for min, max, sum and mean (you need to import the statistics module for mean). I was able to use any and all again, but I could not find an equivalent of none, so I used assertFalse with any.

Joining strings, most common, median

For the remaining tests I learned how to use the join method on a string along with map to get a string representation of the names of Bob Smith’s pets. I then used Counter along with a method named most_common to find the top three pets. Finally, I used the statistics.median function to calculate the median age of the pets.
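Those pieces fit together roughly like this (names and ages invented):

```python
from collections import Counter
from statistics import mean, median

pet_ages = [1, 2, 3, 4]
# no single-pass IntSummaryStatistics equivalent; call each function separately
assert (min(pet_ages), max(pet_ages), sum(pet_ages)) == (1, 4, 10)
assert mean(pet_ages) == 2.5
assert median(pet_ages) == 2.5

# join is a method on the separator string; map supplies the pet names
pet_names = ["Dolly", "Spot"]
assert " & ".join(map(str.upper, pet_names)) == "DOLLY & SPOT"

# Counter.most_common(n) returns the top n (value, count) pairs
top = Counter(["cat", "dog", "cat"]).most_common(1)
assert top == [("cat", 2)]
```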

You can compare the solutions for exercise 4 in Python, to the solutions in Java using Eclipse Collections here.

Lessons Learned

I learned a little bit about Python in this exercise. It was interesting to see how quickly I could get a bunch of tests written in Python up and running. Where I slowed down a bit was in learning whether I needed a function or a method to accomplish something. It was also challenging to understand what the type of something was so I could explore its API in more detail. I was able to leverage the debugger in PyCharm a bit, and it helped in some cases.

I was comparing and contrasting the Python solutions I came up with along with Java solutions using Eclipse Collections on Twitter. What I discovered along the way was that there were APIs in Eclipse Collections that weren’t used in the solutions that actually made things more concise. I submitted two pull requests to the Eclipse Collections Kata as a result. The additional methods I used to refactor some of the solutions were countByEach, flatCollectInt, and containsBy. These are more advanced “fused” APIs that have evolved over time. You can read more about the evolution of the containsBy “fused” API in the following blog.

Fusing methods for productivity

I hope you enjoyed this learning experiment implementing the Pet Kata in Python. I definitely feel like I have developed some basic understanding of the Python programming language as a result. And now that I have written this blog, I always have something I can refer back to.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at April 26, 2021 04:39 PM

Release 5.10

April 24, 2021 12:00 AM

New version 5.10 has been released.

Release is of Type A

Features

  • Logs setup/configuration
  • Database - Persist the last used datasource
  • Database - All procedure results are displayed
  • HANA SQL builder - Implement AlterSequenceBuilder
  • Git - Autofill
  • Git Notifications
  • Branding of the Help menu items
  • Monaco - Improved Code Completion
  • Separate common implementation of OData Engine
  • Add support for callbacks
  • OData - Add annotation for SAPData

Fixes

  • Redundant ?refreshToken= appended to URL
  • Files are not uploaded
  • Git Perspective Issue (Committed Code is not present in github)
  • Update the term SAP Cloud Platform
  • HanaSqlDialect - incorrect processing of exist
  • Save button not working
  • Monaco - Code completion suggestion not triggered by dot operator
  • HANA - Duplicate name for dirigible tables
  • Configurations - Java array is returned instead of JS array
  • Preview - Refresh/Set URL on Save File
  • Workspace - Add Dirigible-Editor header on create file
  • Missing some type mappings support from DB to EDM
  • Minor fixes

Statistics

  • 61K+ Users
  • 87K+ Sessions
  • 188 Countries
  • 430 Repositories in DirigibleLabs

Operational

Enjoy!


April 24, 2021 12:00 AM

Building a cmd-line application to notarize Apple applications using Quarkus and GraalVM Native Image support

by Tom Schindl at April 23, 2021 06:39 PM

So I’ve been searching for a long time for a small side project where I could give Quarkus’ Native-Image support a spin. While we are using Quarkus in JDK mode in almost all of our server applications there was no need yet to compile a native binary.

This week, though, I found the perfect use case: I’ve been banging my head against codesigning and notarizing an e(fx)clipse application (shipped with a JRE) all week.

Doing that requires executing a bunch of command-line utilities one by one. So I came up with the plan to write a Quarkus command-line application, compiled to a native executable, to automate this process a bit more. Yes, there are Go and Python solutions, and I could have simply written a shell script, but why not try something cooler?

The result of this work is a native OS X executable that lets me codesign, create a dmg/pkg, notarize and finally staple the result, as you can see from the screenshot below.


As of now this does not include anything special for Java applications, so it can be used for any application (I currently have an artificial restriction that you can only use .app).

All sources are available at https://github.com/BestSolution-at/mac-codesigner and I added a pre-release of the native executable. Miss a feature, found a bug? Feel free to file a ticket and provide a PR.


by Tom Schindl at April 23, 2021 06:39 PM

OSGi Working Group Settles into New Home at Eclipse Foundation

by Karsten Silz at April 23, 2021 09:00 AM

After shipping the OSGi Core Release 8 in December, the OSGi Working Group (WG) is now incubating at the Eclipse Foundation. The OSGi WG (previously named “OSGi Alliance”) announced the move to Eclipse last October. It has already ratified the charter, created two committees and two working groups, and migrated its code repositories.


by Karsten Silz at April 23, 2021 09:00 AM

JBoss Tools and Red Hat CodeReady Studio for Eclipse 2021-03

by jeffmaury at April 20, 2021 08:20 AM

JBoss Tools 4.19.0 and Red Hat CodeReady Studio 12.19 for Eclipse 2021-03 are here waiting for you. Check it out!

crstudio12

Installation

Red Hat CodeReady Studio comes with everything pre-bundled in its installer. Simply download it from our Red Hat CodeReady product page and run it like this:

java -jar codereadystudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) CodeReady Studio require a bit more:

This release requires at least Eclipse 4.19 (2021-03), but we recommend using the latest Eclipse 4.19 2021-03 JEE Bundle, since you then get most of the dependencies preinstalled.

Java 11 is now required to run Red Hat Developer Studio or JBoss Tools (this is a requirement from Eclipse 4.17), so make sure to select a Java 11 JDK in the installer. You can still work with pre-Java 11 JDKs/JREs and projects in the tool.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat CodeReady Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/photon/stable/updates/

What is new?

Our main focus for this release was improved tooling for the Quarkus framework, improvements for container-based development, and bug fixing.

OpenShift

Login to Developer Sandbox from the tooling

Red Hat provides an online OpenShift environment called Developer Sandbox that makes it easy for developers to build, test and deploy cloud native applications and microservices.

In order to use Developer Sandbox, you must have a Red Hat SSO account (which can be linked to social accounts like GitHub). Once you are logged in to Red Hat SSO, an environment will be provisioned for you in Developer Sandbox. The first time you try to log in to Developer Sandbox, however, your account needs to be verified (in order to prevent crypto miners and robots), so you will go through a verification phase where you will be asked to provide:

  • first your phone number and country code

  • then a verification code that you will receive on your smartphone.

So it is now possible to provision and log in to Developer Sandbox from the OpenShift tooling and connect it to the Developer Sandbox environment.

Open the OpenShift Application Explorer view (Window → Show View → Other…, enter open and double-click on OpenShift Application Explorer):

devsandbox1

Right click on the first node and select the Login context menu:

devsandbox2

In order to provision the Developer Sandbox environment, click on the Red Hat Developer Sandbox link: a browser window will open and you will be required to login to your Red Hat SSO account:

devsandbox3

Login to your account (please note that if you don’t have a Red Hat account, you can create a new one). Once you’re logged in, you should see the following window:

devsandbox4

Enter your country code (+XX) and phone number and click the Verify button:

You will be required to provide the verification code that you should have received on your phone:

devsandbox5

Once your Developer Sandbox environment is provisioned, you will see the following window:

devsandbox6

Click on the Next button to log in to your Developer Sandbox environment:

devsandbox7

Click on the DevSandbox link and log in with the same credentials: you will see the following window:

devsandbox8

Click on the Display Token link and then click on the Finish button; you should be back in the Login wizard:

devsandbox9

Please note that the URL and Token fields have been updated. Click the Finish button; the OpenShift Application Explorer will be updated with the Developer Sandbox URL, and if you expand it, you will see 3 namespaces/projects available for you to start playing with:

devsandbox10

You’re now ready to work against this environment for free!

Browser based login to an OpenShift cluster

When it comes to logging in to a cluster, OpenShift Tools supported two different authentication mechanisms:

  • user/password

  • token

The drawback is that this does not cover clusters where a more advanced and modern authentication infrastructure is in place. So it is now possible to log in to the cluster through an embedded web browser.

In order to use it, go to the Login context menu from the Application Explorer view:

weblogin1

Click on the Retrieve token button and an embedded web browser will be displayed:

weblogin2

Complete the workflow until you see a page that contains Display Token:

weblogin3

Click on Display Token:

The web browser is automatically closed and you’ll notice that the retrieved token has been set in the original dialog:

weblogin4

Devfile registries management

Since JBoss Tools 4.18.0.Final, the preferred way of developing components is based on a devfile, a YAML file that describes how to build the component and, if required, launch other containers. When you create a component, you need to specify a devfile that describes it. Either your component source contains its own devfile, or you need to pick a devfile that is related to your component. In the second case, OpenShift Tools supports devfile registries, which contain sets of different devfiles. There is a default registry (https://github.com/odo-devfiles/registry), but you may want to have your own registries. It is now possible to add and remove registries as you want.

The registries are displayed in the OpenShift Application Explorer under the Devfile registries node:

registries1

Please note that expanding the registry node will list all devfiles from that registry with a description:

registries2

A context menu on the Devfile registries node allows you to add new registries, and a context menu on a registry node allows you to delete it.

Devfile enhanced editing experience

Although devfile registries provide ready-to-use devfiles, there may be advanced cases where users need to write their own devfile. As the syntax is quite complex, the YAML editor has been enhanced to provide:

  • syntax validation

  • content assist

Support for Python based components

Python-based components were supported but debugging was not possible. This release brings integration between the Eclipse debugger and the Python runtime.

Quarkus

Support for environment variables in Run/debug Quarkus configurations

Environment variables are one way to override property values in the Quarkus properties file. It is now possible to specify environment variables in a Run/Debug Quarkus configuration.

Server Tools

WildFly 23 Server Adapter

A server adapter has been added to work with WildFly 23.

EAP 7.4 Beta Server Adapter

The server adapter has been adapted to work with EAP 7.4 Beta.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.30.Final and Hibernate Tools version 5.4.30.Final.

Platform

Views, Dialogs and Toolbar

Filter field for configuration details

A filter field has been added to the Installation Details > Configuration tab. This allows much faster lookup of specific information from the system details by showing only lines containing the filter criteria.

configuration filter
Preference to remember the last used page in search dialog

The Remember last used page check box was previously available from the Search dialog > Customize… > Search Page Selection dialog, which was unintuitive and hard to find. Now, the check box has moved to the Search preference page.

A new preference, Remember last used page, has been added to Preferences > General > Search. This new preference is enabled by default.

Text Editors

Horizontal Scrolling in Text Editor

You can now scroll horizontally in the Text Editor using Shift+Mouse Wheel and touchpad gestures on Windows. Horizontal scrolling with touchpad already works on Linux and macOS.

Open-with does not store the editor relationship anymore

The menu entry Open With ▸ … no longer stores the selected editor as the default editor for the selected file, as this was undesired in most cases and led to confusion. Also, removing this association was not easy for the end user. The user can still assign an editor to a certain file type via the Open With ▸ Other… dialog.

Debug

Find Next/Previous in Console View

In the Console view, you can repeat your last search in the forward or backward direction in the following ways:

  • Right-click in the Console view, then select Find Next or Find Previous.

  • Use the keyboard shortcuts Ctrl+K or Ctrl+Shift+K.

console find next find previous 45017
Disable All in Breakpoints view

In the Breakpoints view, you can disable all the breakpoints using the new Disable All context-menu option available on right-click.

disable allbreakpoints
Terminate descendants of operating-system processes launched by Eclipse

Some types of launch configurations start operating-system processes when launched from Eclipse. When you terminate the corresponding process before it completes (for example by clicking the terminate button, the red square), that operating-system process is destroyed. Now the descendants of that process, i.e. the child processes created by the main process and their children recursively, are destroyed too.

Termination of child processes of launched OS processes can be configured

Since Eclipse 4.18 child processes (descendants) of an operating system process launched from Eclipse are terminated too, when the launched process is terminated (for example by clicking the terminate button).

It is now possible to configure in the launch configuration whether the child processes of a launched process should also be terminated, or whether they should stay alive when the launched process is terminated. You can control this with the checkbox Terminate child processes if terminating the launched process in the Common tab of the Run/Debug Configurations dialog. By default this checkbox is selected, and child processes are terminated too.

configure child process termination

Preferences

External browsers on Windows

On Windows, the list of recognized External web browsers has been updated to include:

  • Microsoft Edge (%ProgramFiles(x86)%\Microsoft\Edge\Application\msedge.exe)

browsers windows update

Enable word wrap on console output

A new preference Enable word wrap is available in the Console preference page. This setting persists the current state of the "Word wrap" toggle on the console view between user sessions. By default, word wrapping is disabled on console output.

console preferences word wrap

Themes and Styling

New "System" theme

A new "System" theme is available in the Appearance preference page. This theme is built using system colors and, as a consequence, integrates well with any OS and OS theme.

This screenshot shows the System theme in action under several GTK themes:

GTK Adwaita:

systemTheme gtkAdawaita

GTK Adwaita Dark:

systemTheme gtkAdawaitaDark

GTK Kripton:

systemTheme gtkKripton

GTK Dark Mint:

systemTheme gtkDarkMint
Windows dark theme styles progress bars

The progress bar in the dark theme on Windows OS is now styled:

progressbar dark win32
Light theme on macOS

The Light theme for macOS has been updated to fit the latest macOS design.

Old:

macTheme light old

New:

macTheme light new

General Updates

Equinox Linux Security JNA Fragment

A new fragment has been added for Linux password security using JNA. This new fragment replaces the old JNI x86_64-specific fragment and supports all Linux architectures.

Ant 1.10.9

Eclipse has adopted Ant version 1.10.9.

Java Development Tools (JDT)

JUnit

JUnit 5.7.1

JUnit 5.7.1 is here and Eclipse JDT has been updated to use this version.

Java Editor

Quick assist to create try-with-resources

For expressions returning a type that is AutoCloseable there’s a new quick assist (Ctrl+1) available: Assign to new local variable in try-with-resources.

try with resources before

It creates a new try-with-resources block with the expression assigned to a resource variable. The variable type and name can be selected from a few suggestions:

try with resources after

The default hotkey sequence for this quick assist is Ctrl+2 followed by T.
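Since the before/after screenshots are not reproduced here, the transformation can be sketched in code. The BufferedReader expression and method names below are illustrative examples, not taken from the screenshots:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

class TryWithResourcesDemo {
    // Before: the AutoCloseable expression is evaluated but never closed.
    static String before() {
        try {
            BufferedReader reader = new BufferedReader(new StringReader("hello"));
            return reader.readLine();
        } catch (IOException e) {
            return null;
        }
    }

    // After "Assign to new local variable in try-with-resources": the
    // expression is assigned to a resource variable that is closed
    // automatically when the block exits.
    static String after() {
        try (BufferedReader reader = new BufferedReader(new StringReader("hello"))) {
            return reader.readLine();
        } catch (IOException e) {
            return null;
        }
    }
}
```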

Add catch clause to try-with-resources assists

There are multiple assists to surround auto-closeable statements in a try-with-resources statement including Surround with > Try-with-resources Block. Now, all forms will add a catch clause for any exceptions (such as IOException) thrown by the auto-close if not already handled via an existing catch clause or throws directive. In the case where the existing code catches or throws an exception that sub-classes the exceptions of the new catch clause, an additional catch clause will be added to rethrow the exception to ensure code logic remains consistent.

add catch clause to try with resources before
add catch clause to try with resources after
Quick fix to create permitted type declaration

You can use the following quick fixes (Ctrl+1) to create a new permitted class or interface declaration:

create permitted type declaration

The created type will declare the sealed type as its super type and it can be declared as final, non-sealed, or sealed with the available quick fixes for further inheritance control.

Java Feature clean ups

A new tab named Java Feature has been added to the Clean Up preferences. It lists the clean up options that introduce the use of language features from different Java versions. Relevant clean up options from other tabs have also been moved to this new tab.

You can use these clean ups while upgrading the Java version in your code.

java feature preferences
Pattern matching for instanceof clean up

A new clean up has been added that uses pattern matching for the instanceof operator when possible.

It is only applicable for Java 15 or higher when preview features are enabled.

To apply the clean up, select Pattern matching for instanceof check box on the Java Feature tab in your clean up profile.

pattern matching preferences

For the given code:

pattern matching before

One gets:

pattern matching after
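As a rough sketch of what this clean up does (the class and values are illustrative, and the "after" form needs Java 16+, or Java 15 with preview features, to compile):

```java
class PatternMatchingDemo {
    // Before: an instanceof check followed by an explicit cast.
    static int lengthBefore(Object obj) {
        if (obj instanceof String) {
            String s = (String) obj;
            return s.length();
        }
        return -1;
    }

    // After the clean up: the cast is folded into a pattern variable.
    static int lengthAfter(Object obj) {
        if (obj instanceof String s) {
            return s.length();
        }
        return -1;
    }
}
```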
Reduce indentation clean up

A new clean up has been added that removes useless indentation when the opposite workflow falls through.

When several blocks fall through, it reduces the block with the greatest indentation. It can negate an if condition if the else statements fall through.

To apply the clean up, select Reduce indentation when possible check box on the Code Style tab in your clean up profile.

reduce indentation preferences

For the given code:

reduce indentation before

One gets:

reduce indentation after
Extract increment clean up

A new clean up has been added that moves increment or decrement outside an expression.

A prefix increment/decrement (++i) first changes the value of the variable and then returns the updated value. A postfix increment/decrement (i++) first returns the original value and then changes the value of the variable.

But let’s look at this code:

int i = j++;

Most developers have a hard time remembering whether the increment or the assignment comes first. One way to make the code obvious is to write the increment/decrement in a dedicated statement:

int i = j;
j++;

And so for the prefix expressions:

int i = ++j;

…​it goes like this:

j++;
int i = j;

The clean up moves a prefix expression above the statement and a postfix expression below it. It does not move increments out of loop conditions, and it does not clean up several increments in the same statement. The increment/decrement is always rewritten as a postfix expression for standardization.

To apply the clean up, select Extract increment/decrement from statement check box on the Code Style tab in your clean up profile.

extract increment preferences

For the given code:

extract increment before

One gets:

extract increment after
Use Comparator.comparing() clean up

A new clean up has been added that replaces a plain comparator instance by a lambda expression passed to a Comparator.comparing() method.

The feature is enabled only with Java 8 or higher.

The Comparator type must be inferable from the destination of the comparator. The comparison algorithm must be standard and based on one field or method. The clean up can handle null values and reversed orders.

To apply the clean up, select Use Comparator.comparing() check box on the Java Feature tab in your clean up profile.

comparator comparing preferences

For the given code:

comparator comparing before

One gets:

comparator comparing after
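In lieu of the screenshots, the transformation can be sketched as follows (the Person class and field are illustrative examples):

```java
import java.util.Comparator;

class ComparatorDemo {
    static class Person {
        final String name;
        Person(String name) { this.name = name; }
        String getName() { return name; }
    }

    // Before: a hand-written comparator that compares a single field.
    static final Comparator<Person> BEFORE = new Comparator<Person>() {
        public int compare(Person a, Person b) {
            return a.getName().compareTo(b.getName());
        }
    };

    // After the clean up: the same ordering expressed with
    // Comparator.comparing() and a method reference (Java 8+).
    static final Comparator<Person> AFTER = Comparator.comparing(Person::getName);
}
```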
Multi-catch clean up

A new clean up has been added that converts catch clauses with the same body to Java 7’s multi-catch.

The feature is enabled only with Java 7 or higher.

To apply the clean up, select Use Multi-catch check box on the Java Feature tab in your clean up profile.

multi catch preferences

For the given code:

multi catch before

One gets:

multi catch after
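A minimal sketch of the multi-catch conversion (the exception types and arithmetic are illustrative, not the screenshot content):

```java
class MultiCatchDemo {
    // Before: two catch clauses with identical bodies.
    static String before(String s) {
        try {
            return String.valueOf(100 / Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return "error";
        } catch (ArithmeticException e) {
            return "error";
        }
    }

    // After the clean up: the identical handlers are merged into a
    // single multi-catch clause (Java 7+).
    static String after(String s) {
        try {
            return String.valueOf(100 / Integer.parseInt(s));
        } catch (NumberFormatException | ArithmeticException e) {
            return "error";
        }
    }
}
```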
Convert fields into local variables

A new clean up has been added that refactors a field into a local variable if its use is only local.

The previous value should not be read. The field should be private. The field should not be final. The field should be primitive. The field should not have annotations.

To apply the clean up, select Convert fields into local variables if the use is only local check box on the Optimization tab in your clean up profile.

convert fields preferences

For the given code:

convert fields before

One gets:

convert fields after
Static inner class clean up

A new clean up has been added that makes an inner class static if it doesn’t use members of the top-level class.

To apply the clean up, select Make inner classes static where possible check box on the Optimization tab in your clean up profile.

static inner class preferences

For the given code:

static inner class before

One gets:

static inner class after
Use String.replace() clean up

A new clean up has been added that replaces String.replaceAll() by String.replace() when the pattern is plain text.

The pattern must be constant.

To apply the clean up, select Use String.replace() instead of String.replaceAll() when no regex used check box on the Optimization tab in your clean up profile.

string replace preferences

For the given code:

string replace before

One gets:

string replace after
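The transformation can be sketched like this (the pattern and replacement are illustrative):

```java
class ReplaceDemo {
    // Before: replaceAll() compiles its first argument as a regex,
    // so the literal dot has to be escaped.
    static String before(String s) {
        return s.replaceAll("\\.", "/");
    }

    // After the clean up: replace() treats the pattern as plain text,
    // avoiding the regex engine entirely.
    static String after(String s) {
        return s.replace(".", "/");
    }
}
```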
Primitive comparison clean up

A new clean up has been added that replaces the compareTo() method by a comparison on primitive.

It improves the space and time performance. The compared value must be a primitive.

To apply the clean up, select Primitive comparison check box on the Optimization tab in your clean up profile.

primitive comparison preferences

For the given code:

primitive comparison before

One gets:

primitive comparison after
Primitive parsing clean up

A new clean up has been added that avoids creating a primitive wrapper when parsing a string.

The object should be used as a primitive and not as a wrapper.

To apply the clean up, select Primitive parsing check box on the Optimization tab in your clean up profile.

primitive parsing preferences

For the given code:

primitive parsing before

One gets:

primitive parsing after
Pull down common code from if/else statement clean up

A new clean up has been added that extracts common code from the end of an if / else if / else control flow.

Ultimately it removes the empty and passive if conditions.

The control flow should have an else clause and the duplicate code should not rely on variables declared in the block.

The statement matching performs a deep analysis. All the blocks should end with the same set of statements, or the blocks with different code should fall through with a jump statement (return, throw, continue or break).

To apply the clean up, select Pull down common code from if/else statement check box on the Duplicate code tab in your clean up profile.

control flow merge preferences

For the given code:

control flow merge before

One gets:

control flow merge after

And for the given code where all tails of blocks are identical except one block which falls through:

control flow merge jump statement before

The identical tails of blocks have been pulled down from the control flow and the falling through block has been left as it is:

control flow merge jump statement after
String.substring() clean up

A new clean up has been added that removes the second substring() parameter if this parameter is the length of the string, since that is the default value.

It must reference the same expression.

The expression must be passive.

To apply the clean up, select Redundant String.substring() parameter check box on the Unnecessary code tab in your clean up profile.

substring preferences

For the given code:

substring before

One gets:

substring after
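A minimal sketch of this clean up (the start index is an illustrative value):

```java
class SubstringDemo {
    // Before: the second parameter repeats the string's length,
    // which is already the default end index.
    static String before(String text) {
        return text.substring(2, text.length());
    }

    // After the clean up: the redundant parameter is dropped.
    static String after(String text) {
        return text.substring(2);
    }
}
```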
Unreachable block clean up

A new clean up has been added that detects two if conditions that are identical and removes the second one.

The conditions should be passive.

No exceptions should be expected.

It doesn’t create unreachable code below the if statement, which would cause a compile error. That is to say, it avoids the case where only the removed block doesn’t fall through, all the other cases fall through, there is an else clause (not only if/else clauses), and there is a statement after the control flow.

To apply the clean up, select Unreachable block check box on the Unnecessary code tab in your clean up profile.

unreachable block preferences

For the given code:

unreachable block before

One gets:

unreachable block after
Unlooped while clean up

A new clean up has been added that replaces a while loop that always terminates during the first iteration with an if statement.

The loop should not contain any continue statement.

The loop should only contain break statements without statements after.

To apply the clean up, select Convert loop into if when possible check box on the Unnecessary code tab in your clean up profile.

unlooped while preferences

For the given code:

unlooped while before

One gets:

unlooped while after
Source Fixing clean ups

A new tab named Source Fixing has been added to the Clean Up preferences. It lists the clean up options that fix the behavior of the code. The Compare with != 0 for bitwise expression clean up option from the Code Style tab has also been moved to this new tab.

Use these clean ups carefully, as you may get unexpected behavior. In particular, they may trigger zombie code. Zombie code is dead code that is dead only because an error occurs before it; the day someone fixes the error, the zombie code comes back to life and alters the behavior. Although most of the clean ups need review, these ones need testing.

source fixing preferences
Object.equals() on non null clean up

A new clean up has been added that inverts calls to Object.equals(Object) and String.equalsIgnoreCase(String) to avoid useless null pointer exceptions.

The caller must be nullable.

The parameter must not be nullable.

Beware! By avoiding null pointer exceptions, the behavior may change!

To apply the clean up, select Avoid Object.equals() or String.equalsIgnoreCase() on null objects check box on the Source Fixing tab in your clean up profile.

invert equals preferences

For the given code:

invert equals before

One gets:

invert equals after
Comparison to zero clean up

A new clean up has been added that fixes Comparable.compareTo() usage.

The code is not supposed to rely on the exact 1 and -1 values; it is only supposed to check for zero, or for a value less than or greater than zero.

Beware! The behavior may change if you implement a custom comparator!

To apply the clean up, select Compare to zero check box on the Source Fixing tab in your clean up profile.

comparison zero preferences

For the given code:

comparison zero before

One gets:

comparison zero after
Completion overwrites in Java editor

The Java Editor now uses Completion overwrites as the default. If Completion overwrites is on, the completion text replaces the characters following the caret position until the end of the word. If Completion inserts is on, the completion text is inserted at the caret position, so it never overwrites any existing text. Note that pressing Ctrl when applying a completion proposal toggles between the two insertion modes.

You can change the default in the Java > Editor > Content Assist preference page.

Insert best guessed parameters in Java editor

Instead of simply inserting the method parameter names as placeholders, when a method is completed, the Java Editor now inserts the best guessed parameters by default.

You can change the default in the Java > Editor > Content Assist preference page.

Quick assist to create new implementation

Invoking the Quick Assist (Ctrl+1) to create new implementation on an interface or abstract class declaration launches the New Java Class wizard:

quick assist interface
Quick fixes on permitted types

You can add sealed, non-sealed, or final modifiers on permitted type declarations, as applicable, using the new Quick Fixes (Ctrl+1).

On a permitted class declaration:

permitted class

On a permitted interface declaration:

permitted interface
Convert to switch expression

A new quick assist and clean up has been added that converts switch statements to switch expressions (Java 14 or higher) where possible.

Switch statements that use control statements such as nested switch statements, if/else blocks, or for/while loops are not converted, and the same applies to return/continue statements. All cases of the switch statement must either have a last assignment statement that sets the same variable/field as the other cases, or else end with a throw statement. Fall-through is allowed between cases, but only if there are no other statements in between. The switch statement must have a default case unless the switch expression is an enum type and all possible enum values are represented in the cases.

To apply the quick assist, press Ctrl+1 on the target switch statement and select Convert to switch expression, if offered.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Convert to switch expression check box on the Code Style tab (or the Java Feature tab starting from Eclipse 2021-03).

switch expressions preferences

For the given code:

switch expressions before

One gets:

switch expressions after
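The conversion can be sketched as follows (an illustrative example that requires Java 14+ to compile; the month/days logic is made up, not the screenshot content):

```java
class SwitchExpressionDemo {
    // Before: a switch statement where every case assigns the same variable.
    static int daysBefore(String month) {
        int days;
        switch (month) {
        case "FEB":
            days = 28;
            break;
        case "APR":
        case "JUN":
            days = 30;
            break;
        default:
            days = 31;
        }
        return days;
    }

    // After the conversion: a switch expression with arrow labels,
    // where each case yields a value directly.
    static int daysAfter(String month) {
        int days = switch (month) {
        case "FEB" -> 28;
        case "APR", "JUN" -> 30;
        default -> 31;
        };
        return days;
    }
}
```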
Uses the else-if pseudo keyword

A new clean up has been added that combines a nested if statement in an else block into an else if.

Beware of any comments after the else keyword: they will be lost.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Combine nested 'if' statement in 'else' block to 'else if' check box on the Code Style tab.

For the given code:

else if before

One gets:

else if after
Bitwise expressions in comparisons

A new clean up has been added that replaces the > operator with != when the comparison expression has a bitwise expression operand and a 0 operand.

This resolves an anti-pattern for such kind of comparisons, which can also be a bug when the bitwise expression is involving a negative constant value. This code smell is further described by the FindBugs project as bug description "BIT: Check for sign of bitwise operation".

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog select Compare with != 0 for bitwise expression on the Code Style tab (or the Source fixing tab starting from Eclipse 2021-03).

bitwise expressions preferences

For the given code:

bitwise expressions before

You get this after the clean up:

bitwise expressions after
Pull up assignment

A new clean up has been added that moves assignments inside an if condition above the if node.

It improves the readability of the code.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Pull up assignment check box on the Code Style tab.

For the given code:

pull up assignment before

One gets:

pull up assignment after
Use switch

A new clean up has been added that replaces if/else if/else blocks to use switch when possible.

It converts to switch when there are more than two cases.

It does not convert if the discriminant can be null; that is to say, it only applies to primitive types.

It performs a variable conflict analysis.

The case value can be literals or constants.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Convert if/else if/else chain to switch check box on the Code Style tab.

For the given code:

use switch before

One gets:

use switch after
Add elements in collections without loop

A new clean up has been added that uses Collection.addAll() or Collections.addAll() instead of a for loop.

It refactors for loops with index, for loops with iterator and foreach loops.

If the source is an array, the list is raw, and the Java version is 1.5 or higher, Arrays.asList() is used to handle type erasure. This does not decrease performance.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Add elements in collections without loop check box on the Code Style tab.

add remove preferences

For the given code:

add remove before

One gets:

add remove after
Use ternary operator

A new clean up has been added that replaces (X && Y) || (!X && Z) by X ? Y : Z.

The operands must be passive and boolean.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Replace (X && Y) || (!X && Z) by X ? Y : Z check box on the Duplicate Code tab.

ternary operator preferences

For the given code:

ternary operator before

One gets:

ternary operator after
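The rewrite can be sketched directly from the pattern in the option name (the boolean parameters are illustrative):

```java
class TernaryDemo {
    // Before: the duplicated-condition form (X && Y) || (!X && Z).
    static boolean before(boolean x, boolean y, boolean z) {
        return (x && y) || (!x && z);
    }

    // After the clean up: the equivalent ternary X ? Y : Z.
    static boolean after(boolean x, boolean y, boolean z) {
        return x ? y : z;
    }
}
```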
Use '==' or '^' on booleans

A new clean up has been added that replaces (X && !Y) || (!X && Y) by X ^ Y and replaces (X && Y) || (!X && !Y) by X == Y.

It only works on booleans.

It works with lazy or eager operators.

The operands must be passive.

It does not matter whether an operand is on the left or the right.

It performs a deep analysis of negated expressions.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Use '==' or '^' on booleans check box on the Duplicate Code tab.

For the given code:

xor before

One gets:

xor after
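Both rewrites named by the option can be sketched as follows (method names are illustrative):

```java
class BooleanOperatorDemo {
    // Before: (X && !Y) || (!X && Y) is true exactly when X and Y differ.
    static boolean differBefore(boolean x, boolean y) {
        return (x && !y) || (!x && y);
    }

    // After: the clean up rewrites it as X ^ Y.
    static boolean differAfter(boolean x, boolean y) {
        return x ^ y;
    }

    // Before: (X && Y) || (!X && !Y) is true exactly when X and Y agree.
    static boolean agreeBefore(boolean x, boolean y) {
        return (x && y) || (!x && !y);
    }

    // After: the clean up rewrites it as X == Y.
    static boolean agreeAfter(boolean x, boolean y) {
        return x == y;
    }
}
```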
Redundant falling through blocks

A new clean up has been added that detects a list of statements that ends with a jump statement (return, break, continue or throw), and has the same list of statements below that.

It detects similar statements. It also checks that the declarations of the variables in the statements are the same. It looks for redundant statements in if, else, catch and finally but not in loops.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Remove redundant end of block with jump statement check box on the Duplicate Code tab.

For the given code:

redundant falling blocks before

One gets:

redundant falling blocks after
Redundant if condition

A new clean up has been added that removes a condition on an else if when it is the negation of the condition of the previous if.

The condition must be passive. The removed code should not throw an expected exception. The cleanup uses a deep condition comparison algorithm.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Redundant if condition check box on the Duplicate Code tab.

For the given code:

if condition before

One gets:

if condition after
Use Objects.hash()

A new clean up has been added that rewrites an Eclipse-autogenerated hashCode() method into the Java 7 form that Eclipse would generate using Objects.hash().

As a reminder, you can autogenerate your hashCode() and equals() methods by right-clicking on your class, selecting Source, and clicking Generate hashCode() and equals() methods…​. Since Eclipse 2018-09, a checkbox allows you to generate these methods using the Java 7 API. This clean up rewrites your method as if it had been generated using this option.

This clean up does not regenerate your method from scratch; it rewrites it using a more modern syntax. That is to say, if your method is missing a field or deliberately does not process it, that field still won’t be processed.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Use Objects.hash() check box on the Unnecessary Code tab (or the Java Feature tab starting from Eclipse 2021-03).

hash preferences

For the given code:

hash before

One gets:

hash after
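A sketch of the two forms side by side (the Point class is an illustrative example; for plain int fields both formulas produce the same hash value):

```java
import java.util.Objects;

class HashDemo {
    static class Point {
        int x;
        int y;
        Point(int x, int y) { this.x = x; this.y = y; }

        // Before: the classic Eclipse-generated hashCode() body.
        int hashBefore() {
            final int prime = 31;
            int result = 1;
            result = prime * result + x;
            result = prime * result + y;
            return result;
        }

        // After the clean up: the Java 7 form using Objects.hash().
        int hashAfter() {
            return Objects.hash(x, y);
        }
    }
}
```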
Use String.join()

A new clean up has been added that uses String.join() when possible.

It detects all types of for loops. The delimiter can be added before or after. The condition can be a boolean or an index comparison.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Use String.join() check box on the Unnecessary Code tab (or the Java Feature tab starting from Eclipse 2021-03).

For the given code:

string join before

One gets:

string join after
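One common shape of this transformation (the flag-based loop below is an illustrative example, not the screenshot content):

```java
class StringJoinDemo {
    // Before: building a delimited string with an explicit loop
    // and a first-iteration flag.
    static String before(String[] parts) {
        StringBuilder sb = new StringBuilder();
        boolean first = true;
        for (String part : parts) {
            if (!first) {
                sb.append(", ");
            }
            first = false;
            sb.append(part);
        }
        return sb.toString();
    }

    // After the clean up: String.join() handles the delimiter (Java 8+).
    static String after(String[] parts) {
        return String.join(", ", parts);
    }
}
```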
Use Arrays.fill()

A new clean up has been added that replaces for loops to use Arrays.fill() where possible.

The value must be hard-coded.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Use Arrays.fill() check box on the Unnecessary Code tab.

For the given code:

arrays fill before

One gets:

arrays fill after
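A minimal sketch of the loop-to-Arrays.fill() rewrite (the fill value -1 is illustrative):

```java
import java.util.Arrays;

class ArraysFillDemo {
    // Before: a loop that assigns the same hard-coded value to every element.
    static int[] before(int size) {
        int[] data = new int[size];
        for (int i = 0; i < data.length; i++) {
            data[i] = -1;
        }
        return data;
    }

    // After the clean up: Arrays.fill() expresses the intent directly.
    static int[] after(int size) {
        int[] data = new int[size];
        Arrays.fill(data, -1);
        return data;
    }
}
```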
Evaluate without null check

A new clean up has been added that removes redundant null checks.

It removes null checks on a value before equals() or equalsIgnoreCase() method calls and before instanceof expressions.

It only removes redundant passive expressions.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Evaluate without null check check box on the Unnecessary Code tab.

For the given code:

redundant null check before

One gets:

redundant null check after
Avoid double negation

A new clean up has been added that reduces double negation in boolean expressions.

It removes the negations on both operands of an equality/difference operation.

It prefers an equality/difference operation over a negated operand.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Double negation check box on the Unnecessary Code tab.

For the given code:

double negation before

One gets:

double negation after
Redundant comparison statement

A new clean up has been added that removes useless bad-value checks before assignments or return statements. Such checks compare an expression against a bad value, then assign either the bad value or the expression depending on the result of the check. It is simpler to assign the expression directly.

The expression should be passive.

The excluded value should be hard coded.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Remove redundant comparison statement check box on the Unnecessary Code tab.

For the given code:

redundant comparison statement before

One gets:

redundant comparison statement after
Unnecessary super() call

A new clean up has been added that removes calls to the super constructor with no arguments.

Such a call is redundant. See JLS section 12.5 for more info.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Remove redundant super() call in constructor check box on the Unnecessary Code tab.

For the given code:

redundant super before

One gets:

redundant super after
Initialize collection at creation

A new clean up has been added that replaces creating a new Collection and then invoking Collection.addAll() on it with creating the new Collection with the other Collection as a parameter.

Only well-known collection classes are refactored, to avoid behavior changes. The clean up is enabled only if there are no meaningful instantiation parameters.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Initialize collection at creation check box on the Unnecessary Code tab.

For the given code:

collection cloning before

One gets:

collection cloning after
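The two-step versus one-step construction can be sketched as follows (the list contents are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class CollectionInitDemo {
    static final List<String> SOURCE = Arrays.asList("a", "b");

    // Before: create the collection, then copy into it with addAll().
    static List<String> before() {
        List<String> copy = new ArrayList<>();
        copy.addAll(SOURCE);
        return copy;
    }

    // After the clean up: the copy constructor does both steps at once.
    static List<String> after() {
        return new ArrayList<>(SOURCE);
    }
}
```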
Initialize map at creation

A new clean up has been added that replaces creating a new Map, then invoking Map.putAll() on it, by creating the new Map with the other Map as parameter.

Only well-known map classes are refactored, to avoid behavior changes. The clean up is enabled only if there are no meaningful instantiation parameters.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Initialize map at creation check box on the Unnecessary Code tab.

For the given code:

map cloning before

One gets:

map cloning after
Remove overridden assignment

A new clean up has been added that removes passive assignment when the variable is reassigned before being read.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Remove overridden assignment check box on the Unnecessary Code tab.

For the given code:

overridden assignment before

One gets:

overridden assignment after
Raise embedded if into parent if

A new clean up has been added that merges inner if statement into the parent if statement.

The cleanup checks that there is no else statement.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Raise embedded if into parent if check box on the Unnecessary Code tab.

For the given code:

embedded if before

One gets:

embedded if after
Redundant return

A new clean up has been added that removes useless lone return at the end of a method or lambda.

The cleanup checks that there is no value on the return statement.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Remove useless return check box on the Unnecessary Code tab.

For the given code:

redundant return before

One gets:

redundant return after
Redundant continue

A new clean up has been added that removes useless lone continue at the end of a loop.

A continue statement at the end of a loop is removed. A continue statement at the end of a control statement is removed if the control statement is at the end of a loop. A continue statement is kept if it has a label.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Remove useless continue check box on the Unnecessary Code tab.

For the given code:

redundant continue before

One gets:

redundant continue after
Use try-with-resource

A new clean up has been added that changes code to make use of the Java 7 try-with-resources feature. In particular, it removes now-useless finally clauses.

It may move an inner closeable assignment as a resource. It handles finally with a simple close() invocation, a null-check and remaining statements below.

It is only enabled from Java 7 and it also handles the Java 9 syntax.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Use try-with-resource check box on the Unnecessary Code tab (or the Java Feature tab starting from Eclipse 2021-03).

For the given code:

try with resource before

One gets:

try with resource after
Exit loop earlier

A new clean up has been added that adds a break to avoid passive for loop iterations.

The inner assignments must not be followed by different assignments (ones that assign other values or assign into other variables).

There must be no side effects after the first assignments.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Exit loop earlier check box on the Optimization tab.

For the given code:

break loop before

One gets:

break loop after
Use StringBuilder

A new clean up has been added that replaces String concatenation by StringBuilder when possible.

For Java 1.4 and earlier, it uses StringBuffer instead.

It only replaces concatenation spread over several statements, and the concatenation should have more than two pieces.

The variable should only be concatenated, and the resulting string should be retrieved once.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Replace String concatenation by StringBuilder check box on the Optimization tab.

For the given code:

stringbuilder before

One gets:

stringbuilder after
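A sketch of the concatenation-to-StringBuilder rewrite (the word-joining logic is an illustrative example):

```java
class StringBuilderDemo {
    // Before: repeated String concatenation across several statements,
    // which creates an intermediate String on every +=.
    static String before(String[] words) {
        String text = "";
        for (String word : words) {
            text += word;
            text += " ";
        }
        return text.trim();
    }

    // After the clean up: a StringBuilder accumulates the pieces,
    // and the String is retrieved once at the end.
    static String after(String[] words) {
        StringBuilder text = new StringBuilder();
        for (String word : words) {
            text.append(word);
            text.append(" ");
        }
        return text.toString().trim();
    }
}
```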
Primitive serialization

A new clean up has been added that replaces boxing a primitive just to serialize it as a string with a call to the wrapper type’s static toString() method.

It works for all the primitive types: boolean, char, byte, short, int, long, float and double.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Primitive serialization check box on the Optimization tab.

For the given code:

primitive serialization before

One gets:

primitive serialization after
Prefer boolean literal

A new clean up has been added that replaces Boolean.TRUE/Boolean.FALSE by true/false when used as primitive.

To apply the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog, select Prefer boolean literals check box on the Optimization tab.

boolean literal preferences

For the given code:

boolean literal before

One gets:

boolean literal after
Diamond operator <> (Remove redundant type arguments)

The clean up Remove redundant type arguments has been renamed Use diamond operator and is still available in the Unnecessary Code tab in Eclipse 2020-12.

The clean up will be moved to the future Java Feature tab in Eclipse 2021-03.

Java Views and Dialogs

A new preference option has been added and enabled by default: Preferences > Java > Enable parallel index search. Depending on the available hardware, this option should improve performance for all index based Java search operations, but could also lead to possible regressions. To switch back to the old sequential index search, turn this option off:

parallel index search
Coloring restricted identifiers

A new option named Restricted identifiers has been added under the Java category in the Java > Editor > Syntax Coloring preference page.

Some identifiers (e.g. var, yield, and record) are restricted identifiers because they are not allowed in some contexts. This new option controls the semantic highlighting of such identifiers.

restricted identifier preference
Externally annotate sources

The concept of external null annotations has been extended to apply to source folders, too.

External annotations were introduced in Eclipse 4.5 in order to overlay not-editable library classes with null annotations to specify the null contract against which library calls should be analysed. You can now apply the same concept for another kind of classes that should not be edited: generated source code.

In the Java Build Path dialog, source folders now also have an External annotations node where a path to Eclipse External Annotation files (.eea) can be configured.

annotate sources config

Given a project that is configured for annotation based null analysis, and given a Java class inside a source folder configured for external annotations, the editor now offers a quick assist (Ctrl+1) for annotating individual type references in the signatures of methods and fields.

annotate sources assist

The selected option will record that the return type should be interpreted as @NonNull List<Attribute> (the popup to the right shows the internal format in which this annotation will be stored in an .eea file). With this annotation in place, the annotated signature will be shown in hovers and will be used for null analysis:

annotate sources effect
static import org.mockito.Mockito.* available as favorite

Imports for static org.mockito.Mockito.* are added to the Java favorites in the preferences under Java > Editor > Content Assist > Favorites. This way, the Organize Imports action in the IDE will automatically add static imports for this class when you use the Mockito library in your tests.

Fine-grained search for permitted types

You can perform a fine-grained search for permitted type declarations in the Search dialog (Ctrl+H) > Java Search > Limit To > Match Locations with the new option:

search permitted type
Sort library entries alphabetically in Package Explorer enabled by default

The Preferences > Java > Appearance > Sort library entries alphabetically in Package Explorer option is now enabled by default. This makes it easier for you to see whether a library is available or not.

If you want to see the order in which the libraries are added to the classpath, e.g. to understand classpath loading issues, you can disable the preference.

Debug

Toggle tracepoints in editor ruler

A new Toggle Tracepoint context-menu entry has been added to the Java Editor line ruler. Both Toggle Tracepoint options, i.e. the new context-menu entry and the existing option under the Run menu, have a new icon and are now available for Java class files as well as Java source files.

toggle tracepoints
Toggle breakpoint on a list of methods including abstract method

You can now Toggle Method Breakpoint on a list of methods which includes an abstract method.

debug toggle breakpoint
Support for @argfiles when launching

A new check box has been added to the Arguments tab of Java-based launch configurations (Java Application, JUnit, and others) for writing arguments into an @argfile. It is disabled below Java 9 and can be enabled for Java programs launched with Java 9 and above.

launch with argfile
Stabilized logical structures in Variables view with active GC

The Debug view no longer breaks when logical structures are shown while the application’s garbage collector is active (com.sun.jdi.ObjectCollectedException occurred while retrieving value).

Java Formatter

Annotations wrapping

The formatter now allows more control over how multiple annotations on a single element should be divided into lines. Previously, they could either all be placed in a single line along with the annotated element, or each in a separate line. The settings that controlled this behavior (in the New Lines > After annotations section) now only control a line break between the last annotation and the annotated element. Line breaks between annotations are controlled by a new group of settings in the Line Wrapping > Wrapping Settings > Annotations section.

Just like with standard wrapping settings, they can be set to keep everything in a single line (Do not wrap), each annotation in a separate line (Wrap all elements), or only break lines that exceed the width limit (Wrap where necessary). The last option along with the Never join already wrapped lines setting effectively means manual control over each case. The annotation wrapping settings differ from other wrapping settings in that the indentation control is not available.

The formatter configuration sections can be found in the Profile Editor (Preferences > Java > Code Style > Formatter > Edit…​).

formatter wrap annotations
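As an illustration (the screenshot is not reproduced here, and the class is hypothetical), with Do not wrap the annotations stay on one line with the annotated element, e.g. `@Deprecated @SuppressWarnings("unchecked") static String convert()`, while Wrap all elements places each annotation on its own line:

```java
public class AnnotationWrappingExample {

    // "Wrap all elements": each annotation on a separate line,
    // followed by the annotated element.
    @Deprecated
    @SuppressWarnings("unchecked")
    static String convert() {
        return "converted";
    }
}
```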

And more…​

You can find more noteworthy updates on this page.

What is next?

With JBoss Tools 4.19.0 and Red Hat CodeReady Studio 12.19 out, we are already working on the next release.

Enjoy!

Jeff Maury


by jeffmaury at April 20, 2021 08:20 AM

The Desktop Don Reference

by Donald Raab at April 18, 2021 05:44 AM

Everything I know about software development in a single page of quotes.

Goat Fell, Isle of Arran — Scotland — Photo taken by Donald Raab in 2004

Background

At the end of 2003, I accepted an opportunity to go on an extended business trip with my family to England. We moved to London in January of 2004, and returned back to the states at the end of the same year. Before I left, I decided I wanted to distill everything I knew about software development down to a set of memorable quotes. I used to read a lot of books. Most of the books I read were technical, but some were about leadership and project management. I also had some great mentors in the 1990s who left an impression on me with quotes that they would reference. I had to do some research to find sources for the quotes I included in the “Desktop Don Reference” (DDR), as none of them are my own. Some of the quotes are so common though, that I have been unable to identify an original source and have left them noted with a question mark.

Once I had my set of quotes, I organized them into seven categories, of three quotes each. This was very important, as you will understand once you read the first quote. I wanted the quotes to fit on a single printed page, with a reasonable sized font. I left printed copies with my team members at the time and told them if they ever had any questions for me while I was in London and if they couldn’t reach me, they could just refer to this page. I told them any advice I would give them would probably be found on this page of quotes.

Over the years, developers have asked me for this reference. I would smile any time I would see a developer that had the DDR hanging in their cubicle. I’m including the categories and quotes from the original DDR below.

Desktop Don Reference

Simplicity

The human mind can comprehend seven plus or minus two things at a time. 1
Things should be as simple as possible, but not any simpler. 2
Keep it simple, stupid. 3

Quality

Fix broken windows when you find them. 4
Premature optimization is the root of all evil. 5
Do things once and only once. 6

Process

Make it work, make it right, make it fast. 6
If it hurts when you touch it, then don’t touch it. 7
Plan is nothing, planning is everything. 8

Time

Slow down to speed up. 9
Thinking saves time. 9
Modeling saves time. 9

Teamwork

None of us is as smart as all of us. 10
It’s not always what you know, it’s often who you know. ?
Help is a four-letter word you shouldn’t be afraid to use. ?

Education

Learn something new every day. ?
You can’t listen when you’re talking. ?
There is always someone smarter — find them and learn from them! ?

The Future

You aren’t gonna need it. (YAGNI) 6
Those who do not study history are doomed to repeat it. 11
The best way to predict the future is to invent it. 12

Note: If I could predict the future, I wouldn’t need to work.

Quote attribution

1 George A. Miller

2 Albert Einstein

3 Kelly Johnson

4 The Pragmatic Programmers — Dave Thomas and Andy Hunt

5 Donald Knuth

6 Kent Beck

7 Mom

8 Dwight D. Eisenhower

9 Larry Constantine, from “Beyond Chaos”

10 Kenneth H. Blanchard

11 George Santayana

12 Alan Kay

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at April 18, 2021 05:44 AM

A diagram editor framework for VS Code

by Jonas Helming and Maximilian Koegel at April 16, 2021 07:28 AM

Do you want to extend VS Code with a custom diagram editor? In this article we will introduce you to Eclipse...

The post A diagram editor framework for VS Code appeared first on EclipseSource.


by Jonas Helming and Maximilian Koegel at April 16, 2021 07:28 AM

Eclipse Theia Blueprint Now Available for Download

by Brian King at April 12, 2021 12:15 PM

We are pleased to announce the Alpha release of Eclipse Theia Blueprint, a downloadable template tool for Eclipse Theia, including desktop installers for all major operating systems. It can be used to evaluate the capabilities of Eclipse Theia on the desktop as well as to serve as a “blueprint” upon which to build a custom product based on Eclipse Theia. It has recently been contributed to the Eclipse Theia open source project and is based on work by Rob Moran, STMicroelectronics and EclipseSource.

Eclipse Theia BluePrint Logo

Picture of download links

Download Theia Blueprint for All Major Operating Systems

Eclipse Theia is a framework for creating web-based tools and IDEs. Consider it a more open and flexible alternative to VS Code. Eclipse Theia shares quite a number of core concepts with VS Code (including the language server protocol and the UX design), and it reuses certain open VS Code components, such as the code editor.
 
There are also some clear differentiators between Eclipse Theia and VS Code. Eclipse Theia is a flexible base upon which to build any kind of domain-specific tool. This includes IDEs, but is not limited to that use case (see this comparison between Theia and VS Code for additional information on this topic). Products built on Eclipse Theia can be deployed in the cloud as browser applications, but also as desktop applications via Electron.

It’s important to note that Eclipse Theia is not a product in and of itself. Rather, it is a platform upon which products can be built. A good way to experience Eclipse Theia is to use one of the products or projects that have already adopted it in their own product offerings. This list includes:

  • Eclipse Che: An open source workspace server using a tailored Eclipse Theia product as the default IDE
  • Google Cloud Shell: Google has adopted Eclipse Theia for their Cloud Shell Editor
  • GitPod: A service for online coding using Eclipse Theia as an IDE
  • Mbed Studio: A free IDE for Arm Mbed OS application and library development, and one of the first publicly available desktop products based on Eclipse Theia
  • SAP Business Application Studio: the next generation of SAP Web IDE
  • Yangster: An IDE for the Yang language
  • Coffee editor example: An open source example for a domain-specific development environment based on Eclipse Theia, including an online demonstration
  • … and a growing list of other products adopting Theia

Alternatively, you can always build and start Eclipse Theia yourself.
 
While there are several ways to “use” Eclipse Theia, one of the most commonly requested features for a long time has been a downloadable desktop version. This makes sense from a user and adopter point of view, especially if you want the vanilla Eclipse Theia. Furthermore, a template product based on Theia would be a nice starting point for building any custom product. This is exactly what Eclipse Theia Blueprint provides!

Eclipse Theia Blueprint is a downloadable, installable template tool for evaluating Theia capabilities on the desktop and to serve as a basis for building custom products.

Eclipse Theia Blueprint Application Window

Theia Blueprint Application Window (click image to open full-size in new window)

Theia Blueprint assembles a selection of Eclipse Theia extensions (i.e. features), including a selection of most commonly used VS Code extensions from the Open VSX Registry. It provides everything as a desktop application, along with installers for the various operating systems. Theia Blueprint essentially bundles things that are already there and makes them easier to consume. All build scripts of Theia Blueprint are part of the open source project, as well as documentation on how to customize them to your own needs. Theia Blueprint is a good foundation upon which to define and build your own desktop tool.

While Theia Blueprint offers many advantages to developers, it is also important to state what Theia Blueprint is not:

Eclipse Theia Blueprint is not a fully polished and tested IDE product meant to replace VS Code, the Eclipse IDE or any other existing production IDEs.

This is important to keep in mind in terms of expectations. At the moment, there is no dedicated product team to ensure the quality of the installable product. Theia Blueprint simply  bundles the latest release of various components. While the quality of these underlying components is good, some of them may not be as polished as commercially driven alternatives.

That said, with significant community interest and participation in driving Theia Blueprint forward, turning it into a more production-ready IDE is certainly within reach. Initially, the most important contributions are bug reports and feature requests, e.g. about components to be included in the product.

Eclipse Theia Blueprint is now available as an Alpha version and can be downloaded here. If you want to use it as a template for building your own product, please see the documentation as well as the sources on Github.

To get a detailed understanding of what the Theia Blueprint product currently consists of, please see the About dialog in the application. Essentially, a set of the most commonly used components was selected, including language support for TypeScript and Java, as well as some common file types such as CSS, Markdown, or YAML.

Eclipse Theia Blueprint features are based on Theia and the included extensions/plugins. For bugs in Theia please consider opening an issue in the Theia project on Github.

Eclipse Theia Blueprint only packages existing functionality and provides an installable version. If you believe there is a mistake in packaging, something needs to be added to the packaging or the installers do not work properly, please open an issue on Github.

About Eclipse Theia Blueprint

About Theia Blueprint, accessed via the Help menu (click image to open full-size in new window)

We hope you find Theia Blueprint to be useful and agree with us that it fills an important gap by making it much simpler to evaluate Eclipse Theia on the desktop, while also being a template upon which to build new desktop products. Please keep in mind that Eclipse Theia-based products can also be deployed in the cloud and accessed via a browser.

Special thanks to Rob Moran, STMicroelectronics, and EclipseSource for their significant contributions to this project and the Eclipse Theia community. It is a great example of how open source collaboration facilitates innovation within the Eclipse Cloud Development Tools working group. As with all Eclipse open source projects, Theia Blueprint is open for any contributions and additions! If you would like to learn more about the Eclipse Cloud DevTools working group, please reach out to us here. To stay up-to-date, sign up for our email list here.


by Brian King at April 12, 2021 12:15 PM

Three Good Reasons to Complete the Jakarta EE Developer Survey

by Thabang Mashologu at April 07, 2021 01:13 PM

Surveys often seem to come around when we’re busiest and feel we can’t afford to take time to answer questions. However, a few minutes of your time can have important benefits down the road to everyone in the Java ecosystem. Now in its fourth year, the Jakarta EE Developer Survey is open until May 31.

Last year, more than 2,200 software developers, architects, and decision-makers around the world responded to the survey, an increase of almost 20 percent over the previous year. Here are three reasons everyone in the Java community should take a few minutes to provide their input.

1. Discover the Latest Trends in Tools and Technologies

By sharing details about the Java framework, JDK, runtime, IDE, and perhaps other languages you’re using to build cloud native applications, you can help paint a more accurate picture of which technologies and tools are gaining ground, and which are losing favor.
 
Understanding these trends may lead you to explore the benefits of other frameworks, runtimes, and tools that deliver capabilities you don’t have today and can help accelerate your cloud evolution.

2. Gain Insight Into Architectural Approaches and Cloud Evolution Strategies

In the last few years, there’s been quite a bit of discussion about the benefits of various strategies to implement Java systems in the cloud. Interestingly, last year’s Jakarta EE Developer Survey revealed that the use of monolithic architectures jumped from 13 percent in 2019 to 25 percent in 2020.
 
With input about your architectural approach and timeline for evolving Java systems to the cloud, everyone in the Java ecosystem will have better visibility into ongoing strategies to:

  • “Lift and shift” legacy applications to the cloud
  • Re-architect legacy applications as microservices
  • Create new cloud native applications from the ground up

This visibility could very well influence your cloud evolution strategy and approach.

3. Influence Jakarta EE Priorities

The Jakarta EE Developer Survey is a great opportunity to indicate how you would like Jakarta EE to evolve to meet your cloud requirements.
 
Maybe there are certain features and functions you need Jakarta EE specifications to provide. Maybe you’re looking for better support for microservices or tighter integration with container orchestration technologies. Perhaps you’re looking for a faster pace of innovation. Or, perhaps you’re looking for something else altogether.
 
Whatever your priorities are, the Jakarta EE community needs to know so they can make informed decisions that are aligned as closely as possible with the priorities of developers globally.

Start the Survey Now

The more people who take a few minutes to complete the survey, the more accurate the results will be. Make sure your point of view is included.

To access the survey, click here.


by Thabang Mashologu at April 07, 2021 01:13 PM

Retrospective of an Old Man

by Stephan Herrmann at April 05, 2021 05:45 PM

Last summer I dropped my pen concerning contributions for Eclipse JDT. I never made a public announcement about this, but half a year later I started to think: doesn’t it look weird to receive a Lifetime Achievement Award and then run off without even saying thanks for all the fish? Shouldn’t I at least try to explain what happened? I soon realized that writing a final post to balance accounts with Eclipse would neither be easy nor desirable. Hence the idea, to step back more than a couple of steps, and put my observations on the table in smaller chunks. Hopefully this will allow me to describe things calmly, perhaps there’s even an interesting conclusion to be drawn, but I’ll try to leave that to readers as much as I can.

Prelude

While I’m not yet preparing for retirement, let me illustrate the long road that led me where I am today: I have always had a strong interest in software tools, and while still in academia (during the 1990s) my first significant development task was providing a specialized development environment (for a “hybrid specification language” if you will). That environment was based on what I felt to be modern at that time: XEmacs (remember: “Emacs Makes A Computer Slow”, the root cause being: “Eight Megabytes And Continuously Swapping”). I vaguely remember a little time later I was adventurous and installed an early version of NetBeans. Even though the memory of my machine was upped (was it already 128 MB?), that encounter is remembered as surpassing the bad experience of Emacs. I never got anything done with it.

A central part of my academic activity was programming language development in the wider area of Aspect Oriented Software Development. For pragmatic reasons (and against relevant advice by Gilad Bracha) I chose Java as the base language to be extended to become ObjectTeams/Java, later rebranded as OT/J. I owe much to two students whose final projects (“Diplomarbeit”) were devoted to two successive iterations of the OT/J compiler. One student modified javac version 1.3 (the implementation by Martin Odersky, adopted by Sun just shortly before). It was this student who first mentioned Eclipse to me, and in fact he was using Eclipse for his work. I still have a scribbled note from July 2002 “I finally installed Eclipse” – what would have been some version 2.0.x.

One of the reasons for moving forward after the javac-based compiler was: licensing. Once it dawned on me that Eclipse contains an open-source Java compiler with no legal restrictions regarding our modifications, I started to dream about more, not just a compiler but an entire IDE for Object Teams! As a first step, another student was assigned the task to “port” our compiler modifications from javac to ecj. Some joint debugging sessions (late in 2002?) with him were my first encounters with the code base of Eclipse JDT.

First Encounters with Eclipse

For a long period Eclipse to me was (a) a development environment I was eager to learn, and (b) a huge code base in CVS to slowly wrap our heads around and coerce into what we wanted it to be. Surely, we were overwhelmed at first, but once we had funding for our project, a nice little crowd of researchers and students, we gradually munched our way through the big pile.

I was quite excited, when in 2004 I spotted a bug in the compiler, reported it in bugzilla (only to learn, that it had already been fixed 🙂 ). It took two more years until the first relevant encounter: I had spotted another compiler bug, which was then tagged as a greatbug. This earned me my first Eclipse T-shirt (“I helped make Callisto a better place“). It’s quite washed out, but I still highly value it (and I know exactly one more person owning the same T-shirt, hi Ed 🙂 ).

Soon after, I met some of my role models in person: at ECOOP 2006 in Nantes, the Eclipse foundation held a special workshop called “eTX – Eclipse Technology Exchange“, which actually was a superb opportunity for people from academia to connect with folks at Eclipse. I specifically recall inspiring chats with Jerome Lanneluc (an author of JDT’s Java Model) and Martin Aeschlimann (JDT/UI). That’s when I learned how welcoming the Eclipse community is.

During the following years, I attended my first Eclipse summits / conferences and such. IIRC I met Philippe Mulet twice. I admired him immensely. Not only was he lead developer of JDT/Core, responsible for building much of the great stuff in the first place. Also he had just gone through the exercise of moving JDT from Java 1.4 to Java 5, a task that cannot be overestimated. Having spoken to Philippe is one of the reasons why I consider myself a member of a second generation at Eclipse: a generation that still connects to the initial era, though not having been part of it.

End of an Era

For me, no other person represents the initial era of Eclipse as much as Dani did. That era has come to an end (silence).

Still a few people from the first generation are around.

Tom Watson is as firm as a rock in maintaining Equinox, with no sign of fatigue. I think he really is up for an award.

Olivier Thomann (first commit 2002) still responsibly handles issues in a few weird areas of JDT (notably: computation of StackMaps, and unicode handling).

John Arthorne has been seen occasionally. He was the one who long, long time ago explained to me the joke behind package org.eclipse.core.internal.watson (it’s elementary).

What was it like to join the community?

It wasn’t until 2010 that I became a committer for JDT/Core. I was the first committer for JDT who was not paid by IBM. I was immensely flattered by the offer. So getting into the inner circle took time: 6 years from first bug report until committer status, while all the time I was more or less actively hacking on our fork of JDT. This is to say: I was engaged with the code all the time. Did I expect things to move faster? No.

Even after getting committer status, for several years mutual reviews of patches among the team were the norm. The leads (first Olivier, then Srikanth) were quite strict in this. Admittedly, I had to get used to that – my patches waiting for reviews, my own development time split between the “real work” and “boring” reviews. In retrospect the safety net of peer reviews was a life saver – plus of course a great opportunity to improve my coding and communication skills. Let me emphasize: this was one committer reviewing the patches of another committer.

I had much respect for the code base I worked with. While working in academia, I had never seen such a big and complex code base before. And yet, there was no part that could not be learned, as all the code showed a clear and principled design. I don’t know how big the impact of Erich Gamma on details of the code was, but clearly the code spoke with the same clarity as the GoF book on design patterns.

As such, I soon learned a fundamental principle for newcomers: “Monkey see, monkey do“. I appreciated this principle because it held the promise that my own code might share the same high quality as the examples I found out there. In later days, I heard a similar attitude framed as “When in Rome, do as Romans do“.

My perspective on JDT has always been determined by entering through the compiler door. For myself this worked out extremely well, since nothing helps you understand Java in more depth and detail, than fixing compiler bugs. And understanding Java better than average I consider a prerequisite for successfully working on JDT.

to be continued


by Stephan Herrmann at April 05, 2021 05:45 PM

Behind the Scene #3

March 31, 2021 10:00 AM

And here it is! The Sirius Web “Behind the scene” third session. I am pleased to give the stage to Florian Barbin, Consultant at Obeo. He gives a short tour of the new incremental layout he developed with William Piers based on Thales needs. We are very grateful to our customers for contributing to Sirius Web!

See you next month for another “Behind the scene”!


March 31, 2021 10:00 AM

Support SSH tunneling for managed connections

March 31, 2021 12:00 AM

SSH tunneling for managed connections

With the upcoming release of Eclipse Ditto version 2.0.0, managed connections support establishing an SSH tunnel, which is then used to connect to the actual target endpoint. This is useful when the target endpoint is not directly accessible. Currently, this feature is available for all connection types supported in Ditto, except for Kafka 2.x.

Connection Overview

For further information, see Secure Shell (SSH) Connection Protocol, RFC4254

Setting up connections with SSH tunneling in Ditto

When setting up a tunneled connection, the configuration must specify the sshTunnel section, which contains the information necessary to establish the SSH port forwarding. For authentication, password and public key are supported. Host validation using public key fingerprints is also supported. The tunnel configuration does not affect the other parts of your connection configuration: if the feature is enabled, the connection will establish an SSH tunnel and afterwards use this tunnel to connect to the desired endpoint. If you later disable the SSH tunnel feature, the connection will connect directly to the desired endpoint.

Basic Authentication

When using basic authentication, the sshTunnel configuration should contain the credentials.type plain, as well as the username and password fields:

{
  "name": "tunneled-connection",
  "connectionType": "mqtt",
  "uri": "tcp://mqtt.eclipseprojects.io:1883",
  "sources": [{ ... }],
  "sshTunnel": {
    "enabled": true,
    "uri": "ssh://ssh-host:2222",
    "credentials": {
      "type": "plain",
      "username": "username",
      "password": "password"
    },
    "validateHost": true,
    "knownHosts": ["MD5:e0:3a:34:1c:68:ed:c6:bc:7c:ca:a8:67:c7:45:2b:19"]
  }
}

Authentication with public key

For public key authentication, the credentials.type is public-key. In addition to the username, the publicKey and privateKey have to be provided. The public key must be provided as a PEM-encoded key in X.509 format. The private key must be provided as a PEM-encoded key in unencrypted PKCS8 format as specified by RFC 7468.

{
  "name": "tunneled-connection",
  "connectionType": "mqtt",
  "uri": "tcp://mqtt.eclipseprojects.io:1883",
  "sources": [{ ... }],
  "sshTunnel": {
    "enabled": true,
    "uri": "ssh://ssh-host:2222",
    "credentials": {
      "type": "public-key",
      "username": "username",
      "publicKey": "-----BEGIN PUBLIC KEY-----\nMIIBIjANBgkqhkiG9.....\n-----END PUBLIC KEY-----",
      "privateKey": "-----BEGIN PRIVATE KEY-----\nMIIEvAIBADANBgkqhki....\n-----END PRIVATE KEY-----"
    },
    "validateHost": true,
    "knownHosts": ["MD5:e0:3a:34:1c:68:ed:c6:bc:7c:ca:a8:67:c7:45:2b:19"]
  }
}

The following command can be used to convert a standard OpenSSL key in PKCS1 format to the PKCS8 format accepted by Ditto:

openssl pkcs8 -topk8 -nocrypt -in client-private.pem.key -out client-private.pem.pk8

Host validation using public key fingerprints

When validateHost is enabled, the host public key fingerprints are validated. They can be provided in the format the standard command line tool ssh-keygen produces. The fingerprints are prefixed with an alias of the hash algorithm that was used to calculate the fingerprint. Ditto supports the following hash algorithms for public key fingerprints: MD5, SHA1, SHA224, SHA256, SHA384 and SHA512. To generate a valid fingerprint with the MD5 hash algorithm from the public key, the following command can be used:

ssh-keygen -lf id_rsa.pub -E md5

Feedback?

Please get in touch if you have feedback or questions regarding this new functionality.



Ditto

–
The Eclipse Ditto team


March 31, 2021 12:00 AM

Java monolith to microservice refactoring with Eclipse tooling

by Patrick Paulin at March 30, 2021 06:43 PM

As a developer working heavily with OSGi and Eclipse RCP, I’ve spent a lot of time breaking monolithic applications into modules. What I’ve found is that OSGi and its associated Eclipse tooling (primarily the Plug-in Development Environment, or PDE) are very good at enabling the kind of fine-grained refactoring moves that allow such projects to succeed.

This got me thinking that these technologies and tooling might be useful to anyone trying to refactor a Java monolith into microservices. And it turns out you can do this even if you don’t want to build or deploy OSGi bundles. The tooling can stand on its own and enable a much more powerful and intuitive refactoring workflow.

If you’re interested in learning more about this, I’ve written an article that describes both why refactoring is a good approach to microservice extraction and how Eclipse tooling can help.

Or if you’re interested in meeting with me to find out how this approach could be applied in your projects, why not schedule a free remote consultation and demo?


by Patrick Paulin at March 30, 2021 06:43 PM

JBoss Tools 4.19.0.AM1 for Eclipse 2021-03

by jeffmaury at March 26, 2021 02:26 PM

Happy to announce 4.19.0.AM1 (Developer Milestone 1) build for Eclipse 2021-03.

Downloads available at JBoss Tools 4.19.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Browser based login to an OpenShift cluster

When it comes to logging in to a cluster, OpenShift Tools has supported two different authentication mechanisms:

  • user/password

  • token

The drawback is that neither covers clusters where a more modern authentication infrastructure is in place. It is now also possible to log in to the cluster through an embedded web browser.

In order to use it, go to the Login context menu from the Application Explorer view:

weblogin1

Click on the Retrieve token button and an embedded web browser will be displayed:

weblogin2

Complete the workflow until you see a page that contains Display Token:

weblogin3

Click on Display Token:

The web browser is automatically closed and you’ll notice that the retrieved token has been set in the original dialog:

weblogin4

Devfile registries management

Since JBoss Tools 4.18.0.Final, the preferred way of developing components is based on devfiles. A devfile is a YAML file that describes how to build the component and, if required, launch other containers. When you create a component, you need to specify a devfile that describes it: either your component source contains its own devfile, or you need to pick a devfile that matches your component. For the second case, OpenShift Tools supports devfile registries, which contain sets of different devfiles. There is a default registry (https://github.com/odo-devfiles/registry), but you may want to have your own registries. It is now possible to add and remove registries as you want.

The registries are displayed in the OpenShift Application Explorer under the Devfile registries node:

registries1

Please note that expanding the registry node will list all devfiles from that registry with a description:

registries2

A context menu on the Devfile registries node allows you to add new registries, and on the registry node to delete it.

Devfile enhanced editing experience

Although devfile registries can provide ready-to-use devfiles, there may be some advanced cases where users need to write their own devfile. As the syntax is quite complex, the YAML editor has been extended to provide:

  • syntax validation

  • content assist

Support for Python based components

Python-based components were supported but debugging was not possible. This release brings integration between the Eclipse debugger and the Python runtime.

Hibernate Tools

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.29.Final and Hibernate Tools version 5.4.29a.Final.

Server Tools

Wildfly 23 Server Adapter

A server adapter has been added to work with Wildfly 23.

EAP 7.4 Beta Server Adapter

The server adapter has been adapted to work with EAP 7.4 Beta.

Enjoy!

Jeff Maury


by jeffmaury at March 26, 2021 02:26 PM

Release 5.9

March 26, 2021 12:00 AM

New version 5.9 has been released.

Release is of Type A

Features

  • Logs setup/configuration
  • Enhance page title in the ‘Access’ tab of the ‘Documents’
  • Pluggable Javascript Module Source Provider
  • Switch the default theme to the light one
  • Cache repository resources by default
  • After successful import, refresh Workspace Explorer
  • Generate Helm charts as part of the release action
  • Enhance exists method to accept artefact type
  • Add more charts capabilities
  • Custom Dirigible Docker Image in Helm
  • Add support for Stored Procedures in Database Explorer
  • Add support for Functions in Database Explorer
  • PDF API
  • Security OAuth API
  • Add Support for Stored Procedures in SQL View
  • Database Add Support for Stored Procedures API

Fixes

  • Redundant ?refreshToken= appended to URL
  • Database - invalid encapsulating of entity name
  • Build fails with no space left on device
  • No enum constant org.eclipse.dirigible.database.sql.DataType.FLOAT
  • SAP CF Runtime only deployment is not working
  • Database Query results are limited to 100 records
  • Make Git Commit Repository Agnostic
  • Logout is not working
  • SAP CF & Kyma - keep the initial access path after authentication
  • No Column annotation found
  • Fails to initialize terminal server in windows environment
  • Busy page is stuck
  • PostgreSQL issues
  • PostgreSQL - Syntax error at or near “PRIMARY”
  • PostgreSQL - Multiple primary keys for table “activemq_acks” are not allowed
  • Minor fixes

Statistics

  • 60K+ Users
  • 86K+ Sessions
  • 187 Countries
  • 426 Repositories in DirigibleLabs

Operational

Enjoy!


March 26, 2021 12:00 AM

Xtext vs. MPS: Decision Criteria

by Niko Stotz at March 19, 2021 08:37 PM

tl;dr If we started a new domain-specific language tomorrow, we could choose between different language workbenches or, more generally, between textual and structural / projectional systems. We should decide case-by-case, guided by the criteria targeted user group, tool environment, language properties, input type, environment, model-to-model and model-to-text transformations, extensibility, theory, model evolution, language test support, and longevity.

This post is based on a presentation and discussion we had at the Strumenta Community. You can download the slides, although reading on might make the details a bit clearer. Special thanks to Eelco Visser for his contributions regarding language workbenches besides Xtext and MPS.

Introduction

This whole post wants to answer the question:

Tomorrow I want to start a new domain-specific language.
Which criteria shall I think about to decide on a language workbench?

The most important, and most useless answer to this question is: “It depends.” Every language workbench has its own strengths and weaknesses, and we should assess them anew for each language or project. All criteria mentioned below are worth consideration, and should be balanced towards the needs of the language or project at hand.

Almost every aspect described below can be realized in any language workbench — if we really wanted to torture ourselves, we could write an ASCII-art text DSL to “draw” diagrams, or force a really complex piece of procedural logic into lines and boxes. On the other hand, an existing text-based processing chain integrates rather well with a textual DSL, and tables work nicely in a structured environment.

I personally know only Xtext and MPS well enough to offer an educated opinion; thankfully, during the presentation several others chimed in with additional insights. Thus, we can extend this post’s content (to some degree) to “Textual vs. Structural: Decision Criteria”.

What do we mean with textual and structural language workbenches?

As a loose distinction, we’re using the rule of thumb “If you directly edit what’s written on disk, it’s textual.”

Structural describes both projectional and graphical systems. In projectional systems, the user has no influence on how things are shown; with structural systems, the user may have some influence — think of manually layouting a diagram (thanks to Jos Warmer for this clarification).

Examples of textual systems include

  • ANTLR
  • MontiCore
  • Racket
  • Rascal
  • Spoofax
  • Xtext

Examples of structural systems are

  • MetaEdit+
  • MPS
  • Sirius

Targeted User Group

If our DSL targeted developers, we might go for a textual system. Developers are used to the powerful tools provided by a good editor or an IDE, and expect this kind of support for handling their “source code” — or, in this case, model. Textual systems might integrate better with their other tools.

If we targeted business users, they might prefer a structural system. The main competitor in this field is Excel with hand-crafted validation rules and obscure VBA-scripts attached. Typically, business users can profit more from projectional features like mixing text, tables and diagrams.

Tool Environment

If our client had an existing infrastructure to deploy Eclipse-based tooling, we probably wanted to leverage that. This implies using an Eclipse-based language workbench like Rascal, Sirius or Xtext. If we wanted model integration with existing tools, EMF would be our best bet, pointing towards Eclipse.

If our client already leaned towards IntelliJ or similar systems, MPS would be more familiar to them. Spoofax supports both Eclipse and IntelliJ.

Language Properties

If (parts of) our DSL had an established text-based language, we would want to reuse our users’ existing knowledge and provide a similar textual language. Textual syntax often provides aids to parsers that are difficult to reproduce fluently in structural systems.

As an example, think of a C-style if-statement. In text, the user types i, f, maybe a space, and ( without even thinking about it. In a projectional editor, she still types i and f, but the parenthesis is probably automatically added by the projection.

// | denotes cursor position
if (|«condition») {
  «statements»
}

If she typed (, we would have two bad choices: either we add the parenthesis inside the condition, which is probably not what the user wanted in 95 % of the cases; or we ignore the parenthesis, making the other 5 % really hard to enter.

One important language property is whether we can parse it with reasonable effort and accuracy. For more traditional systems like ANTLR and Xtext, we reach the threshold of unparsable input rather quickly. More advanced systems like Spoofax and Rascal can handle ambiguities well. However, as an extreme example, I doubt we could ever have a parser that reconstructs the semantics of an ASCII-art UML diagram. More realistically, it might be pretty hard for a parser to distinguish mixed free text with unmarked references — think of a free text with some syntactically unmarked references to a user-defined ontology sprinkled in the text: This is free text, with Ornithopters or other Dune references.

Other structures might be parsable, but are very cumbersome to enter — I have yet to see a textual language where writing tables is less than annoying.

Related to parseability is language integration. Almost all technical languages use traditional parser systems, leading to the joy of escaping: <span onclick="if(myVar.substr(\"\\'\") &lt; 5) myTag.style = \'.header &gt; ul { font-weight: bold; } \'">. More modern languages aren’t that pedantic, but try to write the previous sentence in markdown …​

If we wanted to integrate non-textual content or languages in a textual system, it gets tricky pretty soon. In fact, we would have to solve a lot of the problems projectional editors face. As an example, think of the parameter info many IDEs can project into the source code: the Java file contains myObj.myFunc("Niko", false), but the IDE displays myObj.myFunc(name: "Niko", authorized: false). If the cursor was just to the right of the opening parenthesis, and we pressed the right arrow, would we move to the left or the right of the double quotes? What if the user could interact with the projected part, e.g. a color selector? These examples are projected mix-ins, but it doesn’t get better at all if we imagined the file contents <img src="data:image/png;base64,iVBORw …​"/>, and wanted to display an inline pixel editor. The aforementioned table embedded into some text is another example.

Structural systems really shine if we wanted to have different editors for the same content, or different viewpoints on the content. To illustrate different editors for the same content, think of a state machine. If we wanted to discuss it with our colleagues, it should be presented in the well-known lines-and-boxes form. We still wanted to retarget a transition or add a state graphically. However, if we had to write it from scratch and had a good structure in mind, or just wanted to refactor an existing one, a text-like representation would be much more efficient.

Different view points can be as simple as “more or less detail”: in a component model, we might want to see only the connections between components, or also their internal wiring. Textual editors can also hide parts of the content — most IDEs, by default, fold the legal header comment in a source code file.
As an example of different viewpoints, imagine a complex model of a machine that integrates mechanical, electrical, and cost aspects. All of these are interconnected, so the integrated model is very valuable. Hardly anybody would like to see all the details. However, different users would be interested in different combinations: the safety engineer needs to know about currents and moving parts, and the production planner wants to look at costs and parts that are hard to get. In a textual system, we could create reports with such contents, but had to accept serious limitations if we wanted all the viewpoints to be editable (e.g. a complex distribution to different files + projection into a different file).

Input Type

A blank slate can be unsuitable for some types of users and input. If we wanted the user to provide very specific data, we would offer them a form or a wizard; these are very simple structured systems. A state machine DSL provides the user with much more flexibility, but enforces some structure — we can’t point a transition to another transition, only to a state. In a structured implementation of this DSL, the user would simply not be able to create such an invalid transition; a textual DSL would allow writing it, but mark it as erroneous.

If our users were developers, they would be used to starting with an empty window, entering the right syntax, and handling error messages. If we targeted people who mostly deal with forms, they might be scared by the empty window, or would not know how to fix the error reported by the system. (“Scared” might sound funny, but there’s quite some anecdotal evidence.)

In a structural system, developers might be really annoyed that they have 15 very similar states with only one transition each, but still have to write them as separate multi-line blocks; they would feel limited by the rigid structure. For the other group, we could project explanatory texts, and visually separate scaffolding from places where they should enter something; they would feel guided by the pre-existing structure.

To some degree we can adjust our language design to the appropriate level of flexibility. If we implemented an OO-class-like system, we could either allow class content in arbitrary order, or (by grammar / language definition) enforce writing constructors first, then attributes, then public methods, and private methods only at the end.

Environment

Textual systems have been around for a long time, so we know how to integrate them with other systems. Any workflow system can move text files around, and every versioning system can store, merge and diff such files. We understand perfectly how to handle them as build artifacts, and can inspect them on any system with a simple text editor. The Language Server Protocol provides an established technology to use textual languages in web context.

Any such integration is more complicated with structural systems. A structural system might store its contents in XML or binary, thus requiring specific support for version control. As of now (March 2021), I’m not aware of a production-quality structural language workbench based on web technology. I hope this will change within the next year.

On the other hand, if our project does not require tight external integration and targets a desktop environment, a system like MPS provides lots of tooling out-of-the box that’s well integrated with each other.

Transformations: Model-to-Model

The main distinction for this criterion is between EMF-enabled systems and others. Our chances to leverage existing transformation technologies, or to re-use existing transformations, are pretty good in an EMF ecosystem. EMF provides a very powerful common platform, and a plethora of tooling (both industrial and academic) is available.

Two very strong suits of MPS are intermediate languages, and extensible transformations. EMF provides frameworks to lock several model-to-model transformations into a chain, but it still requires quite some manual work and plumbing. In MPS, this approach is used extensively both by MPS itself, and most of the more complex custom languages I know of. The tool support is excellent; for example, it takes literally one click to inspect all intermediate models of a transformation chain.

Every model-to-model transformation in MPS can be extended by other transformations. It depends on language and transformation design how feasible a specific extension is in practice, but it is used a lot in real-world systems.

Transformations: Model-to-Text

Tightly controlling the output of a model-to-text transformation tends to be easier in textual systems. On the one hand, it’s doable to maintain the formatting (i.e. white space, indentation, newlines) of some part of the input. On the other hand, the system is usually designed to output arbitrary text, so we can tweak it as required. Xtend integrates very nicely with Xtext (or any other EMF-based system), and provides superior support for model-to-text transformation: it natively supports polymorphic dispatch, and allows indenting generation templates by both the template and the output structure, with a clear way to tell them apart.

If we didn’t need, or even wanted to prevent, customization of the output, structural systems could be helpful. The final text is structured by the transformation, or post-processed by a pretty printer.

For MPS, we need to consider whether the output format is available as a language. In this case, we use a chain of model-to-model transformations and have the final model take care of the text output, which usually is very close to the model. Java and XML languages ship with MPS; C, JSON, partial C++, partial C#, and others are available from the community.

Extensibility

Xtext assumes a closed world, whereas MPS assumes an open world. Thus, if we wanted to tightly control our DSL environment, we would have very little effort with Xtext. Using MPS in a controlled environment requires a lot of work.

On the other hand, if our DSL served as an open platform, MPS inherently offers any kind of extensibility we could wish for. In Xtext, we would have to explicitly design each required extension point.

Conceptual Framework / Theory

Parsers and related text-processing tools have been well-researched since the 1970s, and the field continues to move forward. Computer science has built up a solid theoretical understanding of the problem and the available solutions. We can find several comparable, stable, and usable implementations for any major approach.

Structural systems are a niche topic in computer science; Eelco provided some pointers. We don’t understand structural editors well enough to come up with sensible, objective ways to compare them. All usable implementations I know of are proprietary (although often Open Source).

Scalability

As parsers have been around for a long time, we understand pretty well how they can be tuned. They are widely used, so there’s a lot of experience available on how to design a language to be efficiently parsable. Xtext has been used in production with gigabyte-sized models. The same experience provides us with very performant editors. I’d expect a textual system to fail more gracefully if we closed in on its limits: loading, purely displaying the content, syntax highlighting, folding, navigation, validation, and generation should scale differently, and the system should remain partially useful/usable with a subset of the remaining operational aspects. If a model became too big for our tooling, we could always fall back to plain text editors; they can edit files of any size. We also know how to generate from very big models: C++ compilers build up completely inlined files of several hundreds of megabytes, and the aforementioned gigabyte-sized Xtext models are processed by generators.

Practical experience with MPS shows scalability issues in several aspects. The default serialization format stores one model, with all its root nodes, in one XML file; performance degrades seriously for larger models. Using any of the other default serialization formats (XML per root node; binary) helps a lot.

The editor is always rendered completely. Depending on the editor implementation, it might be re-rendered on every model change, or even on every cursor navigation. I’m not aware of any comprehensive guide on how to tackle editor performance issues (in my experience, we should avoid the flow layout for bigger parts of the editor).

The biggest performance issue with possibly any structural system is the missing fallback: once we have a model too big for the system (e.g. by import), it’s very hard to do something about the model’s size, as we would need the system itself to edit the model. Thankfully, we can still edit the model programmatically in most cases.

Both validation and generation performance in MPS depend highly on the language implementation. The model-to-model transformation approach tends to use quite some memory; I’d assume model-to-model transformations (with free model navigation) to be harder to optimize for memory usage than model-to-text transformations.

Model Evolution

Xtext does not provide any specific support for model evolution. As a conceptual advantage of textual systems, we can migrate models with text-processing tools; search/replace or sed can be sufficient for smaller changes to model instances. As a drawback, we cannot store any meta-information in the model out of sight (and manipulation) of the user. Thus, we have to put version information directly into our language content in some way.

MPS stores the used language version with every model instance. It detects if a newer version is available, and can run migration scripts on the instance.

Language Test

Most aspects of Xtext-based languages are implemented in Java (or another JVM language), enabling regular JUnit-based tests. Xtext ships with some utilities to simplify such tests, and to ease tests for parsing errors. Xpect, an auxiliary language to Xtext, allows embedding language-specific tests like validation, auto-complete and scoping in comments of example model instances. In practice, most transformation tests compare the generated output to some reference by text comparison.

Naturally, MPS does not support (or need) parsing tests. It provides specific tests for editors, generators, and other language aspects. The editor tests support checking interaction schemes like cursor movement, intentions, or auto-complete. Generator tests are hardly usable in practice, as they require the generated output model to be identical to a reference model, and don’t allow checking intermediate models. The tests for other language aspects use language extensibility to annotate regular models with checks for validation, scoping, type calculation etc. MPS provides technically separated language aspects, and specific DSLs, for e.g. scoping or validation. They are efficient, but make it hard to test the contained logic with regular JUnit tests.

Longevity

We can safely assume we will always be able to open text files as long as we can read the storage media. Text could even be printed. It’s a bit less clear whether parsing technology in 50 years’ time will easily cope with the structures of today’s languages. Today’s (traditional, as described above) parsers would have a hard time parsing something like PL/1, where any keyword can be used as an identifier in an unambiguous context.

If we stored structured models in binary, it might be very hard to retrieve the contents if the system itself was lost. If we used an XML dialect, we could probably recover the basic structures (containment + type, reference + type, metatype, property) of the model.

Let’s assume we lost the DSL system itself, and only know the model instances, or cannot modify the DSL system. (This scenario is not extremely unlikely — there are a lot of productive mainframe programs without available source code.) I don’t have a clear opinion whether it would be easier to filter out all the “noise” from a parsed text file to recover the underlying concepts, or to reassemble the basic structures from an XML file.

In the more probable case, our DSL system is outdated, but we can still run and modify it, e.g. in a virtual environment. Then we can write an exporter that uses the original retrieval logic (irrespective of parsing or structured model loading) and exports the model contents to a suitable format.


by Niko Stotz at March 19, 2021 08:37 PM

WTP 3.21 Released!

March 17, 2021 02:01 PM

The Eclipse Web Tools Platform 3.21 has been released! Installation and updates can be performed using the Eclipse IDE 2021-03 Update Site or through any of the related Eclipse Marketplace entries. Release 3.21 is included in the 2021-03 Eclipse IDE for Enterprise Java and Web Developers, with selected portions also included in several other packages. Adopters can download the R3.21 p2 repository directly and combine it with the necessary dependencies.

More news


March 17, 2021 02:01 PM

Publishing an Eclipse p2 composite repository on GitHub Pages

by Lorenzo Bettini at March 15, 2021 02:47 PM

I had already described the process of publishing an Eclipse p2 composite update site:

Well, now that Bintray is shutting down, and Sourceforge is quite slow in serving an Eclipse update site, I decided to publish my Eclipse p2 composite update sites on GitHub Pages.

GitHub Pages might not be ideal for serving binaries, and it has a few limitations. However, such limitations (e.g., published sites may be no larger than 1 GB, sites have a soft bandwidth limit of 100GB per month and sites have a soft limit of 10 builds per hour) are not that crucial for an Eclipse update site, whose artifacts are not that huge. Moreover, at least my projects are not going to serve more than 100GB per month, unfortunately, I might say 😉

In this tutorial, I’ll show how to do that, so that you can easily apply this procedure also to your projects!

The procedure is part of the Maven/Tycho build so that it is fully automated. Moreover, the pom.xml and the ant files can be fully reused in your own projects (just a few properties have to be adapted). The idea is that you can run this Maven build (basically, “mvn deploy”) on any CI server (as long as you have write-access to the GitHub repository hosting the update site – more on that later). Thus, you will not depend on the pipeline syntax of a specific CI server (Travis, GitHub Actions, Jenkins, etc.), though, depending on the specific CI server you might have to adjust a few minimal things.

One main point is that the p2 children repositories and the p2 composite repositories will be published with standard Git operations, since we publish them in a GitHub repository.

Let’s recap what p2 composite update sites are. Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

As repositories continually grow in size they become harder to manage. The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must remain available, each one with its own p2 metadata that should never be overwritten. The only metadata we will overwrite is the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.
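To make the composite metadata concrete, here is a sketch of what a compositeContent.xml for this setup could look like, with child locations pointing into the releases directory. The child entries and the timestamp value are illustrative; compositeArtifacts.xml has the same shape, but uses the CompositeArtifactRepository type and the compositeArtifactRepository processing instruction.

```xml
<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='Composite Site Example'
    type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository'
    version='1.0.0'>
  <properties size='1'>
    <!-- illustrative timestamp; the p2 tools generate the real value -->
    <property name='p2.timestamp' value='1615210620000'/>
  </properties>
  <children size='2'>
    <!-- child locations are relative to this composite's directory -->
    <child location='releases/1.0.0.v20210307-2037'/>
    <child location='releases/1.0.0.v20210307-2046'/>
  </children>
</repository>
```

Every released simple repository is added as a new child, while the existing children (and their own metadata) stay untouched.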

Directory Structure

I want to be able to serve these composite update sites:

  • the main one collects all the versions
  • a composite update site for each major version (e.g., 1.x, 2.x, etc.)
  • a composite update site for each major.minor version (e.g., 1.0.x, 1.1.x, 2.0.x, etc.)

What I aim at is to have the following paths:

  • releases: in this directory, all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20210307-2037, 1.1.0.v20210307-2104, etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory, the metadata for major and major.minor composite sites will be uploaded.
  • root: the main composite update site collecting all versions.

To summarize, we’ll end up with a remote directory structure like the following one

├── compositeArtifacts.xml
├── compositeContent.xml
├── p2.index
├── README.md
├── releases
│   ├── 1.0.0.v20210307-2037
│   │   ├── artifacts.jar
│   │   ├── ...
│   │   ├── features ...
│   │   └── plugins ...
│   ├── 1.0.0.v20210307-2046 ...
│   ├── 1.1.0.v20210307-2104 ...
│   └── 2.0.0.v20210308-1304 ...
└── updates
    ├── 1.x
    │   ├── 1.0.x
    │   │   ├── compositeArtifacts.xml
    │   │   ├── compositeContent.xml
    │   │   └── p2.index
    │   ├── 1.1.x
    │   │   ├── compositeArtifacts.xml
    │   │   ├── compositeContent.xml
    │   │   └── p2.index
    │   ├── compositeArtifacts.xml
    │   ├── compositeContent.xml
    │   └── p2.index
    └── 2.x
        ├── 2.0.x
        │   ├── compositeArtifacts.xml
        │   ├── compositeContent.xml
        │   └── p2.index
        ├── compositeArtifacts.xml
        ├── compositeContent.xml
        └── p2.index

Thus, if you want, you can provide these sites to your users (I’m using the URLs that correspond to my example):

  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates for the main global update site: every new version will be available when using this site;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x for all the releases with major version 1: for example, the user won’t see new releases with major version 2;
  • https://lorenzobettini.github.io/p2composite-github-pages-example-updates/updates/1.x/1.0.x for all the releases with major version 1 and minor version 0: the user will only see new releases of the shape 1.0.0, 1.0.1, 1.0.2, etc., but NOT 1.1.0, 1.2.3, 2.0.0, etc.

If you want to change this structure, you have to carefully tweak the ant file we’ll see in a minute.

Building Steps

During the build, before the actual deployment, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build are:

  • Clone the repository hosting the composite update site (in this example, https://github.com/LorenzoBettini/p2composite-github-pages-example-updates);
  • Create the p2 repository (with Tycho, as usual);
  • Copy the p2 repository into the cloned repository, in a subdirectory of the releases directory (the subdirectory is named after the qualified version of the project, e.g., 1.0.0.v20210307-2037);
  • Update the composite update sites information in the cloned repository (using the p2 tools);
  • Commit and push the updated clone to the remote GitHub repository (the one hosting the composite update site).
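The "update the composite update sites information" step can be done with the p2 Ant tasks, which run inside the Eclipse antRunner application (e.g., launched through the tycho-eclipserun-plugin). As a hedged sketch (the exact Ant file in the example project may differ), adding the freshly built release as a new child of a composite repository could look like this, reusing the Maven properties defined above:

```xml
<!-- Sketch only: must be executed via the p2 antRunner, not plain Ant.      -->
<!-- ${github-local-clone} and ${qualifiedVersion} are the Maven properties  -->
<!-- defined in the parent POM; the attribute values are illustrative.       -->
<p2.composite.repository>
  <!-- the composite repository to create or update -->
  <repository compressed="false"
              location="file:${github-local-clone}"
              name="${site.label}"/>
  <add>
    <!-- register the new child, relative to the composite's location -->
    <repository location="releases/${qualifiedVersion}"/>
  </add>
</p2.composite.repository>
```

The same task can be run against the updates/1.x and updates/1.x/1.0.x directories to maintain the per-major and per-minor composites of the directory structure shown earlier.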

First of all, in the parent POM, we define the following properties, which of course you need to tweak for your own projects:

<!-- Required properties for releasing -->
<github-update-repo>git@github.com:LorenzoBettini/p2composite-github-pages-example-updates.git</github-update-repo>
<github-local-clone>${project.build.directory}/checkout</github-local-clone>
<releases-directory>${github-local-clone}/releases</releases-directory>
<current-release-directory>${releases-directory}/${qualifiedVersion}</current-release-directory>
<!-- The label for the Composite sites -->
<site.label>Composite Site Example</site.label>

It should be clear which properties you need to modify for your project. In particular, the github-update-repo is the URL (with authentication information) of the GitHub repository hosting the composite update site, and the site.label is the label that will be put in the composite metadata.

Then, in the parent POM, we configure in the pluginManagement section all the versions of the plugins we are going to use (see the sources of the example on GitHub).

The most interesting configuration is the one for the tycho-packaging-plugin, where we specify the format of the qualified version:

<plugin>
  <groupId>org.eclipse.tycho</groupId>
  <artifactId>tycho-packaging-plugin</artifactId>
  <version>${tycho-version}</version>
  <configuration>
    <format>'v'yyyyMMdd'-'HHmm</format>
  </configuration>
</plugin>

Moreover, we create a profile release-composite (which we’ll also use later in the POM of the site project), where we disable the standard Maven plugins for install and deploy. Since we are going to release our Eclipse p2 composite update site during the deploy phase, but we are not interested in installing and deploying the Maven artifacts, we skip the standard Maven plugins bound to those phases:

<profile>
  <!-- Activate this profile to perform the release to GitHub Pages -->
  <id>release-composite</id>
  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <artifactId>maven-install-plugin</artifactId>
          <executions>
            <execution>
              <id>default-install</id>
              <phase>none</phase>
            </execution>
          </executions>
        </plugin>
        <plugin>
          <artifactId>maven-deploy-plugin</artifactId>
          <configuration>
            <skip>true</skip>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
</profile>

The interesting steps are in the site project, the one with <packaging>eclipse-repository</packaging>. Here we also define the profile release-composite and we use a few plugins to perform the steps involving the Git repository described above (remember that these configurations are inside the profile release-composite, of course in the build plugins section):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <!-- sets the following properties that we use in our Ant scripts
      parsedVersion.majorVersion
      parsedVersion.minorVersion
      bound by default to the validate phase
    -->
    <execution>
      <id>parse-version</id>
      <goals>
        <goal>parse-version</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <configuration>
    <executable>git</executable>
  </configuration>
  <executions>
    <execution>
      <id>git-clone</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <arguments>
          <argument>clone</argument>
          <argument>--depth=1</argument>
          <argument>-b</argument>
          <argument>master</argument>
          <argument>${github-update-repo}</argument>
          <argument>${github-local-clone}</argument>
        </arguments>
      </configuration>
    </execution>
    <execution>
      <id>git-add</id>
      <phase>verify</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <arguments>
          <argument>-C</argument>
          <argument>${github-local-clone}</argument>
          <argument>add</argument>
          <argument>-A</argument>
        </arguments>
      </configuration>
    </execution>
    <execution>
      <id>git-commit</id>
      <phase>verify</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <arguments>
          <argument>-C</argument>
          <argument>${github-local-clone}</argument>
          <argument>commit</argument>
          <argument>-m</argument>
          <argument>Release ${qualifiedVersion}</argument>
        </arguments>
      </configuration>
    </execution>
    <execution>
      <id>git-push</id>
      <phase>deploy</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <arguments>
          <argument>-C</argument>
          <argument>${github-local-clone}</argument>
          <argument>push</argument>
          <argument>origin</argument>
          <argument>master</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-repository</id>
      <phase>package</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${current-release-directory}</outputDirectory>
        <resources>
          <resource>
            <directory>${project.build.directory}/repository</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <configuration>
    <repositories>
      <repository>
        <id>2020-12</id>
        <layout>p2</layout>
        <url>https://download.eclipse.org/releases/2020-12</url>
      </repository>
    </repositories>
    <dependencies>
      <dependency>
        <artifactId>org.eclipse.ant.core</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.apache.ant</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.repository.tools</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.core.feature</artifactId>
        <type>eclipse-feature</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.extras.feature</artifactId>
        <type>eclipse-feature</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.ds</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
    </dependencies>
  </configuration>
  <executions>
    <!-- add our new child repository -->
    <execution>
      <id>add-p2-composite-repository</id>
      <phase>package</phase>
      <goals>
        <goal>eclipse-run</goal>
      </goals>
      <configuration>
        <applicationsArgs>
          <args>-application</args>
          <args>org.eclipse.ant.core.antRunner</args>
          <args>-buildfile</args>
          <args>packaging-p2composite.ant</args>
          <args>p2.composite.add</args>
          <args>-Dsite.label="${site.label}"</args>
          <args>-Dcomposite.base.dir=${github-local-clone}</args>
          <args>-DunqualifiedVersion=${unqualifiedVersion}</args>
          <args>-DbuildQualifier=${buildQualifier}</args>
          <args>-DparsedVersion.majorVersion=${parsedVersion.majorVersion}</args>
          <args>-DparsedVersion.minorVersion=${parsedVersion.minorVersion}</args>
        </applicationsArgs>
      </configuration>
    </execution>
  </executions>
</plugin>

Let’s see these configurations in detail. In particular, it is important to understand how the goals of the plugins are bound to the phases of the default lifecycle; remember that on the phase package, Tycho will automatically create the p2 repository and it will do that before any other goals bound to the phase package in the above configurations:

  • with the build-helper-maven-plugin we parse the current version of the project, in particular, we set the properties holding the major and minor versions that we need later to create the composite metadata directory structure; its goal is automatically bound to one of the first phases (validate) of the lifecycle;
  • with the exec-maven-plugin we configure the execution of the Git commands:
    • we clone the Git repository of the update site (with --depth=1 we only fetch the latest commit in the history; the previous commits are not interesting for our task); this is done in the phase prepare-package, that is, before the p2 repository is created by Tycho; the Git repository is cloned into the output directory target/checkout
    • in the phase verify (that is, after the phase package), we commit the changes (which will have been made during the phase package, as shown in the following points)
    • in the phase deploy (that is, the last phase that we’ll run on the command line), we push the changes to the Git repository of the update site
  • with the maven-resources-plugin we copy the p2 repository generated by Tycho into the target/checkout/releases directory in a subdirectory with the name of the qualified version of the project (e.g., 1.0.0.v20210307-2037);
  • with the tycho-eclipserun-plugin we create the composite metadata; we rely on the Eclipse application org.eclipse.ant.core.antRunner, so that we can execute the p2 Ant task for managing composite repositories (p2.composite.repository). The Ant tasks are defined in the Ant file packaging-p2composite.ant, stored in the site project. In this file, there are also a few properties that describe the layout of the directories described before. Note that we need to pass a few properties, including the site.label, the directory of the local Git clone, and the major and minor versions that we computed before.

Keep in mind that in all the above steps, non-existing directories will be automatically created on-demand (e.g., by the maven-resources-plugin and by the p2 Ant tasks). This means that the described process will work seamlessly the very first time when we start with an empty Git repository.
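To give an idea of its contents, the core of the p2.composite.add target in packaging-p2composite.ant boils down to invocations of the p2.composite.repository Ant task. Here is a simplified sketch (property names other than the ones passed from the POM are assumptions of mine; see the example repository on GitHub for the real file):

```xml
<project name="p2composite.add.sketch">
  <target name="p2.composite.add">
    <!-- properties passed from the POM: site.label, composite.base.dir,
         unqualifiedVersion, buildQualifier, parsedVersion.* -->
    <property name="full.version" value="${unqualifiedVersion}.${buildQualifier}"/>
    <!-- innermost composite, e.g. updates/1.x/1.0.x, gets a child pointing
         to the new release directory (created if it does not exist yet) -->
    <p2.composite.repository>
      <repository
        location="${composite.base.dir}/updates/${parsedVersion.majorVersion}.x/${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.x"
        name="${site.label}" atomic="true"/>
      <add>
        <repository location="../../../releases/${full.version}"/>
      </add>
    </p2.composite.repository>
    <!-- the real file repeats this for the outer levels (updates/1.x
         and the repository root), each adding its inner composite as a child -->
  </target>
</project>
```

The task both creates the composite metadata when it is missing and registers the new child, which is why the whole process also works against an initially empty repository.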

Now, from the parent POM on your computer, it’s enough to run

mvn deploy -Prelease-composite

and the release will be performed. When cloning, you’ll be asked for the credentials of the GitHub repository and, if you are not using an SSH agent or a keyring, also when pushing. Again, this depends on the URL of the GitHub repository; you might use an HTTPS URL that relies on a GitHub token, for example.

If you want to make a few local tests before actually releasing, you might stop at the phase verify and inspect the target/checkout to see whether the directories and the composite metadata are as expected.

You might also want to add another execution to the tycho-eclipserun-plugin to add a reference to another Eclipse update site that is required to install your software. The Ant file provides a task for that, p2.composite.add.external, which stores the reference into the innermost composite child (e.g., into 1.2.x); here’s an example that adds a reference to the main Eclipse update site:

<execution>
  <!-- Add composite of required software update sites... 
    (if already present they won't be added again) -->
  <id>add-eclipse-repository</id>
  <phase>package</phase>
  <goals>
    <goal>eclipse-run</goal>
  </goals>
  <configuration>
    <applicationsArgs>
      <args>-application</args>
      <args>org.eclipse.ant.core.antRunner</args>
      <args>-buildfile</args>
      <args>packaging-p2composite.ant</args>
      <args>p2.composite.add.external</args>
      <args>-Dsite.label="${site.label}"</args>
      <args>-Dcomposite.base.dir=${github-local-clone}</args>
      <args>-DunqualifiedVersion=${unqualifiedVersion}</args>
      <args>-DbuildQualifier=${buildQualifier}</args>
      <args>-DparsedVersion.majorVersion=${parsedVersion.majorVersion}</args>
      <args>-DparsedVersion.minorVersion=${parsedVersion.minorVersion}</args>
      <args>-Dchild.repo="https://download.eclipse.org/releases/${eclipse-version}"</args>
    </applicationsArgs>
  </configuration>
</execution>

For example, in my Xtext projects, I use this technique to add a reference to the Xtext update site corresponding to the Xtext version I’m using in that specific release of my project. This way, my update site will be “self-contained” for my users: when using my update site for installing my software, p2 will be automatically able to install also the required Xtext bundles!

Releasing from GitHub Actions

The Maven command shown above can be used to perform a release from your computer. If you want to release your Eclipse update site directly from GitHub Actions, there are a few more things to do.

First of all, we are talking about a GitHub Actions workflow stored and executed in the GitHub repository of your project, NOT in the GitHub repository of the update site. In this example, it is https://github.com/LorenzoBettini/p2composite-github-pages-example.

In such a workflow, we need to push to another GitHub repository. To do that

  • create a GitHub personal access token (selecting repo);
  • create a secret in the GitHub repository of the project (where we run the GitHub Actions workflow), in this example it is called ACTIONS_TOKEN, with the value of that token;
  • when running the Maven deploy command, we need to override the property github-update-repo by specifying a URL for the GitHub repository with the update site using the HTTPS syntax and the encrypted ACTIONS_TOKEN; in this example, it is https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates;
  • we also need to configure the Git user and email in advance (any values will do); otherwise, Git will complain when creating the commit.

To summarize, these are the interesting parts of the release.yml workflow (see the full version here: https://github.com/LorenzoBettini/p2composite-github-pages-example/blob/master/.github/workflows/release.yml):

name: Release with Maven

on:
  push:
    branches:
      - release

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v2
    - name: Set up JDK 11
      uses: actions/setup-java@v1
      with:
        java-version: 11
    - name: Configure Git
      run: |
        git config --global user.name 'GitHub Actions'
        git config --global user.email 'lorenzo.bettini@users.noreply.github.com'
    - name: Build with Maven
      run: >
        mvn deploy
        -Prelease-composite
        -Dgithub-update-repo=https://x-access-token:${{ secrets.ACTIONS_TOKEN }}@github.com/LorenzoBettini/p2composite-github-pages-example-updates
      working-directory: p2composite.example.parent

The workflow is configured to be executed only when you push to the release branch.

Remember that we are talking about the Git repository hosting your project, not the one hosting your update site.

Final thoughts

With the procedure described in this post, you publish your update sites and the composite metadata during the Maven build, so you never deal manually with the GitHub repository of your update site. However, you can always do that! For example, you might want to remove a release: it’s just a matter of cloning that repository, making your changes (i.e., removing a subdirectory of releases and updating the composite metadata accordingly), committing, and pushing. Now and then you might also clean up the history of such a Git repository (the history is not important in this context) by pushing with --force after resetting the Git history. By the way, by tweaking the configurations above, you could also do that every time you do a release: just commit with amend and push with force!

Finally, you could also create an additional GitHub repository for snapshot releases of your update sites, or for milestones, or release candidates.

Happy releasing! 🙂


by Lorenzo Bettini at March 15, 2021 02:47 PM

An open source policy language for Attribute-Stream Based Access Control (ASBAC)

by Prof. Dr. Dominic Heutelbeck (dheutelbeck@ftk.de) at March 09, 2021 03:00 PM

This article discusses how the Streaming Attribute Policy Language (SAPL) can be applied to realize complex authorization scenarios by formulating rules for access control in an easy-to-use policy language implemented using Xtext.

This article covers:

  1. Basic concepts and motivation for Attribute-based Access Control (ABAC) in general and Attribute-Stream Based Access Control (ASBAC) in particular
  2. How to express access rights policies using SAPL
  3. How SAPL can be used in Spring Boot applications to integrate the access policies

Externalized dynamic policy-driven access control

Often applications allow different degrees of access control following role-based concepts (RBAC). Different roles are assigned to individuals within the identity management system. This may include hierarchical role concepts.

The number of applications to be managed within an organization typically increases over time. Managing access to the different resources correctly results in an explosion of roles, and in a matching increase in the complexity of maintaining the integrity of access control, as it becomes increasingly harder to define the right roles and the correct assignments to individuals or groups.

In addition, role-based mechanisms are often not capable of expressing all access control requirements of a given application domain.

One way to overcome these problems is to employ so-called attribute-based access control (ABAC). Instead of assigning specific permissions to the role attribute of a subject directly, i.e., the entity requesting access to a resource, a set of policies defines rules under which conditions access is granted.

Those rules are based on properties or attributes of the subject, the resource, the action (i.e., what the subject wants to do with the resource), and environmental conditions.

Chart showing the attribute-based access control mechanism

Fundamentally, in ABAC, the code path where resources are to be protected (the Policy Enforcement Point, or PEP) sends the question “May SUBJECT do ACTION with RESOURCE in ENVIRONMENT?” to a so-called Policy Decision Point (PDP), which then calculates the decision based on the rules and attributes.

Attributes may be directly attached to the objects in the question or may be retrieved from external sources if specified by the policies in question.

Following such a model has several advantages:

  • It decouples most of the access control logic from the domain logic, providing a clear separation of concerns
  • The access control rules can be changed by configuration/administration, independently of and during deployment of the applications
  • The model can express more complex access control rules than RBAC or access control lists (ACLs), while still allowing these well-established models to be implemented where applicable; it also offers an upgrade path to other rules, often without touching code

These benefits come at the cost of a certain degree of complexity introduced through the required infrastructure which should be considered on a case-to-case basis.

The primary way of implementing ABAC has been to use the XACML standard, which specifies an architecture, a protocol, and an XML schema for specifying policies. Both open source and proprietary XACML implementations exist. However, XACML by itself is a relatively verbose XML-based standard which is difficult for a person to author and read.

While ALFA provides a standard for a more readable DSL dialect of XACML, only one proprietary implementation of it exists.

Streaming Attribute Policy Language (SAPL)

While XACML, as the de-facto standard, has its problems in syntax and expressiveness (e.g., the parametrization of attribute access), it is also still rooted in a traditional request-response design. Thus, in cases where the conditions implying access rights are expected to change regularly, applications must poll the policy decision point to keep up, resulting in latency and scalability issues in access control.

The Streaming Attribute Policy Language (SAPL) introduces an extension to the ABAC model, allowing for data stream-based attributes and publish-subscribe driven access control design patterns, the so-called Attribute-Stream Based Access Control (ASBAC).

Chart showing the attribute-stream based access control mechanism

The data model of SAPL is based on JSON and JSON Path. A policy enforcement point formulates authorization subscriptions as JSON objects containing arbitrary JSON values for subject, action, and resource. Policies are expressed in an intuitive syntax. Here are a few examples of SAPL policies.

Full source of working example projects can be found here on GitHub: https://github.com/heutelbeck/sapl-demos.

Also, this article will not go into the details of the syntax of SAPL but rather explain what the policies do. Full documentation is available at https://sapl.io/docs/sapl-reference.html.

First, let us look at policies used in a traditional request-response driven system, integrated into a Spring Boot application using the SAPL Spring Boot Starter, which provides a deep Spring Security integration. Given the following repository, the SAPL integration automatically generates policy enforcement points for the methods annotated with @PreEnforce or @PostEnforce, controlling entry to or exit from the method. The SAPL subscriptions are derived by reflection and by accessing the principal objects of the runtime.

SAPL code example - policies
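Such an automatically derived authorization subscription, serialized to JSON, might look like this (illustrative values of my own, following the subject/action/resource structure described above):

```json
{
  "subject":  { "name": "alice", "authorities": ["ROLE_DOCTOR"] },
  "action":   "findById",
  "resource": { "id": 42 }
}
```

The PDP evaluates the policies against this document and returns a decision (permit, deny, or a transformed resource, as shown below).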

The next policies are excerpts of a more complete policy set expressed in SAPL.

SAPL code example - more complete policy set

This policy is a simple implementation of role-based access control for the findById method.

SAPL code example - RBAC implementation for findById method

As can be seen in the repository definition, the method findById is annotated with @PostEnforce, so the resource JSON object is a serialization of the return value of the method, and this policy transforms the data, blackening substrings of the data for administrators, who should not have access to medical data.

This kind of filtering and transformation is not possible with XACML. This is combined with a traditional role-based access control by using the authority attribute of the subject. In case access is granted, the return value of the method will be replaced by the transformed object.

SAPL code example

Finally, this policy uses an external data source to fetch additional attributes. In line 16, the policy accesses the patient repository itself and retrieves a list of relatives; if the user is a relative, access is granted, but with certain fields of the dataset removed.

So far, the policies shown all adhered to the request-response pattern. The following example uses stream-based attributes.

SAPL code example

With two custom attribute implementations, this simple policy integrates an IoT physical access control system and a smart contract on the Ethereum blockchain, allowing for quasi-real-time enforcement of the rule that only persons who are on the company premises and hold a certification to access the resource may access it. The angled bracket syntax denotes that the authorization subscription results in a subscription to the matching external data stream source.

Conclusion

This article can only scratch the surface of the possibilities for SAPL, its engine and tools.

SAPL was implemented using Xtext, which made it easy to concentrate on designing a user-friendly syntax and runtime instead of reinventing the wheel for DSL processing. It also allowed for developing web-based policy editors for the server applications and significantly reduced the time needed to get from zero to the first running prototype.

SAPL itself is an open source project licensed under the Apache 2.0 license.
Learn more about it on https://sapl.io


by Prof. Dr. Dominic Heutelbeck (dheutelbeck@ftk.de) at March 09, 2021 03:00 PM

gRPC Code Generation using Bndtools and ECF Remote Services

by Scott Lewis (noreply@blogger.com) at February 06, 2021 09:52 PM

There's a new video tutorial that demonstrates using ECF Remote Services, bndtools, and Eclipse to create an OSGi Remote Service.   

Bndtools has recently added the ability to run code generators as part of a bnd-based project, and with ECF's bndtools workspace template, a single proto3 file added to a project will automatically generate an entire Java remote service API and update/regenerate the API as changes are made to the proto3 file.   No command-line execution of protoc needed.

Furthermore, with ECF's project templates, the generated API can be easily implemented and exported as an OSGi Remote Service.

Please watch the video here


by Scott Lewis (noreply@blogger.com) at February 06, 2021 09:52 PM

Why Vendor Neutrality is important

by Denis Roy at February 03, 2021 06:32 PM

We're seeing a trend of free (as in beer) resources being discontinued, or altered, with very little time provided to allow projects to react. This causes pain for OSS projects that use and depend on these services. The trend is certainly not new, and although there's certainly a case for the you-get-what-you-pay-for mantra, it's clear that relying on a single-vendor, third-party "free" resource is not without its risks.

Two recent cases come to mind: 

Dockerhub pull rate limits. With about 2 months of notice, projects around the globe scrambled to fix release engineering processes to accommodate the upcoming limits.

JFrog terminating BinTray and others. With just under three months to react, projects who depend on these services need to find a new home for their binary distributions.

When relying on single-vendor services, or single-vendor open-source projects, or single-vendor anything for that matter, your assurance that you're not relying on a ticking time bomb is nil. At the Eclipse Foundation, we don't depend on a single vendor. In fact, we shy away from solutions that include the words Proprietary, Vendor, Closed and Licensed.  A quick search for "eclipse vendor neutral" will provide dozens of examples.

The Eclipse Foundation does rely on some vendor products - GitLab and GitHub, for instance. We do need to strike a logical and reasonable balance between Vendor Neutrality and Ease of Use. We do, however, ensure two things:

The underlying data model is Open. In both those examples, the data is Git, and Git is not proprietary technology.

That we have a Plan B, should the vendor suddenly change the playing field. We actively back up Eclipse project repositories that are on GitHub. The backup is a plain git clone.
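Such a "plain git clone" backup can be sketched like this (a minimal local simulation, with an illustrative repository created on the spot so the commands run anywhere; in practice the clone URL would point at the GitHub-hosted project):

```shell
# Create an illustrative source repository (stands in for a GitHub-hosted project).
git init -q source-repo
git -C source-repo -c user.name=backup -c user.email=backup@example.org \
    commit -q --allow-empty -m "initial commit"

# The backup: a mirror clone keeps every branch and tag, with no vendor lock-in.
git clone -q --mirror source-repo backup.git

# The backup is a fully functional Git repository in its own right.
git -C backup.git log --oneline
```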

Vendor Neutrality, along with those two strategies for dealing with vendor products and services, allows the Foundation to ensure project services are maintained with minimal risk of disruption from a single third party.


by Denis Roy at February 03, 2021 06:32 PM

OSGi and javax.inject

January 26, 2021 11:00 PM

An OSGi bundle that exports the javax.inject package.

For a couple of years now, the Eclipse platform jars have been published on Maven Central with metadata that allows consumption in a traditional Maven project (no Eclipse Tycho required).

This article is my feedback after having experimented with PDE (the Plug-in Development Environment project).

The problem

The goal is to compile and execute code that requires this OSGi bundle from maven-central:

<dependency>
  <groupId>org.eclipse.pde</groupId>
  <artifactId>org.eclipse.pde.core</artifactId>
  <version>3.13.200</version>
</dependency>

I am using a regular Maven project (with the bnd plugins to manage the OSGi-related tasks). I do not have Eclipse Tycho, so Maven does not have access to any p2 update site.

Amongst all the dependencies of PDE, there are org.eclipse.e4.core.contexts and org.eclipse.e4.core.services. Those two bundles require:

Import-Package: javax.inject;version="1.0.0",

Source: here and here.

So we need a bundle exporting this package, otherwise the requirements are not fulfilled and I get this error:

[ERROR] Resolution failed. Capabilities satisfying the following requirements could not be found:
    [<<INITIAL>>]
      ⇒ osgi.identity: (osgi.identity=org.eclipse.pde.core)
          ⇒ [org.eclipse.pde.core version=3.13.200.v20191202-2135]
              ⇒ osgi.wiring.bundle: (&(osgi.wiring.bundle=org.eclipse.e4.core.services)(bundle-version>=2.0.0)(!(bundle-version>=3.0.0)))
                  ⇒ [org.eclipse.e4.core.services version=2.2.100.v20191122-2104]
                      ⇒ osgi.wiring.package: (&(osgi.wiring.package=javax.inject)(version>=1.0.0))
    [org.eclipse.e4.core.contexts version=1.8.300.v20191017-1404]
      ⇒ osgi.wiring.package: (&(osgi.wiring.package=javax.inject)(version>=1.0.0))

In the P2 world

The bundle javax.inject version 1.0.0 is available in the Eclipse Orbit repositories.

In the maven world

The official dependency

The dependency used by most of the other libraries:

<dependency>
  <groupId>javax.inject</groupId>
  <artifactId>javax.inject</artifactId>
  <version>1</version>
</dependency>

This library does not contain any OSGi metadata in the published MANIFEST.MF.

See the corresponding open issue.

Tom Schindl’s solution

The jar from Eclipse Orbit is available at:

<dependency>
  <groupId>at.bestsolution.efxclipse.eclipse</groupId>
  <artifactId>javax.inject</artifactId>
  <version>1.0.0</version>
</dependency>

But this is not on Maven Central. You will need to add the following repository to your pom.xml:

<repositories>
  <repository>
    <id>bestsolution</id>
    <url>http://maven.bestsolution.at/efxclipse-releases/</url>
  </repository>
</repositories>

On maven central

This question on stackoverflow gives some inputs and suggests:

From the Apache ServiceMix project:

<dependency>
  <groupId>org.apache.servicemix.bundles</groupId>
  <artifactId>org.apache.servicemix.bundles.javax-inject</artifactId>
  <version>1_3</version>
</dependency>

From the GlassFish project.

<dependency>
  <groupId>org.glassfish.hk2.external</groupId>
  <artifactId>javax.inject</artifactId>
  <version>2.5.0-b62</version>
</dependency>

After analyzing the other candidates in the list where artifactId == "javax.inject", there is also this one from the Lucee project:

<dependency>
  <groupId>org.lucee</groupId>
  <artifactId>javax.inject</artifactId>
  <version>1.0.0</version>
</dependency>

And on Twitter, Raymond Augé suggested the Apache Geronimo project.

<dependency>
  <groupId>org.apache.geronimo.specs</groupId>
  <artifactId>geronimo-atinject_1.0_spec</artifactId>
  <version>1.2</version>
</dependency>

Make your choice.


January 26, 2021 11:00 PM

Cloud Native Predictions for 2021 and Beyond

by Chris Aniszczyk at January 19, 2021 04:08 PM

I hope everyone had a wonderful holiday break, as the first couple weeks of January 2021 have been pretty wild, from insurrections to new COVID strains. In cloud native land, the CNCF recently released its annual report on all the work we accomplished last year. I recommend everyone take the opportunity to go through the report; we had a solid year given the wild pandemic circumstances.

https://twitter.com/CloudNativeFdn/status/1343914259177222145

As part of my job, I have a unique and privileged vantage point on cloud native trends, given all the member companies and developers I work with, so I figured I’d share my thoughts on where things will be going in 2021 and beyond:

Cloud Native IDEs

As a person who has spent a decent portion of his career working on developer tools inside the Eclipse Foundation, I am nothing but thrilled with the recent progress of the state of the art. In the future, the development lifecycle (code, build, debug) will happen mostly in the cloud rather than in your local Emacs or VSCode setup. You will end up getting a full dev environment for every pull request, pre-configured and connected to its own deployment to aid your development and debugging needs. A concrete example of this technology today is enabled via GitHub Codespaces and GitPod. While GitHub Codespaces is still in beta, you can try this experience live today with GitPod, using Prometheus as an example. In a minute or so, you have a completely live development environment with an editor and preview environment. The wild thing is that this development environment (workspace) is described in code and shareable with other developers on your team like any other code artifact.

In the end, I expect to see incredible innovation in the cloud native IDE space over the next year, especially as GitHub Codespaces comes out of beta and becomes more widely available, so developers can experience this new concept and fall in love.

Kubernetes on the Edge

Kubernetes was born through usage across massive data centers, but Kubernetes will evolve for new environments just like Linux did. End users eventually stretched the Linux kernel to support a variety of new deployment scenarios, from mobile to embedded and more. I strongly believe Kubernetes will go through a similar evolution, and we are already witnessing telcos (and startups) explore Kubernetes as an edge platform by transforming VNFs into Cloud Native Network Functions (CNFs), along with open source projects like k3s, KubeEdge, k0s, LFEdge, Eclipse ioFog and more. The forces driving hyperscaler clouds to support telcos and the edge, combined with the ability to reuse cloud native software and build upon an already large ecosystem, will cement Kubernetes as a dominant platform in edge computing over the next few years.

Cloud Native + Wasm

WebAssembly (Wasm) is a nascent technology, but I expect it to become a growing utility and workload in the cloud native ecosystem, especially as WASI matures and as Kubernetes is used more as an edge orchestrator as described previously. One use case is powering an extension mechanism, like what Envoy does with filters and LuaJIT. Instead of dealing with Lua directly, you can work with a smaller optimized runtime that supports a variety of programming languages. The Envoy project is currently on its journey to adopting Wasm, and I expect a similar pattern in any environment where scripting languages are a popular extension mechanism: they may be wholesale replaced by Wasm in the future.

On the Kubernetes front, there are projects like Krustlet from Microsoft that are exploring how a WASI-based runtime could be supported in Kubernetes. This shouldn’t be too surprising as Kubernetes is already being extended via CRDs and other mechanisms to run different types of workloads like VMs (KubeVirt) and more.

Also, if you’re new to Wasm, I recommend this new intro course from the Linux Foundation that goes over the space, along with the excellent documentation.

Rise of FinOps (CFM)

The coronavirus outbreak has accelerated the shift to cloud native. At least half of companies are accelerating their cloud plans amid the crisis… nearly 60% of respondents said cloud usage would exceed prior plans owing to the COVID-19 pandemic (State of the Cloud Report 2020). On top of that, Cloud Financial Management (or FinOps) is a growing issue and concern for many companies, and it honestly comes up in about half of my discussions over the last six months with companies navigating their cloud native journey. You can also argue that cloud providers aren’t incentivized to make cloud financial management easier, as that would make it easier for customers to spend less; however, in my opinion the true pain is the lack of open source innovation and standardization around cloud financial management (all the clouds do cost management differently). In the CNCF context, there aren’t many open source projects trying to make FinOps easier; there is the KubeCost project, but it’s fairly early days.

Also, the Linux Foundation recently launched the “FinOps Foundation” to help drive innovation in this space, and they have some great introductory materials. I expect to see a lot more open source projects and specifications in the FinOps space in the coming years.

More Rust in Cloud Native

Rust is still a young and niche programming language, especially if you look at programming language rankings from Redmonk as an example. However, my feeling is that you will see Rust in more cloud native projects over the coming year, given that a handful of CNCF projects are already taking advantage of Rust and that it is popping up in interesting infrastructure projects like the Firecracker microVM. While CNCF currently has a super majority of projects written in Golang, I expect Rust-based projects to be on par with Go-based ones in a couple of years as the Rust community matures.

GitOps + CD/PD Grows Significantly

GitOps is an operating model for cloud native technologies, providing a set of best practices that unify deployment, management and monitoring for applications (a term originally coined by Alexis Richardson of Weaveworks fame). The most important aspect of GitOps is describing the desired system state, versioned in Git, in a declarative fashion; this essentially enables a complex set of system changes to be applied correctly and then verified (via a nice audit log enabled by Git and other tools). From a pragmatic standpoint, GitOps improves developer experience, and with the growth of projects like Argo, GitLab, Flux and so on, I expect GitOps tools to hit the enterprise more this year. If you look at the data from, say, GitLab, GitOps is still a nascent practice where the majority of companies haven’t explored it yet, but as more companies move to adopt cloud native software at scale, GitOps will naturally follow in my opinion. If you’re interested in learning more about this space, I recommend checking out the newly formed GitOps Working Group in CNCF.

Service Catalogs 2.0: Cloud Native Developer Dashboards

The concept of a service catalog isn’t a new thing; some of us older folks who grew up in the ITIL era may remember things such as CMDBs (the horror). However, with the rise of microservices and cloud native development, the ability to catalog services and index a variety of real-time service metadata is paramount to drive developer automation. This can include using a service catalog to understand ownership, handle incident management, manage SLOs and more.

In the future, you will see a trend towards developer dashboards that are not only a service catalog, but provide an ability to extend the dashboard through a variety of automation features all in one place. The canonical open source examples of this are Backstage and Clutch from Lyft, however, any company with a fairly modern cloud native deployment tends to have a platform infrastructure team that has tried to build something similar. As the open source developer dashboards mature with a large plug-in ecosystem, you’ll see accelerated adoption by platform engineering teams everywhere.

Cross Cloud Becomes More Real

Kubernetes and the cloud native movement have demonstrated that cloud native and multi-cloud approaches are possible in production environments; the data is clear that “93% of enterprises have a strategy to use multiple providers like Microsoft Azure, Amazon Web Services, and Google Cloud” (State of the Cloud Report 2020). The fact that Kubernetes has matured over the years, along with the cloud market, will hopefully unlock programmatic cross-cloud managed services. A concrete example of this approach is embodied in the Crossplane project, which provides an open source cross-cloud control plane that takes advantage of the Kubernetes API’s extensibility to enable cross-cloud workload management (see “GitLab Deploys the Crossplane Control Plane to Offer Multicloud Deployments”).

Mainstream eBPF

eBPF allows you to run programs in the Linux kernel without changing the kernel code or loading a module; you can think of it as a sandboxed extension mechanism. eBPF has enabled a new generation of software that extends the behavior of the Linux kernel to support a variety of things, from improved networking to monitoring and security. The downside of eBPF historically is that it requires a modern kernel version, and for a long time that just wasn’t a realistic option for many companies. However, things are changing, and even newer versions of RHEL finally support eBPF, so you will see more projects take advantage of it. If you look at the latest container report from Sysdig, you can see Falco adoption rising recently; although the report may be a bit biased given it comes from Sysdig, the trend is reflected in production usage. So stay tuned and look for more eBPF-based projects in the future!

Finally, Happy 2021!

I have a few more predictions and trends to share, especially around end-user-driven open source, service mesh cannibalization/standardization, Prometheus+OTel, KYC for securing the software supply chain and more, but I’ll save those for more detailed posts; nine predictions are enough to kick off the new year! Anyways, thanks for reading and I hope to see everyone at KubeCon + CloudNativeCon EU in May 2021, registration is open!


by Chris Aniszczyk at January 19, 2021 04:08 PM

DSL Forge, dead or (still) alive?

by alajmi at January 19, 2021 02:22 PM

It has been a long time since the last post I published on the DSL Forge blog. Since the initial release back in 2014 and the “hot” context of that time, a lot of water has flowed under the bridge. Over the last couple of years, a lot of effort has been spent on the Coding Park platform, a commercial product based on DSL Forge. Unfortunately, not all the developments made since then have been integrated into the open-source repository.

Anyway, I’ve finally managed to find some time to clean up the repository and fix some bugs, so it’s up to date now, and still available under the EPL licence on GitHub.

There are several reasons why the project has not progressed the way we wanted at the beginning, so let’s take a step back and think about what happened.

Lack of ambition

One of the reasons why the adoption of cloud-based tools has not taken off is the standstill, and sometimes the lack of ambition, of top managers in big industry corporations who traditionally use Eclipse technologies to build their internal products. Many companies have huge legacy desktop applications built on top of Eclipse RCP. Despite the push over the last 5 years to encourage organizations to move to the web/cloud, eventually, very few have taken action.

No standard cloud IDE

Another reason is the absence of a “standard” platform that is unanimously supported for building new tools on top of it. Of course, there are some nice cloud IDEs flourishing under the Eclipse Foundation umbrella, such as Dirigible (SAP), Theia (TypeFox), or Che (Codenvy, then Red Hat), but it’s still unclear to customers which of these is the winning horse. Today, Theia seems to be ahead of its competitors if you judge by the number of contributors and the big tech companies that push the technology forward, such as IBM, SAP, and Red Hat, just to name a few. However, the frontier between these cloud IDEs is still confusing: Theia uses the workspace component of Che, and later Theia became the official UI of Che. Theia is somehow based on VS Code, but then has its own extension mechanism, etc.

LSP or !LSP

In the meantime, there have been attempts to standardize the client/server exchange protocol for text editing, with Microsoft’s Language Server Protocol (LSP), and later with a variant of LSP to support graphical editing (GLSP). Pushing standards is a common strategy to make stakeholders in a given market collaborate in order to optimize their investments; however, as in any other standard-focused community, there is a difference between theory and practice. Achieving complete interoperability is quite unrealistic, because developing the editor front-end requires a lot of effort already, and even with LSP in mind, it is common to end up developing the same functionality specifically for each editor, which is not always the top priority of commercial projects or startups willing to reduce their time to market.

The cost of migration

As said earlier, there is a large amount of legacy source code built on Eclipse RCP. The sustainability of this code is of strategic importance for many corporations, and unfortunately, most of it is written in Java and relies on SWT. Migrating this code is expensive, as it implies rewriting a big part of it in JavaScript with a particular technical stack/framework in mind. It’s a long journey, architects have a lot of technical decisions to take along the way, and there is no guarantee that they will have taken the right ones in the long run.

The decline of the Eclipse IDE

Friends of Eclipse, don’t be upset! Having worked with a lot of junior developers over the last 5 years, I have noticed that the Eclipse IDE is no longer of interest to many of them. A few years ago, Eclipse was best known for being a good Java IDE, back in the times when IBM was a driving force in the community. Today, the situation is different; Microsoft’s VS Code has established itself as the code editor of choice. It is still incomprehensible to see the poor performance of the Eclipse IDE, especially at startup. It is urgent that one of the cloud IDEs mentioned above take over.

The high volatility of web technologies

We see new frameworks and new trends in web development technologies every day. For instance, the RIA frameworks that appeared in the early 2010s ultimately had a short life, especially with the rise of new frameworks such as React and Angular. Server-side rendering is now part of history. One consequence of this was the slowdown of investments in RIA-based frameworks, including the Eclipse Remote Application Platform (RAP). Today, RAP is still under maintenance; however, its scalability is questionable and its rendering capabilities look outdated compared to newer web frameworks. The incredible pace at which web technologies evolve is one of the factors that make decision makers hesitate to invest in cloud-based modeling tools.

The end of a cycle

As a large part of legacy code must be rewritten in JavaScript or one of its variants (TypeScript, JSX, …), many historical developers (today’s senior developers) with a background in Java have found themselves overwhelmed by the rise of new paradigms coming from the culture of web development. In legacy desktop applications, it is common to see the UI code, be it SWT or Swing, mixed in with the business logic. Of course, architects have always tried to separate the concerns as much as possible, but the same paradigm, structures, and programming language are used everywhere. With the new web frameworks, the learning curve is so steep that senior developers struggle to get to grips with the new paradigms and coding style.

EMF <-> JSON

Over the last 10 years, EMF has become an industry-proven standard for model persistence; however, it is quite unknown in the web development community. The most widely used format for data exchange on the web is JSON, and even though the facilities that come with EMF are advanced compared to the tooling support of JSON, the reality is that achieving complete bidirectionality between EMF and JSON is not always guaranteed. That being said, EclipseSource are doing a great job in this area thanks to their work on the EMF.cloud framework.

Where is DSL Forge in all of this?

The DSL Forge project will continue to exist as long as it serves users. First, because the tool is still used in academic research. With a variety of legacy R&D prototypes built on RCP, it is easy to quickly get a web-based client thanks to the port of the SWT library to the web, which does almost 90% of the job. Moreover, the framework is still used in commercial products, particularly in the fields of cybersecurity and education. For example, the Coding Park platform, initially developed on Eclipse RAP, is still marketed on this technology stack.

Originally, DSL Forge was seen as a port of Xtext to the web that relies on the ACE editor; this is half true, as it also has a nice ANTLR/ACE integration. The tool released in 2014 was ahead of its time. Companies were not ready to make the leap (a lot are still in this situation now, even with all the progress made), the demand was not mature enough, and the small number of contributors was a barrier to adoption. Given all of that, we made our own path outside the software development tools market. Meanwhile, the former colleagues from Itemis (now at TypeFox) did a really good job: not only have they built a flawless cloud IDE, but they have also managed to forge strategic partnerships which are contributing to the success of Theia. Best of luck to Theia and the incredible team at TypeFox!

To conclude

Today, Plugbee is still supporting the maintenance of DSL Forge to guarantee the sustainability of customer products.

For now, if you are looking to support a more modern technical stack, your best bet is to start with the Xtext servlet. For example, we have integrated the servlet into a Spring Boot/React application, and it works like a charm. The only effort needed to achieve the integration was to properly bind the Xtext services to the ACE editor. This work was done as part of the new release of Coding Park. The code will be extracted and made publicly available on the DSL Forge repository soon. If you are interested in this kind of integration, feel free to get in touch.

Finally, if you are interested in using Eclipse to build custom modeling tools or to migrate existing products to the web, please have a look at our training offer or feel free to contact us.


by alajmi at January 19, 2021 02:22 PM

ECF 3.14.19 released - simplify remote service discovery via properties

by Scott Lewis (noreply@blogger.com) at January 08, 2021 12:21 AM

ECF 3.14.19 has been released.

Along with the usual bug fixes, this release includes new documentation on the use of properties for discovering and importing remote services. The docs describe how properties files can simplify the import of remote services.

This capability is especially useful for Eclipse RCP clients accessing Jax-RS/REST remote services.

Patrick Paulin describes a production usage in his blog posting here.


by Scott Lewis (noreply@blogger.com) at January 08, 2021 12:21 AM

WTP 3.20 Released!

December 16, 2020 08:55 PM

The Eclipse Web Tools Platform 3.20 has been released! Installation and updates can be performed using the Eclipse IDE 2020-12 Update Site or through any of the related Eclipse Marketplace listings. Release 3.20 is included in the 2020-12 Eclipse IDE for Enterprise Java Developers, with selected portions also included in several other packages. Adopters can download the R3.20 p2 repository directly and combine it with the necessary dependencies.

More news


December 16, 2020 08:55 PM

LiClipse 7.1.0 released (improved Dark theme, LiClipseText and PyDev updates)

by Fabio Zadrozny (noreply@blogger.com) at December 08, 2020 07:52 PM

I'm happy to announce that LiClipse 7.1.0 is now available for download.

LiClipse is now based on Eclipse 4.17 (2020-09); one really nice feature is that this now enables dark scrollbars for trees on Windows.

I think an image may be worth a thousand words here, so below is a screenshot showing how the LiClipse Dark theme looks (on Windows) with the changes!

This release also updates PyDev to 8.1.0, which provides support for Python 3.9 as well as quick-fixes to convert strings to f-strings, among many other things (see: https://pydev.blogspot.com/2020/12/pydev-810-released-python-39-code.html for more details).

Another upgraded dependency is LiClipseText 2.2.0, which now provides grammars to support TypeScript, RobotFramework and JSON by default.



by Fabio Zadrozny (noreply@blogger.com) at December 08, 2020 07:52 PM

ECA Validation Update for Gerrit

December 08, 2020 05:45 PM

We are planning to install a new version of our Gerrit ECA validation plugin this week in an effort to reduce errors when a contribution is validated.

With this update, we are moving our validation logic to our new ECA Validation API that we created for our new Gitlab instance.

We are planning to push these changes live on Wednesday, December 9 at 16:00 GMT, though there is no planned downtime associated with this update.

Our plan is to revert back to a previous version of the plugin if we detect any anomalies after deploying this change.

Please note that we are also planning to apply these changes to our GitHub ECA validation app in Q1 of 2021. You can expect more news about this in the new year!

For those interested, the code for the API and the plugin is open source and can be seen at git-eca-rest-api and gerrit-eca-plugin.

Please use our GitHub issue to discuss any concerns you might have with this change.


December 08, 2020 05:45 PM

Become an Eclipse Technology Adopter

December 04, 2020 05:50 PM

Did you know that organizations — whether they are members of the Eclipse Foundation or not — can be listed as Eclipse technology adopters?

In November 2019, the Eclipse IoT working group launched a campaign to promote adopters of Eclipse IoT technologies. Since then, more than 60 organizations have shown their support for various Eclipse IoT projects.

With that success in mind, we decided to build a new API service responsible for managing adopters for all our projects.

If needed, this new service will allow us to create an Adopters page for each of our working groups. This is something that we are currently working on for Eclipse Cloud Development Tools. Organizations that wish to be listed on this new page can submit their request today by following our instructions.

On top of that, every Eclipse project can now leverage our JavaScript plugin to display logos of adopters without committing them to their website’s git repository.

As an example, you can check out the Eclipse Ditto website.

What Is Changing?

We are migrating logos and related metadata to a new repository. This means that adopters of Eclipse IoT technologies will be asked to submit their request to this new repository. This change is expected to occur on December 10, 2020.

We plan on updating our documentation to point new users to this new repository. If an issue is created in the wrong repository, we will simply move them to the right location.

The process with this new repository is very similar, but we did make some improvements:

  1. The path where we store logos is changing.
  2. The file format is changing from .yml to .json to reduce user errors.
  3. The structure of the file was modified to make it easier for an organization to adopt multiple projects.

We expect this change to be seamless for our users. The content of the Eclipse IoT Adopters page won’t change, and the JavaScript widget hosted on iot.eclipse.org will continue to work as is.

Please create an issue if you have any questions or concerns regarding this migration.

How Can My Organization Be Listed as an Adopter of Eclipse Technology?

The preferred way to become an adopter is with a pull-request:

  1. Add a colored and a white organization logo to static/assets/images/adoptors. We expect logos to be submitted as .svg and they must be transparent. The file size should be less than 20kb since we are planning to use them for the web!
  2. Update the adopter JSON file: config/adopters.json. Organizations can be easily marked as having multiple adopted projects across different working groups, no need to create separate entries for different projects or working groups!
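For illustration, an entry in config/adopters.json might look roughly like this. This is a hypothetical sketch: the field names ("name", "homepage_url", "logo", "logo_white", "projects") and the project IDs are assumptions for this example, so check the repository's README for the actual schema before submitting a pull request.

```json
{
  "adopters": [
    {
      "name": "Example Corp",
      "homepage_url": "https://www.example.com",
      "logo": "example.svg",
      "logo_white": "example-white.svg",
      "projects": ["iot.ditto", "ecd.theia"]
    }
  ]
}
```

Listing several entries under "projects" is what lets one organization appear as an adopter of multiple projects without duplicate entries.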

The alternative way to become an adopter is to submit an issue with your logo and the project name that your organization has adopted.

How Can We List Adopters on Our Project Website?

We built a JavaScript plugin to make this process easier.

Usage

Include our plugin in your page:

<script src="//api.eclipse.org/adopters/assets/js/eclipsefdn.adopters.js"></script>

Load the plugin:

<script>
eclipseFdnAdopters.getList({
project_id: "[project_id]"
});
</script>

Create an HTML element containing the chosen selector:

<div class="eclipsefdn-adopters"></div>
  • By default, the selector’s value is eclipsefdn-adopters.

Options

<script>
eclipseFdnAdopters.getList({
project_id: "[project_id]",
selector: ".eclipsefdn-adopters",
ul_classes: "list-inline",
logo_white: false
});
</script>
  • project_id (String, required): Select adopters from a specific project ID.
  • selector (String, default: .eclipsefdn-adopters): Define the selector that the plugin will insert adopters into.
  • ul_classes (String, no default): Define classes that will be assigned to the ul element.
  • logo_white (Boolean, default: false): Whether or not we use the white version of the logo.

For more information, please refer to our README.md.

A huge thank you to Martin Lowe for all his contributions to this project! His hard work and dedication were crucial for getting this project done on time!


December 04, 2020 05:50 PM

Add Checkstyle support to Eclipse, Maven, and Jenkins

by Christian Pontesegger (noreply@blogger.com) at December 02, 2020 08:52 AM

After PMD and SpotBugs we will have a look at Checkstyle integration into the IDE and our maven builds. Parts of this tutorial are already covered by Lars' tutorial on Using the Checkstyle Eclipse plug-in.

Step 1: Add Eclipse IDE Support

First install the Checkstyle Plugin via the Eclipse Marketplace. Before we enable the checker, we need to define a ruleset to run against. As in the previous tutorials, we will set up project-specific rules backed by one ruleset that can also be used by maven later on.

Create a new file for your rules in <yourProject>.releng/checkstyle/checkstyle_rules.xml. If you are familiar with writing rules, just add them. In case you are new, you might want to start with one of the default rulesets of Checkstyle.
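If you want a concrete starting point, a minimal ruleset might look like the sketch below. It uses a few standard Checkstyle modules (FileTabCharacter, UnusedImports, EmptyBlock, EqualsHashCode); the selection here is just an example, so pick the checks that match your project's conventions.

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <!-- file-level checks run on the raw source files -->
  <module name="FileTabCharacter"/>
  <module name="TreeWalker">
    <!-- TreeWalker children run on the parsed Java syntax tree -->
    <module name="UnusedImports"/>
    <module name="EmptyBlock"/>
    <module name="EqualsHashCode"/>
  </module>
</module>
```

The same file can then be referenced unchanged from both the Eclipse plugin and the maven build.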

Once we have some rules, we need to add them to our projects. To do so, right-click on a project and select Checkstyle/Activate Checkstyle. This will add the project nature and a builder. To make use of our common ruleset, create a file <project>/.checkstyle with the following content.

<?xml version="1.0" encoding="UTF-8"?>

<fileset-config file-format-version="1.2.0" simple-config="false" sync-formatter="false">
  <local-check-config name="Skills Checkstyle" location="/yourProject.releng/checkstyle/checkstyle_rules.xml" type="project" description="">
    <additional-data name="protect-config-file" value="false"/>
  </local-check-config>
  <fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
    <file-match-pattern match-pattern=".java$" include-pattern="true"/>
  </fileset>
</fileset-config>

Make sure to adapt the name and location attributes of local-check-config according to your project structure.

Checkstyle will now run automatically on builds or can be triggered manually via the context menu: Checkstyle/Check Code with Checkstyle.

Step 2: Modifying Rules

While we had to do our setup manually, we can now use the UI integration to adapt our rules. Select the Properties context entry of a project and navigate to Checkstyle, page Local Check Configurations. There select your ruleset and click Configure... The following dialog allows you to add/remove rules and to change rule properties. All your changes are backed by the checkstyle_rules.xml file we created earlier.

Step 3: Maven Integration

We need to add the Maven Checkstyle Plugin to our build. To do so, add the following section to your master pom:

<properties>
  <maven.checkstyle.version>3.1.1</maven.checkstyle.version>
</properties>

<build>
  <plugins>
    <!-- enable checkstyle code analysis -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>${maven.checkstyle.version}</version>
      <configuration>
        <configLocation>../../releng/yourProject.releng/checkstyle/checkstyle_rules.xml</configLocation>
        <linkXRef>false</linkXRef>
      </configuration>
      <executions>
        <execution>
          <id>checkstyle-integration</id>
          <phase>verify</phase>
          <goals>
            <goal>check</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

In the configuration we reference the ruleset we also use for the IDE plugin. Make sure that the relative path fits your project setup. In the provided setup, execution is bound to the verify phase.

Step 4: File Exclusions

Excluding files has to be handled differently for the IDE and Maven. The Eclipse plugin allows you to define inclusions and exclusions via file-match-pattern entries in the .checkstyle configuration file. To exclude a certain package use:

<fileset name="All files" enabled="true" check-config-name="Skills Checkstyle" local="true">
  ...
  <file-match-pattern match-pattern="org.yourproject.generated.package.*$" include-pattern="false"/>
</fileset>

In maven we need to add exclusions via the plugin configuration section. Typically such exclusions would go into the pom of a specific project and not the master pom:

<build>
  <plugins>
    <!-- remove generated resources from checkstyle code analysis -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>${maven.checkstyle.version}</version>
      <configuration>
        <excludes>**/org/yourproject/generated/package/**/*</excludes>
      </configuration>
    </plugin>
  </plugins>
</build>

Step 5: Jenkins Integration

If you followed my previous tutorials on code checkers, then this is business as usual: use the warnings-ng plugin on Jenkins to track our findings:

	recordIssues tools: [checkStyle()]

Try out the live chart on the skills project.


by Christian Pontesegger (noreply@blogger.com) at December 02, 2020 08:52 AM

Add SpotBugs support to Eclipse, Maven, and Jenkins

by Christian Pontesegger (noreply@blogger.com) at November 24, 2020 06:01 PM

SpotBugs (the successor of FindBugs) is a tool for static code analysis, similar to PMD. Both tools help to detect bad code constructs which might need improvement. As they partly detect different issues, they can well be combined and used simultaneously.

Step 1: Add Eclipse IDE Support

The SpotBugs Eclipse Plugin can be installed directly via the Eclipse Marketplace.

After installation, projects can be configured to use it from the project’s Properties context menu. Navigate to the SpotBugs category and enable all checkboxes on the main page. Further, set Minimum rank to report to 20 and Minimum confidence to report to Low.

Once done, SpotBugs immediately scans the project for problems. Found issues are displayed as custom markers in editors. Further, they are visible in the Bug Explorer view as well as in the Problems view.

SpotBugs also comes with label decorations on elements in the Package Explorer. If you do not like these, disable all Bug count decorator entries in Preferences/General/Appearance/Label Decorations.

Step 2: Maven Integration

Integration is done via the SpotBugs Maven Plugin. To enable it, add the following section to your master pom:

<properties>
  <maven.spotbugs.version>4.1.4</maven.spotbugs.version>
</properties>

<build>
  <plugins>
    <!-- enable spotbugs code analysis -->
    <plugin>
      <groupId>com.github.spotbugs</groupId>
      <artifactId>spotbugs-maven-plugin</artifactId>
      <version>${maven.spotbugs.version}</version>
      <configuration>
        <effort>Max</effort>
        <threshold>Low</threshold>
        <fork>false</fork>
      </configuration>
      <executions>
        <execution>
          <id>spotbugs-integration</id>
          <phase>verify</phase>
          <goals>
            <goal>spotbugs</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

The execution entry ensures that the spotbugs goal is automatically executed during the verify phase. If you remove the execution section, you have to call the spotbugs goal separately:

mvn spotbugs:spotbugs

Step 3: File Exclusions

You might have code that you do not want to get checked (e.g. generated files). Exclusions need to be defined in an XML file. A simple filter on package level looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<FindBugsFilter>
  <!-- skip EMF generated packages -->
  <Match>
    <Package name="~org\.eclipse\.skills\.model.*" />
  </Match>
</FindBugsFilter>

See the documentation for a full description of filter definitions.

Once defined, this file can be used from the SpotBugs Eclipse plugin as well as from the maven setup.

To simplify the Maven configuration, we can add the following profile to our master pom:

<profiles>
  <profile>
    <!-- apply filter when filter file exists -->
    <id>auto-spotbugs-exclude</id>
    <activation>
      <file>
        <exists>.settings/spotbugs-exclude.xml</exists>
      </file>
    </activation>

    <build>
      <plugins>
        <!-- enable spotbugs exclude filter -->
        <plugin>
          <groupId>com.github.spotbugs</groupId>
          <artifactId>spotbugs-maven-plugin</artifactId>
          <version>${maven.spotbugs.version}</version>

          <configuration>
            <excludeFilterFile>.settings/spotbugs-exclude.xml</excludeFilterFile>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

The profile is automatically enabled when a file named .settings/spotbugs-exclude.xml exists in the current project.

Step 4: Jenkins Integration

As with PMD, we again use the Warnings NG plugin on Jenkins to track our findings:

	recordIssues tools: [spotBugs(useRankAsPriority: true)]

Try out the live chart on the skills project.

Final Thoughts

PMD integration is smoother, as it stores its rulesets in a common file that can be shared by Maven and the Eclipse plugin. SpotBugs currently requires rulesets to be managed separately. Still, both can be set up so that users automatically get the same warnings in Maven and in the IDE.


by Christian Pontesegger (noreply@blogger.com) at November 24, 2020 06:01 PM

My main update site moved

by Andrey Loskutov (noreply@blogger.com) at November 23, 2020 08:51 AM

My hosting provider GMX decided that the free hosting they offered for over a decade no longer fits their portfolio (for some security reasons) and simply switched my andrei.gmxhome.de domain off.

Quote (translated from German):

... for security reasons, we regularly modernize our product portfolio. In the course of this, we would like to inform you that we are terminating your webspace with the subdomain name andrei.gmxhome.de effective 19.11.2020.

Because of that, the Eclipse update site for all my plugins has moved:

from http://andrei.gmxhome.de/eclipse/ 

to https://raw.githubusercontent.com/iloveeclipse/plugins/latest/.

Same way, my "home" is moved to https://github.com/iloveeclipse/plugins/wiki.

(GitHub obviously has no issues with free hosting.)

That means anyone who used my main update site in scripts or Oomph setups has to change them to use https://raw.githubusercontent.com/iloveeclipse/plugins/latest/ instead.
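For plain-text scripts and Oomph setup files, the switch can be scripted. A minimal sketch (the file name and its content are placeholders, not real setup files):

```shell
# demo: a setup file still pointing at the old site
printf 'repository=http://andrei.gmxhome.de/eclipse/\n' > my-oomph.setup

# rewrite the old update site URL to the new one (backup kept as .bak)
OLD='http://andrei.gmxhome.de/eclipse/'
NEW='https://raw.githubusercontent.com/iloveeclipse/plugins/latest/'
sed -i.bak "s|$OLD|$NEW|g" my-oomph.setup

cat my-oomph.setup
```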

I'm sorry about that, but it was nothing I could change.


by Andrey Loskutov (noreply@blogger.com) at November 23, 2020 08:51 AM

What’s new in Fabric8 Kubernetes Java client 4.12.0

by Rohan Kumar at October 30, 2020 07:00 AM

The recent Fabric8 Kubernetes Java client 4.12.0 release includes many new features and bug fixes. This article introduces the major features we’ve added between the 4.11.0 and 4.12.0 releases.

I will show you how to get started with the new VolumeSnapshot extension, CertificateSigningRequests, and Tekton triggers in the Fabric8 Tekton client (to name just a few). I’ll also point out several minor changes that break backward compatibility with older releases. Knowing about these changes will help you avoid problems when you upgrade to the latest version of Fabric8’s Java client for Kubernetes or Red Hat OpenShift.

How to get the new Fabric8 Java client

You will find the most current Fabric8 Java client release on Maven Central. To start using the new Java client, add it as a dependency in your Maven pom.xml. For Kubernetes, the dependency is:

<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>kubernetes-client</artifactId>
  <version>4.12.0</version>
</dependency>

For OpenShift, it’s:

<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>openshift-client</artifactId>
  <version>4.12.0</version>
</dependency>

Breaking changes in this release

We have moved several classes for this release, so upgrading to the new version of the Fabric8 Kubernetes Java client might not be completely smooth. The changes are as follows:

  • We moved the CustomResourceDefinition to io.fabric8.kubernetes.api.model.apiextensions.v1 and io.fabric8.kubernetes.api.model.apiextensions.v1beta1.
  • We moved SubjectAccessReview, SelfSubjectAccessReview, LocalSubjectAccessReview, and SelfSubjectRulesReview to io.fabric8.kubernetes.api.model.authorization.v1 and io.fabric8.kubernetes.api.model.authorization.v1beta1.
  • The io.fabric8.tekton.pipeline.v1beta1.WorkspacePipelineDeclaration is now io.fabric8.tekton.pipeline.v1beta1.PipelineWorkspaceDeclaration.
  • We introduced a new interface, WatchAndWaitable, which is used by WatchListDeletable and other interfaces. This change should not affect you if you are using the Fabric8 Kubernetes Java client’s domain-specific language (DSL).

The new VolumeSnapshot extension

You might know about the Fabric8 Kubernetes Java client extensions for Knative, Tekton, Istio, and Service Catalog. In this release, we’ve added a new Container Storage Interface (CSI) VolumeSnapshot extension. VolumeSnapshots live in the snapshot.storage.k8s.io/v1beta1 API group. To start using the new extension, add the following dependency to your Maven pom.xml:

<dependency>
  <groupId>io.fabric8</groupId>
  <artifactId>volumesnapshot-client</artifactId>
  <version>4.12.0</version>
</dependency>

Once you’ve added the dependency, you can start using the VolumeSnapshotClient. Here’s an example of how to create a VolumeSnapshot:

try (VolumeSnapshotClient client = new DefaultVolumeSnapshotClient()) {
      System.out.println("Creating a volume snapshot");
      client.volumeSnapshots().inNamespace("default").createNew()
        .withNewMetadata()
        .withName("my-snapshot")
        .endMetadata()
        .withNewSpec()
        .withNewSource()
        .withNewPersistentVolumeClaimName("my-pvc")
        .endSource()
        .endSpec()
        .done();
    }

Spin up a single pod with client.run()

Just like you would with kubectl run, you can quickly spin up a pod with the Fabric8 Kubernetes Java client. You only need to provide a name and image:

try (KubernetesClient client = new DefaultKubernetesClient()) {
    client.run().inNamespace("default").withName("hello-openshift")
            .withImage("openshift/hello-openshift:latest")
            .done();
}

Authentication API support

A new authentication API lets you use the Fabric8 Kubernetes Java client to query a Kubernetes cluster. You should be able to use the API for all operations equivalent to kubectl auth can-i. Here’s an example:

try (KubernetesClient client = new DefaultKubernetesClient()) {
    SelfSubjectAccessReview ssar = new SelfSubjectAccessReviewBuilder()
            .withNewSpec()
            .withNewResourceAttributes()
            .withGroup("apps")
            .withResource("deployments")
            .withVerb("create")
            .withNamespace("dev")
            .endResourceAttributes()
            .endSpec()
            .build();

    ssar = client.authorization().v1().selfSubjectAccessReview().create(ssar);

    System.out.println("Allowed: "+  ssar.getStatus().getAllowed());
}

OpenShift 4 resources

The Fabric8 Kubernetes Java client now supports all of the new OpenShift 4 resources in its OpenShift model. Additional resources added in operators.coreos.com, operators.openshift.io, console.openshift.io, and monitoring.coreos.com are also available within the OpenShift model. Here is an example of using PrometheusRule to monitor a Prometheus instance:

try (OpenShiftClient client = new DefaultOpenShiftClient()) {
    PrometheusRule prometheusRule = new PrometheusRuleBuilder()
            .withNewMetadata().withName("foo").endMetadata()
            .withNewSpec()
            .addNewGroup()
            .withName("./example-rules")
            .addNewRule()
            .withAlert("ExampleAlert")
            .withNewExpr().withStrVal("vector(1)").endExpr()
            .endRule()
            .endGroup()
            .endSpec()
            .build();

    client.monitoring().prometheusRules().inNamespace("rokumar").createOrReplace(prometheusRule);
    System.out.println("Created");

    PrometheusRuleList prometheusRuleList = client.monitoring().prometheusRules().inNamespace("rokumar").list();
    System.out.println(prometheusRuleList.getItems().size() + " items found");
}

Certificate signing requests

We’ve added a new entry point, certificateSigningRequests(), in the main KubernetesClient interface. This means you can use CertificateSigningRequest resources in all of your applications developed with Fabric8:

try (KubernetesClient client = new DefaultKubernetesClient()) {

    CertificateSigningRequest csr = new CertificateSigningRequestBuilder()
            .withNewMetadata().withName("test-k8s-csr").endMetadata()
            .withNewSpec()
            .addNewGroup("system:authenticated")
            .withRequest("<your-req>")
            .addNewUsage("client auth")
            .endSpec()
            .build();
    client.certificateSigningRequests().create(csr);
}

Custom resource definitions

We’ve moved the apiextensions/v1 CustomResourceDefinition (CRD) to the io.fabric8.kubernetes.api.model.apiextensions.v1beta1 and io.fabric8.kubernetes.api.model.apiextensions.v1 packages. You can now use CustomResourceDefinition objects inside apiextensions() like this:

try (KubernetesClient client = new DefaultKubernetesClient()) {
    client.apiextensions().v1()
            .customResourceDefinitions()
            .list()
            .getItems().forEach(crd -> System.out.println(crd.getMetadata().getName()));
}

Creating bootstrap project templates

We’ve provided a new, built-in way to create a project with all of the role bindings you need. It works like OpenShift’s oc adm create-bootstrap-project-template command. Specify the parameters that the template requires in the DSL method. The method then creates the Project and related RoleBindings for you:

try (OpenShiftClient client = new DefaultOpenShiftClient()) {
    client.projects().createProjectAndRoleBindings("default", "Rohan Kumar", "default", "developer", "developer");
}

Tekton model 0.15.1

We’ve updated the Tekton model to version 0.15.1 so that you can take advantage of all the newest upstream features and enhancements for Tekton. This example creates a simple Task and TaskRun to echo “hello world” in a pod. Instead of YAML, we use the Fabric8 TektonClient:

try (TektonClient tkn = new DefaultTektonClient()) {
    // Create Task
    tkn.v1beta1().tasks().inNamespace(NAMESPACE).createOrReplaceWithNew()
            .withNewMetadata().withName("echo-hello-world").endMetadata()
            .withNewSpec()
            .addNewStep()
            .withName("echo")
            .withImage("alpine:3.12")
            .withCommand("echo")
            .withArgs("Hello World")
            .endStep()
            .endSpec()
            .done();

    // Create TaskRun
    tkn.v1beta1().taskRuns().inNamespace(NAMESPACE).createOrReplaceWithNew()
            .withNewMetadata().withName("echo-hello-world-task-run").endMetadata()
            .withNewSpec()
            .withNewTaskRef()
            .withName("echo-hello-world")
            .endTaskRef()
            .endSpec()
            .done();
}

When you run this code, you will see the Task and TaskRun being created. The TaskRun, in turn, creates a pod, which prints the “Hello World” message:

tekton-java-client-demo : $ tkn taskrun list
NAME                        STARTED         DURATION     STATUS
echo-hello-world-task-run   2 minutes ago   19 seconds   Succeeded
tekton-java-client-demo : $ kubectl get pods
NAME                                  READY   STATUS      RESTARTS   AGE
echo-hello-world-task-run-pod-4gczw   0/1     Completed   0          2m17s
tekton-java-client-demo : $ kubectl logs pod/echo-hello-world-task-run-pod-4gczw
Hello World

Tekton triggers in the Fabric8 Tekton client

The Fabric8 Tekton client and model now support Tekton triggers. You can use triggers to automate the creation of Tekton pipelines. All you have to do is embed your triggers in the Tekton continuous deployment (CD) pipeline. Here is an example of using the Fabric8 Tekton client to create a Tekton trigger template:

try (TektonClient tkn = new DefaultTektonClient()) {
    tkn.v1alpha1().triggerTemplates().inNamespace(NAMESPACE).createOrReplaceWithNew()
            .withNewMetadata().withName("pipeline-template").endMetadata()
            .withNewSpec()
                .addNewParam()
                    .withName("gitrepositoryurl")
                    .withDescription("The git repository url")
                .endParam()
                .addNewParam()
                    .withName("gitrevision")
                    .withDescription("The git revision")
                .endParam()
                .addNewParam()
                    .withName("message")
                    .withDescription("The message to print")
                    .withDefault("This is default message")
                .endParam()
                .addNewParam()
                    .withName("contenttype")
                    .withDescription(" The Content-Type of the event")
                .endParam()
            .withResourcetemplates(Collections.singletonList(new PipelineRunBuilder()
                    .withNewMetadata().withGenerateName("simple-pipeline-run-").endMetadata()
                    .withNewSpec()
                        .withNewPipelineRef().withName("simple-pipeline").endPipelineRef()
                        .addNewParam()
                            .withName("message")
                            .withValue(new ArrayOrString("$(tt.params.message)"))
                        .endParam()
                        .addNewParam()
                            .withName("contenttype")
                            .withValue(new ArrayOrString("$(tt.params.contenttype)"))
                        .endParam()
                        .addNewResource()
                            .withName("git-source")
                            .withNewResourceSpec()
                                .withType("git")
                                .addNewParam()
                                .withName("revision")
                                .withValue("$(tt.params.gitrevision)")
                                .endParam()
                                .addNewParam()
                                .withName("url")
                                .withValue("$(tt.params.gitrepositoryurl)")
                                .endParam()
                            .endResourceSpec()
                        .endResource()
                    .endSpec()
                    .build()))
            .endSpec()
            .done();
}

Automatically refresh OpenID Connect tokens

If your Kubernetes provider uses OpenID Connect tokens (like IBM Cloud), you don’t need to worry about your tokens expiring. The new Fabric8 Kubernetes Java client automatically refreshes your tokens by contacting the OpenID Connect provider, which is listed in the ~/.kube/config.
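Under the hood this relies on the oidc auth-provider section of the kubeconfig. A minimal sketch with placeholder values (not taken from the article):

```yaml
users:
- name: my-oidc-user
  user:
    auth-provider:
      name: oidc
      config:
        idp-issuer-url: https://my-issuer.example.com
        client-id: my-client
        client-secret: my-secret
        id-token: <current-id-token>
        refresh-token: <current-refresh-token>
```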

Support for Knative 0.17.2 and Knative Eventing Contrib

For this release, we’ve updated the Knative model to the latest version. We also added new support for the additional resources from Knative Eventing Contrib, which involves sources and channel implementations that integrate with Apache CouchDB, Apache Kafka, Amazon Simple Queue Service (AWS SQS), GitHub, GitLab, and so on.

Here’s an example of creating an AwsSqsSource using KnativeClient:

try (KnativeClient client = new DefaultKnativeClient()) {
    AwsSqsSource awsSqsSource = new AwsSqsSourceBuilder()
            .withNewMetadata().withName("awssqs-sample-source").endMetadata()
            .withNewSpec()
            .withNewAwsCredsSecret("credentials", "aws-credentials", true)
            .withQueueUrl("QUEUE_URL")
            .withSink(new ObjectReferenceBuilder()
                    .withApiVersion("messaging.knative.dev/v1alpha1")
                    .withKind("Channel")
                    .withName("awssqs-test")
                    .build())
            .endSpec()
            .build();
    client.awsSqsSources().inNamespace("default").createOrReplace(awsSqsSource);
}

Get involved!

There are a few ways to get involved with the development of the Fabric8 Kubernetes Java client:


The post What’s new in Fabric8 Kubernetes Java client 4.12.0 appeared first on Red Hat Developer.


by Rohan Kumar at October 30, 2020 07:00 AM

e(fx)clipse 3.7.0 is released

by Tom Schindl at October 12, 2020 06:50 PM

We are happy to announce that e(fx)clipse 3.7.0 has been released. This release contains the following repositories/subprojects:

There are almost no new features (e.g., the new boxshadow), but mostly bugfixes, which are very important if you use OpenJFX in an OSGi environment.

For those of you who already use our pom-first approach, the new bits have been pushed to https://maven.bestsolution.at/efxclipse-releases/ and the sample application at https://github.com/BestSolution-at/e4-efxclipse-maven-sample/tree/efxclipse-3.7.0 has been updated to use the latest release.


by Tom Schindl at October 12, 2020 06:50 PM

Getting started with Eclipse GEF – the Mindmap Tutorial

by Tamas Miklossy (miklossy@itemis.de) at October 12, 2020 06:00 AM

The Eclipse Graphical Editing Framework is a toolkit for creating graphical Java applications, either integrated into Eclipse or standalone. The most common use of the framework is to develop diagram editors, like the simple Mindmap editor we will create in the GEF Mindmap Tutorial series. Currently, the tutorial consists of 6 parts and altogether 19 steps. They are structured as follows:


Part I – The Foundations

  • Step 1: Preparing the development environment
  • Step 2: Creating the model
  • Step 3: Defining the visuals

Part II – GEF MVC

  • Step 4: Creating the GEF parts
  • Step 5: Models, policies and behaviors
  • Step 6: Moving and resizing a node

Part III – Adding nodes and connections

  • Step 7: Undo and redo operations
  • Step 8: Creating new nodes
  • Step 9: Creating connections

Part IV – Modifying and removing nodes

  • Step 10: Deleting nodes (1)
  • Step 11: Modifying nodes
  • Step 12: Creating feedback
  • Step 13: Deleting nodes (2)

Part V – Creating an Eclipse editor

  • Step 14: Creating an Eclipse editor
  • Step 15: Undo, redo, select all and delete in Eclipse
  • Step 16: Contributing toolbar actions

Part VI – Automatic layouting

  • Step 17: Automatic layouting via GEF layout
  • Step 18: Automatic layouting via Graphviz DOT
  • Step 19: Automatic layouting via the Eclipse Layout Kernel

You can register for the tutorial series using the link below. The article How to set up Eclipse tool development with OpenJDK, GEF, and OpenJFX describes the necessary steps to properly set up your development environment.

Your feedback regarding the Mindmap Tutorial (and the Eclipse GEF project in general) is highly appreciated. If you have any questions or suggestions, please let us know via the Eclipse GEF forum, or create an issue on Eclipse Bugzilla.

For further information, we recommend taking a look at the Eclipse GEF blog articles and watching the Eclipse GEF session from EclipseCon Europe 2018.

 

Register for the GEF Tutorials


by Tamas Miklossy (miklossy@itemis.de) at October 12, 2020 06:00 AM

Eclipse Collections 10.4.0 Released

by Nikhil Nanivadekar at October 09, 2020 08:36 PM

View of the Grinnell Glacier from overlook point after a grueling 9 mile hike

This is a release we had not planned for, but we released it nonetheless.

This must be the first time since we open sourced Eclipse Collections that we performed two releases within the same month.

Changes in Eclipse Collections 10.4.0

There are only two changes in the 10.4.0 release compared to the feature-rich 10.3.0 release:

  • Added CharAdapter.isEmpty(), CodePointAdapter.isEmpty(), CodePointList.isEmpty(), as JDK-15 introduced CharSequence.isEmpty().
  • Fixed Javadoc errors.

Why was release 10.4.0 necessary?

In today’s rapid-deployment world, it should not be novel for a project to perform multiple releases. However, the Eclipse Collections maintainer team performs releases when one or more of the below criteria are satisfied:

  1. A bulk of features is ready to be released
  2. A user requests a release for their use case
  3. JDK-EA compatibility is breaking
  4. More than 6 months have passed since the last release

The Eclipse Collections 10.4.0 release was necessary due to point #3. Eclipse Collections participates in the Quality Outreach program of OpenJDK. As part of this program, the library is expected to test the Early Access (EA) versions of Java and identify potential issues in the library or the JDK. I had missed setting up the JDK-15-EA builds until after Eclipse Collections 10.3.0 was released. After setting up the JDK-15-EA builds on 16 August 2020, I found compiler issues in the library due to isEmpty() being added as a default method on CharSequence. Stuart Marks has written an in-depth blog about why this new default method broke compatibility. So, we had two options: leave the library incompatible with JDK-15, or release a new version with the fix. The Eclipse Collections team believes in supporting Java versions from Java 8 to Java-EA. After release 10.3.0, we had opened a new major version target (11.0.0), but the changes required did not warrant a new major version. So, we decided to release 10.4.0 with the fixes to support JDK-15. The Eclipse Collections 10.4.0 release is compatible with JDK-15 and JDK-16-EA.
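The clash can be sketched with two plain interfaces. All names below are invented for illustration; they merely stand in for CharSequence and the library's own type hierarchy:

```java
// Invented stand-ins: neither interface is real JDK or Eclipse Collections code.
interface PrimitiveIterableLike {
    boolean isEmpty(); // the library's own abstract declaration
}

interface CharSequenceLike {
    int length();

    // stands in for the default isEmpty() that JDK 15 added to CharSequence
    default boolean isEmpty() {
        return length() == 0;
    }
}

// A concrete type inheriting an abstract isEmpty() and an unrelated default
// isEmpty() no longer compiles unless it declares the method itself, which is
// effectively the fix Eclipse Collections 10.4.0 shipped.
class CharAdapterLike implements PrimitiveIterableLike, CharSequenceLike {
    private final String data;

    CharAdapterLike(String data) {
        this.data = data;
    }

    @Override
    public int length() {
        return data.length();
    }

    @Override
    public char charAt(int index) {
        return data.charAt(index);
    }

    @Override
    public boolean isEmpty() {
        return data.isEmpty();
    }
}
```

Note that charAt() is included only to make the sketch self-contained; the essential part is the explicit isEmpty() override.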

Thank you

To the vibrant and supportive Eclipse Collections community on behalf of contributors, committers, and maintainers for using Eclipse Collections. We hope you enjoy Eclipse Collections 10.4.0.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions.

Show your support, star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set, and Map. It also has additional data structures like Multimap, Bag, and an entire primitive collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

Photo for the blog: I took this photo after hiking to the Grinnell Glacier overlook point. It was a strenuous hike, but the view from the top made it worth it. I picked this photo to convey the sense of accomplishment after completing a release in a short amount of time.


Eclipse Collections 10.4.0 Released was originally published in Oracle Developers on Medium, where people are continuing the conversation by highlighting and responding to this story.


by Nikhil Nanivadekar at October 09, 2020 08:36 PM

Obeo's Chronicles, Autumn 2020

by Cédric Brun (cedric.brun@obeo.fr) at October 06, 2020 12:00 AM

I can’t believe we are already looking at Q4. I have so much news to share with you!

Eclipse Sirius, Obeo Cloud Platform and Sirius Web:

This last summer we had the pleasure of organizing SiriusCon. Each year, this one-day event is an opportunity for the modeling community to share their experience, and for the development team to provide visibility into what is currently being worked on and how we see the future of the technology. SiriusCon reached 450 attendees from 53 different countries, thanks to 13 fabulous speakers!

The latest edition was special to us. It used to be organized at the end of each year, but we decided to postpone it for a few months to be ready for an announcement very close to our hearts. We have been working on bringing what we love about Sirius to the Web for quite a few years already, and we have reached a point where we have a promising product. Now is the time to accelerate. Mélanie Bats announced it during the conference: we are releasing “Sirius Web” as open source and have officially started the countdown!

The announcement at SiriusCon 2020

The reactions to this announcement were fantastic with a lot of excitement within the community.

I am myself very excited for several reasons:

Firstly, I expect this decision will be, just like the 2013 open-source release of Sirius Desktop, a key factor leading to the creation of hundreds of graphical modelers, of the kind currently demonstrated by the Sirius Gallery, but now easily accessible through the Web and leveraging all the capabilities this platform brings.

Our vision is to empower the tool specifier from the data structure and tool definition up to the deployment and exploitation of a modeling tool, directly from the browser, end to end and in an integrated and seamless way.

We are not there yet, though as you’ll see, the technology is already quite promising.

Obeo Cloud Platform Modeler

Secondly, for Obeo this decision strengthens our product-based business model while remaining faithful to our “open core” approach. Through Obeo Cloud Platform, we will offer a Sirius Web build extended with enterprise features, to deploy on public or private clouds or on premise, including support and upgrade guarantees.

Obeo Cloud Platform Offer

Since the announcement, the team has been working on publishing Sirius Web as an open-source product so that you can start experimenting as soon as EclipseCon 2020. Mélanie will present this in detail during her talk: “Sirius Web: 100% open source cloud modeling platform”.

EclipseCon 2020

Hint: there is still time to register for EclipseCon 2020, but do it quickly! The program committee did an excellent job of setting up an exciting program thanks to your many submissions; don’t miss it!

Capella Days Online is coming up!

That’s not all! Each day we see Eclipse Capella gain more and more adoption across the globe, and this open-source product has its own 4-day event: Capella Days Online 2020!

A unique occasion to get experience reports from multiple domains: space systems (CNES and GMV), rail and transportation (Virgin Hyperloop, Nextrail, and Vitesco Technologies), healthcare (Siemens and Still AB), and waste collection (The SeaCleaners), all in addition to aerospace, defence, and security (Thales Group). The program is packed with high-quality content: 12 sessions over 4 days, from October 12th to 15th. More than 500 attendees have already registered; join us and register!

Capella Days
Capella Days Program

SmartEA 6.0 supports Archimate 3.1 and keeps rising!

We use these open-source technologies (Eclipse Sirius, Acceleo, EMF Compare, M2doc, and many more) in our off-the-shelf software solution for Enterprise Architecture: Obeo SmartEA.

SmartEA 6.0

This spring we released SmartEA 6.0, which obtained the ArchiMate 3.1 certification and brought, among many other improvements, new modeling capabilities, extended user management, enhanced BPMN modeling, and a streamlined user experience.

Our solution is a challenger on the market and is convincing more and more customers. Stay tuned, I should be able to share a thrilling announcement soon!

World Clean Up Day and The SeaCleaners

In a nutshell: an excellent dynamic on many fronts and exciting challenges ahead! This is all made possible by the energy and cohesion of the Obeo team in this weird, complex, and unusual time. We are committed to the environment and to reducing plastic waste; as such, we took part in World Cleanup Day in partnership with The SeaCleaners. Beyond the impact of this action, which means so much to us, it was also a fun moment of sharing!

#WeAreObeo at the World Cleanup Day

Obeo's Chronicles, Autumn 2020 was originally published by Cédric Brun at CEO @ Obeo on October 06, 2020.


by Cédric Brun (cedric.brun@obeo.fr) at October 06, 2020 12:00 AM

MapIterable.getOrDefault() : New but not so new API

by Nikhil Nanivadekar at September 23, 2020 02:30 AM

MapIterable.getOrDefault() : New but not so new API

Sunset at Port Hardy (June 2019)

Eclipse Collections comes with its own List, Set, and Map implementations. These implementations extend the JDK List, Set, and Map implementations for easy interoperability. In Eclipse Collections 10.3.0, I introduced a new API, MapIterable.getOrDefault(). Map.getOrDefault() was introduced back in Java 8, so what makes it a new API for Eclipse Collections 10.3.0? Technically, it is a new but not so new API! Consider the code snippets below, prior to Eclipse Collections 10.3.0:

MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() does not compile

As you can see in the code, MutableMap has getOrDefault() available; however, ImmutableMap does not. But there is no reason why ImmutableMap should not have this read-only API. I found that MapIterable already had getIfAbsentValue(), which has the same behavior. So why did I still add getOrDefault() to MapIterable?

I added MapIterable.getOrDefault() mainly for easy interoperability. Firstly, most Java developers will be aware of the getOrDefault() method; only Eclipse Collections users would be aware of getIfAbsentValue(). Providing the same API as the JDK reduces the need to learn a new one. Secondly, even though getOrDefault() is available on MutableMap, it was not available on the highest Map interface of Eclipse Collections. Thirdly, I got to learn about a Java compiler check which I had not experienced before. I will elaborate on this check in a bit more detail because I find it interesting.

After I added getOrDefault() to MapIterable, various Map interfaces in Eclipse Collections started giving compiler errors with messages like: org.eclipse.collections.api.map.MutableMapIterable inherits unrelated defaults for getOrDefault(Object, V) from types org.eclipse.collections.api.map.MapIterable and java.util.Map. This I thought was cool: at compile time, the Java compiler ensures that if a method has default implementations in more than one inherited interface, Java will not decide which implementation to pick but rather reports a compiler error. Hence, Java ensures at compile time that there is no ambiguity regarding which implementation will be used at runtime. How awesome is that?! To fix the compile-time errors, I had to add default implementations on the interfaces which gave the errors. I always believe Compiler Errors are better than Runtime Exceptions.
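The “unrelated defaults” conflict can be reproduced with two plain interfaces, independent of Eclipse Collections. The names below are invented stand-ins for MapIterable and java.util.Map:

```java
// Invented stand-ins; not real Eclipse Collections or JDK types.
interface MapIterableLike<K, V> {
    default V getOrDefault(Object key, V defaultValue) {
        return defaultValue;
    }
}

interface JavaMapLike<K, V> {
    default V getOrDefault(Object key, V defaultValue) {
        return defaultValue;
    }
}

// Without its own getOrDefault(), this class fails to compile with:
// "inherits unrelated defaults for getOrDefault(Object, V) from types
//  MapIterableLike and JavaMapLike"
class MutableMapLike<K, V> implements MapIterableLike<K, V>, JavaMapLike<K, V> {
    @Override
    public V getOrDefault(Object key, V defaultValue) {
        // resolve the ambiguity explicitly by delegating to one superinterface
        return MapIterableLike.super.getOrDefault(key, defaultValue);
    }
}
```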

Stuart Marks has put together an awesome blog which covers the specifics of such scenarios. I suggest reading that for in-depth understanding of how and why this behavior is observed.

Post Eclipse Collections 10.3.0 the below code samples will work:

MapIterable.getOrDefault() compiles and works fine
MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() compiles and works fine

Eclipse Collections 10.3.0 was released on 08/08/2020 and is one of our most feature-packed releases. The release constitutes numerous contributions from the Java community.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions.

Show your support: star us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set, and Map. It also has additional data structures like Multimap, Bag, and an entire primitive collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

by Nikhil Nanivadekar at September 23, 2020 02:30 AM

Migrating from Fabric8 Maven Plugin to Eclipse JKube 1.0.0

by Rohan Kumar at September 21, 2020 07:00 AM

The recent release of Eclipse JKube 1.0.0 means that the Fabric8 Maven Plugin is no longer supported. If you are currently using the Fabric8 Maven Plugin, this article provides instructions for migrating to JKube instead. I will also explain the relationship between Eclipse JKube and the Fabric8 Maven Plugin (they’re the same thing) and introduce the highlights of the new Eclipse JKube 1.0.0 release. These migration instructions are for developers working on the Kubernetes and Red Hat OpenShift platforms.

Eclipse JKube is the Fabric8 Maven Plugin

Eclipse JKube and the Fabric8 Maven Plugin are one and the same. Eclipse JKube was first released in 2014 under the name of Fabric8 Maven Plugin. The development team changed the name when we pre-released Eclipse JKube 0.1.0 in December 2019. For more about the name change, see my recent introduction to Eclipse JKube. This article focuses on the migration path to JKube 1.0.0.

What’s new in Eclipse JKube 1.0.0

If you are hesitant about migrating to JKube, the following highlights from the new 1.0.0 release might change your mind:

Fabric8 Maven Plugin generates both Kubernetes and Red Hat OpenShift artifacts, and it automatically detects and deploys resources to the underlying cluster. But developers who use Kubernetes don’t need OpenShift artifacts, and OpenShift developers don’t need Kubernetes manifests. We addressed this issue by splitting Fabric8 Maven Plugin into two plugins for Eclipse JKube: Kubernetes Maven Plugin and OpenShift Maven Plugin.

Eclipse JKube migration made easy

Eclipse JKube has a migrate goal that automatically updates Fabric8 Maven Plugin references in your pom.xml to the Kubernetes Maven Plugin or OpenShift Maven Plugin. In the next sections, I’ll show you how to migrate a Fabric8 Maven Plugin-based project to either platform.

Replace the code for the Fabric8 Maven Plugin with either the code for the Kubernetes Maven Plugin or the OpenShift Maven Plugin.

For demonstration purposes, we can use my old random generator application, which displays a random JSON response at a /random endpoint. To start, clone this repository:

$ git clone https://github.com/rohanKanojia/fmp-demo-project.git
$ cd fmp-demo-project

Then build the project:

$ mvn clean install

Eclipse JKube migration for Kubernetes users

Use the following goal to migrate to Eclipse JKube’s Kubernetes Maven Plugin. Note that we have to specify a complete artifactId and groupId because the plugin is not automatically included in the pom.xml:

$ mvn org.eclipse.jkube:kubernetes-maven-plugin:migrate

Here are the logs for the migrate goal:

fmp-demo-project : $ mvn org.eclipse.jkube:kubernetes-maven-plugin:migrate
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- kubernetes-maven-plugin:1.0.0-rc-1:migrate (default-cli) @ random-generator ---
[INFO] k8s: Found Fabric8 Maven Plugin in pom with version 4.4.1
[INFO] k8s: Renamed src/main/fabric8 to src/main/jkube
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  3.154 s
[INFO] Finished at: 2020-09-08T19:32:01+05:30
[INFO] ------------------------------------------------------------------------
fmp-demo-project : $

You’ll notice that all of the Fabric8 Maven Plugin references have been replaced by references to Eclipse JKube. The Kubernetes Maven Plugin is the same as the Fabric8 Maven Plugin. The only differences are the k8s prefix and that it generates Kubernetes manifests.

Once you’ve installed the Kubernetes Maven Plugin, you can deploy your application as usual:

$ mvn k8s:build k8s:resource k8s:deploy

Eclipse JKube migration for OpenShift users

Use the same migration process for the OpenShift Maven Plugin as you would for the Kubernetes Maven Plugin. Run the migrate goal but with the OpenShift Maven Plugin specified:

$ mvn org.eclipse.jkube:openshift-maven-plugin:migrate

Here are the logs for this migrate goal:

fmp-demo-project : $ mvn org.eclipse.jkube:openshift-maven-plugin:migrate
[INFO] Scanning for projects...
[INFO]
[INFO] ----------------------< meetup:random-generator >-----------------------
[INFO] Building random-generator 0.0.1
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- openshift-maven-plugin:1.0.0-rc-1:migrate (default-cli) @ random-generator ---
[INFO] k8s: Found Fabric8 Maven Plugin in pom with version 4.4.1
[INFO] k8s: Renamed src/main/fabric8 to src/main/jkube
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  4.227 s
[INFO] Finished at: 2020-09-08T19:41:34+05:30
[INFO] ------------------------------------------------------------------------

This goal replaces all of your Fabric8 Maven Plugin references with references to OpenShift Maven Plugin. You can then deploy your application to Red Hat OpenShift just as you normally would:

$ mvn oc:build oc:resource oc:deploy

Conclusion

See the Eclipse JKube migration guide for more about migrating from the Fabric8 Maven Plugin on OpenShift or Kubernetes. Feel free to create a GitHub issue to report any problems that you encounter during the migration. We really value your feedback, so please report bugs, ask for improvements, and tell us about your migration experience.

Whether you are already using Eclipse JKube or just curious about it, don’t be shy about joining our welcoming community:


The post Migrating from Fabric8 Maven Plugin to Eclipse JKube 1.0.0 appeared first on Red Hat Developer.


by Rohan Kumar at September 21, 2020 07:00 AM

N4JS goes LSP

by n4js dev (noreply@blogger.com) at September 08, 2020 11:00 AM

A few weeks ago we started to publish a VSCode extension for N4JS to the VSCode Marketplace. This was one of the last steps on our road to supporting LSP-based development tools. We chose to make this major change for several reasons that affected both users and developers of N4JS.

An N4JS project in VSCode with the N4JS language extension


Our language extension for N4JS is hosted at the Microsoft VSCode Marketplace and will be updated regularly by our Jenkins jobs. Versions will be kept in sync with the language version, compiler version, and version of the N4JS libraries to avoid incompatible setups. At the moment, the LSP server supports all main features of the Language Server Protocol (LSP), such as validation, content assist, the outline view, jump to definition and implementation, open symbol, the rename refactoring, and many more. In addition, it will also generate output files whenever a source change is detected. We therefore heavily improved the incremental LSP builder of the Xtext framework and plan to contribute those changes back to the Xtext repository. For the near future we plan to work on stability, performance, and support for some of the less frequently used LSP features.


When looking back, development of N4JS has been based on the Xtext framework from the start and thus it was straightforward to build an Eclipse-based IDE as our main development tool. Later on, we also implemented a headless compiler used for manual and automated testing from the command line. The development of the compiler already indicated some problems stemming from the tight integration of the Eclipse and the Xtext frameworks together with our language specific implementations. To name an example, we had two separate builder implementations: one for the IDE and the other for the headless compiler. Since the Eclipse IDE is using a specific workspace and project model, we also had two implementations for this abstraction. Another important problem we faced with developing an Eclipse-based IDE was that at some points we had to implement UI tests using the SWTBot framework. For us, SWTBot tests turned out to be very hard to develop, to maintain, and to keep from becoming flaky. Shifting to LSP-based development tools, i.e. the headless compiler and an LSP server, allows us to overcome the aforementioned problems.


Users of N4JS now have the option to either use our extension for VSCode or integrate our LSP server into their favorite IDE themselves, even into the Eclipse IDE. They also benefit from more lightweight tools regarding disk size and start-up performance, as well as a better integration into well-known tools from the JavaScript development ecosystem.




by n4js dev (noreply@blogger.com) at September 08, 2020 11:00 AM

No Java? No Problem!

by Ed Merks (noreply@blogger.com) at August 18, 2020 07:50 AM

For the 2020-09 Eclipse Simultaneous Release, the Eclipse IDE will require Java 11 or higher to run.  If the user doesn't have that installed, Eclipse simply won't start, instead popping up this dialog: 

That of course raises the question: what should I do now? The Eclipse Installer is itself an Eclipse application, so it too will fail to start for the same reason. At least on Windows, the Eclipse Installer is distributed as a native executable, so rather than popping up the above dialog, it will open a semi-helpful page in the browser directing the user to a suitable JRE or JDK to install.

Of course we are concerned that many users will update 2020-06 to 2020-09 only to find that Eclipse fails to start afterwards because they are currently running with Java 8.  But Mickael Istria has planned ahead for this as part of the 2020-06 release, adding a validation check during the update process to determine if the current JVM is suitable for the update, thereby helping prevent this particular problem.

Now that JustJ is available for building Eclipse products with an embedded JRE, we can do even better.  Several of the Eclipse Packaging Project's products will include a JustJ JRE in the packages for 2020-09, i.e., the C/C++, Rust, and JavaScript packages.  Also the Eclipse Installer for 2020-09 will provide product variants that include a JustJ JRE.  So they all will simply run out of the box regardless of which version of Java is installed and of course even when Java is not installed at all.

Even better, the Eclipse Installer will provide JustJ JREs as choices in the dialogs.  A user who does not have Java installed will be offered JustJ JRE 14.02 as the default JRE.

Choices of JustJ JREs will always be available in the Eclipse Installer; it will be the default only if no suitable version of Java is currently installed on the machine.

Eclipse Installers with an embedded JustJ JRE will be available starting with 2020-09 M3 for all supported platforms.  For a sneak preview, you can find them in the nightly builds folder.  The ones with "-jre" in the name contain an embedded JRE (and the ones with "-restricted" in the name will only install 2020-09 versions of the products).

It was a lot of work getting this all in place, both building the JREs and updating Oomph's build to consume them.  Not only that, just this week I had to rework EMF's build so that it functions with the latest platform where some of the JDT bundles have incremented their BREEs to Java 11.  There's always something disruptive that creates a lot of work.  I should point out that no one funds this work, so I often question how this is all actually sustainable in the long term (not to mention questioning my personal sanity).

I did found a small GmbH here in Switzerland.  It's very pretty here!

If you need help, consider that help is available. If no one pays for anything, at some point you will only get what you pay for, i.e., nothing. But that's a topic for another blog...


by Ed Merks (noreply@blogger.com) at August 18, 2020 07:50 AM

Dogfooding the Eclipse Dash License Tool

by waynebeaton at July 22, 2020 03:43 PM

There’s background information about this post in my previous post. I’ve been using the Eclipse Dash License Tool on itself.

$ mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 7 items.
Found 6 items.
Querying ClearlyDefined for license data for 1 items.
Found 1 items.
Vetted license information was found for all content. No further investigation is required.
$ _

Note that in this example, I’ve removed the paths to try and reduce at least some of the clutter. I also tend to add a filter to sort the dependencies and remove duplicates (| sort | uniq), but that’s not required here so I’ve left it out.

The message that “[v]etted license information was found for all content”, means that the tool figures that all of my project’s dependencies have been fully vetted and that I’m good to go. I could, for example, create a release with this content and be fully aligned with the Eclipse Foundation’s Intellectual Property Policy.

The tool is, however, only as good as the information that it’s provided with. Checking only the Maven build completely misses the third party content that was introduced by Jonah’s helpful contribution that helps us obtain dependency information from a yarn.lock file.

$ cd yarn
$ node index.js | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 1 items.
Found 0 items.
Querying ClearlyDefined for license data for 1 items.
Rejected: https://clearlydefined.io/definitions/npm/npmjs/@yarnpkg/lockfile/1.1.0
Found 0 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.

$ _

So… oops. Missed one.

Note that the updates to the IP Policy include a change that allows project teams to leverage third-party content (that they believe to be license compatible) in their project code during development. All content must be vetted by the IP due diligence process before it may be leveraged by any release. So the project in its current state is completely onside, but the license of that identified bit of content needs to be resolved before it can be declared as proper release as defined by the Eclipse Foundation Development Process.

This actually demonstrates why I opted to create the tool as CLI that takes a flat list of dependencies as input: we use all sorts of different technologies, and I wanted to focus the tool on providing license information for arbitrary lists of dependencies.

I’m sure that Denis will be able to rewrite my bash one-liner in seven keystrokes, but here’s how I’ve combined the two so that I can get complete picture with a “single” command:

$ { mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" ; cd yarn && node index.js; } | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 8 items.
Found 6 items.
Querying ClearlyDefined for license data for 2 items.
Rejected: https://clearlydefined.io/definitions/npm/npmjs/@yarnpkg/lockfile/1.1.0
Found 1 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.
$ _

I have some work to do before I can release. I’ll need to engage with the Eclipse Foundation’s IP Team to have that one bit of content vetted.

As a side effect, the tool generates a DEPENDENCIES file. The dependency file lists all of the dependencies provided in the input in ClearlyDefined coordinates along with license information, whether or not the content is approved for use or is restricted (meaning that further investigation is required), and the authority that determined the status.

maven/mavencentral/org.glassfish/jakarta.json/1.1.6, EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0, approved, emo_ip_team
maven/mavencentral/commons-codec/commons-codec/1.11, Apache-2.0, approved, CQ15971
maven/mavencentral/org.apache.httpcomponents/httpcore/4.4.13, Apache-2.0, approved, CQ18704
maven/mavencentral/commons-cli/commons-cli/1.4, Apache-2.0, approved, CQ13132
maven/mavencentral/org.apache.httpcomponents/httpclient/4.5.12, Apache-2.0, approved, CQ18703
maven/mavencentral/commons-logging/commons-logging/1.2, Apache-2.0, approved, CQ10162
maven/mavencentral/org.apache.commons/commons-csv/1.8, Apache-2.0, approved, clearlydefined
npm/npmjs/@yarnpkg/lockfile/1.1.0, unknown, restricted, none

Most of the content was vetted by the Eclipse Foundation’s IP Team (the entries marked “CQ*” have corresponding entries in IPZilla), one was found in ClearlyDefined, and one requires further investigation.

The tool produces good results. But, as I stated earlier, it’s only as good as the input that it’s provided with and it only does what it is designed to do (it doesn’t, for example, distinguish between prerequisite dependencies and dependencies of “works with” dependencies; more on this later). The output of the tool is obviously a little rough and could benefit from the use of a proper configurable logging framework. There’s a handful of other open issues for your consideration.


by waynebeaton at July 22, 2020 03:43 PM

Why ServiceCaller is better (than ServiceTracker)

July 07, 2020 07:00 PM

My previous post spurred a reasonable amount of discussion, and I promised to also talk about the new ServiceCaller, which simplifies a number of these issues. I also thought it was worth looking at the criticisms, because they made valid points.

The first observation is that it’s possible to use both DS and ServiceTracker to track ServiceReferences instead. In this mode, the services aren’t triggered by default; instead, they only get accessed upon resolving the ServiceTracker using the getService() call. This isn’t the default out of the box, because you have to write a ServiceTrackerCustomizer adapter that intercepts the addingService() call to wrap the ServiceReference for future use. In other words, if you change:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);
serviceTracker.open();

to the slightly more verbose:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class,
        new ServiceTrackerCustomizer<Runnable, Wrapped<Runnable>>() {
            public Wrapped<Runnable> addingService(ServiceReference<Runnable> ref) {
                return new Wrapped<>(ref, bundleContext);
            }
            public void modifiedService(ServiceReference<Runnable> ref, Wrapped<Runnable> service) {
            }
            public void removedService(ServiceReference<Runnable> ref, Wrapped<Runnable> service) {
            }
        });
serviceTracker.open();

static class Wrapped<T> {
    private ServiceReference<T> ref;
    private BundleContext context;

    public Wrapped(ServiceReference<T> ref, BundleContext context) {
        this.ref = ref;
        this.context = context;
    }

    public T getService() {
        try {
            return context.getService(ref);
        } finally {
            context.ungetService(ref);
        }
    }
}

Obviously, no practical code uses this approach because it’s too verbose, and if you’re in an environment where DS services aren’t widely used, the benefits of the deferred approach are outweighed by the quantity of additional code that needs to be written in order to implement this pattern.

(The code above is also slightly buggy; we’re getting the service, returning it, then ungetting it afterwards. We should really just be using it during that call instead of returning it in that case.)

Introducing ServiceCaller

This is where ServiceCaller comes in.

The approach of the ServiceCaller is to optimise out the over-eager dereferencing of the ServiceTracker approach, and apply a functional approach to calling the service when required. It also has a mechanism to do single-shot lookups and calling of services; helpful, for example, when logging an obscure error condition or other rarely used code path.

This allows us to elegantly call functional interfaces in a single line of code:

Class<?> callerClass = getClass();
ServiceCaller.callOnce(callerClass, Runnable.class, Runnable::run);

This call looks for Runnable service types visible from the caller class, and then invokes the given lambda on the resolved service. We can use a method reference (as in the above case), or you can supply a Consumer<T>, which will be passed the service that is resolved from the lookup.

Importantly, this call doesn’t acquire the service until the callOnce call is made. So, if you have an expensive logging factory, you don’t have to initialise it until the first time it’s needed – and even better, if the error condition never occurs, you never need to look it up. This is in direct contrast to the ServiceTracker approach (which actually needs more characters to type) that accesses the services eagerly, and is an order of magnitude better than having to write a ServiceTrackerCustomiser for the purposes of working around a broken API.

However, note that such one-shot calls are not the most efficient way of doing this, especially if it is to be called frequently. So the ServiceCaller has another mode of operation; you can create a ServiceCaller instance, and hang onto it for further use. Like its single-shot counterpart, this will defer the resolution of the service until needed. Furthermore, once resolved, it will cache that instance so you can repeatedly re-use it, in the same way that you could do with the service returned from the ServiceTracker.

private ServiceCaller<Runnable> service;

public void start(BundleContext context) {
    this.service = new ServiceCaller<>(getClass(), Runnable.class);
}

public void stop(BundleContext context) {
    this.service.unget();
}

public void doSomething() {
    service.call(Runnable::run);
}

This doesn’t involve significantly more effort than using the ServiceTracker that’s widely in use in Eclipse Activators at the moment, yet it defers the lookup of the service until it’s actually needed. It’s obviously better than writing many lines of ServiceTrackerCustomizer, performs better as a result, and is in most cases a drop-in replacement. However, unlike ServiceTracker (which returns a service that you can then do something with afterwards), this call provides a functional consumer interface that allows you to pass in the action to take.

Wrapping up

We’ve looked at why ServiceTracker has problems with eager instantiation of services, and the complexity of code required to do it the right way. A scan of the Eclipse codebase suggests that outside of Equinox there are very few uses of ServiceTrackerCustomizer, and there are several hundred calls to ServiceTracker(xxx,yyy,null) – so there are a lot of improvements that can be made fairly easily.

This pattern can also be used to push down the acquisition of the service from a generic Plugin/Activator level call to where it needs to be used. Instead of standing this up in the BundleActivator, the ServiceCaller can be used anywhere in the bundle’s code. This is where the real benefit comes in; by packaging it up into a simple, functional consumer, we can use it to incrementally rid ourselves of the various BundleActivators that take up the majority of Eclipse’s start-up.

A final note on the ServiceCaller: it’s possible that when you run the callOnce method (or the call method, if you’re holding on to an instance), a service instance won’t be available. If that’s the case, the call method returns false; if a service is found and processed, it returns true. For some operations, a no-op is fine behaviour when the service isn’t present – for example, if there’s no LogService then you’re probably going to drop the log event anyway – but the return value allows you to take whatever corrective action you need.

It does mean that if you want to capture return state from the method call then you’ll need an alternative approach. The easiest way is to declare a final Object[] result = new Object[1]; before the call; the lambda can then assign the return value into the array. That’s because local state captured by a lambda must be effectively final, but a final reference to a mutable single-element array lets us poke a single value back. You could of course use a different type for the array, depending on your requirements.
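The array trick is independent of ServiceCaller itself, so it can be shown as a standalone sketch; the Consumer here simply stands in for the lambda you would pass to call:

```java
import java.util.function.Consumer;

public class CaptureDemo {
    public static void main(String[] args) {
        // A lambda can only capture effectively-final locals, but a final
        // reference to a mutable one-element array lets the lambda write
        // a result back out.
        final Object[] result = new Object[1];
        Consumer<Integer> action = value -> result[0] = value * 2;
        action.accept(21);
        System.out.println(result[0]); // prints 42
    }
}
```

An AtomicReference would work equally well and reads more explicitly, at the cost of an extra allocation per call site.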

So, we have seen that ServiceCaller is better than ServiceTracker, but can we do even better than that? We certainly can, and that’s the purpose of the next post.


July 07, 2020 07:00 PM

Why ServiceTracker is Bad (for DS)

July 02, 2020 07:00 PM

In a presentation I gave at EclipseCon Europe in 2016, I noted that there were problems when using ServiceTracker, and on slide 37 of my presentation noted that:

  • ServiceTracker.open() is a blocking call
  • ServiceTracker.open() results in DS activating services

Unfortunately, not everyone agrees because it seems insane that ServiceTracker should do this.

Unfortunately, ServiceTracker is insane.

The advantage of Declarative Services (aka SCR, although no-one calls it that) is that you can register services declaratively, but more importantly, the DS runtime will present the existence of the service but defer instantiation of the component until it’s first requested.

The great thing about this is that you can have a service which does many class loads or timely actions and defer its use until the service is actually needed. If your service isn’t required, then you don’t pay the cost for instantiating that service. I don’t think there’s any debate that this is a Good Thing and everyone, so far, is happy.

Problem

The problem, specifically when using ServiceTracker, is that you have to go through a multi-step process to use it:

  1. You create a ServiceTracker for your particular service class
  2. You call open() on it to start looking for services
  3. Time passes
  4. You acquire the service from the ServiceTracker to do something with it

There is a generally held mistaken belief that the DS component is not instantiated until you hit step 4 in the above. After all, if you’re calling the service from another component – or even looking up the ServiceReference yourself – that’s what would happen.

What actually happens is that the DS component is instantiated in step 2 above. That’s because the open() call – which is nicely thread-safe, by the way, in a way that getService() isn’t – starts looking for services and then caches the initially tracked services, which causes DS to instantiate the component for you. Since most DS components have a default, no-arg constructor, this generally escapes most people’s attention.

If your component’s constructor – or more importantly, the fields therein, cause many classes to be loaded or perform substantial work or calculation, the fact that you’re hitting a ServiceTracker.open() synchronized call can take some non-trivial amount of time. And since this is typically in an Activator.start() method, it means that your nicely delay-until-its-needed component is now on the critical path of this bundle’s start-up, despite not actually needing the service right now.

This is one of the main problems in Eclipse’s start-up; many, many thousands of classes are loaded too eagerly. I’ve been working over the years to try and reduce the problem but it’s an uphill struggle and bad patterns (particularly the use of Activator) are endemic in a non-trivial subset of the Eclipse ecosystem. Of course, there are many fine and historical reasons why this is the case, not the least of which is that we didn’t start shipping DS in the Eclipse runtime until fairly recently.

Repo repro

Of course, when you point this out, not everyone is aware of this subtle behaviour. And while opinions may differ, code does not. I have put together a sample project which has two bundles:

  • Client, which has an Activator (yeah I know, I’m using it to make a point) that uses a ServiceTracker to look for Runnable instances
  • Runner, which has a DS component that provides a Runnable interface

When launched together, as soon as the ServiceTracker.open() method is called, you can see the console print the "Component has been instantiated" message. This is despite the Client bundle never actually using the service that the ServiceTracker causes to be obtained.

If you run it with the system property -DdisableOpen=true, the ServiceTracker.open() statement is not called, and the component is not instantiated.

This is a non-trivial reason as to why Eclipse startup can be slow. There are many, many uses of ServiceTracker to reach out to other parts of the system, and regardless of whether these are lazy DS components or have been actively instantiated, the use of ServiceTracker.open() causes them to all be eagerly activated, even before they’re needed. We can migrate Eclipse’s services to DS (and in fact, I’m working on doing just that) but until we eliminate the ServiceTracker from various Activators, we won’t see the benefit.

The code in the github repository essentially boils down to:

public void start(BundleContext bundleContext) throws Exception {
    serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);
    if (!Boolean.getBoolean("disableOpen")) {
        serviceTracker.open(); // This will cause a DS component to be instantiated even though we don't use it
    }
}

Unfortunately, there’s no way to use ServiceTracker to listen to lazily activated services without activating them, and as an OSGi standard, the behaviour is baked into it.

Fortunately, there’s a lighter-weight tracker you can use called ServiceCaller – but that’s a topic for another blog post.

Summary

Using ServiceTracker.open() will cause lazily instantiated DS components to be activated eagerly, before the service is used. Instead of using ServiceTracker, try moving your service out to a DS component, and then DS will do the right thing.


July 02, 2020 07:00 PM

How to install RDi in the latest version of Eclipse

by Wim at June 30, 2020 03:57 PM

Monday, June 29, 2020
In this blog, I am going to show you how to install IBM RDi into the latest and the greatest version of Eclipse. If you prefer to watch a video then scroll down to the end. **EDIT** DOES NOT WORK WITH ECLIPSE 2020/09 AND HIGHER.

Read more


by Wim at June 30, 2020 03:57 PM

Quarkus – Supersonic Subatomic IoT

by Jens Reimann at June 30, 2020 03:22 PM

Quarkus is advertised as a “Kubernetes Native Java stack, …”, so we put it to the test and checked what benefits we can get by replacing an existing service from the IoT components of EnMasse, the cloud-native, self-service messaging system.

The context

For quite a while, I wanted to try out Quarkus. I wanted to see what benefits it brings us in the context of EnMasse. The IoT functionality of EnMasse is provided by Eclipse Hono™, which is a micro-service based IoT connectivity platform. Hono is written in Java, makes heavy use of Vert.x, and the application startup and configuration is being orchestrated by Spring Boot.

EnMasse provides the scalable messaging back-end, based on AMQP 1.0. It also takes care of the Eclipse Hono deployment alongside EnMasse, wiring up the different services based on an infrastructure custom resource. In a nutshell, you create a snippet of YAML, and EnMasse deploys a messaging system for you, with first-class support for IoT.

Architectural overview – showing the Tenant Service

This system requires a service called the “tenant service”. That service is responsible for looking up an IoT tenant, whenever the system needs to validate that a tenant exists or when its configuration is required. Like all the other services in Hono, this service is implemented using the default stack, based on Java, Vert.x, and Spring Boot. Most of the implementation is based on Vert.x alone, using its reactive and asynchronous programming model. Spring Boot is only used for wiring up the application, using dependency injection and configuration management. So this isn’t a typical Spring Boot application; it uses neither Spring Web nor any of the Spring Messaging components. And the reason for choosing Vert.x over Spring in the past was performance. Vert.x provides excellent performance, which we tested a while back in our IoT scale test with Hono.

The goal

The goal was simple: make it use fewer resources while keeping the same functionality. We didn’t want to re-implement the whole service from scratch. And while the tenant service is specific to EnMasse, it still uses quite a lot of the base functionality coming from Hono, and we wanted to re-use all of that, as we did with Spring Boot. So this wasn’t one of those nice “greenfield” projects where you can start from scratch with a nice and clean “Hello World”. This code is embedded in two bigger projects, passes system tests, and has a history of its own.

So: change as little as possible, and get as much out of it as we can. What else could it be?! And just to show where we started, here is a screenshot of the metrics of the tenant service instance on my test cluster:

Screenshot of original resource consumption.
Metrics for the original Spring Boot application

Around 200MiB of RAM, a little bit of CPU, and not much to do. As mentioned before, the tenant service only gets queries to verify the existence of a tenant, and the system will cache this information for a bit.

Step #1 – Migrate to Quarkus

To use Quarkus, we started to tweak our existing project to adopt the different APIs that Quarkus uses for dependency injection and configuration. And to be fair, that mostly meant saying good-bye to Spring Boot-specific APIs and going for something more open. Dependency injection in Quarkus comes in the form of CDI, and Quarkus’ configuration is based on Eclipse MicroProfile Config. In a way, we didn’t migrate to Quarkus, but away from Spring Boot-specific APIs.
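As a rough sketch of what that looks like in code: Spring’s `@Component`/`@Autowired`/`@Value` map fairly directly onto CDI’s `@ApplicationScoped`/`@Inject` and MicroProfile Config’s `@ConfigProperty`. The bean and property names below are hypothetical, and the fragment assumes the Quarkus CDI and MicroProfile Config dependencies are on the classpath; it is not a standalone compilable example.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

// Hypothetical bean, for illustration: @Component/@Autowired/@Value in
// Spring become @ApplicationScoped/@Inject/@ConfigProperty here.
@ApplicationScoped
public class TenantServiceBean {

    @Inject
    TenantBackend backend; // hypothetical collaborator, injected via CDI

    @ConfigProperty(name = "tenant.cache.ttl", defaultValue = "60")
    int cacheTtlSeconds; // bound from MicroProfile Config sources
}
```

The nice part is that none of this ties you to Quarkus itself: any MicroProfile-compatible runtime understands the same annotations.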

First steps

We started by adding the Quarkus Maven plugin and some basic dependencies to our Maven build, and off we went.
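The relevant pom.xml additions look roughly like this. Treat it as a sketch: the dependency selection and plugin configuration are the standard ones from the Quarkus getting-started material, and version numbers (managed here via the Quarkus BOM, omitted for brevity) must match your project.

```xml
<!-- Sketch of the pom.xml additions; versions come from the Quarkus BOM. -->
<dependencies>
  <dependency>
    <!-- ArC is Quarkus' CDI implementation -->
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-arc</artifactId>
  </dependency>
</dependencies>
<build>
  <plugins>
    <plugin>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-maven-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>build</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With that in place, a plain `mvn package` produces the runnable Quarkus application.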

And while replacing dependency injection was a rather smooth process, the configuration part was a bit more tricky. Both Hono and MicroProfile Config have a rather opinionated view on configuration, which made it hard to extend the Hono configuration in a way that MicroProfile was happy with. So for the first iteration, we ended up wrapping the Hono configuration classes to make them play nice with MicroProfile. This is something we intend to improve in Hono in the future.

Packaging the JAR into a container was no different than with the existing version. We only had to adapt the EnMasse operator to provide application arguments in the form Quarkus expected them.

First results

From a user perspective, nothing has changed. The tenant service still works the way it is expected to work and provides all the APIs as it did before. Just running with the Quarkus runtime, and the same JVM as before:

Screenshot of resource consumption with Quarkus in JVM mode.
Metrics after the conversion to Quarkus, in JVM mode

We can directly see a drop of 50MiB, from 200MiB to 150MiB of RAM; that isn’t bad. CPU isn’t really different, though. There is also a slight improvement in startup time, from ~2.5 seconds down to ~2 seconds, but that isn’t a real game-changer, I would say. Then again, ~2.5 seconds of startup time for a Spring Boot application is actually not too bad; other services take much longer.

Step #2 – The native image

Everyone wants to do Java “native compilation”. I guess the expectation is that native compilation makes everything go much faster. There are different tests by different people, comparing native compilation and JVM mode, and the outcomes vary a lot. I don’t think that “native images” are a silver bullet to performance issues, but still, we have been curious to give it a try and see what happens.

Native image with Quarkus

Enabling native image mode in Quarkus is trivial. You need to add a Maven profile, set a few properties, and you have native image generation enabled. By setting a single property in the Maven POM file, you can also instruct the Quarkus plugin to perform the native compilation step in a container. With that, you don’t need to worry about a GraalVM installation on your local machine.
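A sketch of such a profile is shown below. The property names follow the Quarkus documentation of the time, but they have shifted between Quarkus versions, so treat them as placeholders and check the docs for your version.

```xml
<!-- Sketch of a native-image profile; property names vary between
     Quarkus versions, so verify them against the documentation. -->
<profile>
  <id>native</id>
  <properties>
    <quarkus.package.type>native</quarkus.package.type>
    <!-- run GraalVM inside a container instead of a local installation -->
    <quarkus.native.container-build>true</quarkus.native.container-build>
  </properties>
</profile>
```

Activating it is then just `mvn package -Pnative`.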

Native image generation can be tricky, we knew that. However, we didn’t expect it to be complex enough to deserve being “Step #2”. In a nutshell, creating a native image compiles your code to CPU instructions rather than JVM bytecode. In order to do that, the compiler traces the call graph, and it fails to do so when it comes to reflection in Java. GraalVM supports reflection, but you need to provide the information about the types, classes, and methods that participate in the reflection system from the outside. Luckily, Quarkus provides tooling to generate this information during the build: Quarkus knows about constructs like de-serialization in Jackson and can generate the required information for GraalVM to compile this correctly.
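A minimal, stdlib-only illustration of the underlying problem: a reflective lookup like the one below works fine on the JVM, but under native image it fails at run time unless the class is registered for reflection, either via GraalVM's reflection configuration or, in Quarkus, via its `@RegisterForReflection` annotation.

```java
// Runs fine on a JVM. Compiled with GraalVM native-image, the same code
// throws ClassNotFoundException at run time unless java.util.ArrayList
// is registered for reflection: the closed-world analysis cannot see
// the class name hidden behind Class.forName.
public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        String className = "java.util.ArrayList"; // only known at run time
        Object list = Class.forName(className)
                .getDeclaredConstructor()
                .newInstance();
        System.out.println(list.getClass().getName());
    }
}
```

This is exactly the pattern that made our JVM-mode tests pass while the native image misbehaved.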

However, the magic only works in areas that Quarkus is aware of. So we did run into some weird issues, strange behavior that was hard to track down. Things that worked in JVM mode all of a sudden were broken in native image mode. Not all the hints are in the documentation. And we also didn’t read (or understand) all of the hints that are there. It takes a bit of time to learn, and with a lot of help from some colleagues (many thanks to Georgios, Martin, and of course Dejan for all the support), we got it running.

What is the benefit?

After all the struggle, what did it give us?

Screenshot of resource consumption with Quarkus in native image mode.
Metrics when running as native image Quarkus application

So, we are down another 50MiB of RAM. Starting from ~200MiB, down to ~100MiB. That is only half the RAM! Also, this time, we see a reduction in CPU load. While in JVM mode (both Quarkus and Spring Boot), the CPU load was around 2 millicores, now the CPU is always below that, even during application startup. Startup time is down from ~2.5 seconds with Spring Boot, to ~2 seconds with Quarkus in JVM mode, to ~0.4 seconds for Quarkus in native image mode. Definitely an improvement, but still, neither of those times is really bad.

Pros and cons of Quarkus

Switching to Quarkus was no problem at all. We found a few areas in the Hono configuration classes to improve, but in the end, we can keep the original Spring Boot setup and have Quarkus at the same time. Possibly other MicroProfile-compatible frameworks would work as well, though we didn’t test that. Everything worked as before, just using less memory. And except for the configuration classes, we could pretty much keep the whole application as it was.

Native image generation was more complex than expected. However, we also saw some real benefits. And while we didn’t do any performance tests on that, here is a thought: if the service has the same performance as before, the fact that it requires only half the memory and half the CPU cycles allows us to run twice the number of instances now, doubling throughput as we scale horizontally. I am really looking forward to another scale test, since we did all kinds of other optimizations as well.

You should also consider that building a native image takes quite an amount of time. For this rather simple service, it takes around 3 minutes on a better-than-average machine, just to build the native image. I did notice some decent improvement when trying out GraalVM 20.0 over 19.3, so I would expect more improvements in the toolchain over time. Things like hot code replacement while debugging are not possible with the native image profile, though. It is a different workflow, and that may take a bit of adaptation. However, you don’t need to commit to either way: you can still have both at the same time. You can work with JVM mode and the Quarkus development mode, and then enable the native image profile whenever you are ready.

Taking a look at the size of the container images, I noticed that the native binary (~85 MiB) isn’t smaller than the uber-JAR file (~45 MiB). Then again, our “java base” image alone is around ~435 MiB, and it only adds the JVM on top of the Fedora minimal image. As you don’t need the JVM when you have the native image, you can go directly with the Fedora minimal image, which is around ~165 MiB, and end up with a much smaller overall image.
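For comparison, here is a sketch of what the container build for the native binary can look like. The base image and paths are assumptions modeled on the Dockerfiles Quarkus generates (the `*-runner` name is the Quarkus default output), not our exact setup.

```dockerfile
# Native binary on a minimal base image -- no JVM layer required.
# "target/*-runner" is the Quarkus default output name; adjust as needed.
FROM registry.fedoraproject.org/fedora-minimal
WORKDIR /work
COPY target/*-runner /work/application
RUN chmod 775 /work/application
EXPOSE 8080
CMD ["./application"]
```

The JVM-mode image would instead start from the ~435 MiB "java base" image and copy in the JAR, which is where the overall size difference comes from.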

Conclusion

Switching our existing Java project to Quarkus wasn’t a big deal. It required some changes, yes. But those changes also mean using more open APIs, governed by the Eclipse Foundation’s development process, instead of Spring Boot-specific APIs. And while you can still use Spring Boot, changing the configuration to Eclipse MicroProfile opens up other possibilities as well, not only Quarkus.

Just taking a quick look at the numbers, comparing the figures from Spring Boot to Quarkus with native image compilation: RAM consumption was down to 50% of the original, CPU usage was also down to at most 50% of the original, and the container image shrank to ~50% of the original size. And as mentioned in the beginning, we have been using Vert.x for all the core processing; users that make use of the other Spring components should see even more considerable improvements.

Going forward, I hope we can bring the changes we made to the next versions of EnMasse and Eclipse Hono. There is a real benefit here, and it provides you with some awesome additional choices. And in case you don’t like to choose, the EnMasse operator has some reasonable defaults for you 😉


Also see

This work is based on the work of others. Many thanks to:

The post Quarkus – Supersonic Subatomic IoT appeared first on ctron's blog.


by Jens Reimann at June 30, 2020 03:22 PM

Updates to the Eclipse IP Due Diligence Process

by waynebeaton at June 25, 2020 07:23 PM

In October 2019, The Eclipse Foundation’s Board of Directors approved an update to the IP Policy that introduces several significant changes in our IP due diligence process. I’ve just pushed out an update to the Intellectual Property section in the Eclipse Foundation Project Handbook.

I’ll apologize in advance that the updates are still a little rough and require some refinements. Like the rest of the handbook, we continually revise and rework the content based on your feedback.

Here’s a quick summary of the most significant changes.

License certification only for third-party content. This change removes the requirement to perform deep copyright and provenance review, and scanning for anomalies, for third-party content unless it is being modified and/or there are special considerations regarding the content. Instead, the focus for third-party content is on license compatibility only, which had previously been referred to as Type A due diligence.

Leverage other sources of license information for third-party content. With this change to license certification only for third-party content, we are able to leverage existing sources of license information. That is, the requirement that the Eclipse IP Team personally review every bit of third-party content has been removed, and we can now leverage other trusted sources.

ClearlyDefined is a trusted source of license information. We currently have two trusted sources of license information: The Eclipse Foundation’s IPZilla and ClearlyDefined. The IPZilla database has been painstakingly built over most of the lifespan of the Eclipse Foundation; it contains a vast wealth of deeply vetted information about many versions of many third-party libraries. ClearlyDefined is an OSI project that combines automated harvesting of software repositories and curation by trusted members of the community to produce a massive database of license (and other) information about content.

Piggyback CQs are no longer required. CQs had previously been used for tracking both the vetting process and the use of third-party content. With the changes, we are no longer required track the use of third-party content using CQs, so piggyback CQs are no longer necessary.

Parallel IP is used in all cases. Previously, our so-called Parallel IP process, the means by which project teams could leverage content during development while the IP Team completed their due diligence review, was available only to projects in the incubation phase and only for content meeting specific conditions. This is no longer the case: full vetting is now always applied in parallel.

CQs are not required for third-party content in all cases. In the case of third-party content due diligence, CQs are now only used to track the vetting process.

CQs are no longer required before third-party content is introduced. Previously, the IP Policy required that all third-party content must be vetted by the Eclipse IP Team before it can be used by an Eclipse Project. The IP Policy updates turn this around. Eclipse project teams may now introduce new third-party content during a development cycle without first checking with the IP Team. That is, a project team may commit build scripts, code references, etc. to third-party content to their source code repository without first creating a CQ to request IP Team review and approval of the third-party content. At least during the development period between releases, the onus is on the project team to ensure, with reasonable confidence, that any third-party content they introduce is license compatible with the project’s license. Before any content may be included in any formal release, the project team must engage in the due diligence process to validate that the third-party content licenses are compatible with the project license.

History may be retained when an existing project moves to the Eclipse Foundation. We had previously required that the commit history for a project moving to the Eclipse Foundation be squashed and that the initial contribution be the very first commit in the repository. This is no longer the case; existing projects are now encouraged (but not required) to retain their commit history. The initial contribution must still be provided to the IP Team via CQ as a snapshot of the HEAD state of the existing repository (if any).

The due diligence process for project content is unchanged.

If you notice anything that looks particularly wrong or troubling, please either open a bug report, or send a note to EMO.



Eclipse JustJ

by Ed Merks (noreply@blogger.com) at June 25, 2020 08:18 AM

I've recently completed the initial support for provisioning the new Eclipse JustJ project, complete with a logo for it.


I've learned several new technologies and honed existing technology skills to make this happen. For example, I've previously used Inkscape to create nicer images for Oomph; a *.png with alpha is much better than a *.gif with a transparent pixel, particularly with the vogue, dark-theme fashion trend, which for old people like me feels more like the old days of CRT monitors than something modern, but hey, to each their own. In any case, a *.svg is cool, definitely looks great at every resolution, and can easily be rendered to a *.png.

By the way, did you know that artwork derivative of  Eclipse artwork requires special approval? Previously the Eclipse Board of Directors had to review and approve such logos, but now our beloved, supreme leader, Mike Milinkovich, is empowered to do that personally.

Getting to the point where we can redistribute JREs at Eclipse has been a long and winding road.  This of course required Board approval, and your elected Committer Representatives helped push that to fruition last year.  Speaking of which, there is now an exciting late-breaking development: the move of AdoptOpenJDK to Eclipse Adoptium.  This will be an important source of JREs for JustJ!

One of the primary goals of JustJ is to provide JREs via p2 update sites such that a product build can easily incorporate a JRE into the product. With that in place, the product runs out-of-the-box regardless of the JRE installed on the end-user's computer, which is particularly useful for products that are not Java-centric where the end-user doesn't care about the fact that Eclipse is implemented using Java.  This will also enable the Eclipse Installer to run out-of-the-box and will enable the installer to create an installation that, at the user's discretion, uses a JRE provided by Eclipse. In all cases, this includes the ability to update the installation's embedded JRE as new ones are released.

The first stage is to build a JRE from a JDK using jlink.  This must run natively on the JDK's actual supported operating system and hardware architecture.  Of course we want to automate this step, and all the steps involved in producing a p2 repository populated with JREs.  This is where I had to learn about Jenkins pipeline scripts.  I'm particularly grateful to Mikaël Barbero for helping me get started with a simple example.  Now I am a pipeline junkie, and of course I had to learn Groovy as well.

In the initial stage, we generate the JREs themselves, and that involves using shell scripts effectively.  I'm not a big fan of shell scripts, but they're a necessary evil.  I authored a single script that produces JREs on all the supported operating systems; one that I can run locally on Windows and on my two virtual boxes as well. The pipeline itself needs to run certain stages on specific agents such that their steps are performed on the appropriate operating system and hardware.  I'm grateful to Robert Hilbrich of DLR for supporting JustJ's builds with their organization's resource packs!  He's also been kind enough to be one of our first test guinea pigs, building a product with a JustJ JRE.  The initial stage produces a set of JREs.


In the next stage, JREs need to be wrapped into plugins and features to produce a p2 repository via a Maven/Tycho build.  This is a huge amount of boilerplate scaffolding that is error-prone to author and challenging to maintain, especially when providing multiple JRE flavors.  So of course we want to automate the generation of this scaffolding as well.  Naturally, if we're going to generate something, we need a model to capture the boiled-down essence of what needs to be generated.  So I whipped together an EMF model and used JET templates to sketch out the scaffolding. With the super cool JET Editor, these are really easy to author and maintain.  This stage is described in the documentation and produces a p2 update site.  The sites are automatically maintained, and the index pages are automatically generated.

To author nice documentation I had to learn PHP much better.  It's really quite cool and very powerful, particularly for producing pages with dynamic content.  For example, I used it to implement more flexible browsing support of download.eclipse.org so that one can really see all the files present, even when there is an index.html or index.php in the folder.  In any case, there is now lots of documentation for JustJ to describe everything in detail, and it was authored with the help of PHP scaffolding.

Last but not least, there is an Oomph setup to automate the provisioning of a full development environment along with a tutorial to describe in detail everything in that workspace.  There's no excuse not to contribute.  While authoring this tutorial, I found that creating nice, appropriately-clipped screen captures is super annoying and very time consuming, so I dropped a little goodie into Oomph to make that easier.   You might want to try it. Just add "-Dorg.eclipse.oomph.ui.screenshot=<some-folder-location>" to your eclipse.ini to enable it.  Then, if you hit Ctrl twice quickly, screen captures will be produced immediately based on where your application currently has focus.  If you hit Shift twice quickly, screen captures will be produced after a short delay.  This allows you to bring up a menu from the menu bar, from a toolbar button, or a context menu, and capture that menu.  In all cases, the captures include the "simulated" mouse cursor and starts with the "focus", expanding outward to the full enclosing window.

The bottom line, JustJ generates everything given just a set of URLs to JDKs as input, and it maintains everything automatically.  It even provides an example of how to build a product with an embedded JRE to get you started quickly.  And thanks to some test guinea pigs, we know it really works as advertised.


On the personal front, during this time period, I finished my move to Switzerland.  Getting up early here is a feast for the eyes! The movers were scurrying around my apartment the same days as the 2020-06 release, which was also the same day as one of the Eclipse Board meetings.  That was a little too much to juggle at once!

At this point, I can make anything work and I can make anything that already works work even better. Need help with something?  I'm easy to find...


Clean Sheet Service Update (0.8)

by Frank Appel at May 23, 2020 09:25 AM

Written by Frank Appel

Thanks to a community contribution we’re able to announce another Clean Sheet Service Update (0.8).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge). Eclipse IDE Look and Feel: Clean Sheet Screenshot For more information please refer to the features landing page at http://fappel.github.io/xiliary/clean-sheet.html, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.

 

Clean Sheet Service Update (0.8)

This service update fixes a rendering issue of ruler numbers. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to the issue #87 for more details.

Clean Sheet Installation

Drag the 'Install' link for Clean Sheet from the Eclipse Marketplace to your running Eclipse workspace (requires the Eclipse Marketplace Client)

or

Select Help > Install New Software.../Check for Updates.
P2 repository software site: http://fappel.github.io/xiliary/
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet

 

On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.8) appeared first on Code Affine.



Clean Sheet Service Update (0.7)

by Frank Appel at April 24, 2020 08:49 AM

Written by Frank Appel

It’s been a while, but today we’re happy to announce a Clean Sheet Service Update (0.7).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge). Eclipse IDE Look and Feel: Clean Sheet Screenshot For more information please refer to the features landing page at http://fappel.github.io/xiliary/clean-sheet.html, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.

 

Clean Sheet Service Update (0.7)

This service update provides the long overdue JRE 11 compatibility on windows platforms. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to the issues #88 and #90 for more details.

Clean Sheet Installation

Drag the 'Install' link for Clean Sheet from the Eclipse Marketplace to your running Eclipse workspace (requires the Eclipse Marketplace Client)

or

Select Help > Install New Software.../Check for Updates.
P2 repository software site: http://fappel.github.io/xiliary/
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet

 

On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.7) appeared first on Code Affine.



EclipseCon 2020 CFP is Open

April 16, 2020 08:30 PM

If you are interested in speaking, our call for proposals is now open. Please visit the CFP page for information on how to submit your talk.


Add Your Voice to the 2020 Jakarta EE Developer Survey

April 07, 2020 01:00 PM

Our third annual Jakarta EE Developer Survey is now open and I encourage everyone to take a few minutes and complete the survey before the April 30 deadline.


Eclipse Oomph: Suppress Welcome Page

by kthoms at March 19, 2020 04:37 PM

I am frequently spawning Eclipse workspaces with Oomph setups, and the first thing I do when a new workspace is provisioned is close Eclipse’s welcome page. I wanted to suppress that page for a current project setup, so I started searching for where Eclipse stores the preference that disables the intro page. The location of that preference is within the workspace directory at

.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

The content of the preference file is

eclipse.preferences.version=1
showIntro=false

So, to make Oomph create the preference file before the workspace is started for the first time, use a Resource Creation task and set the Target URL to

${workspace.location|uri}/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

Then put the above-mentioned preference content as the Content value.



MPS’ Quest of the Holy GraalVM of Interpreters

by Niko Stotz at March 11, 2020 11:19 PM

A vision how to combine MPS and GraalVM

Way too long ago, I prototyped a way to use GraalVM and Truffle inside JetBrains MPS. I hope to pick up this work soon. In this article, I describe the grand picture of what might be possible with this combination.

Part I: Get it Working

Step 0: Teach Annotation Processors to MPS

Truffle uses Java Annotation Processors heavily. Unfortunately, MPS doesn’t support them during its internal Java compilation. The feature request doesn’t show any activity.

So, we have to do it ourselves. Somewhat more recently, I started on an alternative Java Facet that includes Annotation Processors, and I just pushed my work-in-progress state from 2018. As far as I remember, there were no fundamental problems with the approach.

Optional Step 1: Teach Truffle Structured Sources

For Truffle, all executed programs stem from a Source. However, a Source can only provide bytes or characters. In our case, we want to provide the input model. The prototype just put the node id of the input model as a String into the Source; later steps resolved the id against the MPS API. This approach works and is acceptable; directly passing the input node as an object would be much nicer.
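The prototype's workaround looks roughly like the fragment below. The language id and variable names are hypothetical, the fragment assumes the Truffle API on the classpath, and the builder methods are written from memory, so treat the exact API shape as an assumption.

```java
// Sketch: smuggle the MPS node id through a character-based Truffle
// Source. The interpreter later resolves the id back to the actual
// node via the MPS API. "mpsLang" is a hypothetical language id.
Source source = Source.newBuilder("mpsLang", nodeId.toString(), "input-model")
        .build();
```

Passing the node object directly would remove the id round-trip entirely, which is what the "structured sources" idea in this step is about.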

Step 2: Implement Truffle Annotations as MPS Language

We have to provide all additional hints as Annotations to Truffle. They are complex enough, so we want to leverage MPS’ language features to directly represent all Truffle concepts.

This might be a simple one-to-one representation of Java Annotations as MPS Concepts, but I’d guess we can add some more semantics and checks. Such feedback within MPS should simplify the next steps: Annotation Processors (and thus, Truffle) have only limited options to report issues back to us.

We use this MPS language to implement the interpreter for our DSL. This results in a TruffleLanguage for our DSL.

Step 3: Start Truffle within MPS

At the time when I wrote the proof-of-concept, a TruffleLanguage had to be loaded at JVM startup. To my understanding, Truffle overcame this limitation. I haven’t looked into the current possibilities in detail yet.

I can imagine two ways to provide our DSL interpreter to the Truffle runtime:

  1. Always register MpsTruffleLanguage1, MpsTruffleLanguage2, etc. as placeholders. This would also work at JVM startup. If required, we can register additional placeholders with one JVM restart.
    All non-colliding DSL interpreters would be MpsTruffleLanguage1 from Truffle’s point of view. This works, as we know the MPS language for each input model, and can make sure Truffle uses the right evaluation for the node at hand. We might suffer a performance loss, as Truffle would have to manage more evaluations.

    What are non-colliding interpreters? Assume we have a state machine DSL, an expression DSL, and a test DSL. The expression DSL is used within the state machines; we provide an interpreter for both of them.
    We provide two interpreters for the test DSL: One executes the test and checks the assertions, the other one only marks model nodes that are covered by the test.
    The state machine interpreter, the expression interpreter, and the first test interpreter are non-colliding, as they never want to execute on the same model node. All of them go to MpsTruffleLanguage1.
    The second test interpreter does collide, as it wants to do something with a node also covered by the other interpreters. We put it to MpsTruffleLanguage2.

  2. We register every DSL interpreter as a separate TruffleLanguage. Nice and clean one-to-one relation. In this scenario, we would probably have to get Truffle Language Interop right. I have not yet investigated this topic.

Step 4: Translate Input Model to Truffle Nodes

A lot of Truffle’s magic stems from its AST representation. Thus, we need to translate our input model (a.k.a. DSL instance, a.k.a. program to execute) from MPS nodes into Truffle Nodes.

Ideally, the Truffle AST would dynamically adopt any changes of the input model — like hot code replacement in a debugger, except we don’t want to stop the running program. From Truffle’s point of view this shouldn’t be a problem: It rewrites the AST all the time anyway.

DclareForMPS seems a fitting technology. We define mapping rules from MPS node to Truffle Node. Dclare makes sure they are in sync, and input changes are propagated optimally. These rules could either be generic, or be generated from the interpreter definition.

We need to take care that Dclare doesn’t try to adapt the MPS nodes to Truffle’s optimizing AST changes (no back-propagation).

We require special handling for edge cases of MPS → Truffle change propagation, e.g. the user deletes the currently executed part of the program.

For memory optimization, we might translate only the entry nodes of our input model immediately. Instead of the actual child Truffle Nodes, we’d add special nodes that translate the next part of the AST.
Unloading the not required parts might be an issue. Also, on-demand processing seems to conflict with Dclare’s rule-based approach.

Part II: Adapt to MPS

Step 5: Re-create Interpreter Language

The MPS interpreter framework removes even more boilerplate from writing interpreters than Truffle. The same language concepts should be built again, as abstraction on top of the Truffle Annotation DSL. This would be a new language aspect.

Step 6: Migrate MPS Interpreter Framework

Once we had the Truffle-based interpreter language, we want to use it! Also, we don’t want to rewrite all our nice interpreters.

I think it’s feasible to automatically migrate at least large parts of the existing MPS interpreter framework to the new language. I would expect some manual adjustment, though. That’s the price we’d have to pay for a two-orders-of-magnitude performance improvement.

Step 7: Provide Plumbing for BaseLanguage, Checking Rules, Editors, and Tests

Using the interpreter should be as easy as possible. Thus, we have to provide the appropriate utilities:

  • Call the interpreter from any BaseLanguage code.
    We’d have to make sure we get language / model loading and dependencies right. This should be easier with Truffle than with the current interpreter, as most language dependencies are only required at interpreter build time.
  • Report interpreter results in Checking Rules.
    Creating warnings or errors based on the interpreter’s results is a standard use-case, and should be supported by dedicated language constructs.
  • Show interpreter results in an editor.
    As another standard use-case, we might want to show the interpreter’s results (or a derivative) inside an MPS editor. Especially for long-running or asynchronous calculations, getting this right is tricky. Dedicated editor extensions should take care of the details.
  • Run tests that involve the interpreter.
    Yet another standard use-case: our DSL defines both calculation rules and examples. We want to ensure they stay in sync, meaning we execute the rules in our DSL interpreter and compare the results with the examples. This must work both inside MPS and in a headless build / CI test environment.

Step 8: Support Asynchronous Interpretation and/or Caching

The simple implementation of interpreter support accepts a language, parameters, and a program (a.k.a. input model), and blocks until the interpretation is complete.

This working mode is useful in various situations. However, we might want to run long-running interpretations in the background, and notify a callback once the computation is finished.

Example: An MPS editor uses an interpreter to color a rule red if it is not in accordance with a provided example. This interpretation result is very useful, even if it takes several seconds to calculate. However, we don’t want to block the editor (or even whole MPS) for that long.

Extending the example, we might also want to show an error on such a rule. The typesystem runs asynchronously anyway, so blocking is not an issue. However, we would now run the same expensive interpretation twice. The interpreter support should provide configurable caching mechanisms to avoid such waste.

Both asynchronous interpretation and caching benefit from proper language extensions.
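A minimal sketch of combining asynchronous interpretation with caching, using only the JDK (the class and method names are invented for illustration, not part of MPS or Truffle): the editor and the checking rule ask for the same result, but the interpretation runs only once.

```java
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: run expensive interpretations in the background and
// memoize the resulting future per input program, so repeated requests
// (editor, typesystem, tests) share one computation.
public class InterpreterCache {
    private final Map<String, CompletableFuture<Object>> cache = new ConcurrentHashMap<>();

    /** First caller for a program triggers the computation; later callers reuse it. */
    public CompletableFuture<Object> interpretAsync(String programId,
                                                    Callable<Object> interpretation) {
        return cache.computeIfAbsent(programId, id ->
            CompletableFuture.supplyAsync(() -> {
                try { return interpretation.call(); }
                catch (Exception e) { throw new CompletionException(e); }
            }));
    }

    public static void main(String[] args) {
        InterpreterCache cache = new InterpreterCache();
        // Editor colors the rule; checking rule wants the same result.
        CompletableFuture<Object> editor = cache.interpretAsync("rule1", () -> "red");
        CompletableFuture<Object> check  = cache.interpretAsync("rule1", () -> "never runs");
        System.out.println(editor.join()); // red
        System.out.println(check.join());  // red (cached; second callable is ignored)
    }
}
```

A real implementation would also need cache invalidation when the input model changes; that is exactly the kind of detail dedicated language extensions could hide.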

Step 9: Integrate with MPS Typesystem and Scoping

Truffle needs to know about our DSL’s types, e.g. for resolving overloaded functions or type casting. We already provide this information to the MPS typesystem. I didn’t look into the details yet; I’d expect we could generate at least part of the Truffle input from MPS’ type aspect.

Truffle requires scoping knowledge to store variables in the right stack frame (and possibly other things I don’t understand yet). I’d expect we could use the resolved references in our model as input to Truffle. I’m less optimistic to re-use MPS’ actual scoping system.

For both aspects, we can amend the missing information in the Interpreter Language, similar to the existing one.

Step 10: Support Interpreter Development

As DSL developers, we want to make sure we implemented our interpreter correctly. Thus, we write tests; they are similar to other tests involving the interpreter.

However, if they fail, we don’t want to debug the program expressed in our DSL, but our interpreter. For example, we might implement the interpreter for a switch-like construct and forget to handle an implicit default case.

Using a regular Java debugger (attached to our running MPS instance) is of limited use, as we would have to step through the highly optimized Truffle code. We cannot use Truffle’s debugging capabilities either, as they operate on the DSL level.
There might be ways to attach a regular Java debugger running inside MPS in a different thread to its own JVM. Combining direct debugger access with our knowledge of the interpreter’s structure, we might be able to offer the DSL developer sensible stepping through the interpreter.

Simpler ways to support developers might be providing traces through the interpreter, or shipping test support where the DSL developer can assert that specific evaluators were (or were not) executed.

Step 11: Create Language for Interop

Truffle provides a framework to describe any runtime in-memory data structure as Shape, and to convert them between languages. This should be a nice extension of MPS’ multi-language support into the runtime space, supported by an appropriate Meta-DSL (a.k.a. language aspect).

Part III: Leverage Programming Language Tooling

Step 12: Connect Truffle to MPS’ Debugger

MPS contains the standard interactive debugger inherited from the IntelliJ platform.

Truffle exposes a standard interface for interactive debuggers of the interpreted input. It takes care of the heavy lifting from Truffle AST to MPS input node.

If we run Truffle in a different thread than the MPS debugger, we should be able to connect both parts.

Step 13: Integrate Instrumentation

Truffle also exposes an instrumentation interface. We could provide standard instrumentation applications like “code” coverage (in our case: DSL node coverage) and tracing out-of-the-box.

One might think of nice visualizations:

  • Color node background based on coverage
  • Mark the currently executed part of the model
  • Project runtime values inline
  • Show traces in trace explorer

Other possible applications:

  • Snapshot mechanism for current interpreter state
  • Provide traces for offline debugging, and play them back

Part IV: Beyond MPS

Step 14: Serialize Truffle Nodes

If we could serialize Truffle Nodes (before any run-time optimization), we would have an MPS-independent representation of the executable DSL. Depending on the serialization format (implement Serializable, custom binary format, JSON, etc.), we could optimize for use-case, size, loading time, or other priorities.

Step 15: Execute DSL stand-alone without Generator

Assume an insurance calculation DSL.
Usually, we would implement

  • an interpreter to execute test cases within MPS,
  • a Generator to C to execute on the production server,
  • and a Generator to Java to provide a preview for the insurance agent.

With serialized Truffle Nodes, we would need only one interpreter for all three scenarios.

Part V: Crazy Ideas

Step 16: Step Back Debugger

By combining Instrumentation and debugger, it might be feasible to provide step-back debugging.

In the interpreter, we know the complete global state of the program and can store deltas (to reduce memory usage). For many DSLs, this might be sufficient to store every intermediate state and thus allow arbitrary debug movement.
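The delta idea might look roughly like this (a hypothetical sketch, not tied to Truffle’s instrumentation API): each step records only the overwritten value, which is enough to move backwards.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: instead of snapshotting the whole program state at
// every step, record only (variable, previous value) deltas; undoing the most
// recent delta steps the debugger back.
public class StepBackState {
    private final Map<String, Object> vars = new HashMap<>();
    private final Deque<Map<String, Object>> deltas = new ArrayDeque<>();

    /** Execute one step: mutate a variable, remembering what it overwrote. */
    public void set(String name, Object value) {
        Map<String, Object> delta = new HashMap<>();
        delta.put(name, vars.get(name)); // null means the variable did not exist
        deltas.push(delta);
        vars.put(name, value);
    }

    /** Step back: undo the most recent delta. */
    public void stepBack() {
        Map<String, Object> delta = deltas.pop();
        delta.forEach((name, old) -> {
            if (old == null) vars.remove(name); else vars.put(name, old);
        });
    }

    public Object get(String name) { return vars.get(name); }

    public static void main(String[] args) {
        StepBackState s = new StepBackState();
        s.set("x", 1);
        s.set("x", 2);
        System.out.println(s.get("x")); // 2
        s.stepBack();
        System.out.println(s.get("x")); // 1
    }
}
```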

Step 17: Side Step Debugger

By stepping back through our execution and following different execution paths, we could explore alternate outcomes. The different execution path might stem from other input values, or hot code replacement.

Step 18: Explorative Simulations

If we had a side step debugger, nice support to project interpretation results, and a really fast interpreter, we could run explorative simulations on lots of different execution paths. This might enable legendary interactive development.


by Niko Stotz at March 11, 2020 11:19 PM

Postmortem - February 7 storage and authentication outage

by Denis Roy at February 20, 2020 04:12 PM

On Friday, February 7 2020, Eclipse.org suffered a severe service disruption to many of its web properties when our primary authentication server and file server suffered a hardware failure.

For 90 minutes, our main website, www.eclipse.org, was mostly available, as was our Bugzilla bug tracking tool, but logging in was not possible. Wiki, Eclipse Marketplace and other web properties were degraded. Git and Gerrit were both completely offline for 2 hours and 18 minutes. Authenticated access to Jiro -- our Jenkins+Kubernetes-based CI system -- was not possible, and builds that relied on Git access failed during that time.

There was no data loss, but there were data inconsistencies. A dozen Git repositories and Gerrit code changes were in an inconsistent state due to replication schedules, but thanks to the distributed nature of Git, the code commits were still in local developer Git repositories, as well as on the failed server, which we were eventually able to revive (in an offline environment). Data inconsistencies were more severe in our LDAP accounts database, where dozens of users were unable to log in, and in some isolated cases, users reported that their account was reverted back to old data from years prior.

In hindsight, we feel this outage could have, and should have been avoided. We’ve identified many measures we must enact to prevent such unplanned outages in the future. Furthermore, our communication and incident handling processes proved to be flawed, and will be scrutinized and improved, to ensure our community is better informed during unplanned incidents.

Lastly, we’ve identified aging hardware and Single Points of Failure (SPoF) that must be addressed.

 

File server & authentication setup

At the center of the Eclipse infra is a pair of servers that handle 2 specific tasks:

  • Network Attached Storage (NAS) via NFS

  • User Authentication via OpenLDAP

The server pair consists of a primary system, which handles all the traffic, and a hot spare. Both servers are configured identically for production service, but the spare sits idle and receives data periodically from the primary. This specific architecture was originally implemented in 2005, with periodic hardware upgrades over time.

 

Timeline of events

Friday Feb 7 - 12:33pm EST: Fred Gurr (Eclipse Foundation IT/Releng team) reports on the Foundation’s internal Slack channel that something is happening to the Infra. Denis observes many “Flaky” status reports on https://status.eclipse.org but is in transit and cannot investigate further. Webmaster Matt Ward investigates.

12:43pm: Matt confirms that our primary nfs/ldap server is not responding, and activates “Plan A: assess and fix”.

12:59pm: Denis reaches a computer and activates “Plan B: prepare for Failover” while Matt works on Plan A. The “Sorry, we are down” page is served for all Flaky services except www.eclipse.org, which continues to be served successfully by our nginx cache.

1:18pm: The standby server is ready to assume the “primary” role.

1:29pm: Matt makes the call for failover, as the severity of the hardware failure is not known, and not easily recoverable.

1:49pm: www.eclipse.org, Bugzilla, Marketplace, Wiki return to stable service on the new primary.

2:18pm: Git and Gerrit return to stable service.

2:42pm: Our Kubernetes/OpenShift cluster is updated to the latest patchlevel and all CI services restarted.

4:47pm: All legacy JIPP servers are restarted, and all other remaining services report functional.  At this time, we are not aware of any issues.

During the weekend, Matt continues to monitor the infra. Authentication issues crop up over the weekend, which are caused by duplicated accounts and are fixed by Matt.

Monday, 4:49am EST: Mikaël Barbero (Eclipse Foundation IT/Releng team) reports that there are more duplicate users in LDAP that cannot log into our systems. This is now a substantial issue. They are fixed systematically with an LDAP duplicate finder, but the process is very slow.

10:37am: First Foundation broadcast on the cross-project mailing list that there is an issue with authentication.

Tuesday, 9:51am: Denis blogs about the incident and posts a message to the eclipse.org-committers mailing list about the ongoing authentication issues. The message, however, is held for moderation and is not distributed until many hours later.

Later that day: Most duplicated accounts have been removed, and just about everything is stabilized. We do not yet understand the source of the duplicates.

Wednesday: duplicate removals continue, as well as investigation into the cause.

Thursday 9:52am: We file a dozen bugs against projects whose Git and Gerrit repos may be out of sync. Some projects had already re-pushed or rebased their missing code patches and resolved the issue as FIXED.

Friday, 2:58pm: All remaining duplicates are removed. Our LDAP database is fully cleaned. The failed server re-enters production as the hot standby - even though its hardware is not reliable. New hardware is sourced and ordered.

 

Hardware failure

The physical servers assuming our NAS/LDAP setup are server-class hardware, 2U chassis with redundant power supplies, ECC (error checking and correction) memory, RAID-5 disk arrays with a battery-backup RAID controller memory. Both primary and standby servers were put into production in 2011.

On February 7, the primary server experienced a kernel crash from the RAID controller module. The RAID controller detected an unrecoverable ECC memory error. The entire server became unresponsive.

As originally designed in 2005, periodic (batched) data updates from the primary to the hot spare were simple to set up and maintain. This method also had a distinct advantage over live replication: rapid recovery in case of erasure (accidental or malicious) or data tampering. Of course, this came at the cost of possible data loss. However, it was deemed that critical data (in our case, source code) susceptible to loss during that short window was also available on developer workstations.


Failover and return to stability

As the standby server was prepared for production service, the reasons for the crash on the primary server were investigated. We assessed the possibility of continuing service on the primary; that course of action would have provided the fastest recovery with the fewest surprises later on.

As the nature of the hardware failure remained unknown, failover was the only option. We confirmed that some data replication tasks had run less than one hour prior to failure, and all data replication was completed no later than 3 hours prior. IP addresses were updated, and one by one, services that depended on NFS and authentication were restarted to flush caches and minimize any potential for an inconsistent state.

At about 4:30pm, or four hours after the failure, both webmasters were confident that the failover was successful, and that very little dust would settle over the weekend.
 

Authentication issues

Throughout the weekend, we had a few reports of authentication issues -- which were expected, since we failed over to a standby authentication source that was at least 12 hours behind the primary. These issues were fixed as they were reported, and nothing seemed out of place.

On Monday morning, Feb 10th, the Foundation’s Releng team reported that several committers had authentication issues to the CI systems. We then suspected that something else was at play with our authentication database, but it was not clear to us what had happened, or what the magnitude was. The common issue was duplicate accounts -- some users had an account in two separate containers simultaneously, which prevented users from being able to authenticate. These duplicates were removed as rapidly as we could, and we wrote scripts to identify old duplicates and purge them -- but with >450,000 accounts, it was time-consuming.

At that time, we got so wrapped up in trying to understand and resolve the issue that we completely underestimated its impact on the community, and we were absolutely silent about it.

 

Problem solved

On Friday afternoon, February 14, we were able to finally clean up all the duplicate accounts and understand why they existed in the first place.

Prior to December, 2011, our LDAP database only contained committer accounts. In December 2011, we imported all the non-committer accounts from Bugzilla and Wiki into an LDAP container we named “Community”. This allowed us to centralize authentication around a single source of truth: LDAP.

All new accounts were, and are, created in the Community container, and are moved into the Committer container if/when they become an Eclipse Committer.

Our primary->secondary LDAP sync mechanism was altered, at that time, to sync the Community container as well -- but it was purely additive. Once you had an account in Community, it was there for life on the standby server, even if you became a committer later on, or if you ever changed your email address. This was the source of the duplicate accounts on the standby server.
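The effect of a purely additive sync can be illustrated with a small simulation (hypothetical code, not the actual OpenLDAP replication setup): entries are copied to the standby but never removed, so a user who moves containers on the primary ends up duplicated on the standby.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical simulation of an additive-only sync: it adds every primary
// entry to the standby but never deletes stale ones.
public class AdditiveSyncBug {
    public static void sync(Map<String, Set<String>> primary,
                            Map<String, Set<String>> standby) {
        primary.forEach((container, users) ->
            standby.computeIfAbsent(container, k -> new HashSet<>()).addAll(users));
    }

    public static void main(String[] args) {
        Map<String, Set<String>> primary = new HashMap<>();
        Map<String, Set<String>> standby = new HashMap<>();
        primary.put("community", new HashSet<>(Set.of("alice")));
        sync(primary, standby);

        // alice becomes a committer: moved between containers on the primary...
        primary.get("community").remove("alice");
        primary.computeIfAbsent("committers", k -> new HashSet<>()).add("alice");
        sync(primary, standby);

        // ...but the standby still lists her in "community": a duplicate account.
        System.out.println(standby.get("community"));  // [alice]
        System.out.println(standby.get("committers")); // [alice]
    }
}
```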

A new server pair was ordered on February 14, 2020. These servers will be put into production service as soon as possible, and the old hardware will be recommissioned for clustered service. With these new machines, we believe our existing architecture and configuration can continue to serve us well over the coming months and years.

 

Take-aways and proposed improvements

Although the outage didn’t last incredibly long (2 hours from failure to the beginning of restored service), we feel it shouldn’t have occurred in the first place. Furthermore, we’ve identified key areas where our processes can be improved - notably, in how we communicate with you.

Here are the action items we’re committed to implementing in the near term, to improve our handling of such incidents:

  • Communication: Improved Service Status page.  https://status.eclipse.org gives a picture of what’s going on, but with an improved service, we can communicate the nature of outages, the impact, and estimated time until service is restored.

  • Communication: Internally, we will improve communication within our team and establish a maintenance log, whereby members of the team can discover the work that has been done.

  • Staffing: we will explore the possibility of an additional IT hire, thus enhancing our collective skillset, and enabling more overall time on the quality and reliability of the infra.

  • Aging Hardware: we will put top-priority on resolving aging SPoF, and be more strict about not running hardware devices past their reasonable life expectancy.

    • In the longer term, we will continue our investment in replacing SPoF with more robust technologies. This applies to authentication, storage, databases and networking.

  • Process and procedures: we will allocate more time to testing our disaster recovery and business continuity procedures. Such tests would likely have revealed the LDAP sync bug.

We believe that these steps will significantly reduce unplanned outages such as the one that occurred on February 7. They will also help us ensure that, should a failure occur, we recover and return to a state of stability more rapidly. Finally, they will help you understand what is happening, and what the timelines to restore service are, so that you can plan your work tasks and remain productive.


by Denis Roy at February 20, 2020 04:12 PM

Interfacing null-safe code with legacy code

by Stephan Herrmann at February 06, 2020 07:38 PM

When you adopt null annotations like these, your ultimate hope is that the compiler will tell you about every possible NullPointerException (NPE) in your program (except for tricks like reflection or bytecode weaving etc.). Hallelujah.

Unfortunately, most of us use libraries which don’t have the blessing of annotation based null analysis, simply because those are not annotated appropriately (neither in source nor using external annotations). Let’s for now call such code: “legacy”.

In this post I will walk through the options to warn you about the risks incurred by legacy code. The general theme will be:

Can we assert that no NPE will happen in null-checked code?

I.e., if your code consistently uses null annotations, and has passed analysis without warnings, can we be sure that NPEs can only ever be thrown in the legacy part of the code? (NPEs inside legacy code are still to be expected, there’s nothing we can change about that).

Using existing Eclipse versions, one category of problems would still go undetected, whereby null-checked code could still throw NPE. This category has recently been fixed.

Simple data flows

Let’s start with simple data flows, e.g., when your program obtains a value from legacy code, like this:

[Screenshot: NullFrom_getProperty]

You shouldn’t be surprised, the javadoc even says: “The method returns null if the property is not found.” While the compiler doesn’t read javadoc, it can recognize that a value with unspecified nullness flows into a variable with a non-null type. Hence the warning:

Null type safety (type annotations): The expression of type ‘String’ needs unchecked conversion to conform to ‘@NonNull String’

As we can see, the compiler warned us, so we are urged to fix the problem in our code. Conversely, if we pass any value into a legacy API, all bad that can happen would happen inside legacy code, so nothing to be done for our mentioned goal.

The underlying rule is: legacy values can be safely assigned to nullable variables, but not to non-null variables (example Properties.getProperty()). On the other hand, any value can be assigned to a legacy variable (or method argument).

Put differently: values flowing from null-checked to legacy pose no problems, whereas values flowing the opposite direction must be assumed to be nullable, to avoid problems in null-checked code.
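A minimal illustration of the rule above (annotations elided so the snippet compiles without the JDT annotation jar; in annotated code the checked target would be declared @NonNull):

```java
import java.util.Properties;

// Treat a legacy value as nullable: check it before it reaches a target that
// null-checked code considers non-null.
public class LegacyFlow {
    public static String describe(Properties props, String key) {
        String value = props.getProperty(key); // legacy API: may return null
        // Explicit check before assigning to a conceptually @NonNull target:
        return (value != null) ? value : "<missing>";
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("present", "yes");
        System.out.println(describe(p, "present")); // yes
        System.out.println(describe(p, "absent"));  // <missing>
    }
}
```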

Enter generics

Here be dragons.

As a minimum requirement we now need null annotations with target TYPE_USE (“type annotations”), but we have this since 2014. Good.

[Screenshot: NullFromLegacyList]

Here we obtain a List<String> value from a Legacy class, where indeed the list names is non-null (as can be seen by successful output from names.size()). Still things are going south in our code, because the list contained an unexpected null element.

To protect us from this problem, I marked the entire class as @NonNullByDefault, which causes the type of the variable names to become List<@NonNull String>. Now the compiler can again warn us about an unsafe assignment:

Null type safety (type annotations): The expression of type ‘List<String>’ needs unchecked conversion to conform to ‘List<@NonNull String>’

This captures the situation where a null value, wrapped in a non-null container value (the list), is passed from legacy to null-checked code.

Here’s a tricky question:

Is it safe to pass a null-checked value of a parameterized type into legacy code?

In the case of simple values, we saw no problem, but the following example tells us otherwise once generics are involved:
[Screenshot: NullIntoNonNullList]

Again we have a list of type List<@NonNull String>, so dereferencing values obtained from that list should never throw NPE. Unfortunately, the legacy method printNames() succeeded in breaking our contract by inserting null into the list, resulting in yet another NPE thrown in null-checked code.

To describe this situation it helps to draw boundaries not only between null-checked and legacy code, but also to draw a boundary around the null-checked value of parameterized type List<@NonNull String>. That boundary is breached when we pass this value into legacy code, because that code will only see List<String> and happily invoke add(null).
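The breach can be re-created in plain Java (annotations elided; `legacyPrintNames` is a stand-in for the legacy method in the screenshot):

```java
import java.util.ArrayList;
import java.util.List;

// Checked code believes the list holds only non-null Strings; legacy code only
// sees List<String> and happily inserts null across the boundary.
public class GenericsBreach {
    // "Legacy" method: no null contract, sneaks a null into the list.
    static void legacyPrintNames(List<String> names) {
        names.add(null); // perfectly legal for legacy code
    }

    static int totalLength(List<String> names) {
        int sum = 0;
        for (String n : names) sum += n.length(); // NPE here, in "checked" code
        return sum;
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>(List.of("Ada", "Bob"));
        legacyPrintNames(names);
        try {
            System.out.println(totalLength(names));
        } catch (NullPointerException e) {
            System.out.println("NPE thrown in null-checked code");
        }
    }
}
```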

This is where I recently invented a new diagnostic message:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<String>’

By passing names into legacy code, we enable a hidden data flow in the opposite direction. In the general case, this introduces the risk of NPE in otherwise null-checked code. Always?

Wildcards

Java would be a much simpler language without wildcards, but a closer look reveals that wildcards don’t just help with type safety but also with null-safety. How so?

If the legacy method were written using a wildcard, it would not be (easily) possible to sneak in a null value, here are two attempts:
[Screenshot: SneakAttempts]

The first attempt is an outright Java type error. The second triggers a warning from Eclipse, despite the lack of null annotations:

Null type mismatch (type annotations): ‘null’ is not compatible to the free type variable ‘?’

Of course, compiling the legacy class without null-checking would still bypass our detection, but chances are already better.

If we add an upper bound to the wildcard, like in List<? extends CharSequence>, not much is changed. A lower bound, however, is an invitation for the legacy code to insert null at whim: List<? super String> will cause names.add() to accept any String, including the null value. That’s why Eclipse will also complain against lower bounded wildcards:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<? super String>’
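The wildcard observations above can be sketched in plain Java (annotations elided; method names are invented for illustration): an unbounded wildcard blocks most insertions at compile time, while a lower-bounded one re-opens the door for null.

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardSinks {
    static void unbounded(List<?> names) {
        // names.add("x");  // does not compile: cannot add to List<?>
        // names.add(null); // the one insertion plain Java allows; Eclipse's
        //                  // null analysis flags it against the free type variable
    }

    static void lowerBounded(List<? super String> names) {
        names.add("ok");   // compiles: any String may be added...
        names.add(null);   // ...including null, defeating @NonNull elements
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        unbounded(names);
        lowerBounded(names);
        System.out.println(names); // [ok, null]
    }
}
```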

Comparing to raw types

It has been suggested to treat legacy (not null-annotated) types like raw types. Both are types with a part of the contract ignored, thereby causing risks for parts of the program that still rely on the contract.

Interestingly, raw types are more permissive in the parameterized-to-raw conversion. We are generally not protected against legacy code inserting an Integer into a List<String> when passed as a raw List.

More interestingly, using a raw type as a type argument produces an outright Java type error, so my final attempt at hacking the type system failed:

[Screenshot: RawTypeArgument]

Summary

We have seen several kinds of data flow with different risks:

  • Simple values flowing checked-to-legacy don’t cause any specific headache
  • Simple values flowing legacy-to-checked should be treated as nullable to avoid bad surprises. This is checked.
  • Values of parameterized type flowing legacy-to-checked must be handled with care at the receiving side. This is checked.
  • Values of parameterized type flowing checked-to-legacy add more risks, depending on:
    • nullness of the type argument (@Nullable type argument has no risk)
    • presence of wildcards, unbounded or lower-bounded.

Eclipse can detect all mentioned situations that would cause NPE to be thrown from null-checked code – the capstone to be released with Eclipse 2020-03, i.e., coming soon …


by Stephan Herrmann at February 06, 2020 07:38 PM

Remove SNAPSHOT and Qualifier in Maven/Tycho Builds

by Lorenzo Bettini at February 05, 2020 10:20 AM

Before releasing Maven artifacts, you remove the -SNAPSHOT from your POMs. If you develop Eclipse projects and build with Maven and Tycho, you have to keep the versions in the POMs and the versions in MANIFEST, feature.xml and other Eclipse project artifacts consistent. Typically, when you release an Eclipse p2 site, you don’t remove the .qualifier in the versions: Eclipse bundle and feature versions are processed automatically, with the .qualifier replaced by a timestamp. But if you want to release some Eclipse bundles also as Maven artifacts (e.g., to Maven Central), you have to remove the -SNAPSHOT before deploying (or they will still be considered snapshots, of course 🙂 ), and you have to remove the .qualifier in Eclipse bundles accordingly.

To do that in an automatic way, you can use a combination of Maven plugins and the tycho-versions-plugin.

I’m going to show two different ways of doing that. The example used in this post can be found here: https://github.com/LorenzoBettini/tycho-set-version-example.

First method

The idea is to use the goal parse-version of the org.codehaus.mojo:build-helper-maven-plugin. This will store the parts of the current version in some properties (by default, parsedVersion.majorVersion, parsedVersion.minorVersion and parsedVersion.incrementalVersion).

Then, we can pass these properties appropriately to the goal set-version of the org.eclipse.tycho:tycho-versions-plugin.

This is the Maven command to run:

mvn \
  build-helper:parse-version org.eclipse.tycho:tycho-versions-plugin:set-version \
  -DnewVersion=\${parsedVersion.majorVersion}.\${parsedVersion.minorVersion}.\${parsedVersion.incrementalVersion}

The goal set-version of the Tycho plugin will take care of updating the versions (without the -SNAPSHOT and .qualifier) both in POMs and in Eclipse projects’ metadata.

Second method

Alternatively, we can use the goal set (with argument -DremoveSnapshot=true) of the org.codehaus.mojo:versions-maven-plugin. Then, we use the goal update-eclipse-metadata of the org.eclipse.tycho:tycho-versions-plugin, to update Eclipse projects’ versions according to the version in the POM.

This is the Maven command to run:

mvn \
  versions:set -DgenerateBackupPoms=false -DremoveSnapshot=true \
  org.eclipse.tycho:tycho-versions-plugin:update-eclipse-metadata

The first goal will change the versions in POMs while the second one will change the versions in Eclipse projects’ metadata.

Configuring the plugins

As usual, it’s best practice to configure the used plugins (in this case, their versions) in the pluginManagement section of your parent POM.

For example, in the parent POM of https://github.com/LorenzoBettini/tycho-set-version-example we have:

<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>build-helper-maven-plugin</artifactId>
        <version>3.0.0</version>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>versions-maven-plugin</artifactId>
        <version>2.7</version>
      </plugin>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-versions-plugin</artifactId>
        <version>1.6.0</version>
...

 

Conclusions

In the end, choose the method you prefer. Please keep in mind that these goals are not meant to be used during a standard Maven lifecycle, which is why we invoke them explicitly.

Furthermore, the goal set of the org.codehaus.mojo:versions-maven-plugin might give you some headaches if the structure of your Maven/Eclipse projects differs from the default one based on nested directories. In particular, if you have an aggregator project different from the parent project, you will have to pass additional arguments or set the versions in separate commands (e.g., first on the parent, then on the other modules of the aggregator, etc.)


by Lorenzo Bettini at February 05, 2020 10:20 AM

JDT without Eclipse

January 16, 2020 11:00 PM

The JDT (Java Development Tools) is an important part of Eclipse IDE but it can also be used without Eclipse.

For example, Spring Tools 4, which is nowadays a cross-platform tool (Visual Studio Code, Eclipse IDE, …), relies heavily on the JDT behind the scenes. If you would like to know more, I recommend this podcast episode: Spring Tools lead Martin Lippert

A second well-known example is the Java formatter, which is also part of the JDT. For a long time there have been Maven and Gradle plugins that perform the same formatting as the Eclipse IDE, but as part of the build (often with the possibility to break the build when the code is wrongly formatted).

Reusing the JDT has been made easier since 2017, when it was decided to publish each release and its dependencies on Maven Central (with the following groupIds: org.eclipse.jdt, org.eclipse.platform). Stephan Herrmann did a lot of work to achieve this goal. I blogged about this: Use the Eclipse Java Development Tools in a Java SE application, and I have pushed a simple example where the Java formatter is used in a simple main(String[]) method built by a classic minimal Maven project: java-formatter.

Workspace or not?

When using the JDT in a headless application, two cases need to be distinguished:

  1. Some features (the parser, the formatter…) can be used in a simple Java main method.

  2. Other features (search index, AST rewriter…) require a workspace. This implies that the code runs inside an OSGi runtime.

To illustrate this aspect, I took some of the examples provided by the site www.programcreek.com in the blog post series Eclipse JDT Tutorials and adapted them so that each code snippet can be executed inside a JUnit test. This is the Programcreek examples project.

I have split the unit-tests into two projects:

  • programcreek-standalone for those that do not require OSGi. The Maven project is really simple (using the default conventions everywhere)

  • programcreek-osgi for those that must run inside an OSGi runtime. The bnd Maven plugins are configured in the pom.xml to take care of the OSGi stuff.

If you run the tests with Maven, they will work out of the box.

If you would like to run them inside an IDE, you should use one that starts OSGi when executing the tests (the same way the Maven build does it). To get a bnd-aware IDE, you can use Eclipse IDE for Java Developers with the additional Bndtools plugin installed, but there are other possibilities.

Source code can be found on GitHub: programcreek-examples


January 16, 2020 11:00 PM

4 Years at The Linux Foundation

by Chris Aniszczyk at January 03, 2020 09:54 AM

Late last year marked the 4th year anniversary of the formation of the CNCF and me joining The Linux Foundation:

As we enter 2020, it’s amusing for me to reflect on my decision to join The Linux Foundation a little over 4 years ago, when I was looking for something new to focus on. I spent about 5 years at Twitter, which felt like an eternity (the average tenure for a Silicon Valley employee is under 2 years), focused on open source, and enjoyed the startup life of going from a hundred or so engineers to a couple of thousand. I truly enjoyed the ride; it was a high-impact experience where we were able to open source projects that changed the industry for the better: Bootstrap (changed front end development for the better), Twemoji (made emojis more open source friendly and embeddable), Mesos (pushed the state of the art for open source infrastructure), co-founded the TODO Group (pushed the state of corporate open source programs forward) and more!

When I was looking for a change, I wanted to find an opportunity where I could have more impact than at just one company. I had some offers from FAANG companies and amazing startups but eventually settled on the nonprofit Linux Foundation because I wanted to build an open source foundation from scratch, teach other companies about open source best practices, and assumed nonprofit life would be a bit more relaxing than diving into a new company (I was wrong). Also, I was thoroughly convinced that an openly governed foundation pushing Kubernetes, container specifications and adjacent independent cloud native technologies would be the right model to move open infrastructure forward.

As we enter 2020, I realize that I’ve been with one organization for a long time, and that puts me on edge, as I enjoy challenges and chaos and dread anything that makes me comfortable or complacent. Also, I have a strong desire to focus on efforts that involve improving the state of security and privacy in a connected world, participatory democracy, and climate change, as well as anything that pushes open source to new industries and geographies.

While I’m always happy to entertain opportunities that align to my goals, the one thing that I do enjoy at the LF is that I’ve had the ability to build a variety of new open source foundations improving industries and communities: CDF, GraphQL Foundation, Open Container Initiative (OCI), Presto Foundation, TODO Group, Urban Computing Foundation and more.

Anyways, thanks for reading and I look forward to another year of bringing open source practices to new industries and places, the world is better when we are collaborating openly.


by Chris Aniszczyk at January 03, 2020 09:54 AM

An update on Eclipse IoT Packages

by Jens Reimann at December 19, 2019 12:17 PM

A lot has happened since I last wrote about the Eclipse IoT Packages project. We had some great discussions at EclipseCon Europe and started to work together online, coming up with new ideas in the process. Right before the end of the year, I think it is a good time to give an update and peek a bit into the future.

Homepage

One of the first things we wanted to get started on was a home for the content we plan on creating. An important piece of the puzzle is to explain to people what we have in mind: not only to people who want to try out the various Eclipse IoT projects, but also to possible contributors. In the end, an important goal of the project is to attract interested parties, whether for consuming our ideas or growing them even further.

Eclipse IoT Packages logo

So we now have a logo and a homepage, built using templates in a continuous build system. We are in a position to start focusing on the actual content, and on the trickier tasks and questions ahead. And should you want to create a PR for the homepage, you are more than welcome. There is also already some content explaining the main goals, the way we want to move forward, and a demo of a first package: “Package Zero”.

Community

While the homepage is a good entry point for people to learn about Eclipse IoT and packages, our GitHub repository is the home of the community. Having some great discussions on GitHub quickly brought up the need for a community call and a more direct communication channel.

If you are interested in the project, come and join our bi-weekly community call. It is a quick, 30-minute call at 16:00 CET, open to everyone and repeating every two weeks, starting 2019-12-02.

The URL to the call is: https://eclipse.zoom.us/j/317801130. You can also subscribe to the community calendar to get a reminder.

In between calls, we have a chat room eclipse/packages on Gitter.

Eclipse IoT Helm Chart Repository

One of the earliest discussions we had was around the question of how and where we want to host the Helm charts. We would prefer not to author them ourselves, but to let the projects contribute them. After all, the IoT Packages project has the goal of enabling you to install a whole set of Eclipse IoT projects with only a few commands. So the focus is on the integration, and the expert knowledge required for creating a project's Helm chart stays in the actual projects.

On the other hand, having a one-stop shop for getting your Eclipse IoT Helm charts sounds pretty convenient. So why not host our own Helm chart repository?

Thanks to a company called Kiwigrid, who contributed a CI pipeline for validating charts, we could easily extend our existing homepage publishing job to also publish Helm charts. As a first chart, we published the Eclipse Ditto chart. And, as expected with Helm, installing it is as easy as:
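Roughly the following commands (a sketch: the repository URL and chart name are assumptions, check the project homepage for the actual coordinates):

```shell
# add the Eclipse IoT packages chart repository (URL is an assumption)
helm repo add eclipse-iot https://eclipse.org/packages/charts
helm repo update
# install the Eclipse Ditto chart into the current Kubernetes context
helm install ditto eclipse-iot/ditto
```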

Of course, having a single chart is only the first step, and publishing a single Helm chart isn't that impressive. But getting agreement in the community, setting up the validation and publishing pipeline, and attracting new contributors: that is definitely a big step in the right direction.

Outlook

I think that we now have a good foundation for moving forward. We have a place called “home” for documentation, code, and community. And it looks like we have also been able to attract more people to the project.

While our first package, “Package Zero”, still isn't complete, it should be pretty close. Creating a first joint deployment of Hono and Ditto is our immediate focus, and we will continue to work towards a first release of “Package Zero”. Finding a better name is still an item on the list.

Having this foundation in place also means that the time is right for you to think about contributing your own Eclipse IoT Package. Contributions are always welcome.

The post An update on Eclipse IoT Packages appeared first on ctron's blog.


by Jens Reimann at December 19, 2019 12:17 PM

Eclipse m2e: How to use a WORKSPACE Maven installation

by kthoms at November 27, 2019 09:39 AM

Today a colleague of mine asked me about the Maven Installations preference page in Eclipse. There is an entry WORKSPACE there, which is disabled and shows NOT AVAILABLE. He wanted to know how to enable a workspace installation of Maven.

Since neither of us could find documentation for the feature, I dug into the m2e sources and found the class MavenWorkspaceRuntime. The relevant snippets are the method getMavenDistribution() and the MAVEN_DISTRIBUTION constant:

private static final ArtifactKey MAVEN_DISTRIBUTION = new ArtifactKey(
      "org.apache.maven", "apache-maven", "[3.0,)", null); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$

...

protected IMavenProjectFacade getMavenDistribution() {
  try {
    VersionRange range = VersionRange.createFromVersionSpec(getDistributionArtifactKey().getVersion());
    for(IMavenProjectFacade facade : projectManager.getProjects()) {
      ArtifactKey artifactKey = facade.getArtifactKey();
      if(getDistributionArtifactKey().getGroupId().equals(artifactKey.getGroupId()) //
          && getDistributionArtifactKey().getArtifactId().equals(artifactKey.getArtifactId())//
          && range.containsVersion(new DefaultArtifactVersion(artifactKey.getVersion()))) {
        return facade;
      }
    }
  } catch(InvalidVersionSpecificationException e) {
    // can't happen
  }
  return null;
}

From here you can see that m2e looks through the workspace (Maven) projects and tries to find one that has the coordinates org.apache.maven:apache-maven:[3.0,).
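The range "[3.0,)" means "any version greater than or equal to 3.0". The following self-contained sketch illustrates that matching logic; note this is NOT the real Maven VersionRange implementation (m2e uses org.apache.maven.artifact.versioning.VersionRange, as shown in the snippet above), just a simplified numeric comparison for illustration:

```java
// Simplified illustration of the version-range check m2e performs:
// is a project's version inside "[3.0,)" , i.e. >= 3.0 ?
public class VersionCheck {

    // Returns true if 'version' >= 'lowerInclusive', comparing
    // dot/dash-separated numeric segments from left to right.
    static boolean inRange(String version, String lowerInclusive) {
        String[] v = version.split("[.-]");
        String[] lo = lowerInclusive.split("[.-]");
        int n = Math.max(v.length, lo.length);
        for (int i = 0; i < n; i++) {
            int a = i < v.length ? parse(v[i]) : 0;
            int b = i < lo.length ? parse(lo[i]) : 0;
            if (a != b) {
                return a > b;
            }
        }
        return true; // equal to the lower bound is inside "[3.0,)"
    }

    private static int parse(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            return 0; // non-numeric qualifiers are ignored in this sketch
        }
    }

    public static void main(String[] args) {
        System.out.println(inRange("3.6.3", "3.0")); // true: matched by [3.0,)
        System.out.println(inRange("2.2.1", "3.0")); // false: below the range
    }
}
```

So any apache-maven project with version 3.0 or later in the workspace satisfies the check.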

So the answer how to enable a WORKSPACE Maven installation is: Import the project apache-maven into the workspace. And here is how to do it:

  1. Clone Apache Maven from https://github.com/apache/maven.git
  2. Optionally: check out a release tag
    git checkout maven-3.6.3
  3. Perform File / Import / Existing Maven Projects
  4. As Root Directory select the apache-maven subfolder in your Maven clone location

Now you will have the project that m2e searches for in your workspace:

And the Maven Installations preference page lets you now select this distribution:


by kthoms at November 27, 2019 09:39 AM

Eclipse startup up time improved

November 05, 2019 12:00 AM

I’m happy to report that the Eclipse SDK integration builds start in less than 5 seconds (~4900 ms) on my machine into an empty workspace. IIRC this used to be around 9 seconds 2 years ago. 4.13 (which was already quite a bit improved) used around 5800 ms (6887 ms with EGit and Marketplace). For recent improvements in this release see https://bugs.eclipse.org/bugs/show_bug.cgi?id=550136. Thanks to everyone who contributed.

November 05, 2019 12:00 AM

Setup a Github Triggered Build Machine for an Eclipse Project

by Jens v.P. (noreply@blogger.com) at October 29, 2019 12:55 PM

Disclaimer 1: This blog post literally is a "web log", i.e., it is my log about setting up a Jenkins machine with a job that is triggered on a Github pull request. A lot of parts have been described elsewhere, and I link to the sources I used here. I also know that nowadays (e.g., new Eclipse build infrastructure) you usually do that via docker -- but then you need to configure docker, in which

by Jens v.P. (noreply@blogger.com) at October 29, 2019 12:55 PM

LiClipse 6.0.0 released

by Fabio Zadrozny (noreply@blogger.com) at October 25, 2019 06:59 PM

LiClipse 6.0.0 is now out.

The main change is that many dependencies have been updated:

- it's now based on Eclipse 4.13 (2019-09), which is a pretty nice upgrade (in my day-to-day use I find it appears smoother than previous versions, although I know this sounds pretty subjective).

- PyDev was updated to 7.4.0, so, Python 3.8 (which was just released) is now already supported.

Enjoy!

by Fabio Zadrozny (noreply@blogger.com) at October 25, 2019 06:59 PM

Qt World Summit 2019 Berlin – Secrets of Successful Mobile Business Apps

by ekkescorner at October 22, 2019 12:39 PM

Qt World Summit 2019

Meet me at Qt World Summit 2019 in Berlin

QtWS19_globe

I’ll speak about development of mobile business apps with

  • Qt 5.13.1+ (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10

ekkes_session_qtws19

Qt World Summit 2019 Conference App

As a little appetizer I developed a conference app. For how to download it from the Google Play Store or the Apple App Store, plus some more screenshots, see here.

02_sessions_android

sources at GitHub

cu in Berlin


by ekkescorner at October 22, 2019 12:39 PM

A nicer icon for Quick Access / Find Actions

October 20, 2019 12:00 AM

Finally we use a decent icon for Quick Access / Find Actions. This is now a button in the toolbar which allows you to trigger arbitrary commands in the Eclipse IDE.

October 20, 2019 12:00 AM
