September 04, 2023
Eclipse Foundation Publishes Results of Equinox p2 Security Audit
by Jacob Harris at September 04, 2023 06:43 PM
Take the 2023 Jakarta EE Developer Survey
by Jacob Harris at September 04, 2023 06:43 PM
Eclipse Sparkplug Specification Honored with IoT Business Impact Award
by Jacob Harris at September 04, 2023 06:43 PM
BRUSSELS, Belgium – May 23, 2023 – The Eclipse Foundation, one of the world’s largest open source software foundations, announced today that the Eclipse Sparkplug specification has received a 2023 IoT Business Impact Award from IoT Evolution magazine, the leading publication covering IoT technologies.
The award honors organizations for their IoT deployment case studies. Eclipse Sparkplug was recognized for its role in a successful partnership bringing IoT functionality to NGL Energy Partners.
“Sparkplug is well on its way to becoming the de facto standard for making the IIoT ‘plug’n’play,’” said Frederic Desbiens, Eclipse Foundation program manager and evangelist for IoT and Edge Computing. “This award highlights the significant business benefits of implementing this specification in conjunction with MQTT.”
About Sparkplug & MQTT
Sparkplug provides an open and freely available specification for how Edge of Network (EoN) gateways or native MQTT-enabled end devices and MQTT Applications communicate bi-directionally within an MQTT Infrastructure. MQTT is used across a wide spectrum of application use cases and a nearly limitless variety of network topologies.
By design, the MQTT specification does not dictate a Topic Namespace or any payload encoding. However, as the IIoT and other architectures leveraging the publisher/subscriber model are adopted by device OEMs in the industrial sector, having different Topic Namespace and payload encoding can inhibit interoperability for the end customer. To that end, the Sparkplug specification addresses the following components within an MQTT infrastructure:
- Sparkplug defines an OT-centric Topic Namespace
- Sparkplug defines an OT-centric Payload definition optimized for industrial process variables
- Sparkplug defines MQTT Session State management required by real-time OT SCADA systems
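For illustration only (this sketch is not part of the announcement), a Sparkplug B topic follows the pattern spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]. Publishing to such a topic with the Eclipse Paho Java client might look like the following; the broker URL, group and node IDs, and placeholder payload are hypothetical, and a real deployment would encode metrics as a Sparkplug B protobuf payload.
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

public class SparkplugTopicSketch
{
    public static void main(String[] args) throws MqttException
    {
        // Hypothetical broker URL and client ID, for illustration only.
        MqttClient client = new MqttClient("tcp://broker.example.com:1883", "edge-node-1");
        client.connect();

        // Sparkplug B topic namespace: spBv1.0/<group_id>/<message_type>/<edge_node_id>[/<device_id>]
        String topic = "spBv1.0/Plant1/NDATA/edge-node-1";

        // Placeholder payload; real Sparkplug B payloads are protobuf-encoded metric sets.
        byte[] payload = "placeholder-payload".getBytes();
        client.publish(topic, payload, 0 /* QoS */, false /* retained */);

        client.disconnect();
    }
}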
“It is my pleasure to recognize Sparkplug, an innovative solution that earned the Eclipse Foundation the 2023 Business Impact Award,” said Rich Tehrani, CEO, TMC. “I look forward to seeing more successful deployments of best-in-class solutions from the Eclipse Foundation in the future.”
The Sparkplug standard has recently been submitted for acceptance as an ISO/IEC JTC 1 international standard.
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
About Crossfire Media
Crossfire Media, co-publishers of IoT Evolution, is an integrated marketing company with a core focus on future trends in technology. We service communities of interest with conferences, tradeshows, webinars and newsletters. Crossfire Media has a partnership with Technology Marketing Corporation (TMC) to produce events and websites related to disruptive technologies. Crossfire Media is a division of Crossfire Consulting, a full service Information Technology company based in New York.
About TMC
Through education, industry news, live events and social influence, global buyers rely on TMC’s content-driven marketplaces to make purchase decisions and navigate markets. As a result, leading technology vendors turn to TMC for unparalleled branding, thought leadership and lead generation opportunities. Our in-person and online events deliver unmatched visibility and sales prospects for all participants. Through our custom lead generation programs, we provide clients with an ongoing stream of leads that turn into sales opportunities and build databases. Additionally, we bolster brand reputations with the millions of impressions from display advertising on our news sites and newsletters. Making TMC a 360-degree marketing solution, we offer comprehensive event and road show management services and custom content creation with expertly ghost-crafted blogs, press releases, articles and marketing collateral to help with SEO, branding, and overall marketing efforts. For more information about TMC and to learn how we can help you reach your marketing goals, please visit www.tmcnet.com and follow us on Facebook, LinkedIn and Twitter, @tmcnet.
###
Media contacts:
Schwartz Public Relations for the Eclipse Foundation, AISBL
Stephanie Brüls / Susanne Pawlik
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 64 / -35
Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551
514 Media Ltd for the Eclipse Foundation, AISBL (France, Italy, Spain)
Benoit Simoneau
benoit@514-media.com
M: +44 (0) 7891 920 370
TMC Contact:
Michelle Connolly
Senior Marketing Manager
203-852-6800, ext. 170
mconnolly@tmcnet.com
The Eclipse Foundation Joins the Société informatique de France
by Jacob Harris at September 04, 2023 06:43 PM
Paris, France, March 9, 2023 – The Eclipse Foundation today announced that it has joined the Société Informatique de France to collaborate on a shared initiative that aims to bring together everyone for whom advancing computer science is a profession or a passion: teachers, researchers, engineers, industry practitioners, and consultants. The Eclipse Foundation is convinced that this initiative must be grounded in a sound understanding of open source and its practices.
"We are very pleased and proud to join the SIF and become one of its members," said Philippe Krief, director of research relations at the Eclipse Foundation. The Eclipse Foundation has, in fact, been heavily involved in European research projects for the past ten years, supporting them in implementing and sustaining their results as open source projects.
"These many encounters with the European research community have allowed us to better understand both the questions and the preconceptions that researchers may have about open source," Philippe Krief continued. "Our role is therefore to explain open source best practices to our academic and industrial partners and reassure them, to dispel certain clichés, and to support them in developing an open source project, its community, and its long-term sustainability."
"The Société informatique de France is particularly pleased to welcome the Eclipse Foundation among its members. This membership is a visible sign of shared values and of a common ambition for more open science and more citizen-minded innovation. That is precisely the direction in which the Eclipse Foundation works, and how the open source software ecosystem grows a little more every day for the benefit of society as a whole," said Yves Bertrand, president of the SIF.
– End –
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @ResearchEclipse, @EclipseFdn, LinkedIn or visit eclipse.org.
About the Société informatique de France
The Société Informatique de France (SIF) is the French learned society for computer science. Founded in 2012, it has been recognised as a public-interest organisation since September 2018. At the heart of society, its mission is to be the voice of computer science, as both a science and a technology, and of the women and men who advance it every day. As such, it aims to bring together everyone for whom advancing computer science is a profession or a passion: teachers, researchers, engineers, industry practitioners, and consultants. As a learned society, the SIF works in particular to promote computer science, to serve and energise its scientific and technical community, and to contribute to civic culture and to the teaching of the discipline at all levels. It also seeks to take part in discussions and initiatives on the training and employment of computer scientists, and to carry the community's voice in societal debates. To learn more, visit https://www.socinfo.fr.
Eclipse media contact:
514 Media Ltd for the Eclipse Foundation, AISBL (France, Italy, Spain)
Benoit Simoneau
benoit@514-media.com
M: +44 (0) 7891 920 370
Ref: ECF018D
SIF media contacts:
Yves BERTRAND, president@societe-informatique-de-france.fr
+33 (0)643 348 313
Sylvie ALAYRANGUES, Communications Manager, sylvie.alayrangues@societe-informatique-de-france.fr
Help Identify IoT and Edge Computing Trends By Participating in Our Annual Survey
by Jacob Harris at September 04, 2023 06:43 PM
Open Letter to the European Commission on the Cyber Resilience Act
by Jacob Harris at September 04, 2023 06:43 PM
Dear Members of the European Parliament,
Dear Representatives to the Council of the European Union,
We, the undersigned, represent leading governance institutions within the European and global open source software community. We write to express our concern that the greater open source community has been underrepresented during the development of the Cyber Resilience Act (CRA) to date and wish to ensure this is remedied throughout the co-legislative process by lending our support.
Open source software (OSS) represents more than 70% of the software present in products with digital elements in Europe. Yet, our community does not have the benefit of an established relationship with the co-legislators. The software and other technical artefacts we produce make an unprecedented contribution to the technology industry, to Europe’s digital sovereignty, and to the economy at many levels. With the CRA, more than 70% of the software in Europe is about to be regulated without an in-depth consultation.
As acknowledged in the EU’s Open Source Software Strategy 2020-2023, open source software plays a critical role in the digital economy, powering everything from cloud infrastructure to mobile applications to public transportation systems. In Europe alone, we represent about €100 billion in economic impact. It is therefore essential that any legislation that impacts the software industry takes into account the unique needs and perspectives of open source software, as well as our modern methodologies used to create software.
We deeply share the CRA’s aim to improve the cybersecurity of digital products and services in the EU and embrace the urgent need to protect citizens and economies by improving software security.
However, our voices and expertise should be heard and have an opportunity to inform public authorities' decisions. If the CRA is, in fact, implemented as written, it will have a chilling effect on open source software development as a global endeavour, with the net effect of undermining the EU’s own expressed goals for innovation, digital sovereignty, and future prosperity.
Moving forward, we urge you to engage with the open source community and take our concerns into account as you consider the implementation of the Cyber Resilience Act. Specifically, we urge you to:
- Recognise the unique characteristics of open source software and ensure that the Cyber Resilience Act does not unintentionally harm the open source ecosystem.
- Consult with the open source community during the co-legislative process.
- Ensure that any development under the CRA takes into account the diversity of open and transparent open source software development practices.
- Establish a mechanism for ongoing dialogue and collaboration between the European institutions and the open source community, to ensure that future legislation and policy decisions are informed by the community’s expertise.
The undersigned organisations collectively represent the governance of much of the open source software on which industry and society rely. We offer our collective expertise, including ideas for how these professional organisations can support a more inclusive and effective process to inform the CRA today. The same increase in dialogue and collaboration will continue to support the CRA’s successful implementation in this new regulatory paradigm. We are prepared to send a representative delegation to meet with members at any time.
We appreciate your attention to this matter and look forward to working with you to ensure that the Cyber Resilience Act reflects the concerns and contributions of the entire software industry, including the open source community.
Co-signed by the Executive Directors, Board Chairs, and Presidents on behalf of their respective organisations:
Associaçāo de Empresas de Software Open Source Portuguesas (ESOP)
CNLL, the French Open Source Business Association
European Open Source Software Business Associations (APELL)
COSS - Finnish Centre for Open Systems and Solutions
Open Source Business Alliance (OSBA)
The Open VSX Registry, a Vendor-Neutral Open Source Alternative to the Visual Studio Marketplace, Gets its Own Working Group at the Eclipse Foundation
by Jacob Harris at September 04, 2023 06:43 PM
BRUSSELS, Belgium – June 27, 2023 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced the formation of the Open VSX Working Group. The mission of the new working group will be to manage and accelerate adoption of the Open VSX Registry, a vendor-neutral, community-supported alternative to the Microsoft Visual Studio Marketplace.
Built on the Eclipse Open VSX open source project, the Open VSX Registry currently hosts nearly 3,000 extensions from over 1,500 different publishers, with new publishers and extensions being added daily. Since the Open VSX Registry became available in 2021, developers have consumed more than 40M extensions, with downloads now exceeding 2M per month. To manage and facilitate this ongoing growth, management of the Open VSX Registry will now shift from the Eclipse Cloud DevTools Working Group to a new working group, with initial members including Google, Huawei, Posit, Salesforce, Siemens, and STMicroelectronics.
“The Open VSX Registry has experienced significant momentum at the Eclipse Foundation, so much so that it merits having its own working group for continued evolution and growth,” said Mike Milinkovich, executive director of the Eclipse Foundation. “By creating a vendor-neutral home with a true open source model for these extensions, we can ensure that this marketplace is guided by the community, and not just a single vendor.”
The Open VSX Registry delivers on the industry’s need for a fully open source approach to marketplace technologies for Visual Studio (VS) Code extensions. It increases transparency and flexibility for extension users, publishers, and tool developers, particularly those leveraging cloud-based development tools and IDEs that want to avoid being locked into proprietary models and marketplaces.
As an open alternative to the Visual Studio Marketplace, the Open VSX Registry offers free access to extensions that can be used with any technology or tool that supports them. These include many open source solutions like Eclipse Che and Eclipse Theia, as well as Salesforce Code Builder, Google Cloud Workstations, Gitpod, SAP Business Application Studio and other applications based on Eclipse projects.
In addition, since the Eclipse Open VSX code itself is open source, any organization can contribute to the registry code and reuse it to create an internally hosted and managed extension registry for their in-house developers to publish and consume VS Code extensions.
Interested parties can learn more or start using the Open VSX Registry immediately at open-vsx.org.
To learn more about how to get involved with the Open VSX Working Group, visit the Eclipse Foundation membership page or send email to membership.coordination@eclipse-foundation.org.
Quotes from Open VSX Working Group Members
Google
“We are excited to expand our relationship with the Eclipse Foundation and support the creation of the Open VSX Working Group,” said Thomas DeMeo, Director of Developer Tools, Google Cloud. “At Google Cloud, we want to give our customers options when it comes to working with their preferred IDE, and the Open VSX Registry furthers our ability to deliver on that goal. As both strong supporters and originators of many open source efforts, we support the customer choice and vendor neutrality that the Open VSX Registry aims to deliver.”
Huawei
“The mission of the CodeArts team at Huawei Cloud is to build the top-notch tools and services for the Huawei developer ecosystem,” said Yawei Wang, Chief Technologist of Developer Tools at Huawei Cloud. “We are committed to helping developers write and ship code faster and easier and keeping teams productive. Meanwhile, we also offer an open platform on which developers can build and publish their own extensions to suit specific business needs. The Open VSX Registry, a vendor-neutral option for IDE extensions, is essential to fulfill our mission and keep our long-standing commitment to developers.”
Posit
“The mission of Posit is to create open source software for data science, scientific research, and technical communication,” said Tareef Kawaf, President of Posit Software, PBC. “We invest heavily in open source development, education, and the community with the goal of continuing to serve knowledge creators as a 100 year company. We are happy to support the Open VSX Registry’s mission of maintaining a vendor-neutral platform for open source licensed community and professional IDE extensions.”
Salesforce
“Salesforce is excited to join the Open VSX Working Group to support the continued growth of an open ecosystem for Visual Studio Code extensions,” said Dan Fernandez, Vice President, Developer Services at Salesforce. “We also support the Open VSX Registry with built-in access from Salesforce Code Builder, a modern, web-based development environment tailored for Salesforce development. Our official Salesforce extensions are published to open-vsx.org to allow developers to work how they want, where they want.”
Siemens
“We at Siemens are dedicated to consistently improving our automation software offerings to meet the evolving needs of automation software engineers seeking innovation and efficiency,” said Johannes Birkenstock and Jacob Hilsebein, Software Developers at Siemens. “We deeply appreciate the collaborative efforts of our partners and communities, recognising their pivotal role in fostering open and flexible toolchains. Being a part of the Eclipse Foundation's Open VSX Working Group allows us to empower automation software engineers by providing them with the tools they love to use.”
STMicroelectronics
“As the world’s leading supplier of general-purpose microcontrollers, ST continues to focus its embedded tools and software efforts on features and capabilities that help developers innovate more and achieve faster,” said Ricardo De-Sa-Earp, Executive Vice President, General-Purpose Microcontrollers, STMicroelectronics. “We recognize that partners and community contributions are key to building and enhancing the most comprehensive ecosystem around our products and see the Eclipse Open VSX project as a promising initiative to enlarge the possibilities offered to embedded developers even more.“
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations for the Eclipse Foundation, AISBL
Gloria Huppert / Franziska Wenzl
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 70 / -58
Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551
514 Media Ltd for the Eclipse Foundation, AISBL (France, Italy, Spain)
Benoit Simoneau
benoit@514-media.com
M: +44 (0) 7891 920 370
The Adoptium Working Group Reports Significant Momentum for Open Source Java in 2023
by Jacob Harris at September 04, 2023 06:43 PM
BRUSSELS, Belgium – MARCH 14, 2023 – The Eclipse Foundation, one of the world’s largest open source software foundations, in collaboration with the Adoptium Working Group, today announced significant momentum for the global open source Java ecosystem. This momentum comes in the wake of new licensing fee structures being introduced into the industry. In some cases, enterprises that have paid tens of thousands of dollars are now facing fees of millions of dollars for access to Java.
As a result, the uptake of free-to-use, high-quality Java has never been higher. In February 2023, the Eclipse Foundation delivered over 12.3M downloads of Java SE TCK-certified and AQAvit quality-verified Eclipse Temurin binaries, more than double the number delivered in the same month last year. Eclipse Temurin has also become the default Java option for GitHub Actions and multiple widely used cloud container images. As the entire open source Java ecosystem continues to experience a renaissance, millions of developers and large enterprise users are turning to the Adoptium Marketplace for their open Java runtimes. What’s more, the Adoptium Working Group is welcoming new strategic and enterprise members including Bloomberg, Google, and Rivos.
“In a macroeconomic climate where we are all forced to do more with less, options like Eclipse Temurin mean that businesses have choices for free-to-use quality Java runtimes without having to expend additional resources,” said Mike Milinkovich, executive director of the Eclipse Foundation. “With new members joining every month, the quality and consistency of the Adoptium Working Group’s output will only continue to grow to address that pressure.”
The Adoptium Working Group, which was founded by multiple participants, including many Java developers and vendors such as Alibaba Cloud, Azul, Huawei, IBM, iJUG, Microsoft, and Red Hat, provides the Java ecosystem with fully compatible, high-quality distributions of Java based on OpenJDK source code. For enterprises that rely on Java and wish to ensure an open, community-led future for this important open source option, joining the Working Group represents a critical step in taking control of their own technical destiny.
“2023 is shaping up to be an incredibly productive year for the Adoptium Working Group and for the entire open source Java ecosystem,” said Tim Ellison, PMC Lead for Eclipse Adoptium. “With our focus on secure development practices and high quality deliveries there has never been a better time for organizations to choose a free-to-use Java runtime.”
The Eclipse Adoptium project and the governing Adoptium Working Group are the continuation of the original AdoptOpenJDK mission, which was established in 2017 to address the general lack of an open, vendor-neutral, reproducible build and test system for OpenJDK across multiple platforms. Adoptium is now the leading provider of high-quality OpenJDK-based binaries used by Java developers across embedded systems, desktops, traditional servers, modern cloud platforms, and mainframes. The Adoptium Marketplace extends this leadership role and gives even more organizations a means of distributing their binaries.
If your organization is interested in participating in the Adoptium Working Group, you can view the Charter and Participation Agreement, or email us at membership@eclipse.org. You can also participate as a sponsor; interested parties can view the Sponsorship Agreement. Both membership and sponsorship help assure sustainability.
Supporting Quotes from Adoptium Members:
Google
“We are excited to participate in the Adoptium Working Group and contribute to the future of Eclipse Temurin. Google is committed to making Google Cloud a premier cloud for Java developers and workflows, and we believe an open, secure, and high-quality vendor-neutral JDK distribution is a critical component to that.” Dan Gazineu, Engineering Manager, Google Cloud SDK
Azul
“Third-party Java runtimes were already on the rise prior to Oracle’s recent pricing changes, and we expect that trend to accelerate rapidly moving forward,” said Simon Ritter, Deputy CTO, Azul. “Java provides immense value across the DevSecOps lifecycle, but Oracle’s new employee-based pricing is divorced from that value. Customers are increasingly frustrated over what they see as arbitrary pricing changes, audit risks and a lack of overall predictability. Thankfully, drop-in replacements for Oracle abound, and offer a compelling value for Java-based enterprises.”
Red Hat
“As a long term contributor to open source Java SE, Red Hat applauds the sustained growth and uptake of OpenJDK and Temurin throughout modern, business-critical enterprise software solutions,” said Mark Little, VP, Engineering, Red Hat. “Adoptium is a leading example of a community-powered approach to delivering secure, reliable and high-performing open source software. Red Hat’s engagement in the Adoptium Working Group and confidence in the community is reflected in our expanded support offerings that include development and production use cases of Temurin, similar to the award-winning support that comes with the Red Hat build of OpenJDK. We wish Adoptium every continued success.”
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations for the Eclipse Foundation, AISBL
Stephanie Brüls / Susanne Pawlik
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 64 / -35
Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551
The Eclipse Foundation Releases Results of the 2023 Cloud Developer Survey
by Jacob Harris at September 04, 2023 06:43 PM
BRUSSELS – August 29, 2023 – The Eclipse Foundation, one of the world’s largest open source foundations, along with the Eclipse Cloud DevTools Working Group, today announced the availability of the 2023 Cloud Developer Survey Report. This year’s Cloud Developer Survey results are based on an online survey of 534 cloud developers and software professionals conducted from November 21, 2022 to January 13, 2023. The survey’s objective is to gain a better understanding of the cloud-based software development ecosystem by identifying the requirements, priorities, and challenges faced by organizations leveraging a cloud-based development model, including those based on open source technologies.
“Cloud-based software developer tools are experiencing significant momentum as developer teams around the world continue to shift work to cloud native architectures. Our research has shown the majority of these developers not only leverage open source technologies but are increasingly looking to solutions governed by an open source software foundation,” said Mike Milinkovich, executive director of the Eclipse Foundation. “This demonstrates that there is solid traction for open source cloud development tools. Moreover, in an increasingly crowded market, Eclipse Foundation projects like Eclipse Open VSX, Eclipse Theia, and Eclipse Che are making an enormous impact by providing the community-led open source technologies that developers need.”
Survey participants represent a broad set of industries, organizations, and job functions. Some of the top conclusions drawn from the survey data include:
- Open source is attractive to developers, with 74% saying they would like to see their companies invest more into OSS. Developers prefer open source because it allows them to 1) focus on developing features that matter to their organizations; 2) plug into their existing environments; and 3) customize their tools.
- Cloud native applications are becoming increasingly mission-critical. Migration to the cloud continues, with 35% reporting their company’s most important applications are now cloud native. Only 13% of participants say their company has no cloud migration plans for important on-premise applications.
- Developers are increasingly leaning towards open source technologies that are governed by an open source foundation. While developers are not necessarily driving the business decisions, 36% say they would prefer working with projects that are foundation-supported.
- Overall, developers like the tools they use and spend significant time customizing them, but would consider switching to other cloud-based options under the right conditions. Motivators include: a performance boost, tight integration with cloud technologies, ease of setup/configuration, and a high level of security.
- There is a disconnect between open source software consumption and participation. 56% of respondents use open source software, with only 38% being members of open source foundations and 31% contributing to open source projects.
- Developers see opportunities for growth around AI/ML and edge. Developers are generally excited about experimenting with new technologies. Their use of AI/ML is increasing, with much of it happening at the edge.
In addition to these findings, the survey report provides detailed key takeaways and recommendations for cloud developers, employers, and other ecosystem participants. The 2023 Cloud Developer Survey Report is now available to all interested parties and can be downloaded for free here.
To learn more about getting involved with the Eclipse Cloud DevTools Working Group, please visit us at ecdtools.eclipse.org, or email us at membership@eclipse.org. Developers and other interested parties can also join the Cloud DevTools Working Group mailing list to stay informed about working group projects and progress.
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 425 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations for the Eclipse Foundation, AISBL (Germany)
Stephanie Brüls / Susanne Pawlik
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 64 / -35
Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
+1 408-772-1551
514 Media Ltd for the Eclipse Foundation, AISBL (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
The Eclipse Foundation publishes its study about Open Services Cloud
by Jacob Harris at September 04, 2023 06:43 PM
BRUSSELS, Belgium – April 24th, 2023 – The Eclipse Foundation, one of the world’s largest open source software foundations, today published its study about Cloud Interoperability to foster the European Digital Market. The report provides an analysis of the current European cloud services market and offers perspectives:
- To unlock the cloud industry,
- To ease current digital market limitations,
- To simplify the complexity behind managed cloud services usage,
- And to open new opportunities for all: users, developers, and service providers.
As part of this publication, the Eclipse Foundation will be hosting a launch event on April 25th at 1:30 PM CEST in Brussels, open to all parties interested in contributing to an open cloud ecosystem. Registration and more information on this event can be found here: https://events.eclipse.org/2023/unlockthecloud/
“EU participants are grossly underrepresented in today's cloud ecosystem. This new open source industry collaboration will level the playing field and enable Europe to not only fully embrace the cloud, but build its own innovative industry,” said Mike Milinkovich, executive director, the Eclipse Foundation. “As one of the critical ingredients to growth for multiple industries underpinning the EU’s economy, growing the region’s leadership in the cloud ecosystem will be the foundation for the future economic prosperity of Europe.”
Building cloud interoperability with the Open Services Cloud
Having reached nearly €190B in 2022, the European cloud services market is growing rapidly. It drives many of the 14 strategic industrial ecosystems defined in the 2021 EU industrial strategy and is forecasted to grow at an annual rate of 13% over the next decade.
Despite this compelling growth, the European cloud ecosystem is still limited in its ability to expand. To increase balance and boost cloud consumer purchasing power in the European market, the study introduces how the Open Services Cloud platform brings three core innovations to streamline the process of using multiple clouds: a descriptive configuration language, a management portal, and a portable services and data solution.
The Open Services Cloud levels the playing field between cloud users, cloud service providers (CSPs), and independent software vendors (ISVs). Especially beneficial to smaller businesses and start-ups, it opens the door to the emergence of a stronger European cloud services ecosystem.
On April 25th, we will be hosting an in-person event in Brussels where you will learn from industry leaders, discover the results of our new study on cloud interoperability, and find out more about where the EU Data Act and the Digital Markets Act are heading when it comes to European cloud services. Register here to participate.
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations for the Eclipse Foundation, AISBL
Stephanie Brüls / Susanne Pawlik
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 64 / -35
Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551
514 Media Ltd for the Eclipse Foundation, AISBL (France, Italy, Spain)
Benoit Simoneau
benoit@514-media.com
M: +44 (0) 7891 920 370
August 31, 2023
Eclipse JKube 1.14 is now available!
August 31, 2023 04:00 PM
On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.14.0 has been released and is now available from Maven Central.
Thanks to all of you who have contributed with issue reports, pull requests, feedback, and spreading the word with blogs, videos, comments, and so on. We really appreciate your help, keep it up!
What's new?
Without further ado, let's have a look at the most significant updates:
- Gradle 8 support
- Helidon support
- Spring Boot layered jar
- Helm push support for OCI registries
- Many other bug fixes and minor improvements
Gradle 8 support
Gradle 8 wasn't fully supported in previous versions of JKube, as some users have reported. This release fixes the issues and adds full support for Gradle 8.
Gradle 8 brings multiple improvements and new features. You can find more information in the Gradle 8 release announcement. These are some of the most relevant changes:
- Performance Boosts:
- Introduction of a configuration cache to speed up project configuration.
- Enhanced parallelism for task execution without requiring the '--parallel' flag.
- Faster Java compilation with improved incremental compilation.
- Usability Improvements:
- Improved Java toolchain support for specifying the project's JDK and vendor.
- Introduction of Test Suites for simplifying test organization.
- Version catalogs for managing dependencies with better plugin version support.
- Ecosystem Support Upgrades:
- Support for Java 17 through 19, Groovy 4.0, and an updated Scala Zinc version.
Our JKube fix will allow you to take advantage of all these improvements and won't hold you back from upgrading to Gradle 8 🚀.
Helidon support
Helidon is a collection of Java libraries for writing microservices that run on a fast web core powered by Netty. Until now, if you wanted to use JKube with Helidon, you had to provide a complete image configuration.
This JKube release includes a new Helidon Generator and Enricher that will allow you to build and deploy your Helidon application to Kubernetes and OpenShift without any additional configuration.
You will find quickstarts for both Helidon SE and Helidon MP in our main repository.
Spring Boot layered jar
Spring Boot 2.3 introduced a new layered jar format to package your application. Being able to provide different layers when building your image allows you to take advantage of the Docker or Jib layer cache and reduce the image build time.
With this release, JKube will automatically detect if your project is using a layered jar and will configure the image build accordingly to take advantage of the layers defined in your jar. As usual with many of JKube's features, everything will work out of the box without any additional configuration from your side.
Using this release
If your project is based on Maven, you just need to add the Kubernetes Maven plugin or the OpenShift Maven plugin to your plugin dependencies:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<version>1.14.0</version>
</plugin>
If your project is based on Gradle, you just need to add the Kubernetes Gradle plugin or the OpenShift Gradle plugin to your plugin dependencies:
plugins {
id 'org.eclipse.jkube.kubernetes' version '1.14.0'
}
How can you help?
If you're interested in helping out and are a first-time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.
If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag.
We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.
Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

August 30, 2023
MicroStream Debuts Eclipse Store Java Persistence Framework at Eclipse Foundation
by Sirisha Pratha at August 30, 2023 07:00 AM

MicroStream, an open-source Java persistence framework, recently announced the first release of Eclipse Store under the auspices of the Eclipse Foundation. This first release contains two core components from MicroStream, its Serializer and StorageManager, restructured as Eclipse Serializer and Eclipse Store, respectively.
August 26, 2023
My ten-year quest for concise lambda expressions in Java
by Donald Raab at August 26, 2023 05:20 AM
My ten-year quest for concise lambda expressions in Java
A mission to hold off the horde of for loops in Java.
A series of fortunate and unfortunate events
I started learning Java in 1997. I thought initially Java would be a fad and that Smalltalk would emerge as the victor in the battle for object-oriented programmer productivity over C++. I was wrong. In the great object-oriented battles in the 1990s between C++ and Smalltalk, Java emerged as the victor.
Y2K marked the end of my career as a professional Smalltalk developer. I stopped programming in Smalltalk professionally for the promise of a higher-paying career in Java. Over two decades later, this would turn out to be the best decision I ever made in my career. I miss Smalltalk, but it’s never too far away from my keyboard.
My journey with Java has helped me evolve as a software engineer, learner, and teacher. I know things now that if I had continued programming professionally in Smalltalk, I might have never had the opportunity to learn and share with so many other developers. My Smalltalk experience gave me a solid perspective on what Java was missing. This knowledge and experience created a great opportunity for me to become a teacher, and to learn things more deeply as a result. Teaching a Smalltalk way of thinking to thousands of Java developers globally has enabled me to achieve a higher level of appreciation for many things. I have helped many Java developers learn how to elevate their coding style so they can communicate their intent with a more expressive vocabulary. I’ve seen quite a few developers build great things with the skills they have developed. This is everything to me.
In the rest of this blog, I will explain how a dislike for reading and writing for loops became so acute that it led to me creating an open-source collections library (Eclipse Collections) for Java inspired by Smalltalk. This then set me on a quest to find a way to help Java get good syntactical support for concise lambda expressions.
Hold the for
I dislike reading for loops in application code. I see for loops as a sign that a developer failed to learn and use a higher-level abstraction to communicate what they are doing with an intention-revealing name. For loops are a necessity in languages like Java. With the minimal interface design approach of Java Collections, the for loop became the “Hodor” of developer communication. For anyone who hasn’t heard of “Hodor”, he is a character in the “Game of Thrones” series who says only one word: Hodor! I will refer to the style of programming where all iteration patterns are written with for loops as Hofor!
By 2004, everything I was reading in Java code was Hofor!, Hofor!, Hofor!, and… Hofor!. I got paid to read Hofor! every day. I got to write a lot of Hofor! as well. This is pedestrian level programming that I would gladly welcome our new AI overlords to automagically replace with higher level abstractions employing intention revealing names.
The following example shows simple Hofor! style coding in Java. By itself, this code is easy to read and may not seem too bad. But wait for it… it will get worse.
public List<Person> getListOfPeople()
{
List<Person> people = new ArrayList<>();
people.add(new Person("Bob", "Smith"));
people.add(new Person("Sally", "Taylor"));
return people;
}
@Test
public void forLoopFindUniqueLastNames()
{
List<Person> people = this.getListOfPeople();
// Find the unique last names of the people
// Hofor!
Set<String> lastNames = new HashSet<>();
for (Person each : people)
{
lastNames.add(each.lastName());
}
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
}
Using a method with an intention-revealing name can replace the for loop here. Extracting for loops into their own methods became a regular activity for me so I could quickly read code that communicated its intent, instead of requiring me to painstakingly translate the how into the what by reading every line in a for loop.
In the following example, I use an Extract Method refactoring to create a method named getUniqueLastNames.
@Test
public void findUniqueLastNames()
{
List<Person> people = this.getListOfPeople();
Set<String> lastNames = this.getUniqueLastNames(people);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
}
private Set<String> getUniqueLastNames(List<Person> people)
{
// Hofor!
Set<String> lastNames = new HashSet<>();
for (Person each : people)
{
lastNames.add(each.lastName());
}
return lastNames;
}
This might not look too terrible to most Java developers, at least if they were programming before Java 8 was released.
As a former Smalltalk developer, this code made me want to scream. I should not have had to create the method getUniqueLastNames. I should have been able to use a method on List for this common iteration pattern known as collect, map, or transform.
It should have also been trivial to convert a List to a Set, or if efficiency was a concern, transform the elements directly into a Set. Instead, we had to use Hofor!
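As a forward-looking aside that is not part of the original 2004 story, here is a minimal sketch of the missing abstraction, written with the java.util.stream API that eventually arrived in Java 8; it assumes the same Person class used above and an import of java.util.stream.Collectors.
// Transform ("collect"/"map") each Person to a last name,
// then gather the unique results directly into a Set.
Set<String> lastNames = people.stream()
        .map(Person::lastName)
        .collect(Collectors.toSet());
In 2004, of course, nothing like this was available in Java.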
After four years of reading and writing code like this in Java, I was becoming increasingly motivated to do something to fix it. Reading thousands of for loops and extracting methods, knowing there is a missing level of abstraction was too much for a former Smalltalk developer to bear. I saw many former Smalltalk developers abandon Java for Ruby or Groovy or even JavaScript. I decided to take a different path and stick with Java.
I would have to slow down, in order to speed up.
Flotsam and Jetsam
The code example above may not seem bad on its own. So let’s show two more examples that should make the duplication obvious and more painful to read.
In the following example I add a second method named getUniqueFirstNames and the code in the test asserts both the unique first names and last names for the people.
@Test
public void findUniqueFirstAndLastNames()
{
List<Person> people = this.getListOfPeople();
Set<String> firstNames = this.getUniqueFirstNames(people);
Set<String> lastNames = this.getUniqueLastNames(people);
Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
}
private Set<String> getUniqueFirstNames(List<Person> people)
{
// Hofor!
Set<String> firstNames = new HashSet<>();
for (Person each : people)
{
firstNames.add(each.firstName());
}
return firstNames;
}
private Set<String> getUniqueLastNames(List<Person> people)
{
// Hofor!
Set<String> lastNames = new HashSet<>();
for (Person each : people)
{
lastNames.add(each.lastName());
}
return lastNames;
}
You should see some clear structural similarities between getUniqueFirstNames and getUniqueLastNames.
For me once, shame on you. For me twice, shame on me.
There is one more example I will share below, in order to help drive home the point through repetition. I will add a method named getUniqueFullNames.
@Test
public void findUniqueFirstLastAndFullNames()
{
List<Person> people = this.getListOfPeople();
Set<String> firstNames = this.getUniqueFirstNames(people);
Set<String> lastNames = this.getUniqueLastNames(people);
Set<String> fullNames = this.getUniqueFullNames(people);
Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
Assertions.assertEquals(Set.of("Bob Smith", "Sally Taylor"), fullNames);
}
private Set<String> getUniqueFirstNames(List<Person> people)
{
// Hofor!
Set<String> firstNames = new HashSet<>();
for (Person each : people)
{
firstNames.add(each.firstName());
}
return firstNames;
}
private Set<String> getUniqueLastNames(List<Person> people)
{
// Hofor!
Set<String> lastNames = new HashSet<>();
for (Person each : people)
{
lastNames.add(each.lastName());
}
return lastNames;
}
private Set<String> getUniqueFullNames(List<Person> people)
{
// Hofor!
Set<String> lastNames = new HashSet<>();
for (Person each : people)
{
lastNames.add(each.fullName());
}
return lastNames;
}
For me three times, shame on both of us. — with apologies to Stephen King
Hofor! Hofor! Hofor! The structural similarity between these three methods is screaming “Hold the for!”
Help would soon be on its way.
Fate of Sisyphus or Temporary Eye Gouging
By 2004, I had enough of this painful loop code duplication. They say necessity is the mother of invention. I needed to end this for loop code duplication in Java. I viewed it as an existential threat to my sanity as a developer.
I turned to the only tool I had available to address the problem in Java at the time. This tool was Anonymous Inner Classes (AIC). Unfortunately, every time I use an AIC in Java, I feel like I am sticking a sharp instrument in my eye. Strangely, this felt better than having to roll the same ball up a hill like Sisyphus for eternity. Java IDEs with code folding made it less painful to read AICs, so the pain was only in writing the code. That pain was also lessened by IDEs that generated the boilerplate required to implement an Anonymous Inner Class.
I will walk you through the thought process I went through to remove the duplication in the above example. I will create a single method named getUniqueValues which takes the List and a Function. Function will be an interface with a single abstract method named apply, which takes a parameter and returns some result. The getUniqueValues method, which is single-purpose here, will later be converted to use a reusable method named collect, which is part of the Eclipse Collections API.
@Test
public void findUniqueFirstLastAndFullNamesWithAICs()
{
List<Person> people = this.getListOfPeople();
Set<String> firstNames =
this.getUniqueValues(people, new Function<Person, String>() {
public String apply(Person person) {
return person.firstName();
}
});
Set<String> lastNames =
this.getUniqueValues(people, new Function<Person, String>() {
public String apply(Person person) {
return person.lastName();
}
});
Set<String> fullNames =
this.getUniqueValues(people, new Function<Person, String>() {
public String apply(Person person) {
return person.fullName();
}
});
Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
Assertions.assertEquals(Set.of("Bob Smith", "Sally Taylor"), fullNames);
}
private Set<String> getUniqueValues(
List<Person> people,
Function<Person, String> function)
{
// Hofor!
Set<String> values = new HashSet<>();
for (Person each : people)
{
values.add(function.apply(each));
}
return values;
}
By passing a Function in using an AIC, I can alter the one part of the code that was different, which is whether to use firstName, lastName, or fullName.
OMG! How is this even remotely better than the code I wrote before!?!? Why would anyone write code like this! This code is terrible. My eyes are bleeding! Please, make it stop!
Were it not for code like this with AICs that I wrote from 2004–2014 in Java, it’s possible we might not have gotten lambdas in Java as soon as we did. If Java didn’t get lambdas in Java 8 by March 2014, some of us Java developers who were exhausted from writing for loops would instead be programming in Groovy, Scala, Kotlin, C#, JavaScript or some other language that had lambda support built-in from the beginning.
I chose temporary eye gouging over the fate of Sisyphus. This was a choice of higher level abstraction or being stuck with pedestrian programming with for loops forever. Programming is a creative endeavor. Building and using higher-level abstractions to solve problems fuels our creativity as developers. The fact that Anonymous Inner Classes are hideously ugly and temporarily blind the developer who uses them could not hide the beauty and benefit of the higher level abstractions.
It’s been almost ten years since Java 8 was released. My eyes have healed, and the Anonymous Inner Classes are mostly gone, replaced by concise lambda expressions.
Rowing to the end of the known Java world
In 2004, I knew there was a better route to programming productivity. I would just have to row a boat across an entire ocean of developer time in order to get there. The years from 2004–2014 were lonely for me as a Java developer who believed lambdas in Java were an inevitability. I set out to convince others I worked with that Java needed lambdas so badly that it would have to eventually get them. I built a library inside Goldman Sachs called Caramel which was later open-sourced as GS Collections in 2012. I was successful in convincing hundreds of developers inside of Goldman Sachs that it was better to write code using higher level method abstractions and using Anonymous Inner Classes than it was to write tens of thousands of unnecessary for loops. I convinced developers in Goldman Sachs that lambdas would arrive eventually, and using Caramel would prepare them for a new lambda-enabled Java programming language. I also suggested they would likely be able to convert Anonymous Inner Classes to Lambdas through automated refactorings.
The following is an example of what I told developers would eventually be possible with Java with reasonable syntactic support for lambdas.
@Test
public void findUniqueFirstLastAndFullNames()
{
List<Person> people = this.getListOfPeople();
Set<String> firstNames =
this.getUniqueValues(people, each -> each.firstName());
Set<String> lastNames =
this.getUniqueValues(people, each -> each.lastName());
Set<String> fullNames =
this.getUniqueValues(people, each -> each.fullName());
Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
Assertions.assertEquals(Set.of("Bob Smith", "Sally Taylor"), fullNames);
}
private Set<String> getUniqueValues(
List<Person> people,
Function<Person, String> function)
{
// Hofor!
Set<String> values = new HashSet<>();
for (Person each : people)
{
values.add(function.apply(each));
}
return values;
}
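To close the loop on the earlier mention of Eclipse Collections, here is a sketch (not from the original post) of how the same test might read once getUniqueValues is replaced by the library's reusable collect method combined with Java 8 method references; it assumes org.eclipse.collections.api.list.MutableList and org.eclipse.collections.impl.factory.Lists are imported alongside the classes used above.
@Test
public void findUniqueFirstLastAndFullNamesWithEclipseCollections()
{
    MutableList<Person> people = Lists.mutable.withAll(this.getListOfPeople());
    // collect is the reusable transformation method; toSet() keeps only the unique values.
    Set<String> firstNames = people.collect(Person::firstName).toSet();
    Set<String> lastNames = people.collect(Person::lastName).toSet();
    Set<String> fullNames = people.collect(Person::fullName).toSet();
    Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
    Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
    Assertions.assertEquals(Set.of("Bob Smith", "Sally Taylor"), fullNames);
}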
By 2006, I started getting in touch with folks at Sun and then Oracle, pleading my case fervently that we needed to get lambdas into the Java programming language. I watched the first set of Java lambda battles play out between 2006 and 2010 with BGGA, CICE, and FCM. While these battles were happening and no one seemed to be winning, I kept quietly rowing my boat (building the Caramel library), and teaching Java developers about lambdas using the programming language I love — Smalltalk. I worked with several other amazing developers for years at Goldman Sachs building a bigger, better boat that would eventually become Eclipse Collections. I knew that once I reached land with lambdas for Java, folks on the Island of Hofor! would want to be able to leave the island of for loops and experience the power of lambdas with feature-rich collections that could immediately leverage them.
I struggled at first with how to teach developers what using lambdas would be like in Java. Between 2007–2014 we taught around 200 Java developers in Goldman Sachs a week-long class in Smalltalk. The experience using Smalltalk gave them a better appreciation for OO programming and TDD. I couldn’t show these developers how to use lambdas in Java until Java 8 was released in March 2014. Instead, I was able to show them how to use lambdas with rich collections in Smalltalk, and this showed them what would be possible eventually in Java. They were able to see what I saw every day. I saw the potential for land, well, lambdas landing in Java eventually anyway.
This is how I picture the code examples above with my Smalltalk lenses on.
testFindUniqueFirstLastAndFullNames
|people firstNames lastNames fullNames|
people := OrderedCollection
with: (Person new firstName: 'Bob'; lastName: 'Smith')
with: (Person new firstName: 'Sally'; lastName: 'Taylor').
firstNames := (people collect: #firstName) asSet.
lastNames := (people collect: #lastName) asSet.
fullNames := (people collect: #fullName) asSet.
self assert: firstNames equals: (Set with: 'Bob' with: 'Sally').
self assert: lastNames equals: (Set with: 'Smith' with: 'Taylor').
self assert: fullNames equals: (Set with: 'Bob Smith' with: 'Sally Taylor').
This is the syntax for an individual method in Smalltalk. The method name is testFindUniqueFirstLastAndFullNames which I added to a class called PeopleTest. The method has no parameters. Smalltalk is dynamic, so there is no need to specify a return type for the method, or types for any variables. The pipes around |people firstNames lastNames fullNames| create four local variables in Smalltalk. An OrderedCollection in Smalltalk is the equivalent of Java’s List type. I create an OrderedCollection containing two instances of the class Person. The method collect: transforms the Person instances in the OrderedCollection to String in these three cases, using the specified accessors.
I did not actually use any lambdas in the code above, as Pharo Smalltalk made it possible to simply use a Symbol (a unique String marked with a #) instead. This is the closest equivalent in Smalltalk to what we now know as Method References in Java. If I had used lambdas with the collect: method instead, the code above would have looked as follows. Lambdas in Smalltalk are delimited with square brackets with a pipe in the middle: [|]. The text :each to the left of the pipe defines a parameter named each. The code on the right of the pipe is the expression that will be evaluated.
firstNames := (people collect: [:each | each firstName]) asSet.
lastNames := (people collect: [:each | each lastName]) asSet.
fullNames := (people collect: [:each | each fullName]) asSet.
The method asSet converts the OrderedCollection instances that are created by calling collect: to Set instances which will make sure the values are unique.
Medium doesn’t support syntax highlighting for Smalltalk syntax, so I chose the closest compatible language, which is Ruby. It was a nice feeling writing a small bit of Smalltalk for this blog. I used Pharo Smalltalk 11.0 as my Smalltalk IDE to create the Person class and the supporting unit test code above.
Writing simple code like this without for loops makes me very happy. It might surprise you if I told you that Smalltalk doesn’t have for loops. It doesn’t have statements of any kind. Everything in the Smalltalk language is “object message.” Of course, this might also explain why I dislike using for loops so much in Java.
Land Ho(for)!
Around 2011, I joined the JSR 335 Expert Group. Working on the JSR 335 EG was one of the greatest experiences in my career. Our mission was to get a working specification for lambdas into the Java Language. We succeeded. Java 8 was released in March 2014 with support for Lambdas, Method References, Default Methods, and Java Streams. Java finally had decent language syntax support for lambdas! Win!
Almost two years before the successful release of Java 8, in July 2012, I gave a talk at the JVM Language Summit. I shared what I saw as the potential for lambda support in Java to improve the quality and quantity of code that developers had to write. The slides for the talk are still available online here. Slide 24 of the PDF shows the code for the pattern named collect in GS Collections (now Eclipse Collections). The collect pattern in Eclipse Collections is the equivalent of the pattern named map in Java Streams.
If we use Java lambda expressions with the Eclipse Collections collect method to solve the problem above, the code will look as follows:
@Test
public void findUniqueFirstLastAndFullNamesEclipseCollections()
{
MutableList<Person> people = Lists.mutable.with(
new Person("Bob", "Smith"),
new Person("Sally", "Taylor"));
Set<String> firstNames =
people.collect(person -> person.firstName()).toSet();
Set<String> lastNames =
people.collect(person -> person.lastName()).toSet();
Set<String> fullNames =
people.collect(person -> person.fullName()).toSet();
Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
Assertions.assertEquals(Set.of("Bob Smith", "Sally Taylor"), fullNames);
}
As you can see, there is no need for an extra method named getUniqueValues to create a Set based on some Function applied to each element of the collection. No more Hofor! The MutableList interface has the collect method, which can transform elements in the list from one type (Person) to another (String). The toSet method then converts the resulting MutableList to a MutableSet. MutableSet extends java.util.Set.
Eclipse Collections or Java Streams?
The answer to this question is yes. When I first began my quest for lambdas in Java, I didn’t imagine we would get anything quite like Java Streams in Java 8. I wanted eager methods on the collections themselves, and needed lambdas to reduce the verbosity. Eclipse Collections gives developers eager methods on collections, and so much more. Eclipse Collections has great integration with Java Collection Framework types and provides support for Java Streams, so there is no need to make an either/or decision. You can use Eclipse Collections and Java Streams together. They are complementary.
The following shows how to use an Eclipse Collections MutableList with Java Streams to solve the example problem. I am going to use method references here instead of lambdas to make the code even less verbose. I have a “method reference preference” in Java. While I wanted lambdas so much in Java, method references would be an extra gift in Java 8 that I now can’t imagine programming in Java without.
public static MutableList<Person> getMutableListOfPeople()
{
return Lists.mutable.with(
new Person("Bob", "Smith"),
new Person("Sally", "Taylor"));
}
@Test
public void findUniqueFirstLastAndFullNamesJavaStream()
{
MutableList<Person> people = this.getMutableListOfPeople();
Set<String> firstNames =
people.stream()
.map(Person::firstName)
.collect(Collectors.toSet());
Set<String> lastNames =
people.stream()
.map(Person::lastName)
.collect(Collectors.toSet());
Set<String> fullNames =
people.stream()
.map(Person::fullName)
.collect(Collectors.toSet());
Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
Assertions.assertEquals(Set.of("Bob Smith", "Sally Taylor"), fullNames);
}
This code is slightly more verbose than using the eager collect method directly on the Eclipse Collections MutableList. Eclipse Collections also has custom support for lazy iteration using its own asLazy method which returns a LazyIterable type.
The following shows how to use Eclipse Collections with asLazy to solve the same problem.
@Test
public void findUniqueFirstLastAndFullNamesAsLazy()
{
MutableList<Person> people = this.getMutableListOfPeople();
LazyIterable<Person> lazyPeople = people.asLazy();
Set<String> firstNames =
lazyPeople.collect(Person::firstName).toSet();
Set<String> lastNames =
lazyPeople.collect(Person::lastName).toSet();
Set<String> fullNames =
lazyPeople.collect(Person::fullName).toSet();
Assertions.assertEquals(Set.of("Bob", "Sally"), firstNames);
Assertions.assertEquals(Set.of("Smith", "Taylor"), lastNames);
Assertions.assertEquals(Set.of("Bob Smith", "Sally Taylor"), fullNames);
}
There are some subtle differences here between Java Stream and Eclipse Collections LazyIterable. A Java Stream can only be used once. A LazyIterable can be used as many times as you need. The other difference is that the Eclipse Collections API has many converter methods available directly on LazyIterable (toList, toSet, toBag, toMap), whereas a Java Stream has to rely on the collect method used with Collector instances for everything other than toList, which was added to Stream in JDK 16.
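To make the single-use limitation concrete, here is a minimal sketch (reusing the getMutableListOfPeople helper from above; the test name and variable names are my own) showing a Stream that can only be consumed once next to a LazyIterable that can be reused:
@Test
public void streamIsSingleUseButLazyIterableIsReusable()
{
    MutableList<Person> people = this.getMutableListOfPeople();
    Stream<String> stream = people.stream().map(Person::firstName);
    Set<String> fromStream = stream.collect(Collectors.toSet());
    // A second terminal operation on the same Stream would throw
    // IllegalStateException: stream has already been operated upon or closed
    LazyIterable<String> lazyNames = people.asLazy().collect(Person::firstName);
    Set<String> once = lazyNames.toSet();
    Set<String> again = lazyNames.toSet();
    Assertions.assertEquals(Set.of("Bob", "Sally"), fromStream);
    Assertions.assertEquals(once, again);
}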
The point I would stress is that you can use Eclipse Collections types and Java Streams very well together.
After ten years of lambdas in Java
I took a big gamble in 2004 building a collections library in Java that needed lambdas, ten years before Java would have them. I couldn’t wait for the battles over lambda syntax to end before fighting what I saw as the real battle: teaching Java developers how to code using higher levels of abstraction with rich collection interfaces that leverage lambdas. I had to get used to the temporary eye gouging with Anonymous Inner Classes. I guess ten years in programming language time can be considered temporary in a venerable language like Java, even if at the time it felt like an eternity. I loved applying automated refactorings to replace Anonymous Inner Classes with lambdas and method references on entire code bases after Java 8 was released. I gave many Java developers inside of Goldman Sachs an amazing head start on learning and using lambdas effectively in Java 8 when they became available. If any of the Java developers I worked with in Goldman Sachs whom I taught Caramel, GS Collections, or Eclipse Collections are reading this blog, I would like to say two things.
You are very welcome! Thank you for believing!
I haven’t worked at Goldman Sachs for several years now. Six years ago I created an open source Java Lambda Kata at my current employer, to help developers I worked with learn how lambdas work in Java and how to use them effectively. I was told yesterday by a developer I work with who has been out of university for two years that this same Lambda Kata was the first time she had experienced using lambdas in Java. This surprised me. I had kind of expected lambdas to have become part of the Java curriculum in universities by now. It seems Hofor! is still being taught in university.
So here I am, Java lambdas in hand with arguably the best Java collections framework available (Eclipse Collections) still fighting the battle of Hofor! with new developers. I am ok with this. I enjoy teaching developers about using lambdas with Java and Eclipse Collections. There is a growing arsenal of Eclipse Collections Code Katas that leverage lambdas that I can use to arm developers with amazing productivity tools and to defend against Hofor!
Thank you for reading my story. It only took me two decades to tell it. I hope you found it interesting and informational. I wonder what the next decade will bring. More Java with lambdas and Eclipse Collections most likely.
Best of luck with Hofor! in your Java coding adventures!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
My ten year quest for concise lambda expressions in Java was originally published in Better Programming on Medium, where people are continuing the conversation by highlighting and responding to this story.
August 25, 2023
Bazel Eclipse Feature is here!
by Gunnar Wagenknecht at August 25, 2023 09:16 AM
Are you using Bazel as a build system and doing Java development? Do you want to give the Eclipse plug-in for Bazel a try?
I’ve spent the last couple of weeks reworking most of the plug-in to better support Bazel development. Its approach isn’t very different from the IntelliJ plug-in, i.e. it runs Bazel query and performs Bazel builds with aspects to obtain project and classpath information from Bazel. With that information it creates projects in Eclipse. It also uses project views (.bazelproject) to configure what’s visible in Eclipse.
However, there are a few differences compared to the Bazel IntelliJ plug-in, which I’d like to highlight.
- It uses Eclipse projects to map targets so the classpath is really scoped to the classpath of the targets.
- It allows for some flexibility in how targets are mapped to projects – you can have one project per target or one project per package.
- Because Eclipse auto-builds individual Java source files, HotSwap just works and is fast.
Documentation is here. The update site is https://opensource.salesforce.com/bazel-eclipse/latest/.
Oh and there is a preview release of a VS Code extension using the same feature to setup the Java Language Server in VS Code.
August 22, 2023
Eclipse Cloud DevTools Contributor Award: STMicroelectronics for TypeScript-based GLSP Servers
by John Kellerman at August 22, 2023 03:42 PM
The Eclipse Cloud Developer Tools contributor award for August goes to STMicroelectronics for initiating the support for TypeScript-based GLSP server implementations. This contribution marked an important starting point for a now very popular and active sub-component of GLSP: the Typescript-based GLSP server framework. This new framework not only makes it easy to implement diagrams using GLSP, entirely in TypeScript, but also enables a homogeneous developer experience with the same tooling and programming language used throughout the entire diagram editor project.
The Eclipse Graphical Language Server Platform (GLSP) is a framework for efficiently building web-based diagram editors that are easy to embed into Eclipse Theia, VS Code, Eclipse desktop or even an arbitrary web page. Please visit the Eclipse GLSP website to learn more.
GLSP provides a flexible diagram canvas for displaying diagram editors in the browser. GLSP diagram editors can connect to a server component, which handles the underlying business logic, such as data management, validation, etc. As this communication is defined in a protocol, you can actually implement a GLSP server in the language of your choice. With the new dedicated GLSP server framework for TypeScript, however, writing GLSP diagram servers in TypeScript got a whole lot easier. Read more about the TypeScript support for implementing GLSP servers.
The success of open source projects such as Eclipse GLSP always depends on many people from various organizations. However, single contributions such as the one from STMicroelectronics extend the capabilities of technologies such as GLSP and make it more attractive to a new group of adopters. New adopters then contribute to making the project even better. Congratulations to STMicroelectronics and thank you for the contribution.
The Cloud DevTools Working Group provides a vendor-neutral ecosystem of open-source projects focused on defining, implementing and promoting best-in-class web and cloud-based development tools. It is hosted at the Eclipse Foundation, current members of the group include AMD, Arm, EclipseSource, Ericsson, Obeo, RedHat, Renesas, STMicroelectronics and TypeFox.
This Eclipse Cloud DevTools contributor award is sponsored by EclipseSource, providing consulting and implementation services for web-based tools, Eclipse GLSP, Eclipse Theia, and VS Code.
August 21, 2023
Embedded SIG Evolves to CDT Cloud Project
by John Kellerman at August 21, 2023 04:39 PM
The Embedded Special Interest Group (SIG) hosted as part of the Eclipse Cloud DevTools working group has now evolved into the CDT Cloud project. This reflects the growth and continuously high level of activity in the group, which has outgrown its original governance structure and matured into several active open source initiatives under the CDT Cloud umbrella.
The Embedded SIG has been an open collaboration of embedded vendors and service providers, with the goal of strengthening the open source ecosystem for building web- and cloud-based tools used for C/C++ development and embedded development. Current members of the group include Arm, EclipseSource, Ericsson, Renesas, STMicroelectronics, and AMD. You can learn more about the various CDT Cloud technical initiatives in this blog post about the Embedded SIG. One of the major achievements of the SIG was aligning and channeling several open source initiatives into the CDT Cloud project.
The Embedded SIG has worked on a number of successful initiatives, including a memory inspector, Trace Compass Cloud, and the CDT GDB DAP Adapter. Probably the most important achievement was the creation of the CDT Cloud project itself. CDT Cloud is an umbrella project hosting open source components for building custom web-based C/C++ tools, such as Trace Compass Cloud and the CDT Cloud Amalgamator. With CDT Cloud Blueprint, the project provides a ready-to-be-used template tool (see screenshot below). Please visit the CDT Cloud website for more information about the sub-components and initiatives of the CDT Cloud project.

A great big thanks to Rob Moran, who has been the chair of the Embedded SIG for several years and was largely responsible for its success. Rob has consequently also taken a leading role in the CDT Cloud project.
If you are interested in joining the ecosystem or learning more about it, there is an open monthly call to meet the community of CDT Cloud!
Help Us Test the New Version of the Eclipse Marketplace Website
August 21, 2023 03:20 PM
The Eclipse Marketplace, your platform for discovering and installing Eclipse IDE extensions, is taking a big step forward. With Drupal 7 approaching its end-of-life in January 2025, we must move towards a more supported and robust version of Drupal.
As we prepare to go live with this new version, we’re turning to you, our valued community, to assist us with the testing. Our goal is to ensure that the new Drupal 9 implementation of the Eclipse Marketplace website delivers the quality and functionality we have all come to rely on. While this new implementation does include some functional updates and other enhancements, it is largely a migration of the existing marketplace functionality.
You can access the staging instance at https://marketplace-staging-d9.eclipse.org.
Areas Up for Testing:
- Marketplace Client (MPC) in Eclipse IDE: Our new API implementation and the revamped favorite management system are up for review. For detailed steps on how to configure your Eclipse IDE to query our staging server, please refer to the Marketplace REST API Documentation.
- Editing and Content Management Features: For extension maintainers, we’ve introduced a mobile-responsive theme and made several content management improvements.
- Search Performance and Accuracy: It’s vital to ensure the revamped search mechanism operates with efficiency and accuracy.
- General Website Features: Explore the site and confirm all the familiar features are functioning seamlessly.
How Should I Submit Feedback?
Please submit your feedback via a new Gitlab issue. When sharing your experiences, include pertinent details such as the current behavior, the desired outcome, the affected URL, any specific search queries alongside expected results, and/or any other information that may be relevant.
For complete guidance, refer to the dedicated issue we’ve set up for this testing phase.
Facing a Glitch?
If you hit any roadblocks accessing our staging server or configuring MPC, feel free to drop a comment on our issue dedicated to this testing phase.
Your contribution ensures that our community continues to have the best tools at its disposal.
Let’s make the Eclipse Marketplace a robust and seamless experience for everyone. Dive in, test, and share your feedback. Your insights can make all the difference.
August 15, 2023
Xtext job
by Andrey Loskutov (noreply@blogger.com) at August 15, 2023 02:16 PM
Do you want to have...
- a very challenging and never boring job on an extremely complex piece of software?
- an agile and intercultural working environment in Germany?
- work for a real high-tech company (we build semiconductor test hardware to test the chips of tomorrow for almost all chip companies in the world)?
- a top notch RHEL workstation with 256 GB RAM, SSD and 16 core Xeon?
- a recent Thinkpad of your choice?
- possibility to work 50% or more in home office?
- not only good salary but also other benefits?
We (Advantest Europe GmbH) are hiring! We are the leader in the semiconductor testing industry and also among the top 10 employers in the IT industry in Germany (see our Kununu profile).
I have one open job position for an Eclipse/Xtext developer in my team in our main office in Böblingen (and of course we have many other open positions).
The main job focus is Xtext support in house, in the context of the very complex, Eclipse-based IDE used as the front end for the semiconductor tester. Other responsibilities will include helping with Xtext project maintenance in general (bug fixes, releng jobs, etc.).
- You should have proven Xtext development experience or comparable experience in language engineering and language generation frameworks
- You should be able to mentor other engineers in all Xtext related areas
- You should be able to express yourself very well (both spoken and written) in English or German
- You should have fun reading thread dumps and debugging unknown code
- You should have both a very good computer science education and solid core Java knowledge
- Ideally, you should have experience with open source project development
We speak English (the main job language), German, Java, and a few other languages here.
August 05, 2023
Bonus Slides from QCon NY 2023
by Donald Raab at August 05, 2023 06:12 PM
The slides that didn’t make the 50 minute time limit for our talk.
No time? No problem.
While working on a performance talk for QCon New York, my co-speaker Rustam Mehmandarov and I had more material than we had time for during our presentation. Our solution was simple. Don’t delete the slides. Move them to the Appendix.
The slides are available as AsciiDoc in this GitHub repo. The talk was about memory-efficiency, and the Appendix contains some more examples folks might find interesting.
I also wrote a prequel blog for the talk, which goes into much more detail about the historical context for the talk. The following is the link to the prequel blog titled “Sweating the small stuff in Java.”
Sweating the small stuff in Java
Writing the prequel blog saved about 15 minutes from the talk.
Does anyone ever look at the Appendix?
I know I do occasionally. Here’s the Appendix for our talk. You will find some links to resources on the first page, but there is more. The following sections of the blog will show the slides as they would appear in IntelliJ which is what we used along with AsciiDoc in the live presentation.
Appendix 0 — Resources
The first page has some useful links to resources we used or referenced in the talk.
GitHub Repos
- Eclipse Collections (creator: Donald Raab)
- DataFrame-EC (creator: Vladimir Zakharov)
- Jackson Dataformat CSV (creator: @cowtowncoder)
- Jackson Datatypes Collections
Kata Repos
Articles
We had referenced the Java Object Layout tool earlier in our talk, which is the tool we used for measuring memory footprints. Here’s a link to the slide with the references to JOL that will help explain how we came up with some of the example slides that follow. The following image shows the slide as it appeared in our talk.

Appendix 1 — Boxed vs. Primitive Lists
We didn’t have time to show every memory cost comparison that we did during the talk, so here’s the one where we compared a java.util.ArrayList of Integer with an IntArrayList. Each List contains the integer values 1 through 10.

Note, the extra cost here of 160 bytes for ArrayList is due to the boxing of int values as Integer instances.
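For readers who want to reproduce this kind of measurement themselves, here is a rough sketch using JOL’s GraphLayout (assuming the jol-core and Eclipse Collections dependencies are on the classpath). The exact byte counts will vary with the JVM version and settings such as compressed oops, so treat the numbers on the slide as representative rather than universal.
import org.eclipse.collections.impl.list.mutable.primitive.IntArrayList;
import org.openjdk.jol.info.GraphLayout;
import java.util.ArrayList;
import java.util.List;
public class ListFootprintComparison
{
    public static void main(String[] args)
    {
        List<Integer> boxed = new ArrayList<>();
        IntArrayList primitive = new IntArrayList();
        for (int i = 1; i <= 10; i++)
        {
            boxed.add(i);       // autoboxing creates or looks up Integer instances
            primitive.add(i);   // stored directly in an int[]
        }
        // GraphLayout walks the reachable object graph and sums the sizes of all objects
        System.out.println("ArrayList<Integer>: " + GraphLayout.parseInstance(boxed).totalSize() + " bytes");
        System.out.println("IntArrayList:       " + GraphLayout.parseInstance(primitive).totalSize() + " bytes");
    }
}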
Appendix 2 — Mutable vs. Immutable Lists
The JDK provides both Mutable and Immutable List implementations now. They both implement the List interface. Most folks won’t realize that the Immutable List implementations are more memory efficient than their Mutable counterparts. This is because they are trimmed-to-size since they don’t change. There are ImmutableCollections$ListN and ImmutableCollections$List12 implementations. The latter should be read as ListOneTwo, not ListTwelve, which is how I read it when I first saw the class. This class contains either one or two elements.
In this example, we created a List with two Integer instances. The first class we used is ArrayList, and then we copied the ArrayList into an immutable List using List.copyOf().

The boxing cost is the same between the Mutable and Immutable List implementations in the JDK, but the List12 instance does not have a default sized array of size 10 like the ArrayList.
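As a small illustration of the trimming, the sketch below builds the same two-element List both ways; note that the inner class name printed at the end is a JDK implementation detail and could change between releases.
List<Integer> mutable = new ArrayList<>();   // backing array grows to the default capacity of 10
mutable.add(1);
mutable.add(2);
List<Integer> immutable = List.copyOf(mutable);   // the copy holds exactly two elements
// Prints java.util.ImmutableCollections$List12 on current JDKs
System.out.println(immutable.getClass().getName());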
Appendix 3 — Boxed vs. Primitive Map of Long → Set of Long
I was asked on Twitter if there was a more efficient way of creating a Map of Long to Set of Long for 200,000 Long keys using Eclipse Collections. The short answer is yes, as long as you don’t box the Long values.

24 bytes for each Long object. These can add up quickly depending on your use cases. Don’t box!
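The slide doesn’t show the code, but the general shape of a fully primitive solution in Eclipse Collections looks roughly like the sketch below, which keeps both the keys and the set elements as primitive longs. The factory and method names here (LongObjectMaps, LongSets, getIfAbsentPut) reflect my reading of the Eclipse Collections API, not the code from the talk.
MutableLongObjectMap<MutableLongSet> map = LongObjectMaps.mutable.empty();
for (long key = 0; key < 200_000; key++)
{
    // getIfAbsentPut creates the value set lazily the first time a key is seen
    map.getIfAbsentPut(key, LongSets.mutable::empty).add(key * 2);
}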
Appendix 4 — Caching vs. Pooling
We discussed pooling in our talk, and described some of the pools built into the JDK like String.intern() and the boxed Number pools available through valueOf methods on the integral value types Byte, Short, Integer, and Long. Caching is subtly different in that lookups for an object are usually provided via some index. Pooling provides uniquing, and lookup is based on the instance you are looking for.

Country is implemented as a record, and we keep a cache of Country instances indexed by the country name in a Map.
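A minimal sketch of the distinction, using a hypothetical Country record with field names of my own choosing:
record Country(String name, String isoCode) {}
// Caching: lookup goes through an index (the country name); values are created on demand
Map<String, Country> cache = new HashMap<>();
Country sweden = cache.computeIfAbsent("Sweden", name -> new Country(name, "SE"));
// Pooling: uniquing is based on the instance itself
String pooled = new String("Sweden").intern();   // returns the canonical instance from the String pool
Integer small = Integer.valueOf(127);            // values in [-128, 127] come from the Integer cache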

Appendix 5 — Scaling Conferences x50
In the talk, we covered an example that scaled from 1 million Conference instances to 25 million. A few days before the talk, we tried it again with 50 million and 100 million instances, with the memory tuning done for one of the four row-based solutions (Eclipse Collections ImmutableList). The attempt to load 100 million instances failed with an OutOfMemoryError. I did not have time to research the cause of the OutOfMemoryError and see if it was fixable.
Here is the slide with 50 million instances of Conference.

The intent here is to show how scaling impacts total memory savings. By manually tuning one of the row based solutions with a savings of 16 bytes per Conference, we were able to save over 800MB of memory. If you target the multipliers in your data, even small memory savings can become significant.
Thank you and Enjoy!
Rustam and I had a blast presenting at QCon New York this year, and wanted to thank the conference organizers, our track host Neha Sardana, and everyone who attended our talk! I hope you enjoy the bonus slides I shared here that didn’t make the cut for the talk.
Thank you for reading, and Happy Father’s Day!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
August 01, 2023
Rich, Lazy, Mutable, and Immutable Interfaces in Eclipse Collections
by Donald Raab at August 01, 2023 07:30 PM
Learn about Java collection interfaces with intention revealing names.

A quick guide to reading this blog
I wrote this blog to help explain the design of the Eclipse Collections interface hierarchies to folks who are new to the library and might find the large number of interfaces disorienting to learn by browsing through code and/or Javadoc. This blog should also be a good reference for folks that understand the design of Eclipse Collections, but occasionally want a quick link to find, validate, or explain some things.
The rest of this blog will explain the relationship between Rich, Lazy, Mutable, and Immutable types in Eclipse Collections. The end of this blog has a reference section with a list of all the unique methods that show why Eclipse Collections is a “rich” collections library for Java. Click the link to skip there if that is what you are looking for.
Enjoy!
Everything (almost) extends Iterable
All object containers in Eclipse Collections, with the exception of Multimap types, extend java.lang.Iterable. This design decision makes it possible to use any Eclipse Collections interface with a Java 5 style for each loop. This provides the most basic and flexible form of external iteration you expect from a Java Iterable.
All container types in Eclipse Collections, including Multimap types, favor providing internal iterators like forEach that take a Functional Interface as a parameter that can be expressed as a lambda or method reference. It is not necessary to extend java.lang.Iterable to support internal iterators. As of Java 8, even java.lang.Iterable added a default forEach method to make internal iterators possible on JDK types.
Eclipse Collections has an interface named InternalIterable that inherits directly from java.lang.Iterable. The purpose of InternalIterable is to provide internal iterators like forEach to all Eclipse Collections types. There are also two other methods defined on InternalIterable named forEachWith and forEachWithIndex.
The forEachWithIndex method is marked as deprecated because, with unordered collections, an index is not meaningful and winds up being just an iteration counter instead. If you need an iteration counter over an unordered type, then forEachWithIndex (while poorly named) might still be useful to you.
The following code shows how a for loop, forEach and forEachWith method can be used with any InternalIterable in Eclipse Collections.
@Test
public void forLoopVsForEachVsForEachWith()
{
InternalIterable<String> iterable = Lists.immutable.with("a", "b", "c");
// Java 5 for each loop
for (String each : iterable)
{
this.output(each, String::toUpperCase);
}
// forEach with lambda
iterable.forEach(each -> this.output(each, String::toUpperCase));
// forEachWith with method reference
iterable.forEachWith(this::output, String::toUpperCase);
}
public void output(String each, Function<String, String> function)
{
System.out.print(function.valueOf(each));
}
// Output:
// ABCABCABC
The methods forEach, forEachWith, forEachWithIndex are occasionally useful. There are many more useful methods on the RichIterable interface and its child interfaces LazyIterable, MutableCollection, and ImmutableCollection.
The Child and Grandchildren of InternalIterable
The four “rich” types in Eclipse Collections that extend InternalIterable are RichIterable, LazyIterable, MutableCollection and ImmutableCollection. The hierarchy of these types is shown in the following class diagram.

The interfaces in the preceding diagram communicate their capabilities through the prefix in their names — Rich, Lazy, Mutable, and Immutable. Developers can leverage these names to communicate their intent to developers who use their APIs.
- Rich — Read-only, serial. Lazy or eager behavior (up to implementation).
- Lazy — Read-only, serial, lazy behavior for non-terminal methods.
- Mutable — Serial, eager, with mutating methods for growth
- Immutable — Serial, eager, with non-mutating methods for growth
What does the “Rich” in RichIterable mean?
The design philosophy behind RichIterable favored defining useful internal iteration patterns that were seen in production code directly on the interface. There are many useful iteration patterns we saw in production code over the years, so we named them as best we could, and added them as features to the RichIterable interface. We thought the plethora of methods made this a feature-rich interface. This is how we arrived at the name RichIterable for the parent interface for most of the types in Eclipse Collections. RichIterable is an Iterable that is feature-rich.
RichIterable provides many methods that iterate over a container or view and perform some useful algorithms. RichIterable is a serial and read-only interface, so it has no methods that mutate the underlying container. RichIterable can be useful as both a parameter and return type. Methods that employ RichIterable can either accept or return LazyIterable, MutableCollection, or ImmutableCollection types, while only exposing the read-only API. There is no guarantee that methods on RichIterable are lazy or eager, or that the types are mutable or immutable. That decision is left up to the implementation types. If you want something to explicitly be mutable or immutable, then it is better to use the more explicit named types with these prefixes.
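For example, a method that only needs read-only iteration can declare a RichIterable parameter and accept mutable, immutable, and lazy arguments alike. Here is a small sketch with a hypothetical describe helper:
private String describe(RichIterable<String> iterable)
{
    return iterable.makeString("[", ", ", "]");
}
@Test
public void richIterableAsParameterType()
{
    Assertions.assertEquals("[a, b]", this.describe(Lists.mutable.with("a", "b")));
    Assertions.assertEquals("[a, b]", this.describe(Lists.immutable.with("a", "b")));
    Assertions.assertEquals("[a, b]", this.describe(Lists.mutable.with("a", "b").asLazy()));
}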
How many methods make a feature-rich interface?
I don’t know if there is a minimum method count for an interface to qualify as feature-rich. I counted 176 methods defined in RichIterable that were not defined in InternalIterable or Iterable. I believe this method count qualifies.
One Hundred Seventy Six methods = Most certainly feature-rich
Click this link or scroll to the bottom of the blog to see the list of unique methods with return types defined on RichIterable<T>. I moved this list to the bottom of the blog because it has 130 unique methods. That is a lot to scroll past to continue reading. I do encourage you to check out the list and see what kind of useful methods are included in RichIterable.
The following sections will show the methods in RichIterable that have covariant overrides in LazyIterable, MutableCollection, and ImmutableCollection.
LazyIterable
LazyIterable is similar to RichIterable in that it is serial, read-only and inherits all of the same methods. All of the methods that return an Iterable type in RichIterable have covariant overrides in LazyIterable that return LazyIterable. All the non-terminal methods on LazyIterable are lazy. LazyIterable is similar to java.util.stream.Stream in that it supports lazy iteration patterns, but it is different in that a LazyIterable instance can be reused safely for multiple operations without throwing an “exhausted” RuntimeException.
Covariant Overrides
The following methods are overridden from RichIterable on LazyIterable and use covariant returns. What this means is that the return types are more specific in LazyIterable than the return types in RichIterable. The return types must be a subtype of the return type defined in RichIterable.
- chunk ➡️ LazyIterable<RichIterable<T>>
- collect ➡️ LazyIterable<V>
- collectBoolean ➡️ LazyBooleanIterable
- collectByte ➡️ LazyByteIterable
- collectChar ➡️ LazyCharIterable
- collectDouble ➡️ LazyDoubleIterable
- collectFloat ➡️ LazyFloatIterable
- collectIf ➡️ LazyIterable<V>
- collectInt ➡️ LazyIntIterable
- collectLong ➡️ LazyLongIterable
- collectShort ➡️ LazyShortIterable
- collectWith ➡️ LazyIterable<V>
- flatCollect ➡️ LazyIterable<V>
- flatCollectWith ➡️ LazyIterable<V>
- reject ➡️ LazyIterable<T>
- rejectWith ➡️ LazyIterable<T>
- select ➡️ LazyIterable<T>
- selectInstancesOf ➡️ LazyIterable<S>
- selectWith ➡️ LazyIterable<T>
- tap ➡️ LazyIterable<T>
MutableCollection
MutableCollection inherits all of the methods of RichIterable, and it is mutable as the prefix implies. Methods that mutate the underlying container such as add and remove are provided. All of the methods that return an Iterable type have covariant overrides in MutableCollection that return MutableCollection. All methods on MutableCollection, with the exception of asLazy, are serial and eager. Eager iteration patterns are very easy to understand, as they most closely resemble the code a developer would write by hand implementing an iteration pattern using a for loop.
Covariant Overrides
The following methods are overridden from RichIterable on MutableCollection and use covariant returns. What this means is that the return types are more specific in MutableCollection than the return types in RichIterable. The return types must be a subtype of the return type defined in RichIterable.
- aggregateBy ➡️ MutableMap<K, V>
- aggregateInPlaceBy ➡️ MutableMap<K, V>
- collect ➡️ MutableCollection<V>
- collectBoolean ➡️ MutableBooleanCollection
- collectByte ➡️ MutableByteCollection
- collectChar ➡️ MutableCharCollection
- collectDouble ➡️ MutableDoubleCollection
- collectFloat ➡️ MutableFloatCollection
- collectIf ➡️ MutableCollection<V>
- collectInt ➡️ MutableIntCollection
- collectLong ➡️ MutableLongCollection
- collectShort ➡️ MutableShortCollection
- collectWith ➡️ MutableCollection<V>
- countBy ➡️ MutableBag<V>
- countByEach ➡️ MutableBag<V>
- countByWith ➡️ MutableBag<V>
- flatCollect ➡️ MutableCollection<V>
- flatCollectWith ➡️ MutableCollection<V>
- groupBy ➡️ MutableMultimap<V, T>
- groupByEach ➡️ MutableMultimap<V, T>
- groupByUniqueKey ➡️ MutableMap<V, T>
- partition ➡️ PartitionMutableCollection<T>
- partitionWith ➡️ PartitionMutableCollection<T>
- reject ➡️ MutableCollection<T>
- rejectWith ➡️ MutableCollection<T>
- select ➡️ MutableCollection<T>
- selectInstancesOf ➡️ MutableCollection<S>
- selectWith ➡️ MutableCollection<T>
- sumByDouble ➡️ MutableObjectDoubleMap<V>
- sumByFloat ➡️ MutableObjectDoubleMap<V>
- sumByInt ➡️ MutableObjectLongMap<V>
- sumByLong ➡️ MutableObjectLongMap<V>
- tap ➡️ MutableCollection<T>
ImmutableCollection
ImmutableCollection is similar to RichIterable in that it is read-only and inherits all of the same methods. All of the methods that return an Iterable type have covariant overrides in ImmutableCollection that return ImmutableCollection. All methods on ImmutableCollection, with the exception of asLazy, are serial and eager. Eager iteration patterns are very easy to understand, as they most closely resemble the code a developer would write by hand implementing an iteration pattern using a for loop.
Covariant Overrides
The following methods are overridden from RichIterable on ImmutableCollection and use covariant returns. What this means is that the return types are more specific in ImmutableCollection than the return types in RichIterable. The return types must be a subtype of the return type defined in RichIterable.
- aggregateBy ➡️ ImmutableMap<K, V>
- aggregateInPlaceBy ➡️ ImmutableMap<K, V>
- collect ➡️ ImmutableCollection<V>
- collectBoolean ➡️ ImmutableBooleanCollection
- collectByte ➡️ ImmutableByteCollection
- collectChar ➡️ ImmutableCharCollection
- collectDouble ➡️ ImmutableDoubleCollection
- collectFloat ➡️ ImmutableFloatCollection
- collectIf ➡️ ImmutableCollection<V>
- collectInt ➡️ ImmutableIntCollection
- collectLong ➡️ ImmutableLongCollection
- collectShort ➡️ ImmutableShortCollection
- collectWith ➡️ ImmutableCollection<V>
- countBy ➡️ ImmutableBag<V>
- countByEach ➡️ ImmutableBag<V>
- countByWith ➡️ ImmutableBag<V>
- flatCollect ➡️ ImmutableCollection<V>
- flatCollectWith ➡️ ImmutableCollection<V>
- groupBy ➡️ ImmutableMultimap<V, T>
- groupByEach ➡️ ImmutableMultimap<V, T>
- groupByUniqueKey ➡️ ImmutableMap<V, T>
- partition ➡️ PartitionImmutableCollection<T>
- partitionWith ➡️ PartitionImmutableCollection<T>
- reject ➡️ ImmutableCollection<T>
- rejectWith ➡️ ImmutableCollection<T>
- select ➡️ ImmutableCollection<T>
- selectInstancesOf ➡️ ImmutableCollection<S>
- selectWith ➡️ ImmutableCollection<T>
- sumByDouble ➡️ ImmutableObjectDoubleMap<V>
- sumByFloat ➡️ ImmutableObjectDoubleMap<V>
- sumByInt ➡️ ImmutableObjectLongMap<V>
- sumByLong ➡️ ImmutableObjectLongMap<V>
- tap ➡️ ImmutableCollection<T>
Diamonds grow on trees
Some of the Diamond hierarchies that formed in Eclipse Collections were the result of the Mutable part of the Eclipse Collections hierarchy intersecting with the JDK interfaces (Collection ➡️ MutableCollection, List ➡️ MutableList, Set ➡️ MutableSet). I described some of these diamonds in the following blog.
Other diamonds become visible as you move down the hierarchy to more specific types. There is a pattern in the type hierarchy that formed at the top which results in many instances of tree hierarchies with an Iterable, Mutable, and Immutable type.
For each container type in the following list, there is a tree with an Iterable, Mutable, and Immutable type.
- Bag ➡️ MutableBag / ImmutableBag
- ListIterable ➡️ MutableList / ImmutableList
- SetIterable ➡️ MutableSet / ImmutableSet
- StackIterable ➡️ MutableStack / ImmutableStack
- MapIterable ➡️ MutableMap / ImmutableMap
- Multimap ➡️ MutableMultimap / ImmutableMultimap
- etc.
The following diagram shows the diamonds that form as the RichIterable and ListIterable trees connect. A similar diamond formation happens for each container type listed above.

A similar symmetry exists in the Eclipse Collections primitive interface hierarchy as well. There are a lot more types in the primitive hierarchy because the hierarchy is a combination of container type and primitive type. At the top of the hierarchy is a type called PrimitiveIterable, which has a small set of common methods between primitive types. The following list shows a sample of the trees that exist for the primitive types in Eclipse Collections using the int type as an example.
- IntIterable ➡️ MutableIntCollection / ImmutableIntCollection
- IntList ➡️ MutableIntList / ImmutableIntList
- IntSet ➡️ MutableIntSet / ImmutableIntSet
- etc.
The following diagram shows the diamonds that form as the IntIterable and IntList trees connect.

I hope these diagrams convey a small bit of the symmetry that exists in Eclipse Collections. I have included the select method in both the object and primitive hierarchy examples to illustrate the covariant return types at each interface level.
Symmetry in most things
Understanding the symmetry that exists between Iterable, Mutable, and Immutable types across different containers should make it easier to understand the various types, trees, and diamonds that exist in the Eclipse Collections library. This library has evolved over almost two decades, and occasionally we still find some asymmetry. We will fix issues like this when there is a measurable benefit. For the most part, if you would expect something to exist based on your understanding of how other things are defined, the symmetry will probably exist and meet your expectations.
Thank you for reading this blog, and I hope you learned something useful about the library along the way!
All 130 Unique Methods on RichIterable
The list below has all 130 unique methods in RichIterable with links to Javadoc. This list is a bit more concise than the Javadoc for RichIterable and should make it easier to find things quickly. The method names appear first with the return types second separated by the ➡️ emoji.
- aggregateBy ➡️ MapIterable<K, V>
- aggregateInPlaceBy ➡️ MapIterable<K, V>
- allSatisfy ➡️ boolean
- allSatisfyWith ➡️ boolean
- anySatisfy ➡️ boolean
- anySatisfyWith ➡️ boolean
- appendString ➡️ void
- asLazy ➡️ LazyIterable<T>
- chunk ➡️ RichIterable<RichIterable<T>>
- collect ➡️ RichIterable<V>
- collectBoolean ➡️ BooleanIterable
- collectByte ➡️ ByteIterable
- collectChar ➡️ CharIterable
- collectDouble ➡️ DoubleIterable
- collectFloat ➡️ FloatIterable
- collectIf ➡️ RichIterable<V>
- collectInt ➡️ IntIterable
- collectLong ➡️ LongIterable
- collectShort ➡️ ShortIterable
- collectWith ➡️ RichIterable<V>
- contains ➡️ boolean
- containsAll ➡️ boolean
- containsAllArguments ➡️ boolean
- containsAllIterable ➡️ boolean
- containsAny ➡️ boolean
- containsAnyIterable ➡️ boolean
- containsBy ➡️ boolean
- containsNone ➡️ boolean
- containsNoneIterable ➡️ boolean
- count ➡️ int
- countBy ➡️ Bag<V>
- countByEach ➡️ Bag<V>
- countByWith ➡️ Bag<V>
- countWith ➡️ int
- detect ➡️ T
- detectIfNone ➡️ T
- detectOptional ➡️ Optional<T>
- detectWith ➡️ T
- detectWithIfNone ➡️ T
- detectWithOptional ➡️ Optional<T>
- each ➡️ void
- flatCollect ➡️ RichIterable<V>
- flatCollectBoolean ➡️ R extends BooleanIterable
- flatCollectByte ➡️ R extends ByteIterable
- flatCollectChar ➡️ R extends CharIterable
- flatCollectDouble ➡️ R extends DoubleIterable
- flatCollectFloat ➡️ R extends FloatIterable
- flatCollectInt ➡️ R extends IntIterable
- flatCollectLong ➡️ R extends LongIterable
- flatCollectShort ➡️ R extends ShortIterable
- flatCollectWith ➡️ RichIterable<V>
- forEach ➡️ void
- getAny ➡️ T
- getFirst ➡️ T
- getLast ➡️ T
- getOnly ➡️ T
- groupBy ➡️ Multimap<V, T>
- groupByAndCollect ➡️ R extends MutableMultimap<K, V>
- groupByEach ➡️ Multimap<V, T>
- groupByUniqueKey ➡️ MapIterable<V, T>
- injectInto ➡️ IV
- injectIntoDouble ➡️ double
- injectIntoFloat ➡️ float
- injectIntoInt ➡️ int
- injectIntoLong ➡️ long
- into ➡️ R
- isEmpty ➡️ boolean
- makeString ➡️ String
- max ➡️ T
- maxBy ➡️ T
- maxByOptional ➡️ Optional<T>
- maxOptional ➡️ Optional<T>
- min ➡️ T
- minBy ➡️ T
- minByOptional ➡️ Optional<T>
- minOptional ➡️ Optional<T>
- noneSatisfy ➡️ boolean
- noneSatisfyWith ➡️ boolean
- notEmpty ➡️ boolean
- partition ➡️ PartitionIterable<T>
- partitionWith ➡️ PartitionIterable<T>
- reduce ➡️ Optional<T>
- reduceInPlace ➡️ R
- reject ➡️ RichIterable<T>
- rejectWith ➡️ RichIterable<T>
- select ➡️ RichIterable<T>
- selectInstancesOf ➡️ RichIterable<S>
- selectWith ➡️ RichIterable<T>
- size ➡️ int
- sumByDouble ➡️ ObjectDoubleMap<V>
- sumByFloat ➡️ ObjectDoubleMap<V>
- sumByInt ➡️ ObjectLongMap<V>
- sumByLong ➡️ ObjectLongMap<V>
- summarizeDouble ➡️ DoubleSummaryStatistics
- summarizeFloat ➡️ DoubleSummaryStatistics
- summarizeInt ➡️ IntSummaryStatistics
- summarizeLong ➡️ LongSummaryStatistics
- sumOfDouble ➡️ double
- sumOfFloat ➡️ double
- sumOfInt ➡️ long
- sumOfLong ➡️ long
- tap ➡️ RichIterable<T>
- toArray ➡️ Object[]
- toBag ➡️ MutableBag<T>
- toBiMap ➡️ MutableBiMap<K, V>
- toImmutableBag ➡️ ImmutableBag<T>
- toImmutableBiMap ➡️ ImmutableBiMap<K, V>
- toImmutableList ➡️ ImmutableList<T>
- toImmutableMap ➡️ ImmutableMap<K, V>
- toImmutableSet ➡️ ImmutableSet<T>
- toImmutableSortedBag ➡️ ImmutableSortedBag<T>
- toImmutableSortedBagBy ➡️ ImmutableSortedBag<T>
- toImmutableSortedList ➡️ ImmutableList<T>
- toImmutableSortedListBy ➡️ ImmutableList<T>
- toImmutableSortedSet ➡️ ImmutableSortedSet<T>
- toImmutableSortedSetBy ➡️ ImmutableSortedSet<T>
- toList ➡️ MutableList<T>
- toMap ➡️ MutableMap<K, V>
- toSet ➡️ MutableSet<T>
- toSortedBag ➡️ MutableSortedBag<T>
- toSortedBagBy ➡️ MutableSortedBag<T>
- toSortedList ➡️ MutableList<T>
- toSortedListBy ➡️ MutableList<T>
- toSortedMap ➡️ MutableSortedMap<K, V>
- toSortedMapBy ➡️ MutableSortedMap<K, V>
- toSortedSet ➡️ MutableSortedSet<T>
- toSortedSetBy ➡️ MutableSortedSet<T>
- toString ➡️ String
- zip ➡️ RichIterable<Pair<T, S>>
- zipWithIndex ➡️ RichIterable<Pair<T, Integer>>
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
Rich, Lazy, Mutable, and Immutable Interfaces in Eclipse Collections was originally published in Better Programming on Medium, where people are continuing the conversation by highlighting and responding to this story.
July 31, 2023
My Sixth Blogiversary
by Donald Raab at July 31, 2023 09:13 PM
After 6 years of blogging, it’s time to write some more.
No commitment, No problem
Last year I decided to release myself from the commitment to blog at least once per month. So how did I do this past year in terms of writing?
I wrote 34 blogs in the past year. So 2–3 blogs per month with zero commitment to a quota. I can live with that pace, without having to commit to it.
I love writing.
My Favorite Blog This Past Year
The biggest surprise I had this past blogging year was writing about my experiment building a ToDoList in Java using JavaFX. It reminded me of what coding used to be like when I would build UIs in a flash in Smalltalk.
The blog turned into a four-part series. I got to code with emojis, use Java Records, and experiment with using Jackson to persist my ToDoList. It was a lot of fun to code and to write about. I hope the blogs are informational and fun to read.
Writing makes me happy, so I will keep writing
I hope along the way I will again see some of that humanity and caring I witnessed online during the pandemic. Little things like the happiness of a cup of coffee shared virtually in TheCoffeeClub, or a collaborative celebration of a friend or colleague who does something awesome, regardless of how big or small. I hope to continue to see other developers writing more, so I can dedicate some of my time to enjoying their words.
Be safe. Be kind. Be mindful. Be happy. Self care. Write. Hugs.
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
July 24, 2023
Eclipse Cloud DevTools Contributor Award: Mark Sujew for extraordinary contributions to Theia
by John Kellerman at July 24, 2023 05:42 PM
The Eclipse Cloud Developer Tools contributor award for July goes to Mark Sujew at TypeFox for his continuous, strategic and valuable contributions to Eclipse Theia, including recently for example, his contributions to enable remote SSH support for Eclipse Theia!

Mark has been working on Eclipse Theia since 2021 and he has become one of the core committers in the project. Mark is technically very knowledgeable and has a great overview of the Theia code base. This enables him to regularly contribute strategic features, such as the internationalization support and, recently, the support for remote SSH.
Besides technical skills, Mark is also a great community member. He regularly joins the weekly developer calls and actively participates in discussions and decision making. Mark has an opinion about key questions and expresses his valuable thoughts. He always has a view for the global health of the Theia ecosystem and often contributes to general improvements. Last but not least, Mark is an active and accurate reviewer, which contributes to the overall quality of Eclipse Theia.
Of course, great technologies such as Eclipse Theia depend on many contributors and we are pleased to have a very active and diverse ecosystem in the project. However, it is also important to recognize single developers who make a difference and Mark is definitely one of them. Thanks and congratulations, Mark!
The Cloud DevTools Working Group provides a vendor-neutral ecosystem of open-source projects focused on defining, implementing and promoting best-in-class web and cloud-based development tools. It is hosted at the Eclipse Foundation, current members of the group include AMD, Arm, EclipseSource, Ericsson, Obeo, RedHat, Renesas, STMicroelectronics and TypeFox.
This Eclipse Cloud DevTools contributor award is sponsored by EclipseSource, providing consulting and implementation services for web-based tools, Eclipse GLSP, Eclipse Theia, and VS Code.
July 15, 2023
Polishing Diamonds in Java
by Donald Raab at July 15, 2023 06:45 PM
Managing interface change in diamond hierarchies.
Inheriting Diamonds in Java
Java is an object-oriented language that supports single inheritance for classes. A class can inherit from at most one parent class. Java also supports classes implementing multiple interfaces. Interfaces may extend multiple interfaces as well.
The following class diagram illustrates the two kinds of inheritance models supported in Java.

Before Java 8, methods on interfaces could only be abstract. It was the responsibility of classes to define the behavior of methods defined on interfaces.
Java 8 introduced an extremely powerful feature called default methods. A default method provides both the signature of a method, and a “default” implementation in a method body that can be used by classes that don’t override the behavior. The default methods feature can help make old interfaces new again.
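As a minimal illustration of the feature itself (the interface and method names below are made up for the example):
public interface Greeter
{
    String name();

    // Existing implementors keep compiling; they inherit this body unless they override it
    default String greeting()
    {
        return "Hello, " + this.name() + "!";
    }
}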
Change happens. Unfortunately, change sometimes comes with a cost. Open Source Java projects have to be able to understand and respond quickly to change with the six month release cadence of OpenJDK feature releases. Open Source Java projects have a great feedback loop if they participate in the OpenJDK Quality Outreach Program. This program notifies members of the availability of early access releases of the OpenJDK that they can test their projects against.
The availability of early access releases of the OpenJDK has helped the Eclipse Collections open source project discover, understand, report, and address issues before new versions of the OpenJDK are released. In this blog, I will explain three times we had to make changes in Eclipse Collections after default methods were added to existing interfaces in the JDK.
Diamond Hierarchies
Inheriting from multiple interfaces can lead to the creation of diamond hierarchies. A diamond hierarchy gets its name from the shape of the hierarchy. Consider the following diagram showing 5 interfaces and a class in a diamond hierarchy relationship.

You may see different interface inheritance shapes in the wild. The shapes may not be diamonds. Sometimes you may encounter upside down trees. The diamond shapes themselves are not all that important. The most important characteristic that can lead to issues is that there exists a child interface or class at the bottom of a hierarchy that has multiple parents. Having multiple parent interfaces can lead to method signature collisions that may break compilation and potentially runtime behavior.
Diamond Hierarchies before Java 8
The primary interface inheritance problem before Java 8 occurred when two or more interfaces had the same method signature with different return types.
Consider the following diamond hierarchy diagram that defines all abstract foo methods that are ultimately implemented by ClassD.

The interfaces A, B, C in this diagram all define foo methods that return different types that are covariant overrides of the parent interface Top. In order for ClassD to compile, it must override the foo method and return a type that is compatible with the foo methods from all three interfaces. The solution here is to override foo in ClassD and return ClassD. ClassD is a subtype of A, B, and C. ClassD is a covariant return type. Covariance is an important feature of Java return types that was added in Java 5.
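In code, the shape described above looks roughly like the following sketch; the method bodies are placeholders of my own, and only the structure matters:
interface Top { Top foo(); }
interface A extends Top { @Override A foo(); }
interface B extends Top { @Override B foo(); }
interface C extends Top { @Override C foo(); }
class ClassD implements A, B, C
{
    // ClassD is a subtype of A, B, and C, so one covariant override satisfies all three interfaces
    @Override
    public ClassD foo()
    {
        return this;
    }
}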
Interfaces in the JDK were extremely stable before Java 8. We never saw any interface evolution issues in our diamond hierarchies in Eclipse Collections that integrated with Java types like Iterable, Collection, Map, List and Set. That changed slightly after Java 8.
Diamond Hierarchies after Java 8
Java 8 introduced a new feature called default methods. A default method allows a developer to add behavior to an existing interface without theoretically requiring any changes to classes that extend that interface. The default method feature has allowed the JDK to evolve interfaces that are decades old. We use default methods extensively in Eclipse Collections to reduce code duplication across abstract class hierarchies. There are a few gotchas to be aware of when using default methods, especially where there are diamond hierarchies that depend on interfaces that evolve over time.
Every default method that is added to decades-old interfaces like Iterable, Collection, Map, List, Set creates a possibility for unexpected method signature collisions to happen. Don’t be too alarmed for your applications. Most applications will probably never encounter the diamond hierarchy issues that Eclipse Collections and other libraries that provide Collection types that integrate with Java types may run into.
The following diagram shows the default methods that were added to the basic Collection interfaces in Java 8. Did any of these default methods break your applications when they arrived in Java 8? My guess would be, probably not.

The Map interface had the most notable evolution in Java 8 based on the number of default methods added. The methods that were added to Map were a much needed improvement for the Java community. While Eclipse Collections MutableMap has some functional overlap with methods in the Map interface, there were, fortunately, no method signature collisions with the new default methods that were added. Awesome!
The other Java Collection Framework interfaces had more modest additions, as most of the functionality was provided by the new Stream API. The spliterator, stream and parallelStream methods theoretically had the greatest possibility of collisions in the wild because the methods had zero arguments. In practice, I never saw or heard of any collisions that happened with these three methods. Awesome!
On the other hand, the forEach, removeIf, and replaceAll methods had method name collisions in frameworks like Eclipse Collections. Since the single-argument parameter types these methods required were all new in Java 8 (Consumer, Predicate, UnaryOperator), any collisions were simply treated as overloads by the compiler. Awesome!
The sort method on List, which takes the decades-old Comparator interface, did result in collisions that created scratches in some diamonds in the wild. This was unfortunate, but sometimes the JDK needs to break some eggs in order to evolve and improve. List should have had a sort method from the beginning of its existence. Thankfully, it does now.
Scratching a Diamond by evolving interfaces
Every once in a while, a change can be made in an interface that requires “polishing” a diamond hierarchy. Interface changes may be out of your control if you have a relationship with an interface that is managed by another library or the JDK itself. Your only option may be to fix compilation failures once they are found and determine if there is also a binary incompatibility that may require a new release of your library or application in order for your clients to use a new version of the JDK or another library.
The following sections describe three different issues that may be encountered with evolving interfaces in diamond hierarchies. These are real examples that illustrate where Eclipse Collections had to address issues caused by the evolution of interfaces with default methods after Java 8.
Method collisions with different return types
The worst gotcha you can encounter with a diamond hierarchy is method signatures colliding with different return types. There is only one good solution to this problem: one of the methods must be renamed, and all client calls to the renamed method must be updated as well.
When Java 8 was released, a default method sort was added to the java.util.List interface that had a void return type. Before we open sourced GS Collections, we had a sort method on the MutableList interface that returned MutableList. MutableList extends java.util.List, so our only option was to rename our method and change all of our client code to call the new method. Thankfully, all of the client code was in one company, so this was manageable: we explained that compile errors would happen, and that the simple fix was to change calls to sort that required a return type into calls to sortThis.

This is what the sortThis method signatures look like today on MutableList. These two methods were added as default methods on MutableList in the Eclipse Collections 10.0 release to reduce some code duplication.
default MutableList<T> sortThis(Comparator<? super T> comparator)
{
this.sort(comparator);
return this;
}
default MutableList<T> sortThis()
{
return this.sortThis(null);
}
The method sortThis delegates to the method sort, which was added to the java.util.List interface as a default method in Java 8. The sort method is then overridden in FastList, which implements MutableList.
@Override
public void sort(Comparator<? super T> comparator)
{
Arrays.sort(this.items, 0, this.size, comparator);
}
If it isn’t obvious, the reason sortThis returns this is so that it can be used fluently and directly as a return result in a method. This is an amazing convenience that often reduces lines of code when using sortThis.
The List change in Java 8 was described in this recent blog by Stuart Marks:
The Importance of Writing Stuff Down
Following the advice in this blog, I am writing all of these experiences down for other maintainers to learn from. Thank you, Stuart!
Default method collisions can result in ambiguity
Another gotcha happens when two parent interfaces define default methods with the exact same signature. When this happens, a child interface or class must override that default method to remove the ambiguity, because the compiler and runtime cannot determine which default implementation should be chosen. The result is an ambiguity error at compile time, and potentially at runtime as well.
Eclipse Collections encountered this particular problem with JDK 15. Stuart Marks wrote a great blog describing the issue as well.
Incompatibilities with JDK 15 CharSequence.isEmpty
The only thing this blog is missing is a diagram to help visualize the issue. The following picture shows the colliding default methods in the hierarchy for CharAdapter.

The solution to this problem was to add an override of isEmpty in the CharAdapter class.
This hierarchy doesn’t have a full diamond shape, but it does have the colliding-method issue with isEmpty, caused by extending multiple interfaces that declare default methods with the same signature.
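For completeness, here is a small, self-contained reconstruction of that clash. The types below are simplified stand-ins, not the actual Eclipse Collections or JDK sources.
public class IsEmptyClash
{
    interface PrimitiveIterable
    {
        int size();

        default boolean isEmpty()
        {
            return this.size() == 0;
        }
    }

    interface MyCharSequence
    {
        int length();

        // Stands in for the default method added to java.lang.CharSequence in JDK 15
        default boolean isEmpty()
        {
            return this.length() == 0;
        }
    }

    static class CharAdapter implements PrimitiveIterable, MyCharSequence
    {
        private final String value;

        CharAdapter(String value)
        {
            this.value = value;
        }

        @Override
        public int size()
        {
            return this.value.length();
        }

        @Override
        public int length()
        {
            return this.value.length();
        }

        // Without this override, the class fails to compile with an error along the lines of:
        // "inherits unrelated defaults for isEmpty() from types PrimitiveIterable and MyCharSequence"
        @Override
        public boolean isEmpty()
        {
            return this.value.isEmpty();
        }
    }
}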
Abstract and Default method collisions
In JDK 21, a new interface called SequencedCollection was added in between Collection and List that has methods like getFirst and getLast. These default methods collided with abstract methods with the same signature that were defined in RichIterable, OrderedIterable, and ListIterable in Eclipse Collections.
The following diagram shows the diamond hierarchy in Eclipse Collections and where the methods ultimately collide in the child interface MutableList.

Compilation Only Error
The collision between abstract getFirst methods in the Eclipse Collections types on the left and the default getFirst method in SequencedCollection on the right resulted in a compilation error in our JDK 21 EA builds for Eclipse Collections. All previous JDK versions compiled fine. What was unclear was whether this was only a compilation error, or if this would require a release of a new version of Eclipse Collections in order for the library to work with JDK 21 when it is released.
I checked to see if getFirst or getLast was used in either of the two OSS repos that I am a committer for that have an Eclipse Collections dependency and have a JDK 21 EA build. Both repos had tests that use getFirst. The code ran on JDK 21 EA using Eclipse Collections 11.1 without issue. The two repos are the Eclipse Collections Kata and BNY Mellon CodeKatas.
This should verify that the issue is compilation only. I found the section in the JLS that I believe covers this situation with colliding abstract and default methods in interface hierarchies.
This is a new kind of issue we hadn’t seen before that we can now be on the lookout for with future releases of the JDK.
Polishing this issue away
I am going to include code examples here which show how the problem manifests itself at compile time, along with the two potential solutions.
The following is a compilation error and the example code that creates the compilation issue with a diamond hierarchy.
java: types Diamond.SequencedCollection<E> and Diamond.ListIterable<T> are incompatible; interface Diamond.MutableList<T> inherits abstract and default for getFirst() from types Diamond.SequencedCollection and Diamond.ListIterable
public class Diamond
{
interface Iterable<E>
{
}
interface OrderedIterable<T> extends Iterable<T>
{
T getFirst();
}
interface ListIterable<T> extends OrderedIterable<T>
{
T getFirst();
}
interface Collection<E> extends Iterable<E>
{
}
interface SequencedCollection<E> extends Collection<E>
{
default E getFirst()
{
return null;
}
}
interface List<E> extends SequencedCollection<E>
{
}
interface MutableCollection<T> extends Collection<T>
{
}
interface MutableList<T> extends MutableCollection<T>, ListIterable<T>, List<T>
{
// This code doesn't compile and fails with error below:
// java: types Diamond.SequencedCollection<E> and
// Diamond.ListIterable<T> are incompatible;
// interface Diamond.MutableList<T> inherits abstract and default
// for getFirst() from types
// Diamond.SequencedCollection and Diamond.ListIterable
}
static class MyList<T> implements MutableList<T>, List<T>
{
public T getFirst()
{
return null;
}
}
public static void main(String[] args)
{
OrderedIterable<String> a = new MyList<>();
ListIterable<String> b = new MyList<>();
SequencedCollection<String> c = new MyList<>();
List<String> d = new MyList<>();
MutableList<String> e = new MyList<>();
MyList<String> f = new MyList<>();
System.out.println(a.getFirst());
System.out.println(b.getFirst());
System.out.println(c.getFirst());
System.out.println(d.getFirst());
System.out.println(e.getFirst());
}
}
There are two possible solutions to this compilation issue. One solution is to add an abstract getFirst method in MutableList. The other solution is to add a default implementation of getFirst in MutableList.
The actual solution I used to solve the compilation issue in Eclipse Collections was to add default methods to MutableList for getFirst and getLast.
@Override
default T getFirst()
{
return this.isEmpty() ? null : this.get(0);
}
@Override
default T getLast()
{
return this.isEmpty() ? null : this.get(this.size() - 1);
}
This getLast default method implementation would be suboptimal for LinkedList, but these two methods already have appropriate overrides in abstract and concrete classes. This solution's primary goal was to make the compiler happy.
Diamonds are forever so prepare to polish them
When diamond hierarchies or any multiple interface inheritance exist in a code base, care needs to be taken to maintain them as interfaces evolve. Change does and will happen. I hope this blog demonstrates some useful real-world examples where rules in the Java Language Specification collide with real-world libraries that are integrated with JDK interfaces.
As an Eclipse Collections maintainer, I am quite happy with how the ecosystem has evolved: Early Access versions of the JDK are now provided with easy automation that we can leverage to test against them. Getting a heads-up months in advance on an upcoming change in the JDK is a huge improvement. Early warning capability for JDK and library developers is really amazing.
In case you want to learn more about the benefits of participating in the OpenJDK Quality Outreach Program, I will shamelessly plug my recent blog on the topic below.
The benefits of participating in the OpenJDK Quality Outreach Program
Thank you for reading this blog! I hope you found the information and examples here useful. Enjoy!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
Polishing Diamonds in Java was originally published in Better Programming on Medium, where people are continuing the conversation by highlighting and responding to this story.
July 13, 2023
Eclipse Cloud DevTools Digest - May and June 2023
by John Kellerman at July 13, 2023 06:11 PM
Open VSX Working Group Formed

We announced the formation of a working group to take responsibility for the Open VSX Registry deployment. The working group’s mandate is to supervise and expedite the adoption of the Open VSX Registry, a vendor-neutral, community-backed alternative to Microsoft’s Visual Studio Marketplace. Initial members include Google, Huawei, Posit, Salesforce, Siemens and STMicroelectronics.
Open VSX Registry Recognized by SD Times
In a related article, SD Times recognized Open VSX Registry as its open source project of the week.
Contributor Awards to Red Hat and Yining Wang
Cloud DevTools recognized Yining Wang of Ericsson in May for her contributions to github.com/eclipse/openvsx and github.com/EclipseFdn/open-vsx.org and the deployment at Open VSX Registry. We also recognized Red Hat in June for its initial contribution of the VS Code Extension API to Theia. This allows regular VS Code extensions to run directly in Theia and any Theia-based product.
CDT Cloud Blueprint

In a series of three articles, Jonas, Maximillian and Philip discuss CDT Cloud Blueprint, a customizable tool for C/C++ development based on web technologies: introduction, getting started, dynamic toolbar.
Langium Becomes a Cloud DevTools Project
The Cloud DevTools Steering Committee voted to include Langium as a project of interest to the working group. Langium is an open source language engineering tool, written in TypeScript and running in Node.js, with support for the Language Server Protocol. It enables domain-specific languages in VS Code, Eclipse Theia, web applications, and more.
JKube 1.13 is Available

Eclipse JKube 1.13 is now available. Improvements include support for Helm Chart YAML fragments and the introduction of a security profile to improve the overall security of the generated Kubernetes resources.
Multiple Theia Releases
The Eclipse Theia Community Release 2023-05 is available. Community releases are provided every quarter by the Theia project and designed to be more hardened releases that tend to align with related technologies, such as Eclipse GLSP or CDT Cloud. Learn more about the advantages of the Theia community release and visit the Theia release page.
Eclipse Theia 1.37 Release adds improvements to tabs, keybindings and support for VS Code Extensions API 1.74.2.
Eclipse Theia 1.38 Release adds further improvements to tabs, workspace search and support for VS Code Extensions API 1.77.
TheiaCon 2023 and Community Day at EclipseCon Events
The Eclipse Cloud DevTools and Open VSX Working Groups will be co-hosting a Community Day event at EclipseCon on October 16th. Last year’s Community Day was well-attended, and a very informative and enjoyable time was had by all. Be sure to submit your topics for this year’s event.
The Call for Presentations is also now live for TheiaCon 2023, which will be held virtually on November 15-16. We look forward to another excellent program and encourage you to submit your proposals early.
Cloud Tool Time Webinars

We are now scheduling Cloud Tool Time webinars for 2023. Be sure to Sign up now to get on the calendar and let us help tell your story. You can see past sessions on our Youtube channel.
Eclipse Cloud DevTools Projects

Explore the Eclipse Cloud DevTools ecosystem! Check out our projects page to find out more about open source innovation for cloud IDEs, extension marketplaces, frameworks and more.
Getting Listed on the Cloud DevTools Blog
If you are working with, or on, anything in the Cloud DevTools space, learn how to get your writings posted in our blog section.
July 12, 2023
Eclipse Foundation Publishes Results of Equinox p2 Security Audit
July 12, 2023 02:00 PM
Over the past year, the Eclipse Foundation has made securing the open source software supply chain a priority. By growing our security team and laying the groundwork for the Cyber Risk Initiative, we’ve made strides to improve the security posture of our open source projects.
Today, we’re taking another step forward with the completion of the security audit for Equinox p2, the provisioning component of the Eclipse IDE.
Equinox p2 was a logical choice for our first security audit because of its new signature verification mechanism. The plugin authentication mechanism incorporates PGP digital signatures and is one of the new security features included in the 2023-06 Eclipse IDE release. The existing mechanism for verifying the signatures of plugins and extensions provisioned into the IDE is an industry standard (jar signing), but the same cannot be said for the new PGP digital signature support.
This lack of assurance that the new mechanism was secure led us to order the audit, which was done in partnership with the Open Source Technology Improvement Fund (OSTIF) and completed by IncludeSecurity. The audit revealed that a number of fixes were required, including providing users with more information so they can decide whether the extensions they are installing are safe.
All vulnerabilities that were identified, including one critical risk, have since been resolved. Check out the full report for more information.
Identifying and addressing vulnerabilities of any provisioning system through a security audit is a critically important aspect of supply chain security. In this case, these fixes will lower the risk of installing malware when developers obtain extensions from the internet.
The Eclipse IDE, and all extensible IDEs in the market, play a crucial role in the software supply chain. These are the tools used for writing, testing, and deploying software. If an IDE becomes infected with malware, significant damage to the downstream supply chain can follow.
We’re thrilled with the outcomes of the audit, and this proactive approach to security will save us time and effort in the long run.
This is the first time the Eclipse Foundation has funded a security audit for an Eclipse project, with three more audits in progress, and an additional three to be conducted later this year. The six upcoming audits are possible because of the funding the Eclipse Foundation received from the Alpha-Omega Project.
Get Involved
- Join the Eclipse IDE Working Group to help strengthen the IDE
- Learn more about the Eclipse Cyber Risk Initiative, and how your organization can join the effort to strengthen the open source supply chain. Please subscribe to the ECRI mailing list to join the initiative, or to follow its progress.
- Government regulation of the software industry is coming, and pending legislation in Europe such as the Cyber Resilience Act and the Product Liability Directive pose enormous risks to the open source community and ecosystem. We will be holding an all members call next Thursday, July 13, at 14:30 CEST to provide an update on the challenges we face as a community and as an industry.
July 08, 2023
The benefits of participating in the OpenJDK Quality Outreach Program
by Donald Raab at July 08, 2023 11:55 PM
FOSS Projects and OpenJDK collaborating for a more robust Java platform.
The OpenJDK Quality Outreach Program
I’d like to tell you about an amazing OpenJDK program that FOSS Java project maintainers should consider joining, to help guarantee that the Java Platform remains the best and most stable platform for software developers. An introduction to the OpenJDK Quality Outreach Program (QOP), which is run by the OpenJDK Quality Group, is available at the following link.
Joining the OpenJDK Quality Outreach Program is easy, and the benefits are real. The above link also has instructions for joining the program in a section titled “How to join the Quality Outreach Program”.
Once you join the program, and actively report your test results against early access versions of the OpenJDK, the benefits begin. Your test suite(s) become part of the overall suite of tests that can help validate the quality of the next OpenJDK release.
It’s that simple. You run the tests of your open source project against early access builds, which can be easily set up if your project already has automated builds with GitHub Actions.
Worst case, you never find a problem but still contribute to your own confidence level that the next release of the OpenJDK will work fine with your project. Win!
Best case, you discover a real problem that you report in time for the OpenJDK Core team to submit a bug fix before the next release. Win!
Eclipse Collections is an active member of QOP
Eclipse Collections has been actively reporting testing status against early access versions of the OpenJDK for several years. We email David Delabassee and he updates the project status on the OpenJDK Quality Outreach Program Wiki.

Nikhil Nanivadekar and I are the co-project leads for Eclipse Collections. We take turns reporting testing status to David and the OpenJDK Quality Outreach Program.
Early Access testing with the OpenJDK
I began testing GS Collections, before it became Eclipse Collections, with early access binary versions of the OpenJDK for the much anticipated Java 8 release. I was a member of the JSR 335 Expert Group, so I was actively testing Lambdas and Java Streams with GS Collections and reporting my experience to the Expert Group. Among other things, I discovered and reported a performance issue related to parallel Java Streams and non-JDK RandomAccess List implementations. I blogged about this experience at the following link.
Traveling the road from Idea all the way to OpenJDK
The parallel Stream performance problem I discovered was verified and addressed with a fix in time for the JDK 9 release. The fix resulted in the creation of a class named RandomAccessSpliterator.
Enter the OpenJDK Quality Outreach Program
I scanned the quality-discuss mail list archive to see if I could find when the OpenJDK Quality Outreach Program became a formal program. The mail list and Quality Group date all the way back to 2007. I discovered this email from Rory O’Donnell in August 2014 that looks like the beginning of the expansion of QA Outreach for the OpenJDK. This email corresponds with this Quarterly Report from Q1 2014.
Screenshot from above link

By 2015, the Quality Outreach Program was moved to the OpenJDK Quality Group as mentioned in this email from Rory. There were 48 participating FOSS projects at that time.
Based on a manual count of projects on the Quality Outreach Program wiki today, the program now has 168 FOSS projects participating.
I’m going to share some discoveries in the sections that follow that the Eclipse Collections project has made during Early Access testing of OpenJDK releases over the past few years. I hope these stories encourage more FOSS Java projects to join the program.
A discovery in JDK 15
During testing of the Early Access version of JDK 15, Nikhil discovered an issue with a default isEmpty method that was added to CharSequence. The new default method clashed with a default isEmpty method we had defined much earlier on a type named PrimitiveIterable.
Nikhil reported the issue to the OpenJDK Quality Outreach Program. We decided the fix to the problem should happen in Eclipse Collections, so we added implementations of isEmpty to all concrete types where the two default implementations resulted in ambiguity.
Stuart Marks wrote a great blog describing the problem we discovered with CharSequence.isEmpty.
Incompatibilities with JDK 15 CharSequence.isEmpty
The problem we discovered had more far-reaching implications for the testing of default methods when they are added to existing interfaces in the OpenJDK or other widely used JVM languages and libraries. What we learned was that ever since the default method feature was introduced in Java 8, there is the possibility of collisions between default and abstract methods with identical signatures in diamond interface hierarchies. Stuart describes this in depth in the article above.
Discoveries in JDK 21
During recent testing of the Early Access version of JDK 21, I discovered two issues. One issue was similar to the issue we discovered in JDK 15. New default methods were added along with new interfaces in JEP 431. The new default methods clashed with existing abstract methods we had defined in Eclipse Collections. The entire Java community was given a heads up on the potential impact of this JEP being included in JDK 21. This heads up helped me spot and understand the issue quickly. Thank you!
Quality Outreach Heads-up - JDK 21: Sequenced Collections Incompatibilities
We had expected potential compilation issues in Eclipse Collections with the addition of Sequenced Collections in JDK 21. We have had getFirst and getLast methods defined on the OrderedIterable interface in Eclipse Collections for many years, and on RichIterable since version 1.0 of the library. The compilation issues we encountered may result in us releasing a new version of Eclipse Collections (12.0) that existing users of Eclipse Collections will have to upgrade to in order to use the library with JDK 21. I consider this an acceptable cost for the continued improvement of the Collections Framework in the JDK.
Update (July 8, 2023): It looks like a new release will not be necessary for Eclipse Collections to work with JDK 21. It looks like the issue with getFirst and getLast is a compilation only issue. I validated this looking at GitHub Actions builds using JDK 21 and Eclipse Collections 11.1 in the Eclipse Collections Kata and BNY Mellon CodeKatas Repos. This is good news! Thanks to Nicolai Parlog for asking me the question!
There was a second issue I encountered that was unexpected, and it has resulted in a fix being applied in the JDK itself. We have a battery of serialization tests in Eclipse Collections that wind up testing the serialization of JDK types like ArrayList, HashSet, and HashMap. We had never seen a failure for JDK types in our serialization tests before, but in our automated tests I was seeing a serialization failure for LinkedHashMap. I reported the failure I was seeing in our tests to the OpenJDK Quality Outreach Program. It turns out I had discovered and reported a real issue that has since been fixed for JDK 22 and backported to JDK 21.
Quality Outreach Heads-up - On The Importance of Testing With Early-Access Build
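The Eclipse Collections test suite itself is not reproduced here, but the following minimal sketch shows the kind of serialization round trip such a test performs for a JDK type like LinkedHashMap.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.LinkedHashMap;
import java.util.Map;

public class SerializationRoundTrip
{
    public static void main(String[] args) throws IOException, ClassNotFoundException
    {
        Map<String, String> original = new LinkedHashMap<>();
        original.put("k1", "v1");
        original.put("k2", "v2");

        // Serialize the map to a byte array
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes))
        {
            out.writeObject(original);
        }

        // Deserialize it again and compare with the original
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray())))
        {
            Object copy = in.readObject();
            System.out.println(original.equals(copy)); // expected: true
        }
    }
}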
Contributing to Java’s continued stability
I encourage other FOSS maintainers to join the OpenJDK Quality Outreach Program and participate in Early Access JDK testing. The more we collaboratively test the early access Java Platform releases, the more we collectively guarantee Java’s continued stability. Java is evolving rapidly, and this is a good thing. Evolving safely without breaking existing functionality is much better.
I have championed the participation of two other FOSS projects at my current employer in the OpenJDK Quality Outreach Program.

Spring Kata
A great set of Spring and Spring Boot code katas from Java Champion, Chandra Guntur.
GitHub - BNYMellon/spring-kata: Code katas for learning Spring® and Spring Boot.
Code Katas
A great set of Java code katas maintained by Emilie Robichaud and Aqsa Malik, who also blog here on Medium.
GitHub - BNYMellon/CodeKatas: Code Kata collection for JVM Languages and Libraries.
Thank you for reading this blog! I hope you will consider adding your FOSS projects to the OpenJDK Quality Outreach Program. The folks in the OpenJDK Quality Group are very supportive and appreciative of testing and contributions, as you can see in some of the links above. I enjoy working in this community, and I hope you will too. Together we can help guarantee that Java remains the best and most stable platform to work with for software developers.
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
July 05, 2023
JBoss Tools 4.28.0.Final for Eclipse 2023-06
by sbouchet at July 05, 2023 10:12 AM
Happy to announce 4.28.0.Final build for Eclipse 2023-06.
Downloads available at JBoss Tools 4.28.0 Final.
What is New?
Full info is at this page. Some highlights are below.
General
Components Deprecation
As communicated in a previous blog article, the following components are now deprecated:
- Openshift CDK Tooling
- Openshift Explorer view (already hidden by default) and wizards, targeting Openshift v3
The current OpenShift Application Explorer view, based on odo v3, is now the default supported tooling.
Related JIRA: JBIDE-29044
Hibernate Tools
Runtime Provider Updates
The Hibernate 6.2 runtime provider now incorporates Hibernate Core version 6.2.5.Final, Hibernate Ant version 6.2.5.Final and Hibernate Tools version 6.2.5.Final.
The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.30.Final and Hibernate Tools version 5.3.30.Final.
New Usage Back-end
We changed the back-end system used in the usage bundle from Google Analytics to Segment. This has no impact on how we collect usage information, nor does it change the opt-in choice made in previous versions of JBoss Tools.
As always, neither Eclipse nor JBoss will use any information unless a user has opted in, nor is there ever any personal information sent unless it is provided on the Preferences page.
JBoss Tools for Eclipse 2023-06M2
by sbouchet at July 05, 2023 10:12 AM
Happy to announce 4.28.0.AM1 (Developer Milestone 1) build for Eclipse 2023-06M2.
Downloads available at JBoss Tools 4.28.0 AM1.
What is New?
Full info is at this page. Some highlights are below.
General
Components Deprecation
Fuse Tooling is now deprecated. More information here.
Future release cadence
Starting from 4.28.0.Final, there will be no more .AM1 releases. The .Final releases can now be scheduled close to the Eclipse releases.
JBoss Tools 4.27.0.Final for Eclipse 2023-03
by sbouchet at July 05, 2023 10:12 AM
Happy to announce 4.27.0.Final build for Eclipse 2023-03.
Downloads available at JBoss Tools 4.27.0 Final.
What is New?
Full info is at this page. Some highlights are below.
July 03, 2023
45 years working as a developer
by ekkescorner at July 03, 2023 03:49 PM
Today 45 years working as a developer.
Started 1978-07-03 at Kienzle Datensysteme Hannover with Assembler, soon followed by Cobol.
Some highlights from last decades:
Apps for Apple/// with SystemB by Hermann Bense
App for AppleDealers with Omnis3/5/7
Eclipse RCP Java Apps with Riena
Mobile Apps for BlackBerry10 / Cascades / C++/Qt 4.8 #BB10
MDSD from #OpenArchitectureware to Xtext/Xtend
Still using Eclipse Xtext/Xtend to generate Entities / DTOs for Qt / C++ Code
BTLE support for BarcodeScanner, Waiter Lock, SlattedFrame Motors (#Lattoflex) and more.
IDE’s: Eclipse, Momentics, #QtCreator
Since 2016:
Mobile Apps Android, iOS with Qt 5 / QtQuickControls2
Speaker at #JAX / W-JAX, EclipseCON, BBJAM, #QTWS
Thx to all the developers committing to OpenSource Software, enabling me to develop complex Apps cross-platform as a single independent developer.
Current adventure:
Porting my Apps from Qt 5 to Qt 6.6
Speaking about at #QtWS23 in Berlin 2023-11-29 – my first developer conference after Corona.

couldn’t wait 
Still fun developing mobile business Apps for customers.
Find me at #QtWS23 or at Qt Discord Server
Looking forward to celebrating my 50th anniversary in 2028
June 29, 2023
Eclipse Cloud DevTools Contributor Award: Red Hat for VS Code compatibility in Eclipse Theia
by John Kellerman at June 29, 2023 05:55 PM
The Eclipse Cloud Developer Tools contributor award for June goes to Red Hat for initially contributing the VS Code Extension API to Theia. This allows regular VS Code extensions to run directly in Theia and any Theia-based product.

As you might know, you can use VS Code extensions in Eclipse Theia applications. This enables you to enhance your Theia-based application with a rich palette of features from a large, robust ecosystem of extensions available for VS Code, e.g. via the Open VSX Registry. Running a VS Code extension in Theia is possible because Theia provides the VS Code extension API. This API was initially contributed by Red Hat.
It is worth noting that the VS Code extension API has of course been extended a lot since then, with the original work evolving over the years. The recent Theia release 1.38 raises the compatibility level to VS Code 1.77, which is just one month behind VS Code and allows the vast majority of extensions to be installed in their latest versions.
Over the years, in addition to Red Hat, many others have contributed to this work including STMicroelectronics, Ericsson, TypeFox, Arm, EclipseSource, and Gitpod. The VS Code Extension API is a great example of how open source collaboration works well; one stakeholder kicks off an effort and others join the initiative over time.
Congratulations and well done, Red Hat!
The Cloud DevTools Working Group provides a vendor-neutral ecosystem of open-source projects focused on defining, implementing and promoting best-in-class web and cloud-based development tools. It is hosted at the Eclipse Foundation; current members of the group include AMD, Arm, EclipseSource, Ericsson, Obeo, Red Hat, Renesas, STMicroelectronics and TypeFox.
This Eclipse Cloud DevTools contributor award is sponsored by EclipseSource, providing consulting and implementation services for web-based tools, Eclipse GLSP, Eclipse Theia, and VS Code.
Iterate over any Iterable in Java
by Donald Raab at June 29, 2023 07:24 AM
Eclipse Collections supplies iteration patterns for any Iterable type.
Iterating over collections
Iterating over collections is a fundamental feature of any modern programming language. There are many common iteration patterns that have well-known names and aliases, like filter (select/reject), map (collect), reduce (injectInto), and groupBy. There are four approaches to implementing iteration patterns — eager/serial, eager/parallel, lazy/serial, and lazy/parallel. Java Streams provide lazy/serial and lazy/parallel iteration patterns. Terminal operations like forEach, collect, and any/all/noneMatch force iteration execution to happen. Terminal operations are eager operations.
Eclipse Collections has offered eager/serial and eager/parallel iteration patterns for a very long time. Eager iteration patterns execute immediately. They are the equivalent of iteration code developers would write by hand using for loops.
Eclipse Collections also offers lazy/serial and lazy/parallel iteration patterns. I will not discuss lazy/parallel iteration in this blog. Instead, I want to describe the eager/serial, eager/parallel, and lazy/serial iteration patterns that were implemented in Eclipse Collections via utility classes that work with any java.lang.Iterable type. These classes still exist in Eclipse Collections today and remain useful for executing eager or lazy operations against any Iterable type.
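As a small illustration of the difference, the following sketch (assuming Eclipse Collections is on the classpath) contrasts an eager Iterate call with a lazy LazyIterate call on a plain Iterable.
Iterable<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
// Eager: Iterate.select filters immediately and returns a Collection
Collection<Integer> evensEager =
    Iterate.select(numbers, each -> each % 2 == 0);
System.out.println(evensEager); // [2, 4]
// Lazy: LazyIterate.select defers all work until a terminal operation such as toList()
LazyIterable<Integer> evensLazy =
    LazyIterate.select(numbers, each -> each % 2 == 0);
System.out.println(evensLazy.toList()); // [2, 4]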
If you’d like to understand more about eager and lazy iteration, the following blog explains the differences in detail.
Eager is Easy, Lazy is Labyrinthine
Looping with Iterable
Since Java 5, the Java Collections framework has had the java.lang.Iterable interface, which is the parent interface of java.util.Collection. In Java 5, the enhanced for loop was added that would work with any implementation of Iterable. The enhanced for loop allowed developers to write more concise loops when iterating over Java Collection implementations and other Iterable types.
Before Java 5, we had to write code like this to iterate over a java.util.Collection using a for loop.
Collection<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
for (Iterator<Integer> it = list.iterator(); it.hasNext(); )
{
Integer each = it.next();
System.out.println(each);
}
After Java 5, we could use the less verbose enhanced for loop as illustrated in the following example.
Iterable<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
for (Integer each : list)
{
System.out.println(each);
}
Recharging Iterable with forEach
Eclipse Collections was initially developed using JDK 1.4. The first iteration method in Eclipse Collections, named forEach, was added to a utility class named Iterate and took a functional interface type called Procedure as a parameter. Before Java 5, we could use Iterate and forEach with any java.util.Collection. We had to resort to using anonymous inner classes to work with methods like forEach. After Java 5, the Iterate utility was updated to work with java.lang.Iterable instead of java.util.Collection.
The following code shows how you could use the Eclipse Collections Iterate utility and its forEach method after Java 5 and before Java 8.
Iterable<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
Iterate.forEach(list, new Procedure<Integer>()
{
@Override
public void value(Integer each)
{
System.out.println(each);
}
});
If your reaction to this code is “yuck!”, then you would not be alone. Why would anyone agree to write code like this? I had believed since I started programming in Java that it was inevitable that Java would eventually get lambdas, and that this style of coding with anonymous inner classes would eventually be replaced with something much more concise and readable. Once Java 8 provided support for lambdas and method references to the Java development community, coding patterns using anonymous inner classes could be converted using automated refactoring tools.
The same method when used with Java 8 or above looks as follows.
Iterable<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
Iterate.forEach(list, each -> System.out.println(each));
This code can be further simplified by using a method reference shown below.
Iterable<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
Iterate.forEach(list, System.out::println);
Since the release of Java 8, the need for Iterate.forEach has been lessened by the default forEach implementation that was added to Iterable. With the new default forEach method, the following code works with any Iterable.
Iterable<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
list.forEach(System.out::println);
Eager Iteration methods for any Iterable type
The Iterate utility class provides much more than just forEach. There are many eager iteration methods provided for any Iterable type. Browse the Javadoc below to find out what methods are available.
Iterate (Eclipse Collections - 11.1.0)
Structure
A more compact view of the methods on Iterate is available by using the Structure view in IntelliJ. There are over 130 methods available on the Iterate class. In the sections that follow, I include a code example for a method from the Iterate class shown in each screenshot.

Examples — any and all
The methods anySatisfy and allSatisfy are the equivalents of Java Streams anyMatch and allMatch. The methods anySatisfyWith and allSatisfyWith take an extra parameter which makes it possible to use them with more method references.
@Test
public void anyAndAllSatisfy()
{
List<String> list = List.of("cat", "bat", "rat");
// Java Streams
Assertions.assertTrue(
list.stream().anyMatch(each -> each.contains("at")));
Assertions.assertTrue(
list.stream().allMatch(each -> each.contains("at")));
// Eclipse Collections Iterate
Assertions.assertTrue(
Iterate.anySatisfy(list, each -> each.contains("at")));
Assertions.assertTrue(
Iterate.allSatisfy(list, each -> each.contains("at")));
// Eclipse Collections Iterate "With"
Assertions.assertTrue(
Iterate.anySatisfyWith(list, String::contains, "at"));
Assertions.assertTrue(
Iterate.allSatisfyWith(list, String::contains, "at"));
}

Examples — detect
The method detect finds the first element that matches a Predicate. There are also detectOptional, detectWith, and detectWithOptional versions.
@Test
public void detect()
{
List<String> list = List.of("cat", "bat", "rat");
// Java Streams
Assertions.assertEquals(
"cat",
list.stream()
.filter(each -> each.contains("at"))
.findAny()
.orElse(null));
// Eclipse Collections Iterate
Assertions.assertEquals(
"cat",
Iterate.detectOptional(list, each -> each.contains("at"))
.orElse(null));
Assertions.assertEquals(
"cat",
Iterate.detect(list, each -> each.contains("at")));
// Eclipse Collections Iterate "With"
Assertions.assertEquals(
"cat",
Iterate.detectWithOptional(list, String::contains, "at")
.orElse(null));
Assertions.assertEquals(
"cat",
Iterate.detectWith(list, String::contains, "at"));
}

Example — makeString
The method makeString is the equivalent of Collectors.joining. One notable difference is that makeString does not require an Object to be converted to a String first.
@Test
public void makeString()
{
List<Integer> integers = List.of(1, 2, 3);
// Java Streams
Assertions.assertEquals(
"1, 2, 3",
integers.stream()
.map(Object::toString)
.collect(Collectors.joining(", ")));
// Eclipse Collections Iterate
Assertions.assertEquals(
"1, 2, 3",
Iterate.makeString(integers, ", "));
}

Example — sumOfInt
The method sumOfInt returns the sum of some IntFunction applied to each element of the Collection. The difference between IntStream.sum and sumOfInt is that IntStream returns an int, which may quietly overflow. The sumOfInt method widens to a long, which will handle summing much larger numbers.
@Test
public void sumOfInt()
{
List<Integer> integers = List.of(1, 2, 3);
// Java Streams
Assertions.assertEquals(
6,
integers.stream()
.mapToInt(Integer::intValue)
.sum());
// Eclipse Collections Iterate
Assertions.assertEquals(
6L,
Iterate.sumOfInt(integers, Integer::intValue));
}
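To make the overflow difference concrete, here is a small additional sketch in the same style as the test above; the expected values follow from summing two Integer.MAX_VALUE elements.
@Test
public void sumOfIntOverflow()
{
    List<Integer> integers = List.of(Integer.MAX_VALUE, Integer.MAX_VALUE);
    // IntStream.sum returns an int, so the result silently overflows to -2
    Assertions.assertEquals(
        -2,
        integers.stream()
            .mapToInt(Integer::intValue)
            .sum());
    // Iterate.sumOfInt widens to a long, so the true sum is preserved
    Assertions.assertEquals(
        4294967294L,
        Iterate.sumOfInt(integers, Integer::intValue));
}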

Example — zip
The method zip takes two Iterable instances, and creates a List of Pair instances. There is no equivalent in Java Stream, but this does work with any two Iterable instances.
@Test
public void zip()
{
List<Integer> integers = List.of(1, 2, 3);
List<String> strings = List.of("1", "2", "3");
// Eclipse Collections Iterate
List<Pair<Integer, String>> zipped =
Iterate.zip(integers, strings, new ArrayList<>());
List<Pair<Integer, String>> expected = List.of(
Tuples.pair(1, "1"),
Tuples.pair(2, "2"),
Tuples.pair(3, "3"));
Assertions.assertEquals(expected, zipped);
}
Optimized by type
Iterate does its best to optimize each eager iteration method by type. There are instanceof checks that look for ArrayList, List, and RandomAccess.
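As an illustrative sketch of what that kind of type-based dispatch looks like (this is not the actual Eclipse Collections implementation), a forEach utility might special-case random-access lists as follows.
import java.util.List;
import java.util.RandomAccess;
import java.util.function.Consumer;

final class EagerDispatch
{
    static <T> void forEach(Iterable<T> iterable, Consumer<? super T> consumer)
    {
        if (iterable instanceof List && iterable instanceof RandomAccess)
        {
            // An indexed loop avoids creating an Iterator for random-access lists
            List<T> list = (List<T>) iterable;
            for (int i = 0; i < list.size(); i++)
            {
                consumer.accept(list.get(i));
            }
        }
        else
        {
            // Fall back to the general Iterable protocol
            for (T each : iterable)
            {
                consumer.accept(each);
            }
        }
    }
}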

The Futility of Utility
Utility classes can be very useful for extending the capabilities of types without having to expand the interface of those types. Unfortunately, utility classes also have a problem — what type should they return?
You only get one shot
The Iterate utility takes Iterable as a parameter, and usually returns Collection as a result. Collection is the most useful abstract type it can return. Unfortunately, Collection is not as useful as List or Set in terms of communicating the capabilities of the result.
There are methods on Iterate that take a target collection as a parameter and return that same collection as the result. These methods, although slightly more verbose, are the most useful, as they provide the most specific return type.
The following code example illustrates the differences of return type between the overloaded forms of collect on the Iterate class. The method collect in Eclipse Collections is the equivalent of map on Stream. The name collect used in Eclipse Collections comes from the same method name in the Collections framework in the Smalltalk-80 programming language, which has been around since 1980.
@Test
public void collectOnIterate()
{
Set<Integer> set = Set.of(1, 2, 3);
Set<String> expected = Set.of("1", "2", "3");
// Return type of Collection
Collection<String> collect =
Iterate.collect(set, Object::toString);
Assertions.assertEquals(expected, collect);
CopyOnWriteArraySet<String> target = new CopyOnWriteArraySet<>();
// Return type is the same type as the target parameter
CopyOnWriteArraySet<String> collectWithTarget =
Iterate.collect(set, Object::toString, target);
Assertions.assertEquals(expected, collectWithTarget);
}
If I had the opportunity to re-build Iterate from scratch, I would only provide the methods which take a target collection as a parameter and return the same type as the target collection. While these methods are more verbose, they are also more versatile and useful.
Iterate and other static utility classes
In addition to Iterate, there are several other *Iterate static utility classes.

ParallelIterate is the eager/parallel equivalent of Iterate. ParallelIterate also takes Iterable as a parameter.
The following code example shows the equivalent of collect using ParallelIterate.
@Test
public void collectOnParallelIterate()
{
Set<Integer> set = Set.of(1, 2, 3);
Set<String> expected = Set.of("1", "2", "3");
// Return type of Collection
Collection<String> collect =
ParallelIterate.collect(set, Object::toString);
Assertions.assertEquals(expected, collect);
CopyOnWriteArraySet<String> target = new CopyOnWriteArraySet<>();
// Return type is the same type as the target parameter
CopyOnWriteArraySet<String> collectWithTarget =
ParallelIterate.collect(set, Object::toString, target, true);
Assertions.assertEquals(expected, collectWithTarget);
}
ListIterate returns MutableList instead of Collection. If you know you are iterating over a List, then using this utility class gives you access to methods that work with any List, and returns a MutableList, which has the extensive Eclipse Collections API available.
@Test
public void collectOnListIterate()
{
List<Integer> list = List.of(1, 2, 3);
List<String> expected = Lists.mutable.of("1", "2", "3");
// Return type of MutableList
MutableList<String> collect =
ListIterate.collect(list, Object::toString);
Assertions.assertEquals(expected, collect);
// Extensive Eclipse Collections API available on MutableList
Assertions.assertEquals("1, 2, 3", collect.makeString());
MultiReaderList<String> target = Lists.multiReader.empty();
// Return type is the same type as the target parameter
MultiReaderList<String> collectWithTarget =
ListIterate.collect(list, Object::toString, target);
Assertions.assertEquals(expected, collectWithTarget);
}
Over the years, many developers have opted for using ListIterate instead of Iterate. This is primarily because Iterable as a type has never seen widespread use. List and Set are much more commonly used types than either Iterable or Collection. ParallelIterate continues to demonstrate good performance for use cases where parallelism has been proven to be useful.
Lazy Iteration methods for any Iterable type
While Iterate and ParallelIterate provide eager iteration patterns for any Iterable, LazyIterate provides lazy iteration patterns for any Iterable. LazyIterate will create a LazyIterable that then requires a terminal operation to iterate over the collection, similar to how Java Stream operates. Browse the Javadoc below to find out what methods are available.
LazyIterate (Eclipse Collections - 11.1.0)
LazyIterate was created much later than Iterate and ParallelIterate. By the time LazyIterate was created, Eclipse Collections already had an extensive RichIterable interface hierarchy, which included the LazyIterable interface.
Structure
A concise view of the methods is available by using the Structure view in IntelliJ. In this case, concise refers to the method signatures which contain all of the information without as much of the structure.

Example — adapt
The method adapt adapts any Iterable as a LazyIterable, which has the complete API of RichIterable, the interface that LazyIterable extends.
@Test
public void adaptOnLazyIterate()
{
List<Integer> list = List.of(1, 2, 3);
List<String> expected = Lists.mutable.of("1", "2", "3");
// Adapt the List as a LazyIterable
LazyIterable<Integer> iterable = LazyIterate.adapt(list);
// Return type of MutableList
MutableList<String> collectToList = iterable
.collect(Object::toString)
.toList();
Assertions.assertEquals(expected, collectToList);
// Extensive Eclipse Collections API available on MutableList
Assertions.assertEquals("1, 2, 3", collectToList.makeString());
}
Example — collect
The lazy form of collect is very similar to map in Java Stream in terms of how it behaves. The difference between Stream and LazyIterable on the other hand is enormous.
@Test
public void collectOnLazyIterate()
{
List<Integer> list = List.of(1, 2, 3);
List<String> expected = Lists.mutable.of("1", "2", "3");
// Return type of LazyIterable
LazyIterable<String> collect =
LazyIterate.collect(list, Object::toString);
// Return type of MutableList
MutableList<String> toList = collect.toList();
Assertions.assertEquals(expected, toList);
// Extensive Eclipse Collections API available on MutableList
Assertions.assertEquals("1, 2, 3", toList.makeString());
}
The following is the equivalent code for comparison using Java Stream and the map method.
@Test
public void mapOnStream()
{
List<Integer> list = List.of(1, 2, 3);
List<String> expected = Lists.mutable.of("1", "2", "3");
// Return type of Stream
Stream<String> map =
list.stream().map(Object::toString);
// Return type of List
List<String> toList = map.toList();
Assertions.assertEquals(expected, toList);
Assertions.assertEquals("1, 2, 3", toList.stream()
.collect(Collectors.joining(", ")));
}
The Utility of Utility
Utility classes allow you to extend the behavior of types you have no direct control over. In the cases described above, Eclipse Collections has extended the possibilities for all Iterable types in Java. Java Stream is great, but does not work directly with Iterable. Stream also provides mostly lazy behavior.
The Iterate, ParallelIterate and ListIterate utility classes provide eager behaviors for Iterable and List. There can be a challenge when it comes to choosing return types when using utility classes (you get one choice), but ListIterate and LazyIterate show that it is possible to bridge abstract and anemic types like Iterable with rich and fluent types like MutableList and LazyIterable.
Iterate was the first class in Eclipse Collections to provide a rich eager/serial API initially to java.util.Collection, and later to java.lang.Iterable. It has survived a long time in the library, and still continues to provide useful behaviors.
Thank you for reading this blog! I hope you have found it interesting and informational!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
Iterate over any Iterable in Java was originally published in Javarevisited on Medium, where people are continuing the conversation by highlighting and responding to this story.
June 23, 2023
Announcing Eclipse Ditto Release 3.3.0
June 23, 2023 12:00 AM
The Eclipse Ditto team is proud to announce the availability of Eclipse Ditto 3.3.0.
Version 3.3.0 contains features improving merge/PATCH commands, skipping modifications of a twin if the value would be equal after the modification, and a more production-ready Ditto Helm chart.
Adoption
Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto
If you use Eclipse Ditto, it would be great to support the project by putting your logo there.
Changelog
The main improvements and additions of Ditto 3.3.0 are:
- Support replacing certain json objects in a merge/PATCH command instead of merging their fields
- Implicitly convert a merge/PATCH command to a “Create Thing” if the thing does not yet exist
- Provide option to skip a modification in the “twin” if the value “is equal” to the previous value
- Addition of the DevOps API endpoints to Ditto’s OpenAPI definition
- Improve DittoProtocol MessagePath to be aware of message subject
- Support alternative way of specifying “list” query parameters
- UI enhancements:
- Enhance Ditto-UI to dynamically configure log levels of Ditto
- Building and packaging the UI with esbuild
The following non-functional enhancements are also included:
- Provide official Eclipse Ditto Helm chart via Docker Hub and move its sources to Ditto Git repository
- In addition, provide a lot more configuration options and hardening of the chart to make it more feasible for productive use
The following notable fixes are included:
- Fix that redeliveries for acknowledgeable connectivity messages were issued too often
- Fix WoT dispatcher starvation by adding timeouts to fetch models
Please have a look at the 3.3.0 release notes for a more detailed information on the release.
Artifacts
The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.
The Ditto JavaScript client release was published on npmjs.com.
The Docker images have been pushed to Docker Hub:
- eclipse/ditto-policies
- eclipse/ditto-things
- eclipse/ditto-things-search
- eclipse/ditto-gateway
- eclipse/ditto-connectivity
The Ditto Helm chart has been published to Docker Hub.
–
The Eclipse Ditto team
June 14, 2023
WTP 3.30 Released!
June 14, 2023 06:59 PM
Eclipse JKube 1.13 is now available!
June 14, 2023 12:45 PM
On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.13.1 has been released and is now available from Maven Central.
Thanks to all of you who have contributed with issue reports, pull requests, feedback, and spreading the word with blogs, videos, comments, and so on. We really appreciate your help, keep it up!
What's new?
Without further ado, let's have a look at the most significant updates:
- Support for Helm Chart.yaml fragments
- Kubernetes resource Security Hardening profile
- Many other bug-fixes and minor improvements
Support for Helm Chart.yaml fragments
JKube now allows you to use fragments to configure the resulting generated Helm Chart.yaml file.
Until now, it was only possible to customize this file by providing XML or DSL configuration.
With this new enhancement, you can now place a Chart.helm.yaml file in the src/main/jkube directory of your project
and JKube will merge the contents of this file with the generated Chart.yaml file.
For example, the following Chart.helm.yaml file will set the description of the generated Helm chart:
description: The description provided through a fragment
The generated Chart.yaml file will look like this:
apiVersion: v1
name: the-fragment-name
description: The description provided through a fragment
# ...
Kubernetes resource Security Hardening profile
To improve the overall security of the generated Kubernetes resources, JKube now provides a new profile called security-hardening.
The profile enforces a set of rules, the following list contains a few of them:
- Disables the auto-mounting of the service account token.
- Prevents containers from running in privileged mode.
- Ensures containers do not allow privilege escalation.
You can find the complete list of rules in the Kubernetes-Maven-Plugin documentation.
This profile is not enabled by default (opt-in). You can enable it through the plugin configuration:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<configuration>
<profile>security-hardening</profile>
<!-- ... -->
</configuration>
</plugin>
Or through a Maven/Gradle property. For example, in a Maven project you can do this from the command line:
mvn k8s:resource -Djkube.profile=security-hardening
Using this release
If your project is based on Maven, you just need to add the Kubernetes Maven plugin or the OpenShift Maven plugin to your plugin dependencies:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<version>1.13.1</version>
</plugin>
If your project is based on Gradle, you just need to add the Kubernetes Gradle plugin or the OpenShift Gradle plugin to your plugin dependencies:
plugins {
id 'org.eclipse.jkube.kubernetes' version '1.13.1'
}
How can you help?
If you're interested in helping out and are a first-time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.
If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag.
We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.
Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

June 09, 2023
Xtext, monorepo and Maven/Tycho
by Lorenzo Bettini at June 09, 2023 09:38 AM
June 05, 2023
Sweating the small stuff in Java
by Donald Raab at June 05, 2023 03:51 PM
The story of small FixedSizeCollection types in Eclipse Collections
Sometimes, You’re on Your Own
Every once in a while, we are required as application developers to roll up our sleeves and find ways to squeeze performance or memory savings beyond the built-in capabilities of our language and libraries.
I started programming professionally in DOS/Clipper in the late 1980s when 640K was the memory limit, so I was accustomed to memory-constrained programming. I wasn’t used to anything else until I started programming in Smalltalk, where I had access to hundreds of megabytes of memory.
Towards the end of the 1990s, I worked in Smalltalk, loading decent-sized object graphs into memory and doing things at blazing memory speed. Processes that used to run in minutes in DOS/Clipper could be completed in hundreds of milliseconds. I was working with a 32-bit memory constraint, but I never seemed to get close enough to the RAM limit to worry about running out of memory in the domain I was working in.
This was good progress. Memory was plentiful and fast. Life as a programmer was good. I hadn’t yet encountered a big data application. That would happen soon enough.
Big memory meet bigger data
In 2004, I worked on a Java application designed and built using an in-memory caching architecture. I experienced firsthand the saying, “you can’t fit ten pounds of $%*# in a five-pound bag.” I was still working within the confines of a 32-bit, software-imposed memory limit. The hardware had already progressed beyond this and offered tens of gigs of RAM, but it was inaccessible to me. A 64-bit version of Java was available with the JDK 1.4.0 release, but I didn’t have access to it yet.
My choices at the time were simple.
- Abandon the architecture and start with something more scalable but with a different performance profile.
- Wait for a 64-bit JVM.
- Figure out how to make ten pounds of $%*# fit in a five-pound bag.
I went with option number 3. By measuring, executing, and repeating small memory efficiency tricks, I could make the application work. I did things that never would have occurred to me to do in the previous 15 years of my programming career. I felt a bit like Mark Watney (Matt Damon) from the movie The Martian.
In the face of overwhelming odds, I’m left with only one option. I’m gonna have to science the $%*# out of this.
— Mark Watney, The Martian
Step 0: Find Tools To Measure Memory
In 2004, I used jmap -histo <pid> to figure out where I could save memory. Jmap is a command line tool in the JDK for analyzing Java heaps. It still works fine in OpenJDK 20 today. Jmap is a blunt tool that can help you spot glaring issues on the Java heap and measure the overall impact of any changes you make.
Today, I use Java Object Layout (JOL) from the OpenJDK tools for measuring the memory cost of specific objects. JOL gives a more precise and targeted set of information than jmap. There is a JOL plugin for IntelliJ available as well. I have not used the IntelliJ plugin, but some folks I know have said positive things about it.
Once you have JOL included as a Maven dependency, you can use GraphLayout to look at the memory cost and layout of particular instances of objects programmatically. You will see some code examples below that use GraphLayout.
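Here is a minimal sketch of the two JOL entry points used in the examples below, assuming the jol-core library (version 0.17, the version used in this article) is on the classpath. The exact byte counts printed depend on your JVM, object alignment, and compressed-oops settings.
import java.util.ArrayList;
import org.openjdk.jol.info.ClassLayout;
import org.openjdk.jol.info.GraphLayout;

public class JolQuickStart
{
    public static void main(String[] args)
    {
        ArrayList<String> list = new ArrayList<>();
        // Shallow layout of one instance: header, fields, alignment padding
        System.out.println(ClassLayout.parseInstance(list).toPrintable());
        // Deep size of the instance plus everything it references
        System.out.println("Total bytes: " + GraphLayout.parseInstance(list).totalSize());
    }
}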
Step 1: Understand Your Data
I had a large static object graph that was loaded once up front and cached, and then multiple calculated object graphs created from that data graph, also cached, that were explorable in either direction. Every node in the graph had a List of children and a List of parents. The requirement I needed to satisfy was to store two calculated graphs in memory to be explored on demand by the users.
- Total memory required for both data and calculations: ~4–6 GB
- Total memory needed for the application to run: ~7–8 GB
- Available RAM on Solaris with a 32-bit JVM: ~4 GB
I started looking at the heap output using jmap -histo <pid> . Jmap is a simple but effective tool for looking at and starting to understand what is occupying a Java heap. I used jmap for quite a few years, helping folks find waste and trim the fat out of their Java heaps. I once saw a Java heap with two million Boolean objects in it. Yeah, that kind of stuff can happen when you’re not looking. I convinced the team with the two million Boolean objects in their heap that they didn’t need that kind of fault tolerance for boolean and would survive just as well with one instance of each of true and false.
I digress.
In the application I was working on, I saw an extremely large number of List, Set and Map instances in the heap output from jmap, along with the corresponding array instances that occupied the data structures.
What I didn’t know right away was the size of the List, Set and Map instances. So, I dug around in the code. I saw ArrayList being created in multiple places using the default constructor. Anyone who programmed in Java before JDK 7u40 might remember that ArrayList eagerly allocated a backing array of size 10 in its default constructor.
When I was working on this application, we were using JDK 4, seven years before Java 7 was released. I didn’t know it then, but I would later help validate the importance of the change that made ArrayList lazily initialize its size-10 backing array. The benefit for every Java developer is that empty ArrayList instances now stay empty until something is added, which has a significant and lasting impact on the total memory footprint of Java applications. Win!
When I realized all of the List instances would be backed by default-sized arrays, my first thought was to initialize them all using new ArrayList(0). This did have a noticeable benefit right away. Unfortunately, it wasn’t enough savings, and I would go on to discover that most of the List, Set, and Map instances in the heap were of sizes zero through six. I kept investigating where I could save memory.
Step 2: Understanding Array Instances
Arrays are used in a lot of places in Java. Lists have them. Maps have them. Strings have them.
Every time you create a new empty array, you get back a distinct empty array instance. Each empty array is effectively immutable and equal to every other empty array of the same type, because it has no elements and is the same size. Sharing a single empty array is an important optimization I discovered later.
What I couldn’t easily tell was how many of the array instances on the heap were empty. I had to guess. I started by adding an EmptyList class that I could use anywhere I wanted an empty List, which produced reasonable savings for empty lists in the heap. Java 5 would later add Collections.emptyList(), which returns a shared, type-safe immutable instance. Unfortunately, I didn’t know that at the time and couldn’t wait for Java 5.
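For reference, the JDK idiom that eventually covered this case looks like the sketch below. Every call to Collections.emptyList() returns the same shared immutable instance, so there is no per-call allocation.
import java.util.Collections;
import java.util.List;

public class EmptyListSharing
{
    public static void main(String[] args)
    {
        List<String> first = Collections.emptyList();
        List<String> second = Collections.emptyList();
        // Both variables point at the same shared immutable instance
        System.out.println(first == second); // prints true
    }
}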
Step 3: Optimize for Empty Lists
I created an EmptyList that was a singleton and initialized all Lists to this singleton instance. Today, in Eclipse Collections, there are ImmutableEmptyList and EmptyList. EmptyList was created first and implements an interface called FixedSizeList, which extends MutableList and java.util.List. ImmutableEmptyList implements both ImmutableList and java.util.List.
Using EmptyList necessarily complicated the application code, because there is no way to add to a fixed-size empty list. Wherever I used EmptyList, I first had to test whether the List was empty (i.e., size == 0) and, if so, create a new growable List before adding to it, and then assign that new List back to the variable that had pointed at the EmptyList. This increased the cost of testing and implementing every method where List instances were created and grown, to make sure no bugs were introduced. The cost was worth the benefit. A sketch of this pattern follows.
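Here is a minimal sketch of that grow-on-add pattern. The Node class and field names are hypothetical, and EMPTY_LIST stands in for whatever shared empty singleton is in use; the idea is simply to swap in a growable list the first time an element is added.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Node
{
    // Hypothetical shared empty singleton; every Node starts out pointing at it
    private static final List<Node> EMPTY_LIST = Collections.emptyList();

    private List<Node> children = EMPTY_LIST;

    void addChild(Node child)
    {
        if (this.children.isEmpty())
        {
            // Replace the shared empty list with a growable list on first add
            this.children = new ArrayList<>(1);
        }
        this.children.add(child);
    }
}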
Step 4: Optimize for Fixed Size Lists w/ Sizes 1–6
Creating ArrayList with an initial size of zero provided some memory savings, but I was still seeing millions of instances of object arrays due to what I guessed were List instances in the size one to ten range. I created SingletonList, tried it out, and saw some great benefits.
SingletonList holds onto a single Object reference and has no backing array. Then I introduced DoubletonList and saw some more benefits. Then came TripletonList, QuadrupletonList, QuintupletonList and SextupletonList. This is as far as I went in 2004. The savings I saw with all of these changes were dramatic in the dynamic calculation object graph. This was because the design of the object graph was bi-directional, and every node in the graph knew its parents and children. Most nodes in the graph had one parent and most often had one to three children.
The following chart shows the potential savings even today, comparing a default-sized ArrayList (new ArrayList()), a zero-initial-size ArrayList (new ArrayList(0)), and FixedSizeList instances created using Lists.fixedSize.of() from Eclipse Collections.

I wrote the following test using OpenJDK 20 with JOL version 0.17 to print out all the memory sizes in the chart. Here’s the code I used:
@Test
public void fixedSizeListsToSizeSix()
{
ArrayList arrayList = new ArrayList();
System.out.println("ArrayList Empty: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList.add(null);
System.out.println("ArrayList 1: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList.add(null);
System.out.println("ArrayList 2: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList.add(null);
System.out.println("ArrayList 3: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList.add(null);
System.out.println("ArrayList 4: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList.add(null);
System.out.println("ArrayList 5: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList.add(null);
System.out.println("ArrayList 6: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList = new ArrayList(0);
System.out.println("ArrayList 0: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList.add(null);
System.out.println("ArrayList0 1: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList = new ArrayList(0);
arrayList.add(null);
arrayList.add(null);
System.out.println("ArrayList0 2: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList = new ArrayList(0);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
System.out.println("ArrayList0 3: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList = new ArrayList(0);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
System.out.println("ArrayList0 4: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList = new ArrayList(0);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
System.out.println("ArrayList0 5: " +
GraphLayout.parseInstance(arrayList).totalSize());
arrayList = new ArrayList(0);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
arrayList.add(null);
System.out.println("ArrayList0 6: " +
GraphLayout.parseInstance(arrayList).totalSize());
List list = Lists.fixedSize.empty();
System.out.println("FixedSizeList Empty: " +
GraphLayout.parseInstance(list).totalSize());
list = Lists.fixedSize.of((Object)null);
System.out.println("FixedSizeList 1: " +
GraphLayout.parseInstance(list).totalSize());
list = Lists.fixedSize.of(null, null);
System.out.println("FixedSizeList 2: " +
GraphLayout.parseInstance(list).totalSize());
list = Lists.fixedSize.of(null, null, null);
System.out.println("FixedSizeList 3: " +
GraphLayout.parseInstance(list).totalSize());
list = Lists.fixedSize.of(null, null, null, null);
System.out.println("FixedSizeList 4: " +
GraphLayout.parseInstance(list).totalSize());
list = Lists.fixedSize.of(null, null, null, null, null);
System.out.println("FixedSizeList 5: " +
GraphLayout.parseInstance(list).totalSize());
list = Lists.fixedSize.of(null, null, null, null, null, null);
System.out.println("FixedSizeList 6: " +
GraphLayout.parseInstance(list).totalSize());
}
In these examples, I used null as the elements, so the memory cost of the lists was the only thing on display. Savings add up quickly here when you have millions of EmptyList, SingletonList, DoubletonList, and TripletonList instances. There were smaller numbers of QuadrupletonList, QuintupletonList and SextupletonList, but enough that the memory savings mattered.
Step 5: Optimize for Fixed Size Sets w/ Sizes 0–4
Nothing will prepare you for discovering how terrible the memory footprint of the java.util.HashSet class is. HashSet is a suboptimal class that can grow quietly in your heap if you’re not careful to use it sparingly and dispose of instances when you are done with them. HashSet performs very well for most use cases where a Set is required, but the cost is unnecessarily high. That cost is a product of HashSet delegating internally to a HashMap.
The following chart shows the memory savings today, comparing a default-sized HashSet (new HashSet()), a zero-initial-size HashSet (new HashSet(0)), and FixedSizeSet instances created using Sets.fixedSize.of() from Eclipse Collections.

I wrote the following test using OpenJDK 20 with JOL version 0.17 to print out all the memory sizes in the chart. Here’s the code I used:
@Test
public void fixedSizeSetsToSizeFour()
{
HashSet hashSet = new HashSet();
System.out.println("HashSet Empty: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet.add(new Object());
System.out.println("HashSet 1: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet.add(new Object());
System.out.println("HashSet 2: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet.add(new Object());
System.out.println("HashSet 3: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet.add(new Object());
System.out.println("HashSet 4: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet = new HashSet(0);
System.out.println("HashSet 0: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet.add(new Object());
System.out.println("HashSet0 1: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet = new HashSet(0);
hashSet.add(new Object());
hashSet.add(new Object());
System.out.println("HashSet0 2: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet = new HashSet(0);
hashSet.add(new Object());
hashSet.add(new Object());
hashSet.add(new Object());
System.out.println("HashSet0 3: " +
GraphLayout.parseInstance(hashSet).totalSize());
hashSet = new HashSet(0);
hashSet.add(new Object());
hashSet.add(new Object());
hashSet.add(new Object());
hashSet.add(new Object());
System.out.println("HashSet0 4: " +
GraphLayout.parseInstance(hashSet).totalSize());
Set set = Sets.fixedSize.empty();
System.out.println("FixedSizeSet Empty: " +
GraphLayout.parseInstance(set).totalSize());
set = Sets.fixedSize.of(new Object());
System.out.println("FixedSizeSet 1: " +
GraphLayout.parseInstance(set).totalSize());
set = Sets.fixedSize.of(new Object(), new Object());
System.out.println("FixedSizeSet 2: " +
GraphLayout.parseInstance(set).totalSize());
set = Sets.fixedSize.of(new Object(), new Object(), new Object());
System.out.println("FixedSizeSet 3: " +
GraphLayout.parseInstance(set).totalSize());
set = Sets.fixedSize.of(new Object(), new Object(), new Object(), new Object());
System.out.println("FixedSizeSet 4: " +
GraphLayout.parseInstance(set).totalSize());
}
With sets, I needed to add elements that had a unique hashCode and equals combination. If I had used null, there would only have been one element in each Set. So there is an extra 16 bytes of overhead for each element in the set; just multiply 16 by the Set size to determine the extra overhead for the elements.
Step 6 and Beyond
There were other important lessons I learned over the years about how to save memory. Object pooling is usually most beneficial when loading data from external storage (e.g., a database) into long-lived objects in memory. Immutable objects like String and LocalDate are good candidates for pooling. You have to understand the makeup of your data and have a decent number of duplicate strings and dates to see the benefit of pooling. I will not go further into object pooling in this article, as it is a topic worthy of its own post.
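Still, to illustrate the core idea only (the class and method names here are hypothetical, not from the original application), a minimal pooling sketch for LocalDate might look like the following. Duplicate dates read from storage all resolve to a single shared instance.
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

final class LocalDatePool
{
    private final Map<LocalDate, LocalDate> pool = new HashMap<>();

    LocalDate intern(LocalDate date)
    {
        // Returns the previously pooled instance, or remembers and returns this one
        LocalDate existing = this.pool.putIfAbsent(date, date);
        return existing == null ? date : existing;
    }
}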
Finally, if you know the lower and upper bounds of your numeric data, and you know the values will stay within those bounds, you can also save memory by using smaller integral or floating-point types to store data in long-lived objects in memory. Using byte, short, and int instead of long, combined with the alignment and padding in the memory layout, can help you fit more into an object. Be careful to widen your type when doing math (e.g., summing or counting), because you may otherwise encounter silent overflow errors. The short sketch below shows the widening idiom.
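As a small illustration (the data here is made up), values stored as short are widened to long before summing so the accumulation cannot silently overflow.
public class WidenBeforeSumming
{
    public static void main(String[] args)
    {
        // Quantities stored compactly as short in a long-lived object
        short[] quantities = {10_000, 20_000, 30_000};
        long total = 0L;
        for (short quantity : quantities)
        {
            // Each short value is widened to long before the addition
            total += quantity;
        }
        System.out.println(total); // prints 60000
    }
}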
The JDK is constantly improving in memory efficiency and performance, so new help is always on the way. The future of Java is looking good.
No One Has Ever Been Stranded on Mars
Mark Watney was a character in a movie. The story made a great film with memorable quotes, and that is all. No human that we know of has ever been to Mars.
You may never have encountered, or even heard of, an application that creates millions of small List, Set, and Map instances and holds onto them in memory for a long time. That is, maybe, until now. I had never seen or heard of an application like this until I started working on one in 2004.
There is one potential gotcha with this small-collection memory savings strategy: you might wind up trading away some performance for the memory savings. I was faced with the problem of getting an application working at all. I wasn’t concerned with how fast the calls to methods on the collection classes were, or whether megamorphic virtual calls in hot code paths would cause significant slowdowns.
When the application was finally running, it was so fast in memory we didn’t notice any bottlenecks caused by megamorphic virtual calls. If performance and throughput are your biggest concern, I recommend using JMH or other performance profilers to measure specific hotspots for tuning. I recommend only tuning for performance if you see a specific performance issue. You’ll be in the best position to determine whether memory or performance is the biggest concern for your application.
I hope you never face this kind of memory problem in the applications you work on today. If you encounter this kind of situation in the future (hey, we might see real folks go to Mars one day), you can leverage the knowledge you have gained here to help you address any memory issues you have. All of the small List, Set, and Map implementations exist in Eclipse Collections today. The JDK also has small immutable List and Set implementations (List12, ListN, Set12, SetN) that optimize for both throughput and memory. I wish I had had access to these classes in the JDK in 2004.
The following chart compares the memory cost of the JDK and Eclipse Collections ImmutableList implementations up to size 11.

If you’re stranded on Mars with an application that won’t work and memory savings for small List, Set and Map instances are exactly what you need to help you, then Eclipse Collections might be your best option. When it comes to raw throughput performance with some great memory savings for very small collections, the JDK (after Java 11) may be the best option today. The JDK now offers a memory-efficiency option that it didn’t have in 2004 when I was stuck working on my application on Mars. Progress is a good thing. Your mileage may vary.
Tracer Bullets in your Java Heap
There is an additional subtle benefit to having named versions of all of the small List, Set, and Map instances in your Java heap. The named classes show up in both jmap and JOL output. When used, these classes tell you more about the distribution of sizes of your collections in memory. Seeing that you have a large number of ArrayList instances tells you nothing about their sizes.
If we run the following code and print its JOL footprint, you will see the size distribution of your Lists when using Eclipse Collections ImmutableList implementations. The ImmutableList types are similar to their FixedSizeList counterparts but are hand-optimized up to size ten instead of six.
@Test
public void ecListOfToSizeEleven()
{
ImmutableList<String>[] array = new ImmutableList[]{
Lists.immutable.of(),
Lists.immutable.of(""),
Lists.immutable.of("", ""),
Lists.immutable.of("", "", ""),
Lists.immutable.of("", "", "", ""),
Lists.immutable.of("", "", "", "", ""),
Lists.immutable.of("", "", "", "", "", ""),
Lists.immutable.of("", "", "", "", "", "", ""),
Lists.immutable.of("", "", "", "", "", "", "", ""),
Lists.immutable.of("", "", "", "", "", "", "", "", ""),
Lists.immutable.of("", "", "", "", "", "", "", "", "", ""),
Lists.immutable.of("", "", "", "", "", "", "", "", "", "", "")
};
Assertions.assertEquals(496L,
GraphLayout.parseInstance(array).totalSize());
System.out.println(GraphLayout.parseInstance(array).toFootprint());
}
The following is the output from JOL after calling GraphLayout.parseInstance(array).toFootprint().
Note: I shortened the package names manually to remove the scroll bar.
COUNT AVG SUM DESCRIPTION
1 16 16 [B
1 64 64 [Ljava.lang.String;
1 24 24 java.lang.String
1 16 16 org.ecl.co.impl.list.imm.ImmutableArrayList
1 56 56 org.ecl.co.impl.list.imm.ImmutableDecapletonList
1 24 24 org.ecl.co.impl.list.imm.ImmutableDoubletonList
1 16 16 org.ecl.co.impl.list.imm.ImmutableEmptyList
1 48 48 org.ecl.co.impl.list.imm.ImmutableNonupletonList
1 48 48 org.ecl.co.impl.list.imm.ImmutableOctupletonList
1 32 32 org.ecl.co.impl.list.imm.ImmutableQuadrupletonList
1 32 32 org.ecl.co.impl.list.imm.ImmutableQuintupletonList
1 40 40 org.ecl.co.impl.list.imm.ImmutableSeptupletonList
1 40 40 org.ecl.co.impl.list.imm.ImmutableSextupletonList
1 16 16 org.ecl.co.impl.list.imm.ImmutableSingletonList
1 24 24 org.ecl.co.impl.list.imm.ImmutableTripletonList
15 496 (total)
Consider the following code and JOL output for the small immutable collections in the JDK to show the difference where class names do not give you as much information.
@Test
public void jdkListOfToSizeEleven()
{
List<String>[] array = new List[]{
List.of(),
List.of(""),
List.of("", ""),
List.of("", "", ""),
List.of("", "", "", ""),
List.of("", "", "", "", ""),
List.of("", "", "", "", "", ""),
List.of("", "", "", "", "", "", ""),
List.of("", "", "", "", "", "", "", ""),
List.of("", "", "", "", "", "", "", "", ""),
List.of("", "", "", "", "", "", "", "", "", ""),
List.of("", "", "", "", "", "", "", "", "", "", "")
};
Assertions.assertEquals(776L,
GraphLayout.parseInstance(array).totalSize());
System.out.println(GraphLayout.parseInstance(array).toFootprint());
}
JOL Output:
COUNT AVG SUM DESCRIPTION
1 16 16 [B
10 43 432 [Ljava.lang.Object;
1 16 16 java.lang.Object
1 24 24 java.lang.String
2 24 48 java.util.ImmutableCollections$List12
10 24 240 java.util.ImmutableCollections$ListN
25 776 (total)
You’ll notice that there are two instances of List12 and ten instances of ListN. If you know that List12 should actually be read as ListOneTwo and not ListTwelve, you will at least know that all instances have a size of one or two.
In 2004, once we had made the changes to our code using the small FixedSizeCollection implementations, we got immediate feedback on the changes to the size distributions of our small collections any time we looked at jmap -histo <pid>. Changes in code or data could have caused changes in these sizes.
Best of Luck on Your Journey
After almost 20 years, I am telling you this story because you can now buy a MacBook Pro with as many cores (12) and as much memory (96 GB) as I had access to between 2004 and 2010 on big Solaris servers. I don’t know if this will result in more applications with large heaps and lots of collections being built now, but I do have to believe it will not result in fewer.
If you’re heading out for your own Martian application experience soon, remember that there may be solutions available for memory waste issues you might encounter. The JDK is continually improving and finding ways to save memory that were out of reach for me two decades ago. The Core JDK team has prioritized balancing throughput with memory savings, which is great. Eclipse Collections focused primarily on memory savings in the ImmutableCollection implementations, based on my challenging experience in 2004.
Project Lilliput and Project Valhalla will be two of the most important changes to the JDK regarding memory savings and performance. Both projects are complementary. If you’ve never heard of these OpenJDK projects, you should check out the links I provided to them above. We are fortunate to have such an amazing language and library as the JDK that continues to evolve after 28 years.
There are two articles from Aleksey Shipilёv I would recommend reading whether or not you find yourself in a dire situation involving memory or performance:
JVM Anatomy Quark #24: Object Alignment
The Black Magic of (Java) Method Dispatch
Thank you for reading this story. I hope you find the lessons and information in this story useful in your travels.
If you’re interested in getting started and finding out more about Eclipse Collections, I can recommend the following blog series.
Enjoy!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
Sweating the small stuff in Java was originally published in Better Programming on Medium, where people are continuing the conversation by highlighting and responding to this story.
May 24, 2023
Blog-is-fear
by Donald Raab at May 24, 2023 01:33 AM
The blogosphere is intimidating. Build confidence through practice.
Note: I originally shared this on LinkedIn as an adhoc New Years 2023 post. I thought it was an important enough message to share in its own blog.
You don’t have the time not to write
Many folks have told me over the years they would like to blog but don’t have the time, or don’t know what to blog about. My usual response… Just do it.
No one has the time. Folks who seem to have the time are merely prioritizing their time differently.
Not knowing what to blog about is a confidence problem. Everyone has something to write about. No one wants to write something that isn’t high quality or isn’t well liked. There is a natural fear of public embarrassment. The first blog you write will not be great and probably will not be well liked. It will also probably not get many reads. Set your expectations low and get ready to practice. Find a friend or two to read your first blog before posting if you’re worried. If you’re worried about your employer, then read your social media policy and/or talk to your manager first. The confidence problem is feeding the perceived time problem because folks think that it will take a long time to find a topic, or a long time to write about something that is great, or to get approval if necessary from their employer. This is a cycle of despair. Break the cycle by doing something.
Compared to most developers, I write a lot. I published 34 blogs on Medium in 2022. I have always written a lot, just not publicly until five years ago. I wrote a lot of poetry in high school. Some of it was decent enough to get published. I submitted a lot of the poetry I wrote to my high school literary magazine. I was the top contributor for three years in a row. I was definitely not the best writer. I just submitted the most writing. The practice, and candid feedback I received helped me to become a better writer. I’ve published some of my high school poetry on my Medium blog. I’m terrified sharing poetry from my teenage years. That is why I share it. It’s good to do things that terrify you occasionally. I care about what people think. I also don’t care about what people think.
To say farewell to 2022, I wrote a final blog on December 30th. I did not really have the time to write this blog. I prioritized my time on the 31st to spend with family to celebrate New Years. By the time I was ready to publish, it was already 2am on the 31st. I published and went to sleep.
I wanted to write this blog to show how unstructured blogging can be. A blog doesn’t have to be Shakespearean or Dickensian to be good enough to share. The blog I wrote is about nothing in particular, but the topics have some common meandering relationships and a final destination… the end of a stream of consciousness. It is a babbling brook, and potpourri. Each topic could have been a blog on its own.
I forgot how to write this blog
I forgot how to write this blog, but I wrote it anyway. Enjoy, and Happy New Year! If you made a resolution to blog in 2023, I hope I get to read some of your writing. Best of luck!
I am the creator of and a Committer for the Eclipse Collections OSS project which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
May 17, 2023
Eclipse Cloud DevTools Digest - March and April, 2023
by John Kellerman at May 17, 2023 06:52 PM
March and April, 2023
An Emerging Open VSX Working Group! - Based on discussions with various stakeholders, we have proposed a new working group specifically for the Open VSX Registry. Spoiler alert: it is starting, and a formal announcement is coming soon.

Rodrigo Pinto: Eclipse Cloud DevTools Contributor of the Month! - The Eclipse Cloud DevTools contributor award for March goes to Rodrigo Pinto of Ericsson for his significant contributions to Trace Compass Cloud and the Eclipse Cloud DevTools Ecosystem.
Eclipse Cloud DevTools Contributor Award: Eclipse Theia Community Release - The Eclipse Cloud DevTools contributor award for April goes to STMicroelectronics for initiating the Theia Community Release. The community release is a new, special type of release done every three months in addition to the monthly releases.
Eclipse JKube 1.12 is now available! - Enhancements include cron job controller generation, resource limits through XML/DSL configuration, and concurrent remote development sessions.
Eclipse Theia 1.36 Release: News and Noteworthy - Enhancements include a terminal view context menu, improved outline view expansion, improved options for debug sessions, and support for VS Code extensions up to version 1.72.2.
The Eclipse Theia Community Release 2023-02 - Included are detachable web and terminal views, improved VS Code extension API, and improvements to help adopters.
Cloud Tool Time Webinars
We are now scheduling Cloud Tool Time webinars for 2023. Be sure to sign up now to get on the calendar and let us help tell your story. You can see past sessions on our YouTube channel.
Eclipse Cloud DevTools Projects
Explore the Eclipse Cloud DevTools ecosystem! Check out our projects page to find out more about open source innovation for cloud IDEs, extension marketplaces, frameworks and more.
Getting Listed on the Cloud DevTools Blog
If you are working with, or on, anything in the Cloud DevTools space, learn how to get your writings posted in our blog section.
Eclipse Cloud DevTools Contributor Award: Yining Wang for Contributions to Open VSX
by John Kellerman at May 17, 2023 05:03 PM
The Eclipse Cloud Developer Tools contributor award for this month goes to Yining Wang from Ericsson for her contributions to github.com/eclipse/openvsx and github.com/EclipseFdn/open-vsx.org, its deployment at Open VSX Registry.

The Open VSX Registry at open-vsx.org is a vendor-neutral open-source alternative to the Visual Studio Marketplace for VS Code extensions. A public instance of the Registry is hosted by the Eclipse Foundation, but more instances can be freely deployed in public or private places. The primary purpose of this project is to provide a marketplace for VS Code extensions that can be used with Eclipse Theia and other IDEs. Another important goal is to allow self-hosting the Registry, e.g. within a company network. None of this is currently possible with the VS Marketplace, which is proprietary and may be accessed only from Visual Studio products. These projects are part of the Eclipse Cloud DevTools Working Group.
Yining has made a significant impact on the Open VSX project. She works with Ericsson on internal projects as well as directly on the Open VSX code base. With her open and supportive work, she has already helped people around the globe, internal to Ericsson and external, in both English and Mandarin. She has been laying the groundwork for a more stable Open VSX by removing background noise tasks. In addition, as a result of her hard work and contributions, Yining has been nominated for committer status.
Thanks to Yining for your contributions and congratulations on winning this award!
The Cloud DevTools Working Group provides a vendor-neutral ecosystem of open-source projects focused on defining, implementing and promoting best-in-class web and cloud-based development tools. It is hosted at the Eclipse Foundation; current members of the group include AMD, Arm, EclipseSource, Ericsson, Obeo, RedHat, Renesas, STMicroelectronics and TypeFox.
This Eclipse Cloud DevTools contributor award is sponsored by EclipseSource, providing consulting and implementation services for web-based tools, Eclipse GLSP, Eclipse Theia, and VS Code.
Blog Series: Getting Started with Eclipse Collections
by Donald Raab at May 17, 2023 05:19 AM
Every day we get to learn something new and useful is a good day.
Start Here
You are here. Welcome to the first day of learning a new way of coding with collections in Java using Eclipse Collections. Today is going to be a good day. Thank you for letting my series of four blogs on Getting Started with Eclipse Collections be a part of your day.
Read the blogs in this series (linked below) at a pace that you find comfortable. Jump around, skip sections… find your own path that works best for learning. The blogs in this series are a comprehensive beginner’s reference guide. I highly recommend complementing the knowledge you obtain in these blogs with hands-on practice using the Eclipse Collections Kata. The information contained in these blogs will help you learn many of the basic usage patterns of the Eclipse Collections library. Alternating between the theory in the blogs, and the practice in the katas will help you hone your skills, and strengthen your knowledge and understanding.
These blogs are a mile marker on a journey I started back in 2004. Almost twenty years ago, I was a Technical Architect at Goldman Sachs who had decided to move with his family to London on a year-long business trip. I had some problems to solve while I was working in London, and I worked hard to solve them. I did not expect to return from that trip with the beginnings of a new collections library in Java, but that is exactly what happened. What you will see in the blogs that follow is the evolution of a library that was developed to satisfy the real needs of large-scale enterprise applications developed in the back office of Goldman Sachs. It would take a full eight years for anyone outside of Goldman Sachs to be able to see and use this code as an open source library in their own applications. In 2012, what we now know as Eclipse Collections was originally open sourced as GS Collections.
And here you are. Waiting to learn. Enough back story. Let’s get you started. The following sections have links to the blogs in the series.
Part 1 — Creating Collections
Know why you need Eclipse Collections — the new types it offers and how it enhances other familiar Collection types. In this blog, you will learn how to create Lists, Sets, Bags, Stacks, Maps and other collection types in Eclipse Collections.
Getting Started with Eclipse Collections — Part 1
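As a small taste of what Part 1 covers, here is a sketch assuming the Eclipse Collections factory classes from org.eclipse.collections.impl.factory are on the classpath.
import org.eclipse.collections.api.bag.MutableBag;
import org.eclipse.collections.api.list.ImmutableList;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.api.set.MutableSet;
import org.eclipse.collections.impl.factory.Bags;
import org.eclipse.collections.impl.factory.Lists;
import org.eclipse.collections.impl.factory.Sets;
import org.junit.jupiter.api.Test;

public class CreatingCollectionsTest
{
    @Test
    public void creatingCollections()
    {
        MutableList<String> list = Lists.mutable.of("a", "b", "c");
        ImmutableList<String> immutableList = Lists.immutable.of("a", "b", "c");
        MutableSet<String> set = Sets.mutable.of("a", "b");
        MutableBag<String> bag = Bags.mutable.of("a", "a", "b"); // bags count duplicate occurrences
    }
}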
Part 2 — Adding to and removing from Collections
If you thought you knew everything you needed to know about adding to and removing from collections, then you need to read this blog. Learn about the covariantly overridden methods with, without, withAll, withoutAll, along with other specialized methods.
Getting Started with Eclipse Collections — Part 2
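A tiny sketch of the with/without style covered in Part 2 follows, assuming a MutableList from Eclipse Collections. On a growable list these methods add or remove and return the same list, while fixed-size and immutable variants return new instances.
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;
import org.junit.jupiter.api.Test;

public class WithAndWithoutTest
{
    @Test
    public void withAndWithout()
    {
        MutableList<Integer> numbers = Lists.mutable.of(1, 2, 3);
        MutableList<Integer> more = numbers.with(4);   // [1, 2, 3, 4]
        MutableList<Integer> fewer = more.without(1);  // [2, 3, 4]
    }
}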
Part 3 — Converting between Collection types
Are you satisfied with the method toList on Java’s Stream? Do you find yourself longing for converter methods named toSet and toMap as well? Then read this blog and find out just how many converter methods you have been living without.
Getting Started with Eclipse Collections — Part 3
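A short sketch of the converter-method style covered in Part 3, again assuming Eclipse Collections types (the lambdas are only illustrative):
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.api.map.MutableMap;
import org.eclipse.collections.api.set.MutableSet;
import org.eclipse.collections.impl.factory.Lists;
import org.junit.jupiter.api.Test;

public class ConverterMethodsTest
{
    @Test
    public void converterMethods()
    {
        MutableList<Integer> numbers = Lists.mutable.of(3, 1, 2);
        MutableSet<Integer> asSet = numbers.toSet();
        MutableList<Integer> sorted = numbers.toSortedList(); // [1, 2, 3]
        MutableMap<String, Integer> byName = numbers.toMap(String::valueOf, each -> each);
    }
}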
Part 4 — Processing information in Collections
Collections contain information in them, and they provide methods that you can use to process that information. This blog covers the methods forEach, select, reject, partition, collect, detect, any/all/noneSatisfy and count. There are also fun FizzBuzz (🥤) and Lego Brick (🔴🔴🔴) examples using emojis in code to help demonstrate these methods.
Getting Started with Eclipse Collections — Part 4
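And a short sketch of some of the processing methods covered in Part 4, with made-up data and Eclipse Collections on the classpath:
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;
import org.junit.jupiter.api.Test;

public class ProcessingInformationTest
{
    @Test
    public void processingInformation()
    {
        MutableList<Integer> numbers = Lists.mutable.of(1, 2, 3, 4, 5);
        MutableList<Integer> evens = numbers.select(each -> each % 2 == 0);  // [2, 4]
        MutableList<Integer> odds = numbers.reject(each -> each % 2 == 0);   // [1, 3, 5]
        MutableList<String> strings = numbers.collect(Object::toString);     // ["1", "2", "3", "4", "5"]
        Integer firstEven = numbers.detect(each -> each % 2 == 0);           // 2
        boolean anyNegative = numbers.anySatisfy(each -> each < 0);          // false
        int evenCount = numbers.count(each -> each % 2 == 0);                // 2
    }
}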
Further Reading
If you’re reading this right now, then you have found my Medium Account blog. I have been blogging about Eclipse Collections for almost six years. There are over 150 blogs on my Medium Account. I will share the blogs below that I think will help you the most if you want to continue your journey of learning Eclipse Collections.
Blog Series: The missing Java data structures no one ever told you about
These are more advanced topics about Eclipse Collections. The blogs in this series will help you understand how Eclipse Collections expands far beyond what is available in the standard Java Collections library.
Blog Series: The missing Java data structures no one ever told you about
Java Streams are great but it’s time for better Java Collections
Java Streams were a great addition to the standard library, but there is so much more that is missing from Java Collections. Read this blog to learn more.
Java Streams are great but it’s time for better Java Collections
Ten reasons to use Eclipse Collections
Ten reasons to use Eclipse Collections
Have a great Journey!
Thank you for taking the time to read! I hope you have learned something new and useful. Good luck on your journey, and I hope you will share any new found knowledge with others you encounter along the way.
I am the creator of and a Committer for the Eclipse Collections OSS project which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions.
May 15, 2023
OpenAPI Generator supports N4JS
by n4js dev (noreply@blogger.com) at May 15, 2023 10:10 AM
We are happy to announce that the OpenAPI Generator (version >= 6.6.0) supports N4JS client generation. This enables N4JS users to generate n4jsd files from OpenAPI specifications.
Our extension to the OpenAPI Generator supports the generation of client APIs and provides several command line options:
- For API and model files, a target path can be specified. For API files, a file prefix can be specified.
- API files can be generated with calls to check methods that detect whether mandatory parameters are missing.
- Additionally, generated API files can also contain checks for whether the given objects contain more fields than necessary, which can prevent sending superfluous data to the server.
- A default implementation for the actual REST calls is part of the generated file set.
Detailed specifications of the N4JS OpenAPI Generator and its command line options can be found at the official OpenAPI Generator website.
by Marcus Mews
April 20, 2023
Eclipse IDE – What the future Holds!
by Manoj NP at April 20, 2023 04:45 AM
The Eclipse IDE Working Group Steering Committee recently discussed the future of the Eclipse IDE — many of you must have seen this LinkedIn post. Stakeholders from different companies met in Frankfurt to discuss the way forward for the Eclipse IDE.

The stakeholders included a variety of roles — customers, partners, developers, founders, evangelists and more. This was not just a discussion; they came prepared with ideas on how the Eclipse IDE could evolve in the near and distant future — and brainstormed how to get there.

Aren’t you curious to find out what happened in that room? What did they discuss? What is the vision of the Eclipse IDE Working Group?
They are planning to let you in on all of it.
On April 26th, 12:00 UTC — Come be a part of Eclipse!
Join Zoom Meeting
https://eclipse.zoom.us/j/82041269789?pwd=M1Jpaysrd1Y2R3RQOFZtTnk3bGtjUT09
Meeting ID: 820 4126 9789
Passcode: 875753
April 12, 2023
Eclipse Cloud DevTools Contributor Award: Eclipse Theia Community Release
by John Kellerman at April 12, 2023 02:31 PM
The Eclipse Cloud DevTools contributor award for this month goes to STMicroelectronics for initiating the Theia Community Release. The community release is a new, special type of release done every three months in addition to the monthly releases.
Selecting the right release cycle is crucial for software projects, including open source projects. The general trend is towards short release cycles, allowing fast deployment of innovations and fixes. Eclipse Theia takes this approach with a monthly release schedule by default. However, Theia is very often used as a platform for custom products. For many of these adopters, a monthly update of the base technology is often not the best fit. Furthermore, several other technologies provide integrations with Theia and are looking for a good point in time to ensure compatibility.

Source: Pixabay
To solve the balancing act between different release cycles, STMicroelectronics initiated the community release for the Eclipse Theia project. Community releases are published every three months and are derived from the monthly releases. Community releases have a longer consolidation period, allowing integrators to ensure compatibility with the new version.
Theia has recently completed its second community release (2023-02), and the process was well received by adopters and contributors. Initiating the idea of a community release is a nice example of the openness of the Eclipse Theia ecosystem. Not only can contributors influence the technical direction of a project, but the community can also influence and improve the underlying development processes and the project’s governance.
Thanks to STMicroelectronics for this great initiative!
The Cloud DevTools Working Group provides a vendor-neutral ecosystem of open-source projects focused on defining, implementing and promoting best-in-class web and cloud-based development tools. It is hosted at the Eclipse Foundation; current members of the group include AMD, Arm, EclipseSource, Ericsson, Obeo, RedHat, Renesas, STMicroelectronics and TypeFox.
This Eclipse Cloud DevTools contributor award is sponsored by EclipseSource, providing consulting and implementation services for web-based tools, Eclipse GLSP, Eclipse Theia, and VS Code.
April 03, 2023
Eclipse JKube 1.12 is now available!
April 03, 2023 03:45 PM
On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.12.0 has been released and is now available from Maven Central.
Thanks to all of you who have contributed with issue reports, pull requests, feedback, and spreading the word with blogs, videos, comments, and so on. We really appreciate your help, keep it up!
What's new?
Without further ado, let's have a look at the most significant updates:
- Support for CronJob controller generation
- Setting resource limits through XML/DSL configuration
- Concurrent Remote Dev sessions
- Many other bug-fixes and minor improvements
Setting resource limits through XML/DSL configuration
You can now set resource limits for your containers through XML/DSL configuration.
The following code snippet shows how you can leverage this new feature in your pom.xml configuration:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<configuration>
<resources>
<controller>
<containerResources>
<requests>
<cpu>1337m</cpu>
<memory>42Gi</memory>
</requests>
<limits>
<cpu>1337m</cpu>
<memory>42Gi</memory>
</limits>
</containerResources>
</controller>
</resources>
</configuration>
</plugin>
Using this release
If your project is based on Maven, you just need to add the Kubernetes Maven plugin or the OpenShift Maven plugin to your plugin dependencies:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<version>1.12.0</version>
</plugin>
If your project is based on Gradle, you just need to add the Kubernetes Gradle plugin or the OpenShift Gradle plugin to your plugin dependencies:
plugins {
id 'org.eclipse.jkube.kubernetes' version '1.12.0'
}
How can you help?
If you're interested in helping out and are a first-time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.
If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag.
We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.
Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

March 30, 2023
An Emerging Open VSX Working Group!
by John Kellerman at March 30, 2023 04:26 PM
A little over a month ago, I wrote a blog about the need for us in the Eclipse Community to establish a sustainable, long term funding and operational model for the Open VSX Registry at open-vsx.org. The current deployment hosts over 2,600 extensions from over 1,600 different publishers and is a critical, vendor-neutral resource for development environments consuming VSX extensions, including those based on Eclipse Theia.

Based on discussions with various stakeholders, we proposed a new working group specifically for the Open VSX Registry and began looking for interested organizations. I'm pleased to say that we have had a great deal of interest and are nearing critical mass to start the working group. As part of the Eclipse Foundation Working Group Process, one of our first steps will be a meeting of interested organizations. We will be hosting a Zoom call, open to all, on Tuesday, April 4, at 11:00 AM EDT. If you would like to be added to the calendar entry, email us at collaborations@eclipse-foundation.org, or simply join using the following coordinates.
Join Zoom Meeting
https://eclipse.zoom.us/j/83676771936?pwd=SElaa1J3eXB3UWxkTkZYNTU2eXR6QT09
Meeting ID: 836 7677 1936
Passcode: 812536
One tap mobile
+13017158592,,83676771936#,,,,*812536# US
Find your local number: https://eclipse.zoom.us/u/keq9zOLgD4
We’ll have a mailing list for the working group online soon. This call will be recorded and posted to that mailing list, which will be our main communication venue for the working group.
Thanks and we're looking forward to seeing you.
March 29, 2023
The Jakarta EE 2023 Developer Survey is now open!
by Tanja Obradovic at March 29, 2023 09:24 PM
It is that time of the year: the Jakarta EE 2023 Developer Survey is open for your input! The survey will stay open until May 25th.
I would like to invite you to take this year’s six-minute survey for the chance to share your thoughts and ideas for future Jakarta EE releases, and to help us discover the uptake of the latest Jakarta EE versions and the trends that inform industry decision-makers.
Please share the survey link and reach out to your contacts: Java developers, architects, and stakeholders in the enterprise Java ecosystem. Invite them to participate in the 2023 Jakarta EE Developer Survey!
March 21, 2023
Organising Your Eclipse Open Source Project Team
March 21, 2023 12:00 AM
March 15, 2023
WTP 3.29 Released!
March 15, 2023 10:59 PM
New SLSA++ Survey Reveals Real-World Developer Approaches to Software Supply Chain Security
March 15, 2023 12:00 PM
Answering even basic questions about software supply chain security has been surprisingly hard. For instance, how widespread are the different practices associated with software supply chain security? And do software professionals view these practices as useful or not? Easy or hard? To help answer these and related questions, Chainguard, the Eclipse Foundation, the Rust Foundation, and the Open Source Security Foundation (OpenSSF) partnered to field a software supply chain security survey. The questions were primarily, but not exclusively, derived from the security requirements associated with the Supply-chain Levels for Software Artifacts (SLSA) supply chain integrity framework version 0.1 (the version when the survey was conducted), hence SLSA++.
In light of the recent White House National Cybersecurity Strategy, which emphasizes that organizations should use best practices and frameworks for secure software development, it’s important to understand how the individual contributors responsible for this work, like developers, open source maintainers and security practitioners, are adopting software supply chain security practices and guidelines. The new SLSA++ survey provides insights into these trends, what’s working and what’s not working.
The survey, conducted in the summer and fall of 2022, includes data from nearly 170 respondents at a wide range of organizations, large and small, some security-focused in their role and others not. All respondents answered a series of questions for ten different software supply chain security practices. Three key findings stand out:
Some software supply chain security practices are already widely adopted.
Many practices already have strong or moderate adoption. For instance, over half of the respondents report always using a centralized build service. Other practices, such as digital signatures, were practiced less often: only 25% of respondents reported that their team always signs built artifacts. These findings are consistent with Google’s 2022 State of DevOps report.
Most practices are considered helpful though there is surprisingly little variation in the perceived level of helpfulness.
For each software supply chain security practice in the survey, at least 50% of the respondents labeled the practice as either extremely helpful or very helpful. Surprisingly though, the perceived helpfulness varies only slightly from practice to practice among the practices surveyed. Finally, the extent to which a participant views a particular practice as helpful is positively correlated with the likelihood that the participant’s organization adopts that practice. Whether practices are viewed as helpful and then adopted, or adopted first and then viewed as helpful, can’t be determined from the survey data.
Some SLSA practices are considered substantially more difficult than others.
Hermetic builds and reproducible builds were considered much more difficult than the other practices. Over 50% of respondents stated that those practices were either extremely difficult or very difficult. Other practices, such as scanning container images, were considered relatively easy. Additionally, the perceived difficulty of these practices had no statistically significant relationship with adoption.
In summary, the survey results suggest that software supply chain security practices are not an unattainable ideal. Some software supply chain security practices already enjoy widespread adoption. Also importantly, because perceived usefulness, not difficulty, appears to currently explain trends in adoption of these practices, parties interested in promoting these practices should consider explaining the benefits of these different practices rather than simply focusing on better tools.
A report detailing the survey, including its methodology, can be found here.
If you are interested in learning more about the findings and how organizations can implement the SLSA framework, join Chainguard, OpenSSF, the Rust Foundation and the Eclipse Foundation for a virtual discussion on March 22, 2023 from 11-12 PM ET / 8-9 AM PT. Sign up for a calendar reminder here.
Authors: David A. Wheeler, The Linux Foundation; John Speed Meyers, Chainguard; Mikaël Barbero, Eclipse Foundation; and Rebecca Rumbul, Rust Foundation
March 11, 2023
JBoss Tools for Eclipse 2023-03M3
by sbouchet at March 11, 2023 12:06 PM
Happy to announce 4.27.0.AM1 (Developer Milestone 1) build for Eclipse 2023-03M3.
Downloads available at JBoss Tools 4.27.0 AM1.
What is New?
Full info is at this page. Some highlights are below.
General
Components Deprecation
As previously announced here, we’re in the process of removing the Central / update tab from JBossTools in the next release. This work can be followed here.
That means that all the extra features currently installable via this tab need to be available through a new channel. This channel already exists as a p2 repository, but using the Eclipse Marketplace Client is closer to the existing experience.
Most of those additional features are already present in the JBoss Marketplace entry, so it’s just a matter of using it to install your favorite feature.
OpenShift
OpenShift Application Explorer view service creation support
The missing create service feature that was available with odo 2.X is now back in this release.
See the previous announcement on this feature.
Hibernate Tools
Runtime Provider Updates
A new Hibernate 6.2 runtime provider incorporates Hibernate Core version 6.2.0.CR2, Hibernate Ant version 6.2.0.CR2 and Hibernate Tools version 6.2.0.CR2.
The Hibernate 6.1 runtime provider now incorporates Hibernate Core version 6.1.7.Final, Hibernate Ant version 6.1.7.Final and Hibernate Tools version 6.1.7.Final.
The Hibernate 5.6 runtime provider now incorporates Hibernate Core version 5.6.15.Final and Hibernate Tools version 5.6.15.Final.
March 10, 2023
Product Liability Directive: More Bad News for Open Source
by Mike Milinkovich at March 10, 2023 01:49 PM
In my previous two blog posts I discussed concerns with the European Cyber Resilience Act (“CRA”) which we believe will harm both the open source community and the innovation economy in Europe. But the CRA needs to be understood as part of a larger legislative framework. In this post we will examine the potential impact of the proposed changes to the European Product Liability Directive (“PLD”) on the open source community and ecosystem.
As in previous discussions, I think it is important to note that the intentions of the PLD are good. No one can argue against the idea that the time has come to protect consumers from poor software. But at the same time, it is important to ensure that the consumer liability obligations are borne by the economic actors who deliver products and services to consumers, and not by the open source community, which enables so much benefit to society by providing free software but does not share in the profits of its delivery.
As I understand it, the purpose of the CRA is to establish which parties are responsible for ensuring the quality of software products, particularly as it relates to cybersecurity. The purpose of the PLD is to establish which parties are liable for defects which cause harm to individuals or their property. So strictly speaking, my assertion in my previous blog posts that the CRA will break the limited liability protections that underpin free software was incorrect. It is the PLD which is doing that.
The European Commission presented a draft of the revisions to the PLD last September, and it is going through the process of being adopted by the European Parliament and the Council of the European Union. As a Directive, the PLD will be interpreted by each member state of the European Union and applied through updates to the local laws in each country. The specific intent of these revisions is to update the PLD of 1985 to address issues related to the modern digital economy. One of the key features of the PLD is its “no fault liability” model, where injured parties can seek redress without demonstrating any error or fault on the part of the product manufacturer. The proposed revision explicitly expands the scope of no fault liability to cover software and artificial intelligence, and adds “loss or corruption of data” as a harm that could be suffered by a consumer.
There are numerous legal summaries of the PLD available, but this one from the law firm Baker McKenzie provides a nice overview, as does this one from the law firm Cooley.
It has long been understood that product liability could not be completely waived by open source licenses in Europe. Hence, the “…to the extent permissible by law…” statements you see in many licenses. Since at least 1985, there have been strict provisions in Europe that you were always liable for harm caused to natural persons or their personal property as a result of using a defective product. From the perspective of an open source developer, the PLD extends and modernizes this legal framework in the following important ways:
- It explicitly extends the definition of product to include software and artificial intelligence;
- It explicitly extends the definition of harm to include loss or corruption of data;
- The definition of manufacturer (formerly producer) has been extended to cover developers, providers of software, providers of digital services, and online marketplaces;
- It makes it clear that a cybersecurity vulnerability is a product defect, and that failure to update a product to protect against a vulnerability may result in liability;
- It makes it clear that if a component is defective, liability may extend to the manufacturer of the component (e.g. the developer of the open source software), in addition to the manufacturer of the end product;
- Distribution of a product or component in Europe explicitly incurs liability obligations on the part of the distributor, unless they can identify a responsible economic actor in Europe; and
- There is an attempt to exclude open source from the provisions of the Directive, but as previously discussed the “…outside the course of a commercial activity…” language means that the exclusion is not helpful in practice.
Article 7 of the PLD goes to great lengths to identify the economic operators who can be held liable for a defective product, with a particular emphasis on identifying an entity in Europe who can bear the responsibility for a defective product made available in the single market. If you parse Article 7, you get something like the following to determine the party in Europe liable for a defective product:
- If the manufacturer is European, then the manufacturer is liable.
- Otherwise, if the importer or manufacturer’s authorized representative are European, then the importer and/or manufacturer’s authorized representative are liable.
- If none of the above conditions apply, each distributor is liable (each distributor is given 1 month to identify one of the above economic operators to hold the bag)
Note that the manufacturer of a defective component also becomes liable.
Should Open Source Developers be Worried?
I think they should. Particularly if they are located in Europe.
Huge caveat here. I’ve been studying the PLD for a couple of weeks now, and every time I read it I think of more corner cases and more scenarios. If anyone finds fault in my analysis or logic, do please let me know!
Scenario One
Imagine a scenario where a year ago or so a consumer in Europe lost data as a result of using the Wizbang product from BigCo GmbH. The vulnerability in Wizbang was caused by the famous Log4shell bug. As part of its normal build process, BigCo downloaded the Apache Log4j jar file from Maven Central. Under the PLD framework, the Apache Software Foundation (“ASF”) is the manufacturer of the Apache Log4j jar file and Sonatype (the company controlling Maven Central) is the distributor of the Log4j component as they made the Log4j software available to the European market. (The relevant definition reads “…‘making available on the market’ means any supply of a product for distribution, consumption or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge”). Both the ASF and Sonatype are US based organizations.
Under the PLD, BigCo, the ASF, and Sonatype are all ‘economic operators’ involved in the development of the Wizbang defective product. As mentioned above, Article 7 of the PLD lays out the liability obligations for each of the various types of economic operators.
My read of the PLD is that as the European manufacturer of Wizbang and the importer of the Log4j component, BigCo GmbH would be liable to consumers of the defective product. I think the ASF would not be held liable for the defect in Log4j because it does not meet the definition of an economic operator in Europe. I.e. the ASF has no legal presence in Europe. Similarly, Maven Central is a distributor in this context, but the algorithm in Article 7 puts the importer ahead of the distributor in the queue for liability obligations.
Scenario Two
Same as above, but instead the defective open source component is (say) the Eclipse Modeling Framework (EMF), so the component manufacturer is the Eclipse Foundation AISBL, a European-based open source foundation.
My read of the PLD is that as the European manufacturers of the Wizbang product and the EMF component, BigCo GmbH and the Eclipse Foundation would both be jointly and severally liable to consumers of the defective product. If I am correct, this scenario puts European open source projects, communities, and foundations at a disadvantage relative to their international peers.
Summary
The good news is that I can’t think of a scenario where Maven Central, or services like it, become liable as a distributor under the PLD. The components they distribute would be used by a manufacturer and there are several layers of economic operators in front of a component distributor before liability results. The same seems to be true for open source foundations based outside of Europe.
The bad news is that it does appear that the PLD as currently worded would expose European-based open source projects to product liability. I have to assume that this was an unintended consequence.
Proposed Enhancements
I hypothesize that when some people think of open source software components and the open source supply chain, they think of something like a braking system module that is assembled into a passenger car. After all, terminology like “component” and “supply chain” lends itself perfectly to that interpretation. I believe that a closer analogy would be inputs to a chemical process. Don’t think of a “braking component”; think acetate or sulphuric acid. I think this analogy is correct because, beyond the sheer malleability of software, it is important to recall that open source software is by definition not restricted to any field of use. Every piece of open source software can be (and is) used for any purpose that anyone can find for it. To give just one example, the Eclipse IDE platform was designed to implement desktop developer tools. But over the years it has ended up being used in scientific instruments on the International Space Station, in the control of medical imaging devices, in mission planning for the Mars Rover, in operations control of major railway networks, and in ground station control software for space satellites. The adopters of open source have rich imaginations indeed.
The point of the above is that it is essential that open source software be excluded from the strict, no-fault liability obligations of the PLD. Not because open source developers are entitled to special treatment, but because the liability truly rests with the organization that placed the open source software into a product, and placed that product into the hands of a consumer. It is the act of using open source software that makes it critical, not the act of publishing or distributing it.
To that end, I feel that the correct enhancement is to strengthen the exclusion of open source in the PLD to make it much clearer than it currently is.
The Gory Details
For those who want to look into the language of the PLD, here are what I noticed as the relevant sections and what they mean. (Emphasis added by me in a few places.)
- (12) Products in the digital age can be tangible or intangible. Software, such as operating systems, firmware, computer programs, applications or AI systems, is increasingly common on the market and plays an increasingly important role for product safety. Software is capable of being placed on the market as a standalone product and may subsequently be integrated into other products as a component, and is capable of causing damage through its execution. In the interest of legal certainty it should therefore be clarified that software is a product for the purposes of applying no-fault liability, irrespective of the mode of its supply or usage, and therefore irrespective of whether the software is stored on a device or accessed through cloud technologies. The source code of software, however, is not to be considered as a product for the purposes of this Directive as this is pure information. The developer or producer of software, including AI system providers within the meaning of [Regulation (EU) …/… (AI Act)], should be treated as a manufacturer.
So Recital 12 makes it clear that software is a product under the PLD and that the developer is the manufacturer.
- (13) In order not to hamper innovation or research, this Directive should not apply to free and open-source software developed or supplied outside the course of a commercial activity. This is in particular the case for software, including its source code and modified versions, that is openly shared and freely accessible, usable, modifiable and redistributable. However where software is supplied in exchange for a price or personal data is used other than exclusively for improving the security, compatibility or interoperability of the software, and is therefore supplied in the course of a commercial activity, the Directive should apply.
Recital 13 provides a carve out for open source. However, it retains the same fatal flaw as the CRA in that the carve out applies only to “software developed or supplied outside the course of a commercial activity”, which is woefully misplaced if it is intended to provide any protection of the open source ecosystem from the scope of this legislation. To see why, please see Maarten Aertsen’s blog post.
- (23) In order to reflect the increasing prevalence of inter-connected products, the assessment of a product’s safety should also take into account the effects of other products on the product in question. The effect on a product’s safety of its ability to learn after deployment should also be taken into account, to reflect the legitimate expectation that a product’s software and underlying algorithms are designed in such a way as to prevent hazardous product behaviour. In order to reflect that in the digital age many products remain within the manufacturer’s control beyond the moment at which they are placed on the market, the moment in time at which a product leaves the manufacturer’s control should also be taken into account in the assessment of a product’s safety. A product can also be found to be defective on account of its cybersecurity vulnerability.
Recital 23 makes it clear that a cybersecurity vulnerability can be considered a product defect, and hence explicitly incur liability.
- (26) The protection of the consumer requires that any manufacturer involved in the production process can be made liable, in so far as their product or a component supplied by them is defective. Where a manufacturer integrates a defective component from another manufacturer into a product, an injured person should be able to seek compensation for the same damage from either the manufacturer of the product or from the manufacturer of the component.
Recital 26 makes it clear that if an open source component is integrated into a product, and that open source component is found to be defective, the developer of that open source component may be liable.
- (38) The possibility for economic operators to avoid liability by proving that a defect came into being after they placed the product on the market or put it into service should also be restricted when a product’s defectiveness consists in the lack of software updates or upgrades necessary to address cybersecurity vulnerabilities and maintain the product’s safety. Such vulnerabilities can affect the product in such a way that it causes damage within the meaning of this Directive. In recognition of manufacturers’ responsibilities under Union law for the safety of products throughout their lifecycle, such as under Regulation (EU) 2017/745 of the European Parliament and of the Council, manufacturers should also be liable for damage caused by their failure to supply software security updates or upgrades that are necessary to address the product’s vulnerabilities in response to evolving cybersecurity risks. Such liability should not apply where the supply or installation of such software is beyond the manufacturer’s control, for example where the owner of the product does not install an update or upgrade supplied for the purpose of ensuring or maintaining the level of safety of the product.
Recital 38 makes it clear that a failure to properly update a product to protect against security vulnerabilities is considered a defect and may incur liability on the part of the manufacturer.
- (40) Situations may arise in which two or more parties are liable for the same damage, in particular where a defective component is integrated into a product that causes damage. In such a case, the injured person should be able to seek compensation both from the manufacturer that integrated the defective component into its product and from the manufacturer of the defective component itself. In order to ensure consumer protection, all parties should be held liable jointly and severally in such situations.
Recital 40 makes it clear that the manufacturer of a defective component is liable to the consumer, as well as the manufacturer of the end product.
- (42) The objective of consumer protection would be undermined if it were possible to limit or exclude an economic operator’s liability through contractual provisions. Therefore no contractual derogations should be permitted. For the same reason, it should not be possible for provisions of national law to limit or exclude liability, such as by setting financial ceilings on an economic operator’s liability.
Recital 42 makes it clear that the limitations of liability and no warranty clauses in open source licenses are superseded by the PLD.
March 08, 2023
Announcing Eclipse Ditto Release 3.2.0
March 08, 2023 12:00 AM
The Eclipse Ditto team is proud to announce the availability of Eclipse Ditto 3.2.0.
Version 3.2.0 brings a new History API, Eclipse Hono connection type, case-insensitive searches and other smaller improvements, e.g. on the Ditto UI and in the JS client.
Adoption
Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto
If you use Eclipse Ditto, it would be great to support the project by putting your logo there.
Changelog
The main improvements and additions of Ditto 3.2.0 are:
- New History API in order to be able to:
- access historical state of things/policies/connections (with either given revision number or timestamp)
- stream persisted events of things/policies via async APIs (WebSocket, Connections) and, for things, also via the existing SSE (Server-Sent-Events) API
- configure deletion retention of events in the database for each entity
- Addition of new Eclipse Hono connection type for Ditto managed connections
- Option to do case-insensitive searches via the addition of a new RQL operator, ilike, which declares a case-insensitive “like” (see the example after the changelog)
- UI enhancements:
- Push notifications on the Ditto UI using SSE (Server-Sent-Events), e.g. on thing updates
- Autocomplete functionality for the search slot
- Added configuring the Bearer auth type for the “devops” authentication
- JavaScript client:
- Support for “merge” / “patch” functionality in the JS client
The following non-functional enhancements are also included:
None in this release.
The following notable fixes are included:
- Undo creating implicitly created policy as part of thing creation if creation of thing failed
Please have a look at the 3.2.0 release notes for more detailed information on the release.
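To give a feel for the new case-insensitive operator mentioned in the changelog, a minimal sketch of a search request against the HTTP search API might look like the following; the host, credentials and attribute path are placeholders, and the exact endpoint and filter syntax should be checked against the Ditto RQL and HTTP API documentation:
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: the host, credentials and attribute path are
// placeholders, and the filter syntax should be verified against the Ditto
// RQL and HTTP API documentation.
public class CaseInsensitiveSearchExample {
    public static void main(String[] args) throws Exception {
        String filter = URLEncoder.encode(
                "ilike(attributes/location,\"*berlin*\")", StandardCharsets.UTF_8);
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://ditto.example.org/api/2/search/things?filter=" + filter))
                .header("Authorization", "Basic ZGl0dG86ZGl0dG8=") // placeholder credentials
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}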
Artifacts
The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.
The Ditto JavaScript client release was published on npmjs.com.
The Docker images have been pushed to Docker Hub:
- eclipse/ditto-policies
- eclipse/ditto-things
- eclipse/ditto-things-search
- eclipse/ditto-gateway
- eclipse/ditto-connectivity
–
The Eclipse Ditto team
March 06, 2023
Rodrigo Pinto: Eclipse Cloud DevTools Contributor of the Month!
by John Kellerman at March 06, 2023 07:11 PM
The Eclipse Cloud DevTools contributor award for this month goes to Rodrigo Pinto of Ericsson for his significant contributions to Trace Compass Cloud and the Eclipse Cloud DevTools ecosystem. He is a committer on Trace Compass’s UI components based on Eclipse Theia. To hear from him, check out his talk at TheiaCon 2022.

Rodrigo has worked on the Trace Compass project and innovated it in many positive ways. To start, he has helped make the UI faster and more responsive. One of his key contributions was progressive loading in Eclipse Trace Compass: when loading a multi-gigabyte or terabyte-sized trace, the user can now see the progress in an intuitive way. Rodrigo’s focus on making data more digestible brings great value to the project. He is also working on porting Trace Compass to a VS Code extension.
Rodrigo is the ideal open source engineer, as he balances strong empathy, a curiosity towards new platforms and techniques as well as a drive for improvements. He has worked diligently on many aspects of his own volition, such as improving documentation and training videos. These items are not in the git log of a project, but contribute to them in an equal way. He is proof that a project is more than its code. Rodrigo has always driven to make Trace Compass a more inclusive environment by ramping up new talent whenever he could.
He has also collaborated with UX developers and universities to help build the community.
These and many more reasons are why this Eclipse Cloud DevTools contributor award is very well deserved, congratulations Rodrigo!
The Cloud DevTools Working Group provides a vendor-neutral ecosystem of open-source projects focused on defining, implementing and promoting best-in-class web and cloud-based development tools. It is hosted at the Eclipse Foundation; current members of the group include Ericsson, IBM, Obeo, Red Hat, SAP, AMD, Arm, EclipseSource, Renesas, STMicroelectronics and TypeFox.
This Eclipse Cloud DevTools contributor award is sponsored by EclipseSource, providing consulting and implementation services for web-based tools, Eclipse GLSP, Eclipse Theia, and VS Code.
March 03, 2023
March 2023 Update on Security improvements at the Eclipse Foundation
March 03, 2023 09:00 AM
Thanks to financial support from the OpenSSF’s Alpha-Omega project, the Eclipse Foundation is glad to have made significant improvements in the last couple of months.
Two Factor Authentication
Eclipse Tycho, Eclipse m2e, and Eclipse RAP have all enforced 2FA for all their committers on GitHub:
- https://gitlab.eclipse.org/eclipsefdn/helpdesk/-/issues/2701
- https://gitlab.eclipse.org/eclipsefdn/helpdesk/-/issues/2702
- https://gitlab.eclipse.org/eclipsefdn/helpdesk/-/issues/2611
Meanwhile, we’ve seen an increase in 2FA adoption globally across all Eclipse projects on GitHub, from 63.7% to 67% since the beginning of the year. We are now starting to actively enforce 2FA for projects with a dedicated GitHub organization.
Security Audits
We have successfully initiated the 3 security audits that will all be performed by Trail of Bits in collaboration with OSTIF. The projects that will be covered in these audits are:
- Eclipse Jetty: an open-source Java-based web server that provides an HTTP server and servlet container.
- Eclipse JKube: a toolkit for building container images and deploying them to Kubernetes.
- Eclipse Mosquitto: an open-source message broker that implements the MQTT protocol.
Threat modeling for one of the three audits has been completed, and code review is underway. The timeline for threat modeling and code review of the second security audit has been locked in. The schedule of the third one is still a work in progress and will likely be delayed due to project constraints; this last audit will likely complete in May.
Hiring
We have built capacity since the beginning of the year, hiring three talented people:
- Marta Rybczynska, Technical Program Manager. She brings a wealth of experience and knowledge to the team and is initially focusing on improving security and vulnerability policies, procedures, and guidelines that adhere to industry best practices. She started in early January.
- Thomas Neidhart, Software Engineer. He is initially focusing on SLSA attestation generation and GitHub management tooling. He started in mid-January.
- Francisco Perez, Software Engineer. He will work closely with Eclipse Foundation projects to enhance their software supply chain security. He started at the beginning of March.
We’re also in talks with a SecOps professional to improve the security of our infrastructure and introduce new tools and services, such as a self-hosted sigstore.
Rework of the CVE process
We have started gathering feedback from projects about Eclipse’s security processes. We are performing interviews with committers and project leads, starting with projects selected for the audit or having a recent security vulnerability. We have contacted six projects, conducted four interviews, and gathered helpful feedback.
The common outcome is a request for more detailed documentation and clarification of the process. Proposals for updated documentation are currently under review. More interviews are planned. We’ve extended the experimentation with GitHub security advisories. We have also worked on a SECURITY.md template for all Eclipse Foundation projects.
GitHub organizations and repositories management
We have re-started the work on a custom tool to enforce and create security-related configurations of organizations and their associated repositories on GitHub. The tool is codenamed OtterDog.
What is currently supported:
- descriptive definition of required organization settings / webhooks, repositories and associated branch protection rules
- mechanism to fetch the current configuration from an organization hosted on GitHub
- verification on how the required configuration differs from the current live configuration hosted on GitHub
- update mechanism to enforce the required configuration on GitHub
Some work has been done to improve the usability of the tool: before applying a configuration to GitHub, it compares the current live settings on GitHub with the intended configuration and reports, in a concise way, which settings and resources will be changed or created.
A credential provider for pass (in addition to the existing one for Bitwarden) has been added to support using the tool for our first organization: Eclipse CBI, which hosts various tools and projects for common build infrastructure at the Eclipse Foundation.
SLSA tools
We have started working on slsa-tools, a collection of tools written in Java for operating on SLSA provenance files. The idea behind this project is to provide a rich set of tools to verify and generate provenance files for the Java ecosystem.
Existing SLSA tools are implemented in Go, which makes them somewhat cumbersome to use in certain settings, e.g. when developing a Jenkins plugin to generate provenance files for builds.
The medium-term goal is to develop such a Jenkins plugin with features similar to the existing slsa-github-generator action for GitHub.
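To give a flavour of what such tooling could look like, here is a minimal, hypothetical sketch (not part of slsa-tools itself) that reads a provenance statement with Jackson and prints the builder identity; the field names follow my reading of the SLSA provenance format and should be verified against the specification:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;

// Hypothetical sketch, not part of slsa-tools: read an in-toto/SLSA provenance
// statement and print the builder identity. Field names follow my reading of
// the SLSA provenance format and should be verified against the specification.
public class ProvenanceCheck {
    public static void main(String[] args) throws Exception {
        JsonNode statement = new ObjectMapper().readTree(new File(args[0]));
        String predicateType = statement.path("predicateType").asText();
        String builderId = statement.path("predicate").path("builder").path("id").asText();
        System.out.println("predicateType: " + predicateType);
        System.out.println("builder.id:    " + builderId);
        // A real verifier would also match the subject digests against the artifact
        // being consumed and validate the enclosing DSSE envelope signature.
        if (builderId.isEmpty()) {
            System.err.println("No builder identity found; do not trust this provenance as-is.");
        }
    }
}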
March 01, 2023
Shell Hole: How Advanced Prompts are Putting Software Developers at Risk
March 01, 2023 08:00 AM
Advanced shell prompts, such as those provided by theme engines like oh-my-zsh and oh-my-posh, have become increasingly popular among software developers due to their convenience, versatility, and customizability. However, the use of plugins that are executed outside of any sandbox and have full access to the developer’s shell environment presents significant security risks, especially for open source software developers.
Open Source Software (OSS) developers are primary targets for software supply chain attacks because they often have access to a wide range of sensitive information and powerful tools that can be used to amplify the impact of a successful attack. OSS developers often have access not only to source code, but also access keys and credentials, which can be exfiltrated and used for malicious purposes. By compromising a single OSS developer, attackers can potentially gain access to the sensitive information and powerful tools of many other developers and users. This can enable them to launch more sophisticated and damaging attacks, potentially affecting a large number of individuals, organizations, and even whole industries.
For these reasons, OSS developers are primary targets for software supply chain attacks, and it’s crucial for them to be aware of the risks and to take steps to protect themselves and their users. This includes verifying the authenticity and security of any software they use, keeping software up to date, and being vigilant for signs of compromise.
Shell theme engines and other advanced shell prompts are an example of tools that have gained popularity in the last couple of years, yet they do not seem to be considered as much of a threat as others such as project dependencies or IDE plugins. A compromised shell prompt plugin can steal valuable information and credentials in several ways:
- Keylogging: The plugin can capture keystrokes, including sensitive information such as passwords, credit card numbers, and access tokens.
- Screenshots: The plugin can take screenshots of the user’s screen, potentially capturing sensitive information displayed on the screen.
- Data exfiltration: The plugin can exfiltrate data from the user’s system, such as sensitive files, source code, or even access tokens.
- Remote access: The plugin can open a remote connection to an attacker-controlled system, allowing the attacker to gain access to the user’s system and steal sensitive information.
- Credentials harvesting: The plugin can harvest sensitive information such as passwords, access tokens, and private keys from the user’s system, such as:
- System keychain: Windows Credential Manager, macOS Keychain, Gnome Keyring or KDE KWallet. If misconfigured, they don’t require systematic authorization to read entries.
- User configuration files, which are sometimes used to store sensitive information.
- Shell history may contain sensitive information passed as arguments to commands (e.g., `curl -H 'Authorization: Bearer xxxxxx'`).
- Environment variables: environment variables are sometimes used to share credentials with applications without having to pass explicit parameters.
- Browser Cookies file store can be inspected to steal session tokens.
- Network reconnaissance: The plugin can gather information about the user’s network, such as IP addresses, hostnames, and open ports, which can be used to launch more targeted attacks.
It’s worth noting that these are just a few examples of how a compromised shell prompt plugin can steal valuable information and credentials. The actual methods used by attackers may vary and will depend on the attacker’s goals.
To mitigate these risks, it is important for software developers to properly configure and secure their advanced shell prompts. This includes verifying the authenticity and security of any plugins before use, regularly patching and updating systems, and monitoring for suspicious activity. It is also important to educate developers on the proper use of advanced shell prompts and the risks associated with them.
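As a small, concrete starting point for that kind of monitoring, developers can audit their own shell history and environment for credentials that a compromised plugin could trivially read. The sketch below assumes a zsh history file at ~/.zsh_history and uses a deliberately naive keyword pattern; adjust both to your own setup:
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Pattern;

// Minimal defensive sketch: flag shell history lines and environment variables
// that look like credentials a compromised prompt plugin could read. The history
// path and the keyword pattern are assumptions; adapt them to your own shell.
public class SecretScan {
    private static final Pattern SUSPICIOUS =
            Pattern.compile("(?i)(bearer|token|passwd|password|secret|api[_-]?key)");

    public static void main(String[] args) throws Exception {
        Path history = Path.of(System.getProperty("user.home"), ".zsh_history");
        if (Files.exists(history)) {
            // ISO-8859-1 avoids decoding errors on the raw bytes zsh sometimes writes.
            Files.readAllLines(history, StandardCharsets.ISO_8859_1).stream()
                    .filter(line -> SUSPICIOUS.matcher(line).find())
                    .forEach(line -> System.out.println("history: " + line));
        }
        System.getenv().keySet().stream()
                .filter(name -> SUSPICIOUS.matcher(name).find())
                .forEach(name -> System.out.println("env: " + name + " is set"));
    }
}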
In conclusion, advanced shell prompts like oh-my-zsh and oh-my-posh are powerful tools that can greatly enhance productivity and automation for software developers. However, the use of plugins that are executed outside of any sandbox and have full access to the user’s shell environment presents significant security risks. Software developers are particularly vulnerable to software supply chain attacks, which can have serious consequences for their software development environments. It is important for organizations to take appropriate measures to secure their command-line interfaces and educate their users on the risks associated with them, especially when it comes to using plugins from untrusted sources.
February 28, 2023
Migrating to Google Analytics 4: Recommendations for Eclipse Project Websites
February 28, 2023 07:20 PM
As part of our commitment to providing support to our community, we would like to take a moment to share some recommendations regarding the use of Google Analytics (GA) for Eclipse project websites.
As you may be aware, Google Analytics 4 is being rolled out as a replacement for Universal Analytics. While Universal Analytics is still supported at this time, Google has announced that they will stop processing new hits for all standard Universal Analytics properties on July 1, 2023.
Therefore, we strongly recommend that all projects currently using GA take some time to re-evaluate whether it is still necessary for their needs. If the data provided by GA is no longer useful or necessary, we recommend that projects remove GA from their website.
If it is determined that the data is still relevant and useful for the project, we recommend that it be migrated to Google Analytics 4 manually. Google has provided a migration guide for those who are looking to upgrade.
We believe this is the safest way for projects to confirm that all is well, as we cannot assume that the automatic GA migration will work as expected for everyone. Also, it’s the only way for projects to ensure that explicit consent has been given by the user via our cookie consent banner before enabling GA.
As a reminder, it’s possible to add our cookie consent banner to any website by adding the following code snippet in the <head> section of each page:
<link rel="stylesheet" type="text/css" href="//www.eclipse.org/eclipse.org-common/themes/solstice/public/stylesheets/vendor/cookieconsent/cookieconsent.min.css" />
<script src="//www.eclipse.org/eclipse.org-common/themes/solstice/public/javascript/vendor/cookieconsent/default.min.js"></script>
A project website with GA must then ensure that the value of the eclipse_cookieconsent_status cookie is set to “allow” before loading GA on a webpage.
Additionally, we recommend that projects opt-out from letting GA make any changes to their account. This will ensure that projects have full control over their upgrade and can manage it in a way that best suits their needs.
Upgrading to GA 4 will not only provide access to new features and benefits but will also ensure that project websites are prepared for the future of Google Analytics before the end of support for Universal Analytics.
As always, we are here to support you with any questions or concerns you may have. Please do not hesitate to reach out to us via issue #2630 in our Helpdesk.
You can also find additional information and requirements around the acceptable usage of GA for Eclipse projects in our Eclipse Foundation Hosted Services Privacy and Acceptable Usage Policy.
February 23, 2023
Cyber Resilience Act: Good Intentions and Unintended Consequences
by Mike Milinkovich at February 23, 2023 05:09 PM
In my previous blog post on the European Cyber Resilience Act (“CRA”), I touched on a topic which I feel warrants additional discussion. Specifically:
Fundamentally, the core of the proposed legislation is to extend the CE Mark regime to all products with digital elements sold in Europe. Our assumption based on the current text is that this process will be applied to open source software made available under open source licenses and provided free of charge, ostensibly under licenses which disclaim any liability or warranty. We are deeply concerned that the CRA could fundamentally alter the social contract which underpins the entire open source ecosystem: open source software provided for free, for any purpose, which can be modified and further distributed for free, but without warranty or liability to the authors, contributors, or open source distributors. Legally altering this arrangement through legislation can reasonably be expected to cause unintended consequences to the innovation economy in Europe.
First, a mea culpa. In the quote above I stated that “…the proposed legislation is to extend the CE Mark regime to all products with digital elements sold in Europe.” That statement is inaccurate. It should have said “the proposed legislation is to extend the CE Mark regime to all products with digital elements made available in Europe.” That is a critical distinction, as it makes the CRA broadly extra-territorial. In today’s world where most software is downloaded over the internet, “made available” means that the documentation, certification, and liability requirements of the CRA are expected to apply to all software worldwide.
I honestly believe that CRA was developed with the best of intentions. Software has become critically important to our economies and societies, and to date has been a completely unregulated industry. Recent events such as the SolarWinds and Apache Log4j vulnerabilities have shown that there can be very large economic impacts when something goes wrong. The Log4j event showed that open source software components can have a very large impact due to wide usage. Given that, it is a reasonable position that the time has come to implement regulations upon the software industry, and to ensure that open source software is included within the scope of those regulations. I want to stress that I believe that the open source community very much wants to be part of the solution to the industry problems that we all face with respect to supply chain security. The open source community provides extremely high quality software and takes great pride in the value that it provides to society.
However, the CRA legislation (along with the companion revisions to the Product Liability Directive) in its current form will have enormous negative effects on both the open source community and the European economy.
For the purposes of this blog post I am going to ignore for the moment the impact of applying the CE Mark regime to all software all at once, as that would be a long post in its own right. This post will focus on the unintended consequences of applying legal product liability obligations to the open source community and ecosystem. But before doing so, I want to spend a few moments describing what open source software is, and why it is important. If you have a good understanding of that topic, feel free to skip that section.
The Economics of Open Source
Today’s software systems are mind-bogglingly complex. And for most systems, a very large percentage of the overall code base provides zero product differentiating features. For example, any modern system will require code which allows it to connect to the internet to acquire and share data. Open source at its core is a simple licensing model which allows individuals, researchers, academics, and companies to come together to develop and maintain software which can be freely studied, used, modified, and redistributed. The breadth of software which has been developed under this model encompasses every field of human endeavor. But arguably the most common use case is to share the cost of developing and maintaining software which implement non-differentiating technologies used across a broad spectrum of products and applications. To be clear, “non-differentiating technologies” means implementations in software of the types of technologies that many similar applications must implement. Examples include network access, database access, user authentication, and the like. It is impossible to overstate the economic benefits of being able to reuse software in this way. Reuse decreases lifecycle costs, reduces time to market, and mitigates development risk across every type of system which contains software. Which is to say, every single aspect of social and economic activity. That is why it is estimated that most modern software and cyber-physical products contain 80 to 90 percent open source. It is simply not economically viable to write all software yourself while your competitors are building theirs using open source.
But the economic benefits of open source only start there. In fact, there is arguably even greater value in the pace of innovation which is made possible by open source. All developers today start their development off by selecting and assembling open source components to form the basis for their product or application. And they are able to do so without asking permission from any of the providers. This ‘permissionless innovation’ has vastly accelerated the pace at which new products in all fields can be developed and brought to market. When open source was first introduced, it was primarily used to commoditize technologies which were already well understood. Today, open source is used to introduce new technologies, in sectors such as Big Data, Cloud, Edge, AI and software defined vehicle, in order to accelerate adoption and create new market segments.
It is important to remember that open source software is provided at zero cost to the consumer. This completely decouples its value from its sale price. And there are many examples of open source software which are almost immeasurably valuable: Linux, Kubernetes, Apache, and OpenJDK are just a few examples of open source which support multi-billion euro ecosystems.
It is also important to recognize that open source software is a non-rivalrous good. In fact, it is an anti-rivalrous good in that the more a software component is used, the more valuable it becomes. This is incredibly important to understand: the value of a piece of open source software is not determined when it is made available. It becomes valuable (and potentially critical) when it is used. And the more it is used, the more valuable and critical it becomes. As a logging framework, Log4j was not a piece of software which at face value would be expected to be security critical; it became so because it was so broadly used and adopted.
Finally, there is no open source business model. Open source licensing has enabled an incredibly successful collaborative production model for the development of software, but that is decoupled from the commercialization of that software. Obviously, given the massive investments in open source someone must be making money somewhere. And they are. Open source technologies are used in virtually every cyber-physical, software, SaaS, and cloud product on the planet. It is also very widely used in the internal bespoke software applications that run our governments, enterprises, and industrials. When we talk of the open source supply chain, it is important to recognize that what we are discussing is the use by governments and commercial organizations of freely provided software. Unlike any other market that I am aware of, the financial resources available to manage and secure the open source software supply chains are solely available to the consumers, rather than the producers. For this reason, it is important that any compliance burden be placed on the downstream commercial adopters and consumers, rather than the producers of open source.
Unintended Consequences
Which brings me to the risks to Europe’s economy that I see from the CRA. The preamble to the legislation states: “For the whole EU, it is estimated that the initiative could lead to a costs reduction from incidents affecting companies by roughly EUR 180 to 290 billion annually.” On the cost side it states: “For software developers and hardware manufacturers, it will add direct compliance costs for new security requirements, conformity assessment, documentation and reporting obligations, leading to aggregated compliance costs amounting to up to roughly EUR 29 billion.” In other words, spend 29 billion to save 290 billion. The impact assessment further describes that an analysis was done which spurred the decision to extend the legislation to cover all tangible and non-tangible products:
This option would ensure the setting out of specific horizontal cybersecurity requirements for all products with digital elements being placed or made available on the internal market, and would be the only option covering the entire digital supply chain.
As discussed in my previous blog post, the CRA as currently drafted will be extended to cover virtually all open source software. This will legally obligate producers of open source software to the documentation, certification, and liability provisions of the CRA. Let us focus here solely on the liability topic.
The fundamental social contract that underpins open source is that its producers freely provide the software, but accept no liability for your use, and provide no warranties. Every open source license contains “as is”, no liability, and no warranty clauses. I’ve always assumed that this is simple common sense: if I provide you with a working program that you can study, use, modify, and further distribute freely for any purpose, why should I accept any liability for your (mis)use of that program? It is the companies which commercialize the technology and make a business from it who need to accept liability and provide warranties to their paying customers, not the open source projects which they have freely consumed. The CRA fundamentally breaks this understanding by legislating non-avoidable liability obligations to producers of free software.
What might be the consequences of forcing the producers of free and open source software made available in Europe to accept statutory liability for code that they provide? Remembering, of course, that all open source software is both developed and distributed over the internet so “made available in Europe” can arguably apply to it all. And also remembering that enormous amounts of open source software are produced worldwide by projects, communities, and nonprofit foundations which make no money off of their software, and who have always operated under the assumption that their liability obligations were extremely low. Thirdly, it is important to remember that open source software is provided for free. The producers of open source do not receive any revenue from users and adopters in Europe, so the usual market incentives to accept additional regulations to retain access to the European single market do not apply.
So with the caveat that these are all hypothetical scenarios, let’s look at some potential unintended consequences of the CRA’s liability obligations. (Some of these points are also made in Brian Fox’s excellent blog post.)
- A reasonable and rational response would be for non-European producers of open source code to state that its use is not permitted in Europe. If you are not willing to accept statutory liability obligations for something you make available for free, a notice file stating the above would be an obvious reaction. What would this mean to the European companies that build products on platforms such as Linux, Kubernetes, Apache, and OpenJDK? I would assume that the vast majority of their procurement and compliance organizations would conclude that they can no longer use those technologies in their product development. Cutting Europe off from these platforms would have catastrophic consequences.
- European producers of open source will be at a significant disadvantage relative to their international peers. Since they cannot avoid the liability obligations, they will be forced to accept them as part of their operations. For a small project hosted at (say) Github, it would probably be simpler to just terminate the project and pull its source code off of the internet. For a foundation such as the Eclipse Foundation, amongst other things I would expect that we would be forced to procure a very large liability insurance policy to mitigate the exposure of the organization and its directors and officers to potential liabilities. The result would threaten the €65 billion to €95 billion that open source software development contributes to EU GDP, as per the Commission’s own study.
- The CRA extends the liability obligations to distributors of software. In the open source context, some of the most important distributors include the language and platform-specific package distribution sites such as npm, Maven Central, PyPi and the like. None of those sites are in a position to accept liability for the packages they make available. As Brian Fox of Sonatype stated “…the consequence of this would be [Maven] Central, npm, PyPi and countless other repositories being suddenly inaccessible to the European Union.” As Brian is the leader of Maven Central, I am confident he understands what he’s talking about. I cannot stress enough how disruptive it would be to Europe’s business if that should occur.
- The CRA liability obligations could also force European businesses to stop contributing to open source projects. At the moment, it is generally understood that the risk that contributions to open source may incur a liability to the company is low. The CRA changes that equation, and as a result European companies may curtail their open source collaborations. This would be extremely damaging to the innovation economy in Europe, for the reasons described in the economics section above. It also runs counter to a number of Europe-wide strategies, such as digital sovereignty, which explicitly have major open source components. Initiatives such as Gaia-X, Catena-X, Dataspaces, Digital Twins, and Industrie 4.0 all explicitly rely upon open source collaboration which could be at risk under the CRA.
Europe’s Cyber Resilience Act was developed with the best of intentions. And it is certainly the right time to look at what regulations are appropriate for software generally, and given its importance open source software likely needs to be included in some manner. But the liability obligations imposed by the CRA upon projects, communities, and nonprofit foundations will have negative unintended consequences on Europe’s innovation economy and digital sovereignty initiatives.
February 16, 2023
Eclipse JKube 1.11 is now available!
February 16, 2023 02:30 PM
On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.11.0 has been released and is now available from Maven Central.
Thanks to all of you who have contributed with issue reports, pull requests, feedback, and spreading the word with blogs, videos, comments, and so on. We really appreciate your help, keep it up!
What's new?
Without further ado, let's have a look at the most significant updates:
- Eclipse JKube Remote Development (Preview) enhancements
- Init Containers via XML/DSL plugin configuration
- Many other bug-fixes and minor improvements
Eclipse JKube Remote Development (Preview) enhancements
This release brings a few improvements to the Remote Development feature:
SOCKS 5 proxy
In addition to the standard remote and local service configuration, you can now enable a SOCKS 5 proxy. The proxy can then be used to dynamically forward ports and resolve cluster DNS names for your local applications that support a SOCKS 5 proxy configuration.
You can enable the SOCKS proxy just by setting its port in the <remoteDevelopment> configuration:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<configuration>
<remoteDevelopment>
<socksPort>1080</socksPort>
</remoteDevelopment>
</configuration>
</plugin>
Once you start the session, you can use the SOCKS proxy to connect to a remote service:
curl --socks5-hostname localhost:1080 http://my-cluster-service:80/
Remote Service port discovery
Your application might expose different ports depending on the environment it's running in. For example, a React application is usually exposed at port 3000 in development mode, but at port 80 in production mode.
With this release, we've improved the local service port forwarding to detect the port at which your application is being exposed in the cluster. This way you can provide a local service configuration with the port pointing to where your application listens locally. JKube will take care of analyzing the cluster service to determine the port where the Service listens and forward it to the local port.
Using this release
If your project is based on Maven, you just need to add the Kubernetes Maven plugin or the OpenShift Maven plugin to your plugin dependencies:
<plugin>
<groupId>org.eclipse.jkube</groupId>
<artifactId>kubernetes-maven-plugin</artifactId>
<version>1.11.0</version>
</plugin>
If your project is based on Gradle, you just need to add the Kubernetes Gradle plugin or the OpenShift Gradle plugin to your plugin dependencies:
plugins {
id 'org.eclipse.jkube.kubernetes' version '1.11.0'
}
How can you help?
If you're interested in helping out and are a first-time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.
If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag.
We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.
Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

February 12, 2023
Add Social Icon to OceanWP
by Kai Tödter at February 12, 2023 04:05 PM
Recently I migrated my WordPress-based site to the free version of the great OceanWP theme.
While many social icons are already supported, Mastodon is currently not among them.
So I filed a feature request here: https://github.com/oceanwp/oceanwp/issues/433
The friendly folks at OceanWP then explained how I could add a Mastodon link with an icon today.
After fiddling around a bit, their solution worked well for me:
- Save your OceanWP customizer settings
- Create an OceanWP child theme (you can use the Ocean Extra plugin for that task)
- Import your saved customizer settings => Now your website should look the same as before
- Activate the child theme
- Set the theme icons to “Font Awesome”
- Open the Appearance/Theme File Editor
- Add the following code to functions.php and save
function my_ocean_social_options( $array ) {
    // Register Mastodon as an additional social link option
    $array['mastodon'] = array(
        'label'      => 'Mastodon',
        'icon_class' => oceanwp_icon( 'mastodon', false )
    );
    return $array;
}
add_filter( 'ocean_social_options', 'my_ocean_social_options' );

function add_new_icons( $icons ) {
    // Map the Mastodon entry to the Font Awesome brand icon for each icon set
    $icons['mastodon'] = array(
        'sili' => 'fab fa-mastodon',
        'fai'  => 'fab fa-mastodon',
        'svg'  => 'fab fa-mastodon',
    );
    return $icons;
}
add_filter( 'oceanwp_theme_icons', 'add_new_icons' );
Now, whenever you want to add a social link, you will find “Mastodon” at the end and you can add your Mastodon link.
Mine is https://mastodon.social/@kaitoedter and you can see a live version of this at https://toedter.com.
February 09, 2023
Do you know Ecore? Looking for a reference card?
by Cédric Brun (cedric.brun@obeo.fr) at February 09, 2023 12:00 AM
“Everything should be made as simple as possible, but not simpler” probably was one of the mantras the Eclipse Modeling Framework team (Ed Merks, Marcelo Paternostro, Dave Steinberg among others…) stuck to when they created the core concepts which would allow the definition of all the other tools.
Ecore is a kernel: you define your domain-specific model using its constructs. It boils down to classes, types, attributes and relationships, yet there is a lot of beauty in the way it has been designed, and we can safely say it has passed the test of time. In 2016 I tried to condense all of it into a single reference card. I did not finish it to the point of publishing it, but I’m doing that today (better late than never!)
To produce it I exclusively used Open-Source tools:
- Ecore Tools: an Ecore diagramming editor built on top of Eclipse Sirius,
- Inkscape: one of my favorite OSS tools for producing vector graphics.
I created 4 distinct diagrams from the Ecore.ecore model, then used the “Export as Image” feature of Sirius to get SVG files out of it. I dragged and dropped those files into Inkscape, scaled and composed a bit, and voilà! Here is the refcard.
You can decorate your office now ;) Hope you enjoy
Do you know Ecore? Looking for a reference card? was originally published by Cédric Brun at CEO @ Obeo on February 09, 2023.
by Cédric Brun (cedric.brun@obeo.fr) at February 09, 2023 12:00 AM
January 31, 2023
Jakarta EE track at Devnexus 2023!!!!
by Tanja Obradovic at January 31, 2023 08:25 PM
We have great news to share with you!
For the very first time, at Devnexus 2023 we will have a Jakarta EE track with 10 sessions, and we will take this opportunity to celebrate, whenever possible, all we have accomplished in the Jakarta EE community.
Jakarta EE track sessions
- 5 years of Jakarta EE Panel: a look into the future (hosted by Ivar and Tanja)
- Deep Dive MicroProfile 6.0 with Jakarta EE 10 Core Profile
- From javax to jakarta, the path paved with pitfalls
- Jakarta EE 10 and Beyond
- Jakarta EE and MicroProfile Highlights
- Jakarta EE for Spring Developers
- Jakarta EE integration testing
- Jakarta EE or Spring? Real world testimonies
- Let's take a look at how a Jakarta EE cloud-native application should look!
- Upgrading a Legacy Java EE App with Style
You may not be aware, but this year (yes, time flies!!) marks 5 years of Jakarta EE, so we will be celebrating throughout the year! Devnexus 2023 looks like a great place to mark this milestone as well! So stay tuned for details, but in the meantime please help us out: register for the event, come to see us, and spread the word.
Help us out in spreading the word about Jakarta EE track @Devnexus 2023, just re-share posts you see from us on various social platforms!
To make it easier for you to spread the word on socials, we also have prepared a social kit document to help us with promotion of the Jakarta EE track @Devnexus 2023, sessions and speakers. The social kit document is going to be updated with missing sessions and speakers, so visit often and promote far and wide.
Note: The organizers wanted to do something for people impacted by the recent tech layoffs, and decided to offer a 50% discount on any conference pass (valid for a limited time). Please use code DN-JAKARTAEE for the @JakartaEE track to get an additional 20% discount!
In addition, there will be an IBM workshop highlighting Jakarta EE; look for "Thriving in the cloud: Venturing beyond the 12 factors". Please use the promo code ($100 off) the organizers prepared for you: JAKARTAEEATDEVNEXUS (valid for a limited time).
I hope to see you all at Devnexus 2023!
What if you could design, simulate and analyze all at once using Open-Source solutions?
by Cédric Brun (cedric.brun@obeo.fr) at January 31, 2023 12:00 AM
At the Eclipse Foundation conference last October, we had the opportunity to demonstrate an integration of Eclipse Sirius Web and Project Jupyter Notebook for seamless design, simulation, and analysis.
As a data enthusiast, I’ve always been impressed by the versatility of Jupyter. It serves as a hub for reproducible science and a gateway to a vast array of Python frameworks for data science, visualization, machine learning and much more. Even if the demo is fairly basic, I wanted to showcase how such integrations can remodel the engineering process.
I can’t help but imagine the endless possibilities of quickly simulating design choices and making data-driven decisions on the spot. No more tedious data transfer between tools. The future of engineering is looking brighter every day with these open-source solutions.
What do you think? Have you used Jupyter or Sirius Web in your work? How? Would you consider it?
What if you could design, simulate and analyze all at once using Open-Source solutions? was originally published by Cédric Brun at CEO @ Obeo on January 31, 2023.
by Cédric Brun (cedric.brun@obeo.fr) at January 31, 2023 12:00 AM
January 27, 2023
2023.1: Follow the water rabbit!
January 27, 2023 10:00 AM
2023 has been here for a month, and it’s time to hop into the year of the Water Rabbit according to the Chinese calendar. Preparing the Sirius Web project’s objectives, I was wondering: what’s in store for this year? And, just out of curiosity, I searched for the Chinese zodiac fortune predictions for 2023:
The year of the Water Rabbit is going to be a gentler year.
We’ll have time to take a breather.
We’ve been in the tunnel for the last few years, and the light is getting bigger now.
And “OMG! That’s exactly how I feel!” We have been working really hard on Eclipse Sirius Web for the last two years, and we are gently landing at a new maturity stage. We introduced many new features in 2022 and, in parallel, we are engaged in a quality process to ensure a sustainable product for the next decade.
Follow the rabbit and discover the new 2023.1 Sirius Web release (and its soundtrack, I know you like it!):
End user
Your node is fadin’
Your love is fadin'
Sirius Web used to display all elements of a diagram, but in a large diagram the user may want to focus only on the elements that are most relevant to their needs.
Since 2023.1, you can hide or fade a diagram element through the UI.
If a diagram element is hidden, all contained nodes and connected edges are hidden too. After an element is hidden, in the case of manual layout, the layout remains the same.
If a diagram element is faded, there is no propagation to contained or connected diagram elements. Since faded graphical elements are still visible to the user, the auto-layout is still applied to them.

Studio Maker
Provide your images at runtime!
Shatter my image with the stones you throw
Don't shatter my image with the stones you throw
Users can now upload their own images from a project’s new settings page. These images can be displayed in forms using the new image widget, or in View-based diagrams.

List compartments!
I redesign that heart of yours
7 compartments plus one
You might need it
7 compartments plus two
You loved Sirius Desktop compartments? Now they also exist in Sirius Web. Starting from 2023.1, we support vertical compartments to display list elements. It’s a first step that we will continue to improve in the next releases.

Circle of Edge and Completion!
It's the circle of life
And it moves us all
Through despair and hope
We have enhanced the way you create your custom representations.
- We provide new circle arrow styles: Circle, FillCircle and CrossedCircle.

- We have support for completion in the View details for Domain types and AQL expressions. Any field in the property view which has a green background expects a Domain type, and a yellow background expects an interpreted expression.
In these fields, auto-completion can be triggered by hitting Ctrl + Space.
When hitting auto-completion on an empty expression, the first completion proposal will correspond to the features, services, etc. which are available on the current element.
If I am a rich form
If I was rich girl
Na, na, na, na, na, na, na, na, na, na, na
See, I'd have all the money in the world
With 2022.7 we introduced a WYSIWYG editor to simplify and speed up the process of building a Form Description.

We continue our work to enrich forms, with support for:
- style preview: you can see the static styles directly in the form preview (for instance, on the previous screenshot the green and red buttons).
- a basic image widget:

- a rich text editing widget: the widget behaves in a similar way to the existing textfield and textarea widgets, except that the text value should be valid Markdown and can be edited in a WYSIWYG way by the end user.

- groups: A Group is used to represent a Section in a details view tab.
- toolbar actions: A group can also define toolbar actions which will be used to create buttons in the toolbar of the group to let the end user execute some operations easily.

What’s next
- For the end user:
- A “link with editor” option to disable the auto link between the explorer and the diagram view
- Auto-wrap labels
- Templates to ease the creation of new content
- For the studio maker:
- Free form compartments
- Updates on the View
- For the developer:
- Switch to Java 17
That’s it for this release! As usual, you can find the detailed release notes in our documentation: https://docs.obeostudio.com/
Thanks to all our valued customers; we truly appreciate your involvement in sponsoring the Sirius Web open source project! If you want to join us and become a Sirius Web backer, send me an email or contact the team.
I wish you all a Hoppy New Rabbit Year!
January 25, 2023
Jakarta EE Community Update - 2022 in Review
by Tanja Obradovic at January 25, 2023 06:08 PM
2022 was an extremely important and successful year for Jakarta EE! We continue to see growth in membership, growth of compatible products, and most importantly growth of contributors and committers.
Here are some highlights from 2022:
Releases
We released Jakarta EE 10, the first innovative, community-driven release with new features. Jakarta EE 10 defines a new profile specification, Jakarta EE Core Profile 10. The Core Profile targets modernized, lightweight Java applications and microservices.
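To give a concrete feel for what a Core Profile style service can look like, here is a minimal, hypothetical sketch (all names are illustrative and not taken from the release) combining two Core Profile ingredients, Jakarta RESTful Web Services and CDI:
// HelloResource.java: a tiny REST endpoint backed by a CDI bean (illustrative names).
// Only APIs that are part of the Core Profile are used here.
package com.example.hello;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("hello")
@ApplicationScoped
public class HelloResource {

    @Inject
    GreetingService greetings; // CDI injects the bean declared below

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return greetings.greet("Jakarta EE 10");
    }
}

// A plain CDI bean; package-private so it can share the file with the resource.
@ApplicationScoped
class GreetingService {
    String greet(String who) {
        return "Hello, " + who + "!";
    }
}
Deployed on a Core Profile compatible runtime behind an @ApplicationPath-annotated jakarta.ws.rs.core.Application subclass, a GET request to /hello returns the greeting.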
The release also contains updates to over 20 specifications and adds important features requested by our global community:
- Jakarta Contexts and Dependency Injection (CDI) 4.0, including CDI-Lite that enables build time extensions
- Jakarta Security 3.0 supporting OpenID Connect
- Jakarta Servlet 6.0 for simplified programming and improved security
- Jakarta Faces (JSF) 4.0 with a modernized API using CDI
- Jakarta JSON Binding (JSON-B) 3.0 with new support for polymorphic types (see the sketch after this list)
- Jakarta RESTful Web Services standardizes a Java SE Bootstrap API and adds standard support for multipart/form-data
- Jakarta Persistence standardizes UUID as a basic type and extends the query language and Query API
- Jakarta Concurrency 3.0 has moved to the Web Profile and enhances the parallel and reactive programming models available to applications
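As an example of one of these additions, here is a minimal, hypothetical sketch (class names are illustrative, not taken from the specification documents) of the new polymorphic type support in Jakarta JSON Binding 3.0:
import jakarta.json.bind.Jsonb;
import jakarta.json.bind.JsonbBuilder;
import jakarta.json.bind.annotation.JsonbSubtype;
import jakarta.json.bind.annotation.JsonbTypeInfo;

public class PolymorphismDemo {

    // The "@type" property written into the JSON tells JSON-B which subtype to use.
    @JsonbTypeInfo(key = "@type", value = {
            @JsonbSubtype(alias = "dog", type = Dog.class),
            @JsonbSubtype(alias = "cat", type = Cat.class)
    })
    public interface Animal { }

    public static class Dog implements Animal { public String name; }
    public static class Cat implements Animal { public String name; }

    public static void main(String[] args) throws Exception {
        try (Jsonb jsonb = JsonbBuilder.create()) {
            Dog rex = new Dog();
            rex.name = "Rex";

            String json = jsonb.toJson(rex, Animal.class);
            System.out.println(json); // e.g. {"@type":"dog","name":"Rex"}

            Animal back = jsonb.fromJson(json, Animal.class);
            System.out.println(back.getClass().getSimpleName()); // Dog
        }
    }
}
The alias stored under the "@type" key is what allows deserialization back into the correct subtype.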
The work on Jakarta EE 11 has started! Now is a great time to get involved and have an impact on the development of the technology. The Jakarta EE Steering Committee has approved a resolution about the next Jakarta EE 11 release with the following high level guidelines:
- Target Java version 21
- Target GA date Q1 2024
- Priorities
- Unified APIs improving Developer Experience
- New Specifications
- Build on the Latest Java
- Enable Community Contribution
These guidelines are provided to encourage a common community direction for Jakarta EE 11.
Jakarta EE Platform team meetings are open for everyone to attend! There are weekly calls happening on Tuesdays at 11:00 AM ET and everyone is welcome to join. Please check the Jakarta EE Specifications Calendar (public url, iCal) for details. We are looking forward to more involvement and input from the community! If you miss a call or are interested in seeing what is being discussed, check out the meeting minutes.
Membership Growth
We have noticed growth in the number of individuals becoming contributors and committers in the Jakarta EE Specification projects. We encouraged, promoted, and celebrated individual contributions in one of our Jakarta EE Studio sessions during the JakartaOne Livestream 2022 event.

We also had a great year for organization membership growth. New members from 2022 are:
- Beijing Vsettan Data Technology Co. Ltd.
- Microsoft
- OmniFish
- NEC Corporation
- Shenzhen Ping A Communication Technology Co., Ltd
- Garden State JUG
- Open Elements
Compatible Products Program
The compatible product list is continually growing!
In total, across all releases, 17 vendors are listed with 19 products on the Jakarta EE Compatible Products page so far.
- Jakarta EE 10 (5 vendors with 4 Full Profile Compatible Products and 3 Web Profile and 3 Core Profile Compatible Products; some products with multiple versions) https://jakarta.ee/compatibility/certification/10/
- Jakarta EE 9.1 (12 vendors with 11 Full Profile Compatible Products and 5 Web Profile Compatible Products; some products with multiple versions) https://jakarta.ee/compatibility/certification/9.1/
- Jakarta EE 9 (6 vendors with 6 Full Profile Compatible Products and 4 Web Profile Compatible Products) https://jakarta.ee/compatibility/certification/9/
- Jakarta EE 8 (17 vendors with 19 Full Profile Compatible Products and 6 Web Profile Compatible Products) https://jakarta.ee/compatibility/certification/8/
JakartaOne Livestream Events
Our popular JakartaOne Livestream virtual conference series has attracted many interesting speakers and even more attendees!
56 Speakers:
- 12 Keynotes
- 33 Technical Talks
- 20 Vendor Presentations
- 36+ hours
We had over 2000 registered attendees for the live event and over 3000 YouTube playlist views!
The biggest celebration of Jakarta EE is always our annual JakartaOne Livestream event! This year was no exception: JakartaOne Livestream 2022 took place on December 6, 2022 and was a great success!
The JakartaOne Livestream virtual conferences, as you know, run in different languages as well!
This year we had the following language-specific events:
- JakartaOne Livestream - German - June 30, 2022
- JakartaOne LiveStream - Japanese - September 16, 2022
- JakartaOne Livestream - Chinese - August 31, 2022
- JakartaOne Livestream - Portuguese - September 29, 2022
If you and your community have interest in organizing the JakartaOne Livestream event, please visit jakartaone.org and find out all about hosting the event!
Jakarta EE Developer Survey!
Now in its sixth year, this is the enterprise Java ecosystem’s leading survey, with thousands of developers sharing their insights from around the globe. This time around, we will launch the survey on March 16, 2023. Please take the survey and share it with your network to maximize community outreach. Your input is greatly appreciated and helps Java ecosystem stakeholders better understand the requirements, priorities, and perceptions of enterprise developer communities.
Key findings of the 2022 survey include:
- Jakarta EE is the basis for the top frameworks used for building cloud native applications
- The top three frameworks for building cloud native applications include Spring/Spring Boot, which lost ground this year at 57% (60% in 2021), followed by Jakarta EE at 53% (up from 47% in 2021), and MicroProfile at 30% (down from 34% in 2021). It’s important to note that Spring/Spring Boot is reliant on Jakarta EE developments for its operation and is not competitive with Jakarta EE. Both are critical ingredients to a healthy enterprise Java ecosystem.
- Jakarta EE 9/9.1 usage has grown to 14% (vs. 9% in 2021).
- While 36% of respondents have already migrated or plan to adopt Jakarta EE 9/9.1 (with 14% already running Jakarta EE 9/9.1 in production), 19% of respondents plan to skip Jakarta EE 9/9.1 altogether and move directly to Jakarta EE 10.
- Over 59% of respondents (48% in 2021) have migrated to Jakarta EE or plan to do so within the next 6-24 months.
Stay tuned for the 2023 Developer Survey URL on March 16!
Jakarta Tech Talks
Jakarta Tech Talks is another very popular, community-oriented meet-up series designed to share knowledge and invite everyone interested to participate in Jakarta EE-related technologies. We had 13 sessions in 2022 and quite a few sessions are already scheduled for 2023!
If you are interested in presenting or have an idea on what you would like to hear in a Jakarta Tech Talk, please let us know.

______________________________
The Jakarta EE Working Group Charter can be viewed here. More information about the working group is available via its website and its mailing lists can be found here. The Jakarta EE Working Group is supported and backed by its industry members. The Working Group has declared these projects as being in its purview. Jakarta EE compatible products can be viewed here.
Stay Connected With the Jakarta EE Community
The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:
· Social media: Twitter, Facebook, LinkedIn Group, LinkedIn Page
· Mailing lists: jakarta.ee-community@eclipse.org, jakarta.ee-wg@eclipse.org, project mailing lists, Slack workspace
· Calendars: Jakarta EE Community Calendar, Jakarta EE Specification Meetings Calendar
· Newsletters, blogs, and emails: Eclipse Community Newsletter, Jakarta EE blogs, Hashtag Jakarta EE
· Meetings: Jakarta Tech Talks, Jakarta EE Update, and Eclipse Foundation events and conferences
You can find the complete list of channels here.
To help shape the future of open source, cloud native Java, get involved in the Jakarta EE Working Group.
To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.
We always welcome your feedback!
Thank you for your interest and involvement in Jakarta EE!
Are your engineering tools built on top of strong and well-maintained technologies?
by Cédric Brun (cedric.brun@obeo.fr) at January 25, 2023 12:00 AM
When you pick technologies to build tools empowering hundreds of your engineers, you aim to make the best choice. Open source is the best option when you are playing the long game.
The Eclipse Foundation ensures that all the metrics related to an Open-Source project are exposed on its website, helping you to assess the effort which goes into maintaining and updating the technology.
With 12 releases in 2022, the Eclipse Sirius project demonstrates its strength as a platform for building graphical modeling tools in any engineering domain. Each release brings a number of improvements and fixes to the core technology. In the last 6 years, the team behind Sirius shipped no less than 60 releases!
We are committed to maintaining a similar pace for 2023. We are still at the beginning of the year and both Sirius and Sirius Web have already shipped a release each. Enjoy!
Building graphical modeling tools can be a complex undertaking, especially if they need to support many features and functions. At Obeo, we have extensive experience in this area and strive to make the process as easy and accessible as possible. To accomplish this, we rely on several strategies, including modular design, higher-level abstractions, and the ability to iterate quickly on a tool definition. In the last few years we have kept these principles while transitioning the technologies to the Web.
Are your engineering tools built on top of strong and well-maintained technologies? was originally published by Cédric Brun at CEO @ Obeo on January 25, 2023.
by Cédric Brun (cedric.brun@obeo.fr) at January 25, 2023 12:00 AM
January 16, 2023
European Cyber Resilience Act: Potential Impact on the Eclipse Foundation
by Mike Milinkovich at January 16, 2023 02:22 AM
Europe has proposed new legislation intended to improve the state of cybersecurity for software and hardware products made available in Europe. The Cyber Resilience Act (“CRA”) will mandate that all manufacturers take security into account across both their development processes and the lifecycle of their products once in the hands of consumers.
This document discusses the legislation and the potential impact it may have on the Eclipse Foundation and its 400+ projects and community. Many of the issues noted could have a similar impact on other open source organizations and projects. It is written based on our reading of the current draft legislation and a number of assumptions stated below. Note that it consciously does not include a discussion of possible revisions to the legislation, although we may post a follow-up which does. It also does not include any discussion concerning the warranty and product liability provisions of the legislation, as we have not yet analyzed the impact those may have on us.
We are sincerely looking for comments and feedback, as it is quite possible that we have misunderstood or misinterpreted the documents.
It is important to stress that the Eclipse Foundation is better positioned to deal with the fallout from the CRA than many other open source organizations. We have staff. We have some resources. We have common community processes and practices shared across our many projects. We have CI/CD infrastructure shared by most (but not all) of our projects. We have a security team, written security policies and procedures, and are a CVE numbering authority. Despite being in a better position than most, we fear that the obligations set forth by the legislation will cripple the Eclipse Foundation and its community.
There are a number of other excellent summaries of the worrisome impact of this legislation on the open source ecosystem. We highly recommend reading:
- Open-source software vs. the proposed Cyber Resilience Act by Maarten Aertsen.
- The EU’s Proposed Cyber Resilience Act Will Damage the Open Source Ecosystem by Olaf Kolkman.
Both of those articles primarily focus on the potential impact of the CRA on individual open source projects. Olaf’s document in particular suggests improvements to the draft. In this document we want to focus on the impact on an organization such as the Eclipse Foundation and its open source projects if the CRA was approved in its current form. How the CRA should or could be amended is being discussed elsewhere. The purpose of this document is to provide a resource explaining the impact of the legislation as it stands today.
It is important to note that the CRA does make a laudable attempt to carve out free and open source software but only “…outside the course of a commercial activity…”. Maarten Aertsen does an excellent job of summarizing the problems with this carve out. In particular he references a definition of commercial activity used in EU Blue guide to the implementation of EU product rules which states:
Commercial activity is understood as providing goods in a business related context. Non-profit organisations may be considered as carrying out commercial activities if they operate in such a context. This can only be appreciated on a case by case basis taking into account the regularity of the supplies, the characteristics of the product, the intentions of the supplier, etc. In principle, occasional supplies by charities or hobbyists should not be considered as taking place in a business related context.
Assumptions
- The CRA references the term “product” over 600 times but does not appear to define it. The act does define the term ‘product with digital elements’. For the purposes of this document, we will assume that any Eclipse Foundation project software made generally available to the public as a downloadable, installable, and executable binary would be considered a ‘product with digital elements’ under the regulation.
- In addition, there are at least some EF projects which may be considered a ‘critical product with digital elements’ (e.g. Kura, Keyple, ioFog, fog05) or a ‘highly critical product with digital elements’ (e.g. Oniro, Leda, 4diac).
- The CRA defines ‘manufacturer’ as “any natural or legal person who develops or manufactures products with digital elements or has products with digital elements designed, developed or manufactured, and markets them under his or her name or trademark, whether for payment or free of charge”. For the purposes of this document, we will assume that the Eclipse Foundation would be considered the manufacturer of the binaries produced by its projects. Among other reasons justifying this assumption, the Eclipse Foundation asserts that it owns the trademark rights for each of its projects and the binaries they release and (resources permitting) we market them as works of the Eclipse Foundation.
- As mentioned above there is an attempt to exclude free and open source software produced outside the course of a commercial activity from the scope of the legislation. For the purposes of this document we will assume that Eclipse Foundation project software would be considered as produced under the course of a commercial activity, and would therefore be subject to the legislation. This assumption is based on the following:
- The Eclipse Foundation is not a charity. It is a Belgian-incorporated international nonprofit association of hundreds of business members.
- Eclipse Foundation projects are not, generally speaking, developed by hobbyists. While some are, our projects are commonly developed by full-time employees of our member companies or by individuals who are making a living from consulting services related to their project work.
- Eclipse Foundation projects provide goods in a business related context. By that we mean that EF projects are largely intended to provide software which is immediately ready for adoption by businesses either as a component within a commercial product or by use by employees in their daily work.
- Eclipse Foundation projects provide a regularity of supply. As one extreme example, the Eclipse IDE takes great pride in having not missed a single release date in over 15 years.
- Eclipse Foundation projects deliver high quality software, equivalent to the quality found in commercial products. So the “characteristics of the product” are equivalent to commercial products.
Having said all of the above it is important to remind the reader that all Eclipse Foundation projects provide their software for free, on a non-profit basis, and under OSI-approved open source licenses which permit further use, study, modification, and distribution.
Impact Assessment
CE Markings for Software Products
Fundamentally, the core of the proposed legislation is to extend the CE Mark regime to all products with digital elements sold in Europe. Our assumption based on the current text is that this process will be applied to open source software made available under open source licenses and provided free of charge, ostensibly under licenses which disclaim any liability or warranty. We are deeply concerned that the CRA could fundamentally alter the social contract which underpins the entire open source ecosystem: open source software provided for free, for any purpose, which can be modified and further distributed for free, but without warranty or liability to the authors, contributors, or open source distributors. Legally altering this arrangement through legislation can reasonably be expected to cause unintended consequences to the innovation economy in Europe.
Without a clearer exemption for open source, in order to comply with the legislation the Eclipse Foundation will be required to:
- Develop, document, and implement policies and procedures for every project at the Eclipse Foundation to ensure they are conformant with the requirements of the CRA including:
- All of the development and post-release security requirements set forth in Annex I, including providing notification and update mechanisms.
- All of the user documentation requirements set forth in Annex II.
- All of the product technical documentation set forth in Annex V, including “…complete information on the design and development of the product…including, where applicable, drawings and schemes and/or a description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing.”
- For each EF project release, prepare the project-specific documentation required by Annex V, including “…an assessment of the cybersecurity risks against which the product with digital elements is designed, developed, produced, delivered and maintained…”.
- Determine for each project whether it meets the definition of ‘product with digital elements’, ‘critical product with digital elements’, or ‘highly critical product with digital elements’.
- For each project which is a ‘product with digital elements’, establish, complete, and document a CE mark self assessment process.
- For each ‘critical product with digital elements’ or ‘highly critical product with digital elements’ engage with an external CE auditing body and complete the additional processes required to get the CE mark approval. Note that it is not clear to us what the costs in time, resources, and money would be to implement these external audit processes. Our assumption is that they would be substantial.
It is also important to note that in most other domains regulated with CE markings, the markings are applied where well-known standards, specifications, and/or certification processes are already in place. These are not in place for most Eclipse Foundation open source projects. This could significantly increase the costs and risks associated with conformance.
- For every single project release, document that the relevant CE mark process is followed (as described above), that an EU declaration of conformity is written and signed by an officer of the foundation, that the CE mark is affixed, and that the technical documentation and EU declaration of conformity are made available for at least 10 years after the release. Note that we estimate that in any given year the Eclipse Foundation’s projects make available several hundred releases.
Article 4(3)
Member States shall not prevent the making available of unfinished software which does not comply with this Regulation provided that the software is only made available for a limited period required for testing purposes and that a visible sign clearly indicates that it does not comply with this Regulation and will not be available on the market for purposes other than testing.
Many Eclipse Foundation projects make integration, nightly, weekly, and milestone builds available indefinitely under their open source licenses. The intent is to provide for community testing and for traceability. These binaries are marked as such, but the terms under which they are provided do not require that they be used for testing purposes only.
It is not clear how this requirement could be implemented by any open source project using modern CI/CD infrastructure and operating under the principle of transparency. Even if the binaries were marked as “testing purposes only”, the open source licenses they are provided under do, in fact, permit uses other than testing. Further, it is common practice to provide intermediate builds for extended periods of time (often permanently) to provide testers with access to past builds for problem identification and remediation. Discontinuing that practice would be significantly disruptive. And any solution based on providing intermediate builds under non-open source licenses would be impossible for Eclipse Foundation projects, as the EF does not own the copyright and obtaining the approval of all contributors would be impractical. In summary, compliance with this CRA requirement would represent a significant blow to open source development best practices.
Article 5(1) and Section 1 of Annex I
(1) Products with digital elements shall be designed, developed and produced in such a way that they ensure an appropriate level of cybersecurity based on the risks
At a minimum this would require the development and enforcement of written policies requiring every project to assess their level of cybersecurity risk and to implement processes to ensure that there is a determination of the risk level and a justification for the development processes adopted.
(2) Products with digital elements shall be delivered without any known exploitable vulnerabilities
(3) On the basis of the risk assessment referred to in Article 10(2) and where applicable, products with digital elements shall:
(a) …(j)
These would require a material change to our community’s release processes to require attestations that there are no known vulnerabilities and to comply with the many requirements listed.
(k) ensure that vulnerabilities can be addressed through security updates, including, where applicable, through automatic updates and the notification of available updates to users.
With a few exceptions, EF projects do not “call home”, do not require any sort of user registration, and do not provide a mechanism for notifying all users that an update is either available or required. Implementing these requirements would require a whole new infrastructure to be mandated across all projects.
Article 5(2) and Section 2 of Annex I “Vulnerability Handling Requirements”
In general, the Eclipse Foundation is in decent shape to deal with many of the stated requirements. As noted above we have a security team, written security policies and procedures, and are a CVE numbering authority. However, there are two notable elements in the requirements.
(1) identify and document vulnerabilities and components contained in the product, including by drawing up a software bill of materials in a commonly used and machine-readable format covering at the very least the top-level dependencies of the product
This would impose a legal requirement to produce SBOMs for all EF projects. Although it is something we aspire to, this is a very significant effort. It would also require actively monitoring all project dependencies for known vulnerabilities. This is generally considered an unsolved problem within the open source ecosystem, with no known path to implementation.
(3) apply effective and regular tests and reviews of the security of the product with digital elements;
These would require a material change to our community’s development processes to mandate a whole class of testing which is not currently mandated for our projects. This is a very significant effort both to implement and to maintain.
January 14, 2023
Sticky Transparent OceanWP Header
by Kai Tödter at January 14, 2023 09:27 AM
Recently I updated my website to a new, fresh look. I chose the great OceanWP theme (free version). While the built-in functionality is already awesome, I wanted my header to be sticky and transparent.
I searched for a solution and was inspired by this article, but got better results when using position: sticky instead of position: fixed. Furthermore, I wanted the top-header to always scroll and only the main header to be sticky.
For the transparency, I used a (non-transparent) minimal header in the UI and overrode it in the CSS. I decided to start with black and make the background just transparent enough that it matches my desired gray color at the top of the page.
To implement this I just added a few lines of CSS in the template customizer (Custom CSS/JS):
/* Make the main header sticky and slightly transparent on larger screens */
@media only screen and (min-width: 768px) {
    #site-header {
        position: sticky;
        top: 0;
        background-color: #000000e1 !important;
        transition: height .3s;
    }
}
Then I wanted to add shrinking of the header height and the logo. For that I added a bit of JavaScript (which could still be optimized):
// Shrink the sticky header and the logo once the page is scrolled down (desktop widths only).
window.addEventListener('scroll', function (e) {
    if (window.innerWidth > 768) {
        var header = document.getElementById('site-header');
        if (window.scrollY >= 30 && header.style.height !== '40px') {
            // Scrolled down: shrink the header, the menu line height, and the logo.
            header.style.height = '40px';
            var list = document.querySelectorAll('#site-navigation-wrap .dropdown-menu > li > a');
            for (var i = 0; i < list.length; ++i) {
                list[i].style.setProperty('line-height', '40px', 'important');
            }
            var el = document.querySelector('#site-logo #site-logo-inner a img');
            el.style.setProperty('max-width', '100px', 'important');
            el = document.querySelector('#site-logo #site-logo-inner');
            el.style.setProperty('height', '40px', 'important');
        } else if (window.scrollY < 30 && header.style.height !== '60px') {
            // Back near the top: restore the original sizes.
            header.style.height = '60px';
            var list = document.querySelectorAll('#site-navigation-wrap .dropdown-menu > li > a');
            for (var i = 0; i < list.length; ++i) {
                list[i].style.setProperty('line-height', '60px', 'important');
            }
            var el = document.querySelector('#site-logo #site-logo-inner a img');
            el.style.setProperty('max-width', '130px', 'important');
            el = document.querySelector('#site-logo #site-logo-inner');
            el.style.setProperty('height', '60px', 'important');
        }
    }
});
You can see the result when scrolling this blog post. Note that my approach is a quick-and-dirty solution that may have unexpected side effects. If you want even more features and a more professional implementation, you could take a look at the commercial “Sticky Header” plug-in from OceanWP.
January 12, 2023
Docker on macOS M1
by Lorenzo Bettini at January 12, 2023 08:15 AM
Building Graphical Modeling Tools, Approaches to Reducing Complexity
by Cédric Brun (cedric.brun@obeo.fr) at January 12, 2023 12:00 AM
Building graphical modeling tools can be a complex undertaking, especially if they need to support many features and functions. At Obeo, we have extensive experience in this area and strive to make the process as easy and accessible as possible. To accomplish this, we rely on several strategies, including modular design, higher-level abstractions, and the ability to iterate quickly on a tool definition. In the last few years we have kept these principles while transitioning the technologies to the Web.
The Fellowship of the Modules
Just like the quest to destroy the One Ring in The Lord of the Rings was made easier by breaking it down into smaller tasks and delegating them to various members of the fellowship, the Eclipse modeling technologies use a modular design to manage complexity. Each project is responsible for a specific task, delivering components that can be reused and integrated into a tool for the end user.
For example,
- EMF handles model data and its API,
- Sirius focuses on editors and tooling,
- EMF Compare enables the comparison, merging, and conflict resolution of different versions of models,
- Acceleo allows for code or text generation from models,
- M2Doc produces reports and documents using models and diagrams as inputs.
This modular design has several benefits. It makes the software easier to understand and work on, as you can focus on one module at a time rather than trying to comprehend the entire system all at once. Modular design also facilitates code and functionality reuse. If you build a module that does something useful, you can use it in other projects. The Sirius project is a good example of this, as it provides a complete set of features that are reused and exposed through hundreds of graphical modelers. You can see some examples in the Sirius Gallery.
While modular design is useful, it is not a perfect solution and does have some challenges. One challenge is ensuring that the modules work well together and do not have conflicts or dependencies. This can be especially difficult when the modules are evolving independently within their own projects. To address this issue, we coordinate with other projects within the Eclipse Release Train and build an integrated suite called the “Obeo Designer Community,” which is a ready-to-use packaging.
Inception: The Higher-Level Abstraction Edition
Just like Cobb and his team in Inception, we use higher-level abstractions to hide the underlying complexity of building a graphical modeling software and make the process more manageable for our users.
Higher-level abstractions can take many forms, such as libraries, frameworks, or domain-specific languages (DSLs). At Obeo, we use DSLs as our choice for higher-level abstractions. An example of this is Sirius.
When you define a tool using Sirius, you specify the graphical modeler you want to achieve in terms of graphical shapes and how these shapes are mapped to the domain model. You can also specify a set of editors, actions, and wizards that can be launched by the end user, without having to deal with the details of coding these features on the underlying platform. Sirius handles these details behind the scenes.
However, higher-level abstractions also have their challenges. One challenge is that they can add an extra layer of complexity to the software. Developers must understand how the abstraction works and how to use it correctly. To help with this, we offer support and expertise, training, and tutorials for getting started with Sirius. We have also held the SiriusCon conference every year since 2015 to help our community discover what they can do with Sirius.
Another challenge is that higher-level abstractions can be limiting. They may not provide all the features and flexibility that developers need, or they may make it difficult to do things in a different way. To address this, we allow for tool behavior to be extended with Java code when necessary. This is useful when the tool needs to interface with another tool directly, rather than through file exchanges, or when specific computations or user interfaces are required.
The Eclipse Modeling platform is generally extensible, and EMF, Compare, Acceleo, Sirius, and other projects provide dedicated extension points to allow their behavior to be customized using Java code and APIs. In addition, Sirius and Acceleo allow for branching out to simple Java code directly, without the need to fully understand the Eclipse platform.
The Fast and the Furious of Graphical Modeling Tools: Hot Reloading
Like the crew in the Fast and Furious franchise, we aim to reduce the complexity of building graphical modeling software by enabling fast iteration and turnaround.
Fast iteration means being able to make changes to the software quickly and easily, and to see the results of those changes right away. In the case of Sirius, two factors enable this. First, by providing a higher-level abstraction to define a modeling tool, one can express more quickly and with more precision what the tool should look like and do. The second factor, and this one stands out quite a bit compared to the other frameworks you can use to build a graphical tool, is that Sirius hot-reloads your tool definition: you are able to instantly see the tool in action, adapt its definition, see the result, and iterate. It’s life changing, as the cost of trying another way to represent the domain and interact with it is only minutes, and going back to the previous version of the tool is one CTRL-Z away.
With Sirius Web we even go one step further in reducing this feedback loop: you adapt the tool, it’s instantly usable by all the engineers accessing it directly from their web browser.
To summarize, building a graphical modeling tool can be complex, but there are several ways to approach this complexity. Modular design allows for easier understanding and reuse of code, while higher-level abstractions can hide underlying complexity from the user. Fast iteration and turnaround is also important for efficient development. Obeo has been working on technologies to make building graphical modeling tools more accessible for many years now, and we are excited by the prospects of what is to come on this path: while Sirius on the desktop has proven this is an efficient way to tackle this complexity, Sirius on the Web goes even one step further in making such tools accessible to anyone.
Building Graphical Modeling Tools, Approaches to Reducing Complexity was originally published by Cédric Brun at CEO @ Obeo on January 12, 2023.
by Cédric Brun (cedric.brun@obeo.fr) at January 12, 2023 12:00 AM
December 20, 2022
JBoss Tools 4.26.0.Final for Eclipse 2022-12
by jeffmaury at December 20, 2022 02:10 PM
Happy to announce 4.26.0.Final build for Eclipse 2022-12.
Downloads available at JBoss Tools 4.26.0 Final.
What is New?
Full info is at this page. Some highlights are below.
General
Components removal
As planned and communicated in a previous blog article, the following components have been removed from the JBoss Tools distribution:
- Forge
- Livereload
- Angular
- JSDT
Please note that the following components are deprecated: they are still part of this JBoss Tools distribution, but they will be removed from the next JBoss Tools release:
- WebServices
- JSF
- Seam
- Batch
- Visual Page Editor
- Central installation
OpenShift
OpenShift Application Explorer view based on odo 3.x
The OpenShift Application Explorer view was based on odo 2.x in previous versions of JBoss Tools. odo 2.x already leverages the power of devfiles to describe your development environment, and odo 3.x enhances and simplifies the workflow.
With odo 3.x, you can create a component (unit of deployment) from your source files. Once the component is created, you can start it in dev mode: a new deployment will be created on the cluster, the application will be built on the cluster, and then, each time you modify some of the source files on your local workstation, the change will be broadcast to the remote cluster.
In order to test your application, you can open a browser from the OpenShift Application Explorer and browse your application running on the cluster.
Once your component is running in dev mode, you can start a local debugger (Java, Node.js, Python) that will connect to the deployment on the cluster and let you troubleshoot and analyze more complex use cases.
This addresses the inner loop style of development where you can get instant feedback on your changes.
odo 3.x also supports the outer loop style of development: once you think your application is ready to be deployed on a staging, integration, or production cluster, you can start your component in deploy mode. A new image defined by yourself will then be built and deployed on the cluster.
In the following example, we will start from a Quarkus application generated from https://code.quarkus.io, create the component, start the dev mode, check that we can access the application, start the debugger and check that we can reach a breakpoint.
Hibernate Tools
Runtime Provider Updates
The Hibernate 6.1 runtime provider now incorporates Hibernate Core version 6.1.5.Final, Hibernate Ant version 6.1.5.Final and Hibernate Tools version 6.1.5.Final.
The Hibernate 5.6 runtime provider now incorporates Hibernate Core version 5.6.14.Final and Hibernate Tools version 5.6.14.Final.
The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.27.Final and Hibernate Tools version 5.3.27.Final.
Happy Holidays from the Eclipse Foundation
by Mike Milinkovich at December 20, 2022 12:30 PM
As 2022 draws to a close, I would like to express my sincere gratitude to our contributors, committers, members, and the Eclipse Foundation team for your commitment, passion, professionalism, persistence, and tremendous contributions to our community’s success.
This year included a number of accomplishments and milestones in the Eclipse community. We welcomed over 20 new projects, 55 new member companies, and a new working group with Eclipse Software Defined Vehicle. Also, this year, the Board of Directors approved the creation of Interest Groups, the next step in furthering the Eclipse Foundation’s governance framework to enable “innovation through collaboration” by empowering members to work together using a lighter-weight governance structure than our more formal working groups. Find out how to start a collaboration and share the opportunity with your colleagues and network.
After 3 years of virtual interactions, we held our first in-person EclipseCon in Ludwigsburg. This chance to connect with friends and colleagues, new and old, was not taken for granted with over 415 participants. We could not make it happen without our speakers, sponsors and participants! Mark your calendars for the next EclipseCon – October 16-20, 2023.
Moments like that remind us of the importance of coming together, and we hope that the new year will give us many more opportunities for our global community to collaborate.
All the best for 2023!
December 16, 2022
Introducing the Automotive Open Source Summit
by Mike Milinkovich at December 16, 2022 12:30 PM
2022 has been a fantastic year for the Eclipse Foundation. We’ve managed to grow all aspects of our organization, due in no small part to the ongoing proliferation of open source across industries worldwide. Perhaps no one Working Group exemplifies our efforts in 2022 more than the Eclipse Software Defined Vehicle (SDV) Working Group, which has seen significant momentum in terms of new members, innovation, and new projects. Just this week, Mercedes-Benz Tech Innovation has announced they will be lending their own resources and talents to Eclipse SDV. With this in mind and before we all part ways for the holidays, I wanted to cap this year with some exciting news related to just this subject.
Coming in early June 2023, the Eclipse Foundation and the Eclipse SDV WG will hold the first automotive industry event focused on open source software and collaboration-based innovation. Named the Automotive Open Source Summit and based in the Munich area, this event will highlight speakers from organizations throughout the auto industry, including organizations outside the Eclipse community, as well as leaders within our own automotive initiatives. This will be a one-and-a-half-day event, and specific details will be finalized early in the new year.
The Summit will focus on the latest trends and “business” topics targeting executives, senior technical leaders, and other decision makers. The main conference will be preceded by an exclusive executive round table attended by the industry’s most influential leaders. Our goal is for this conference to become a “must attend” event for all participants in the automotive software ecosystem, regardless of whether they are actively engaged with open source technology. In the coming years, we plan on extending the program for developers by designing a technically targeted track.
The Summit will feature speakers from Eclipse automotive initiatives as well as organizations and leaders from outside the Eclipse community. We want to attract the participation of all high profile open source and open specification initiatives in the automotive industry.
This is just one of the exciting new developments the Eclipse Foundation has percolating for next year. We can’t wait to give you and the rest of the community more details. In the meantime, Happy Holidays to you and yours for 2022 and we look forward to engaging with all of you in 2023!
Announcing Eclipse Ditto Release 3.1.0
December 16, 2022 12:00 AM
The Eclipse Ditto team is proud to announce the availability of Eclipse Ditto 3.1.0.
Version 3.1.0 brings policy imports, AMQP 1.0 message annotation support, conditional message sending, and other smaller improvements, e.g. regarding shutdown/restart behavior.
Adoption
Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto
If you use Eclipse Ditto, it would be great to support the project by putting your logo there.
Changelog
The main improvements and additions of Ditto 3.1.0 are:
- Conditional message processing based on a specified condition targeting the twin state
- Support for reading/writing AMQP 1.0 “Message annotations” in Ditto managed connections
- Policy imports: Reference other policies from policies, enabling reuse of policy entries
- Several Ditto explorer UI enhancements
- Support for configuring an audience for Ditto managed HTTP connections performing OAuth2.0 based authentication
The following non-functional enhancements are also included:
- End-2-End graceful shutdown support, enabling a smoother restart of Ditto services with less user impact
- Support for encryption/decryption of secrets (e.g. passwords) that are part of Ditto managed connections, before persisting them to the database
- IPv6 support for blocked subnet validation
The following notable fixes are included:
- Fixed known connections not being started immediately after a connectivity service restart
Please have a look at the 3.1.0 release notes for more detailed information on the release.
Artifacts
The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.
The Ditto JavaScript client release was published on npmjs.com:
The Docker images have been pushed to Docker Hub:
- eclipse/ditto-policies
- eclipse/ditto-things
- eclipse/ditto-things-search
- eclipse/ditto-gateway
- eclipse/ditto-connectivity
–
The Eclipse Ditto team
December 12, 2022
Pitfalls in TypeScript - Broken Liskov Substitution Principle for Fields
by n4js dev (noreply@blogger.com) at December 12, 2022 09:39 AM
Subtypes can specialise their parent type or leave it as is. This specialisation can be done either by adding members such as fields or methods, or by overriding members and widening or narrowing their types. However, manipulating types is very tricky and can only be done depending on how members are accessed. In general, the Liskov substitution principle applies.
Subtype Requirement: Let φ(x) be a property provable about objects x of type T. Then φ(y) should be true for objects y of type S where S is a subtype of T.
-- Barbara Liskov and Jeannette Wing, 1994
In other words: Whatever you can do with an object typed T you should also be able to do with an object typed S, where S is a subtype of T. However, let's have a look at what is possible in TypeScript but questionable from a type safety point of view (and hence not allowed in N4JS, Java, and other languages):
class C1 {
    fieldC1: boolean = true;
}
class C2 extends C1 {
    // Liskov substitution principle violated here since type of 'fieldC1' gets narrowed
    override fieldC1: true = true; // note: 'true' is a subtype of 'boolean'
}
const c2 = new C2();
c2.fieldC1 = false; // TypeScript ERROR: Type 'false' is not assignable to type 'true'.
const c1: C1 = c2; // up-cast
c1.fieldC1 = false; // assign 'false' via supertype
console.log(c2.fieldC1); // yields print out: "false"
if (c2.fieldC1 == false) { // TypeScript ERROR: This comparison appears to be unintentional because the types 'true' and 'false' have no overlap.
    // actually this is reachable
}
What we see in the example above is how the Liskov substitution principle is broken for fields in TypeScript: we cannot do with c2 what we are able to do with c1, despite the fact that the type of c2 is a subtype of the type of c1. In order to preserve the Liskov substitution principle as we know it from languages like Java, N4JS, and others, we are neither allowed to narrow nor to widen the types of fields along the type hierarchy. This is called type invariance. In TypeScript, however, covariance is allowed, which means that the type of a field can be narrowed to a subtype, as shown in the example above.
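For comparison, here is a small illustrative Java sketch (not part of the original post) showing why the same hole does not open up there: a field re-declared in a subclass hides the inherited field instead of overriding it, and every field access is resolved against the static type of the reference.
class C1 {
    Object field = "from C1";
}

class C2 extends C1 {
    String field = "from C2"; // hides C1.field, it does not override it
}

public class FieldHidingDemo {
    public static void main(String[] args) {
        C2 c2 = new C2();
        C1 c1 = c2; // up-cast

        c1.field = Integer.valueOf(42); // writes the separate C1.field slot
        String stillSafe = c2.field;    // still "from C2": resolved statically against C2

        System.out.println(c1.field);   // 42
        System.out.println(stillSafe);  // from C2
    }
}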
by Marcus Mews
by n4js dev (noreply@blogger.com) at December 12, 2022 09:39 AM
December 10, 2022
UI Tests in Another Display in Linux
by Lorenzo Bettini at December 10, 2022 11:28 AM
December 07, 2022
WTP 3.28 Released!
December 07, 2022 03:59 PM
December 06, 2022
Updates to the Eclipse IP Due Diligence Process 2022
December 06, 2022 12:00 AM
November 25, 2022
Yes, JakartaOne Livestream is on December 6th and you are invited!
by Tanja Obradovic at November 25, 2022 10:20 PM
Yes, JakartaOne Livestream is on December 6th and you are invited!
Our annual one-day virtual conference for developers and technical business leaders is taking place on December 6th this year! The program of JakartaOne Livestream 2022 in English has been published, and I invite you to register for the event. Tune in on December 6th, sit back, and enjoy!
The Program Committee, composed of prominent members of our community, had the very hard task of selecting talks this year, and the final list looks great! Thank you Edwin, Mala, Josh, Mary and Otavio!

The speaker lineup is impressive and we are all excited to hear them talk about cloud native architecture, developer tools and testing, Jakarta EE, and MicroProfile!

Your hosts Shabnam, Ivar and I will have quite a few interesting conversations for you in the Studio Jakarta EE as well!

We’re looking forward to seeing you at JakartaOne Livestream 2022!
November 24, 2022
Update on Security improvements at the Eclipse Foundation
November 24, 2022 03:00 PM
Thanks to financial support from the OpenSSF’s Alpha-Omega project, the Eclipse Foundation is glad to have made significant improvements over the last couple of months. Our previous analysis helped us prioritize the work areas where improvements would be the most significant. Let’s see where we are today.
Protect the branches of GitHub projects
One of the main issues identified by Scorecard during our previous analysis is the lack of branch protection on our repositories at GitHub. Trying to set this up manually on all of our 1000+ repositories does not scale; we need some tooling. We’ve reviewed the tools on the market that help manage GitHub organizations and repositories at scale, but none complied with our requirements in terms of security, workflow, or ease of use. Also, we are strong proponents of the “as code” approach. We think this principle helps tremendously in being open and transparent. The Eclipse Foundation advocates these two principles as the basis for collaborating and innovating with open source.
As such, we’ve started to work on our own custom solution, based on an idea from George Adams. The project is named Otterdog (because 🦦🐶 » 🐙🐱). The idea is to let an administrator define a default configuration for organizations and repositories, and encode only the differences for specific projects. It’s still in its infancy and currently focuses on retrieving (at scale) the configuration from GitHub and storing the variations from the default configuration.
The project relies heavily on Jsonnet for the configuration-as-code part, on Bash scripts and the GitHub CLI to interact with the REST and GraphQL APIs, and also on Puppeteer for all settings that are not available through the GitHub APIs.
We’ve implemented default settings in the repository. The settings file for the Eclipse OpenJ9 organization looks like this:
local orgs = import './orgs.libsonnet';
orgs.newOrg('eclipse-openj9') {
api+: {
billing_email: 'webmaster@eclipse.org',
dependabot_alerts_enabled_for_new_repositories: false,
dependabot_security_updates_enabled_for_new_repositories: false,
dependency_graph_enabled_for_new_repositories: false,
},
puppeteer+: {
'settings/discussions'+: {
discussions_enabled: false,
},
'settings/member_privileges'+: {
members_can_change_repo_visibility: true,
members_can_delete_repositories: true,
readers_can_create_discussions: true,
},
'settings/packages'+: {
packages_containers_internal: false,
packages_containers_public: false,
},
},
repositories+: [
{
default_branch: 'master',
description: "Eclipse OpenJ9: A Java Virtual Machine for OpenJDK that's optimized for small footprint, fast start-up, and high throughput. Builds on Eclipse OMR (https://github.com/eclipse/omr) and combines with the Extensions for OpenJDK for OpenJ9 repo.",
branch_protection_rules: [
{ pattern: 'v*-release' },
{ pattern: 'master' },
{ pattern: 'jitaas' },
],
name: 'openj9',
},
{
default_branch: 'openj9',
description: "Eclipse OpenJ9's clone of the Eclipse OMR (https://github.com/eclipse/omr) project. PRs should be opened against the upstream OMR project whenever possible.",
branch_protection_rules: [
{ pattern: 'v*-release' },
{ pattern: 'openj9' },
{ pattern: 'jitaas' },
],
name: 'openj9-omr',
},
...
],
}
Apply principle of least privilege to GitHub workflows’ tokens
We’ve been contacted by StepSecurity to evaluate their solution, which makes it easy to submit PRs with fixes for some of the issues reported by Scorecard. We’ve been quite impressed by the ease of use of their solution:
The dashboard provides a nice overview of the repositories and their Scorecard results. There is also a histogram of the security score for all repositories in the organization, which is similar to what we did manually in our previous analysis. But the killer feature of this app is the ability to automatically create pull requests that fix some of the issues identified by Scorecard, e.g.:
- Pin Actions to a full length commit SHA
- Restrict permissions for GITHUB_TOKEN
StepSecurity also recently released a new feature that helps with properly configuring Dependabot on repositories.
We have started to use StepSecurity on some of our repositories and we are very satisfied with the results. A couple of features missing in the early days have already been implemented: previewing the pull request before opening it, the ability to customize the PR comment… We will now use this across all our organizations and repositories and will run a second full analysis to see how our Scorecard results have improved.
SLSA badging program
Projects at the Eclipse Foundation can now declare their SLSA compliance level on their Project Management Infrastructure (PMI) page. The next improvement will help projects specify how they comply with the requirements for each level.
Some projects already have the SLSA compliance process well underway. Eclipse Temurin (part of Adoptium), for example, just recently completed the work necessary to reach level 2 SLSA compliance, and is working on achieving level 3.
Revise vulnerability reporting practices
An RFC has been published and socialized with the security team and the architecture council, proposing several changes and improvements to the vulnerability disclosure processes at the Eclipse Foundation. The RFC also outlines how to leverage tooling at gitlab.eclipse.org and github.com in order for projects to receive vulnerability reports in confidential channels and to work on security fixes under embargo.
Eclipse Californium and Eclipse Oniro have started to experiment with this tooling on github.com and gitlab.eclipse.org respectively.
Improve security of marketplace.eclipse.org
The Eclipse Marketplace is a successful service that serves more than 12 million API requests per month. In an effort to better protect its users, we started to enforce the use of HTTPS for all content linked by the Eclipse Marketplace on October 14th, 2022. The Eclipse Marketplace does not host the content of the provided solutions; it only provides links to them. We are actively working with owners of existing solutions with plain HTTP entries to fix them. Beginning December 15th, non-compliant solutions will be unpublished from the Eclipse Marketplace. This effort can be followed on the tracking issue.
Public presentations
We’ve been busy presenting why securing the supply chain of open source software is important and what we do at the Eclipse Foundation to help our projects do that:
- EclipseCon 2022: Open Source Software Supply Chain Security — Why does it matter? (slides, video recording)
- Open Source Experience: Open Source Software Supply Chain Security — Why does it matter? (video upcoming)
- Open CIO Summit: Open source communities and the challenges around Secure & Green by Design (video upcoming)
We’re hiring!
We’ve still got plenty to do:
- we are conducting code audits with OSTIF for a couple of our projects,
- we will soon start a campaign to promote the usage of two-factor authentication to our committers,
- we will leverage our ORT setup and the SBOM it generates to help our projects publish an SBOM with their releases,
- we will implement a digital signature infrastructure for all Eclipse Foundation projects (the best candidate for that is Sigstore),
- … and many other things.
To do that, we are growing the team. We currently have 3 openings:
If you’re interested in the topic of Software Supply Chain security and would like to actively participate in securing the 420+ projects developed at the Eclipse Foundation, you should consider applying!






