Language execution with Langium and LLVM

November 19, 2025 12:00 AM

In this blog post, we continue exploring the synergy between Langium and LLVM by detailing how to generate LLVM IR from an AST created by Langium to make your language executable.

November 19, 2025 12:00 AM

Understanding Open Source Stewards and the Cyber Resilience Act

by Marta Rybczynska at November 18, 2025 06:54 AM


by Marta Rybczynska at November 18, 2025 06:54 AM

TheiaCon 2025: The Eclipse Theia Project Update

by Jonas, Maximilian & Philip at November 18, 2025 12:00 AM

We’re excited to share the opening keynote from TheiaCon 2025, providing a comprehensive update on the Eclipse Theia project over the past year. This keynote highlights the remarkable progress Theia …

The post TheiaCon 2025: The Eclipse Theia Project Update appeared first on EclipseSource.


by Jonas, Maximilian & Philip at November 18, 2025 12:00 AM

Mind Maps Didn’t Make Me Scroll

by Donald Raab at November 17, 2025 03:19 AM

How to view complete Java library package structures without scrolling.

The api packages in the Eclipse Collections API jar as a mind map

Fully Expanded Packages Without Scrolling

I’ve been a fan of using mind maps for almost a decade. I have created many mind maps for my blogs using Astah UML. I really enjoy using Astah UML to help me explore and communicate ideas. As an added bonus, Astah UML is written in Java.

While I was writing my first book, “Eclipse Collections Categorically: Level up your programming game”, I wanted to show the complete package structure of the two jar files that make up the Eclipse Collections library. I started out trying to use screenshots of the project view from IntelliJ IDEA. The problem was that the entire package hierarchy wouldn’t fit in a tree view without having to scroll.

This is the best I could do without scrolling using the tree view in IntelliJ. I took two snapshots of the tree view and mashed them up into a single image. It gave a sense of what the package structure in the two jars looked like, but it was incomplete.

Side by side tree view of Eclipse Collections api and impl packages in IntelliJ

I started out using an image like this in the book, and got some feedback from one of my technical reviewers that the picture wasn’t very clear because it was incomplete.

This is what led me to the “light bulb” moment to use mind maps to capture the package structure of the two jars. I am very happy with the results and how it looks in both the printed and digital versions of my book. The first image in this blog shows the api packages as a mind map.

For more information about the package design in Eclipse Collections, the following blog is a great resource. Unfortunately, when I wrote this blog, I didn’t yet have the bright idea to use mind maps for the package hierarchies in the two jars. Lesson learned.

Leverage Information Chunking to scale your Java library package design

That’s all folks

If you ever find yourself creating some documentation or a book that needs to include a Java package or directory structure, maybe next time consider using a mind map. This approach worked well for me in my book. When folks ask me about how the Eclipse Collections package hierarchies are structured, I point them to the two mind maps that appear on facing pages in the digital and print versions of the Eclipse Collections Categorically book. Both mind maps can be found in the free online reading sample for Kindle in Chapter 2 under the “Package Design” section.

Thanks for reading! If you’ve read this far, check the Kindle version of the book on Amazon on November 20, 2025. You might find a good deal on that date if you are interested in the Kindle edition.

Note: You don’t need a Kindle Reader to read a Kindle book, as there is a Kindle App available for several hardware platforms. I read Kindle books on my MacBook Pro using the Kindle App.

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at November 17, 2025 03:19 AM

OCX 2026: Let’s build the future of open collaboration together

by Clark Roundy at November 13, 2025 04:41 PM

OCX 2026: Let’s build the future of open collaboration together

TL;DR - Registration for OCX26 is officially open! Join us in Brussels from 21–23 April 2026, and grab your early bird discount before 6 January. Don’t miss the chance to be part of the future of open collaboration.

 

The heartbeat of open source

At the Eclipse Foundation, openness is more than a value. It’s who we are. Each year, the Open Community Experience (OCX) brings that spirit to life by connecting developers, innovators, researchers, and industry leaders from around the world. 

OCX 2026 is shaping up to be our biggest and most inspiring event yet. And we’re doing it in the heart of Europe: Brussels, a city known for innovation, collaboration, and great waffles.

One pass. Six experiences. Endless opportunities.

Your OCX26 pass gives you full access to the Main OCX Track plus five collocated events, each focused on the technologies and communities shaping the future of open source:

Whether you write code, design smarter cars, research AI, navigate compliance, or just love open source, OCX26 is where you belong.

Why register early?

Because it saves you over €100!  Register before 6 January 2026 to lock in early bird pricing. 

Our program teams are now putting together an unmissable lineup filled with fresh ideas, bold conversations, and practical insights. You can expect sessions on everything from secure software practices and CRA compliance to AI-powered development tools and next-generation mobility platforms shaping the future of open source.

Who should attend?

If you care about open source, OCX26 is the place to be:

  • Developers and maintainers shaping open tools and frameworks
  • Innovators in automotive, embedded, and edge systems
  • AI researchers advancing ethical, open AI
  • Compliance and security professionals navigating new regulations
  • Academics and industry partners turning research into real-world impact
  • Tech leaders connecting innovation to industry needs

In short, YOU!

Got something to share?

There’s still time to submit your talk, but not much.

The call for proposals closes on 19 November.

We’re looking for stories, insights, and breakthroughs from across the open source ecosystem: Java, AI, automotive, embedded, compliance, and research. Whether it’s a new project, an interesting idea, or a collaboration success story, your voice belongs on the OCX stage.

Don’t miss the chance to share your expertise and connect with hundreds of passionate community members from across the world.

Sponsor the future

OCX exists because of the organizations that believe in open collaboration and community-driven innovation. 

Now’s your chance to join them as a sponsor of OCX. Our flexible Sponsorship packages put your brand in front of developers, innovators, and leaders who are shaping the next generation of open technology. 

From AI and automotive to tooling and compliance, OCX26 connects your brand with the communities shaping tomorrow’s technology.

Be part of the experience

Mark your calendars, grab your early bird pass, and get ready to join over 600 open source innovators in Brussels this April for three days of collaboration, connection, and creativity.

👉 Register now.
👉 Submit your talk by 19 November.
👉 Explore sponsorship opportunities.

 

Clark Roundy

by Clark Roundy at November 13, 2025 04:41 PM

Eclipse Theia 1.66 Release: News and Noteworthy

by Jonas Helming at November 13, 2025 09:11 AM


Eclipse Theia 1.66 delivers a feature-rich update with persistent AI chat sessions, slash commands, agent modes, and new GitHub and Project Info agents. It also brings significant debugging, UI, and API improvements. Check out the full announcement!


by Jonas Helming at November 13, 2025 09:11 AM

Eclipse Theia 1.66 Release: News and Noteworthy

by Jonas, Maximilian & Philip at November 13, 2025 12:00 AM

We are happy to announce the Eclipse Theia 1.66 release! The release contains a total of 78 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of …

The post Eclipse Theia 1.66 Release: News and Noteworthy appeared first on EclipseSource.


by Jonas, Maximilian & Philip at November 13, 2025 12:00 AM

AWS invests in strengthening open source infrastructure at the Eclipse Foundation

by Anonymous at November 05, 2025 03:08 PM


This commitment will benefit multiple core services, including Open VSX Registry, the open source registry for Visual Studio Code extensions that powers AI-enabled development environments such as Kiro and other leading tools.


by Anonymous at November 05, 2025 03:08 PM

AWS invests in strengthening open source infrastructure at the Eclipse Foundation

by Mike Milinkovich at November 05, 2025 02:29 PM

In our recent open letter and blog post on sustainable stewardship of open source infrastructure, we called on the industry to take a more active role in supporting the systems and services that drive today’s software innovation. Today, we’re excited to share a powerful example of what that kind of leadership looks like in action.

The Eclipse Foundation is pleased to announce that Amazon Web Services (AWS) has made a significant investment to strengthen the reliability, performance, and security of the open infrastructure that supports millions of developers around the world. This commitment will benefit multiple core services, including Open VSX Registry, the open source registry for Visual Studio Code extensions that powers AI-enabled development environments such as Kiro and other leading tools.

Sustaining the backbone of open source innovation

For more than two decades, the Eclipse Foundation has quietly maintained open infrastructure that underpins modern software creation for millions of developers worldwide. Its privately hosted systems deliver more than 500 million downloads each month across services such as download.eclipse.org, the Eclipse Marketplace, and Open VSX. These platforms serve as the backbone for individuals, organisations, and communities that rely on open collaboration to build the technologies of the future.

AWS’s investment will help improve performance, reliability, and security across this infrastructure. The collaboration reflects a shared commitment to keeping open source systems resilient, transparent, and sustainable at global scale.

Open VSX: a model for sustainable open infrastructure

Open VSX is a vendor-neutral, open source (EPL-2.0) registry for Visual Studio Code extensions. It serves as the default registry for Kiro, Amazon’s AI IDE platform, and is relied upon by a growing global community of developers. The registry now hosts over 7,000 extensions from nearly 5,000 publishers and delivers in excess of 110 million downloads per month. As a leading registry serving developer communities worldwide, including JavaScript and AI development communities, Open VSX has become a vital piece of open source infrastructure that supports thousands of development teams worldwide.

By supporting Open VSX, AWS is helping to strengthen the foundations of this essential service and reinforcing the Eclipse Foundation’s ability to provide secure, reliable, and globally accessible infrastructure. Their contribution reflects the importance of collective investment in maintaining the resilience, openness, and security of the tools developers use every day.

This sponsorship highlights the shared responsibility that all organisations have in sustaining the technologies they depend on. It also sets a strong example of how industry leaders can contribute to ensuring that the services we all rely on remain trustworthy, scalable, and sustainable for the future.

Improving reliability, security, and trust

The AWS investment is helping strengthen security, ensuring fair access, and improving long-term service reliability. Ongoing work focuses on enhancing malware detection, improving traffic management, and expanding operational monitoring to ensure a stable and trusted experience for developers around the world.

As part of this collaboration, AWS is providing infrastructure and services that will improve availability, performance, and scalability across these systems. This support will accelerate key roadmap initiatives and help ensure that the platforms developers rely on remain secure, scalable, and trustworthy well into the future.

A shared commitment to open source sustainability

AWS’s contribution demonstrates how industry leaders can make strategic investments in sustaining the shared infrastructure their businesses depend on every day. By investing in the services that support open source development, AWS is helping to ensure that critical technologies remain open, reliable, and accessible to everyone.

The Eclipse Foundation continues to serve as an independent steward of open source infrastructure, maintaining the tools and systems that enable software innovation across industries. Together with supporters like AWS, we are building a stronger foundation for the future of open collaboration.

But this is only the beginning. The long-term health of open source infrastructure depends on collective action and shared responsibility. We encourage other organisations to follow AWS’s example and take an active role in sustaining the technologies that make modern development possible.

Learn how your organisation can make a difference through Eclipse Foundation membership or direct sponsorship opportunities. The future of open innovation depends on all of us, and together, we can keep it strong, secure, and sustainable.


by Mike Milinkovich at November 05, 2025 02:29 PM

Self-Brewed Beer is (Almost) Free - Experiences using Ollama in Theia AI - Part 2

November 04, 2025 06:12 PM

This is part two of an extended version of a talk I gave at TheiaCon 2025. That talk covered my experiences with Ollama and Theia AI in the previous months. In part one I provided an overview of Ollama and how to use it to drive Theia AI agents, and presented the results of my experiments with different local large language models.

In this part, I will draw conclusions from these results and provide a look into the future of local LLM usage.

Considerations Regarding Performance

The experiment described in part one of this article showed that working with local LLMs is already possible, but still limited due to relatively slow performance.

Technical Measures: Context

The first observation is that the LLM becomes slower as the context grows. The reason is that the LLM needs to parse the entire context for each message. At the same time, too small a context window leads to the LLM forgetting parts of the conversation. In fact, as soon as the context window is filled, the LLM engine will start discarding the first messages in the conversation, while retaining the system prompt. So, if an agent seems to forget the initial instructions you gave it in the chat, this most likely means that the context window has been exceeded. In this case the agent might become unusable, so it is a good idea to use a context window that is large enough to fit the system prompt, the instructions, and the tool calls during processing. On the other hand, at a certain point in long conversations or reasoning chains, the context can become so large that each message takes more than a minute to process.

Consequently, as users, we need to develop an intuition for the necessary context length: long enough for the task, but not excessive.

Also, it is a good idea to reduce the necessary context by

  • adding paths in the workspace to the context beforehand, so that instead of letting the agent browse and search the workspace for the files to modify via tool calls, we already provide that information. In my experiments, this reduced token consumption from about 60,000 tokens to about 20,000 for the bug analysis task. (Plus, this also speeds up the analysis process as a whole, because the initial steps of searching and browsing the workspace do not need to be performed by the agent).
  • keeping conversations and tasks short. Theia AI recommends this even for non-local LLMs and provides tools such as Task Context and Chat Summary. So, it is a good idea to follow Theia AI's advice and use these features regularly.
  • defining specialized agents. It is very easy to define a new agent with its custom prompt and tools in Theia AI. If we can identify a repeating task that needs several specialized tools, it is a good idea to define a specialized agent with this specialized toolset. In particular regarding the support for MCP servers, it might be tempting to start five or more MCP servers and just throw all the available tools into the Architect or Coder agent's prompt. This is a bad idea, though, because each tool's definition is added to the system prompt and thus, consumes a part of the context window.

Note that unloading/loading models is rather expensive as well and usually takes up to several seconds. And in Ollama, even changing the context window size causes a model reload. Therefore, as VRAM is usually limited, it is a good idea to stick to one or two models that can fit into the available memory, and not change context window sizes too often.
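
To make this concrete, here is a minimal sketch of a chat request against Ollama's local REST API with an explicit context window. The /api/chat endpoint and the options.num_ctx field are part of Ollama's documented API; the model name and the 16k value are just examples:

    // Minimal sketch (Node 18+ with built-in fetch), run against a local "ollama serve".
    const response = await fetch("http://localhost:11434/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            model: "gpt-oss:20b", // example model
            messages: [{ role: "user", content: "Explain this stack trace: ..." }],
            options: { num_ctx: 16384 }, // per-request context window size
            stream: false,
        }),
    });
    const data = await response.json();
    console.log(data.message.content);

Since changing num_ctx between requests triggers the model reload described above, it pays off to settle on one context size per model.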

Organizational Measures

Even with these considerations regarding context length, local LLMs will always be slower than their cloud counterparts.

Therefore, we should compensate for this at the organizational level by adjusting the way we work; for example, while waiting for the LLM to complete a task,

  • we could start to write and edit prompts for the next features
  • we can review the previous task contexts or code modifications and adjust them
  • we can do other things in parallel, like go to lunch, grab a coffee, go to a meeting, etc., and let the LLM finish its work while we are away.

Considerations Regarding Accuracy

As mentioned in part 1, local LLMs are usually quantized (which basically means: rounded) so that the weights (or parameters) consume less memory. As a result, a quantized model can have lower accuracy. The symptom is that the agent does not do the correct thing, or does not use the correct arguments when calling a tool.

In my experience, analyzing the reasoning/thinking content and checking the actual tool calls an agent makes is a good way to determine what goes wrong. Depending on the results of such an analysis:

  • we can modify the prompt; for example by giving more details, more examples, or by emphasizing important things the model needs to consider
  • we can modify the implementation of the provided tools. This, of course, requires building a custom version of the Theia IDE or the affected MCP server. But if a tool call regularly fails because the LLM does not get the arguments 100% correct, and we could mitigate these errors in the tool implementation, it might be beneficial to invest in making the tool implementation more robust (see the sketch after this list).
  • we can provide more specific tools; for example, Theia AI only provides general file modification tools, such as writeFileReplacements. If you work mostly with TypeScript code, for example, it might be a better approach to implement and use a specialized TypeScript file modification tool that can automatically take care of linting, formatting, etc. on the fly. 
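
As an illustration of the second point above, here is a hypothetical sketch of a "forgiving" replacement tool. None of this is actual Theia AI API; it merely shows how a tool implementation could tolerate slightly imprecise arguments:

    // Hypothetical helper, not Theia AI API: apply a string replacement even
    // if the LLM's search string differs from the file content in whitespace.
    function applyReplacement(source: string, search: string, replacement: string): string {
        if (source.includes(search)) {
            return source.replace(search, replacement);
        }
        // Escape regex metacharacters, then treat any whitespace run as equivalent.
        const escaped = search.trim().replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
        const forgiving = new RegExp(escaped.replace(/\s+/g, "\\s+"));
        if (forgiving.test(source)) {
            return source.replace(forgiving, replacement);
        }
        throw new Error("Search string not found; the agent should re-read the file.");
    }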

Considerations Regarding Complexity

During my experiments, I tried to give the agent more complex tasks to work on and let it run overnight. This failed, however, because sooner or later the agent becomes unable to continue due to the limited context size; it starts forgetting the beginning of the conversation and thus its primary objective.

One way to overcome this limitation is to split complex tasks into several smaller, lower-level ones. Starting with version 1.63.0, Theia AI supports agent-to-agent delegation. Based on this idea, we could implement a special Orchestrator agent (or a more programmatic workflow) that is capable of splitting complex tasks into a series of simpler ones. These simpler tasks could then be delegated to specialized agents (refined versions of Coder, AppTester, etc.) one by one. This would have the advantage that each step could start with a fresh, empty context window, thus following the considerations regarding context discussed above.
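
A purely conceptual sketch of such a delegation loop; the Agent interface and all names here are hypothetical, not Theia AI API:

    // Hypothetical: each run() starts a fresh chat, i.e. an empty context window.
    interface Agent {
        run(task: string): Promise<string>;
    }

    async function orchestrate(planner: Agent, workers: Map<string, Agent>, goal: string): Promise<string> {
        // 1. A planning agent splits the complex goal into small, self-contained steps.
        const plan: { agent: string; task: string }[] =
            JSON.parse(await planner.run(`Split this goal into JSON steps: ${goal}`));

        // 2. Delegate each step; only the previous result is carried over,
        //    so every step starts with a small, fresh context.
        let result = "";
        for (const step of plan) {
            const worker = workers.get(step.agent);
            if (!worker) throw new Error(`Unknown agent: ${step.agent}`);
            result = await worker.run(`${step.task}\n\nResult of the previous step:\n${result}`);
        }
        return result;
    }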

This is something that would need to be implemented and experimented with. 

Odds and Ends

This blog article has presented my experiences and considerations about using local LLMs with Theia AI.

Several topics have only been touched slightly, or not at all, and are subject of further inspection and experimentation:

  • Until recently, I had considered Ollama too slow for code completion, mostly because the TTFT (time to first token) is usually rather high. But recently, I have found that at least with the model zdolny/qwen3-coder58k-tools:latest, the response time feels okay. So, I will start experimenting with this and some other models for code completion. 
  • Also, Ollama supports fill-in-the-middle completion. This means that the completion API accepts not only a prefix but also a suffix as input (a minimal sketch follows this list). This API is currently not supported by Theia AI directly; the Code Completion Agent in Theia usually provides the prefix and suffix context as part of the user prompt. So Theia AI would have to be enhanced to support the fill-in-the-middle feature natively, and it needs to be determined whether this would also improve performance and accuracy.
  • Next, there are multiple approaches regarding optimizing and fine-tuning models for better accuracy and performance. There are several strategies, such as Quantization, Knowledge Distillation, Reinforcement Learning, and Model Fine Tuning which can be used to make models more accurate and performant for one's personal use cases. The Unsloth and MLX projects, for example, aim at providing optimized, local options to perform these tasks.
  • Finally, regarding Apple Silicon processors in particular, there are two frameworks that could boost performance further, if they were supported:
    • CoreML is a proprietary Apple framework for using the native Apple Neural Engine (which would provide another performance boost, if an LLM could run fully on it). The bad news is that use of the Apple Neural Engine currently seems limited by several factors, so there are no prospects of running a heavier LLM, such as gpt-oss:20b, on the ANE at the moment.
    • MLX is an open framework, also developed by Apple, that runs very efficiently on Apple Silicon processors using a hybrid approach to combine CPU, GPU, and Apple Neural Engine resources. Yet, there is still very limited support available to run LLMs in MLX format. But at least, there are several projects and enhancements in development:
      • there is a Pull Request in development to add MLX support to Ollama, which is the basis for using the Neural Engine
      • other projects, such as LM Studio, swama, mlx-lm and others support models in the optimized MLX format, but in my experiments, tool call processing was unstable, unfortunately. 
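
For reference, here is a minimal sketch of a fill-in-the-middle request against Ollama's generate API. The suffix field is the relevant part; the model name is only an example of a FIM-capable model:

    // The model completes the gap between prompt (prefix) and suffix.
    const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            model: "qwen2.5-coder:7b", // example FIM-capable model
            prompt: "function add(a: number, b: number) {\n    return ", // text before the cursor
            suffix: ";\n}", // text after the cursor
            stream: false,
        }),
    });
    console.log((await res.json()).response); // the completion for the gap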

Outlook

The evolution of running LLMs locally and using them for agentic development in Theia AI has been moving fast recently. The progress made in 2025 alone suggests that LLMs running locally will continue to get better and better over time:

  • better models keep appearing: from deepseek-r1 to qwen3 and gpt-oss, we can be excited about what will come next
  • context management is getting better: every other week, we can observe discussions around enhancing or using the context window more effectively in one way or another: the Model Context Protocol, giving LLMs some form of persistent memory, choosing more optimal representations of data (for example, by using TOON), and utilizing more intelligent context compression techniques, to name just a few.
  • hardware is becoming better, cheaper, and more available; I performed my experiments with a five-year-old processor (Apple M1 Max) and already achieved acceptable results. Today's processors are already much better, and there is more to come in the future
  • software is becoming better: Ollama is being actively developed and enhanced, and Microsoft has recently published BitNet, an engine to support 1-bit LLMs, etc.

We can be excited to see what 2026 will bring…


November 04, 2025 06:12 PM

Self-Brewed Beer is (Almost) Free - Experiences using Ollama in Theia AI - Part 1

November 04, 2025 03:38 PM

This blog article is an extended version of a talk I gave at TheiaCon 2025. The talk covered my experiences with Ollama and Theia AI over the previous months.

What is Ollama?

Ollama is an open source project that aims to make it possible to run Large Language Models (LLMs) locally on your own hardware, with a Docker-like experience. This means that, as long as your hardware is supported, it is detected and used with no further configuration.

Advantages

Running LLMs locally has several advantages:

  • Unlimited tokens: you pay only for the power you consume, and for the hardware if you do not already own it.
  • Full confidentiality and privacy: the data (code, prompts, etc.) never leaves your network. You do not have to worry about providers using your confidential data to train their models.
  • Custom models: You have the option to choose from a large number of pre-configured models, or you can download and import new models, for example from Hugging Face. Or you can take a model and tweak it or fine-tune it to your specific needs.
  • Vendor neutrality: It does not matter who wins the AI race in a few months, you will always be able to run the model you are used to locally.
  • Offline: You can use a local LLM on a suitable laptop even when traveling, for example by train or on the plane. No Internet connection required. (A power outlet might be good, though...)

Disadvantages

Of course, all of this also comes at a cost. The most important disadvantages are:

  • Size limitations: Both the model size (number of parameters) and context size are heavily limited by the available VRAM.
  • Quantization: As a compromise to allow for larger models or contexts, quantization is used to sacrifice weight precision. In other words, a model with quantized parameters can fit more parameters in the same amount of memory. This comes at a cost of lower inference accuracy as we will see further below.
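
For a rough sense of scale: a 20-billion-parameter model stored at 16-bit precision needs about 20 × 10⁹ × 2 bytes ≈ 40 GB for the weights alone, while a 4-bit quantization of the same model needs roughly 10 GB, which is small enough for a well-equipped consumer GPU or an Apple Silicon machine, at the price of the rounding described above.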

Until recently, the list of disadvantages also included that there was no support for local multimodal models, so reasoning about images, video, audio, etc. was not possible. But that changed last week, when Ollama 0.12.7 was released along with locally runnable qwen3-vl model variants.

Development in 2025

A lot has happened in 2025 alone. At the beginning of 2025, there was neither a good local LLM for agentic use (in particular, reasoning and tool calling were not really usable) nor mature support for Ollama in Theia AI.

But since then, in the last nine months:

With the combination of these changes, it is now entirely possible to use Theia AI agents backed by local models.

Getting Started

To get started with Ollama, you need to follow these steps:

  1. Download and install the most recent version of Ollama. Be sure to regularly check for updates, as with every release of Ollama, new models, new features, and performance improvements are implemented.
  2. Start Ollama using a command line like this:

    # Enables new memory estimates, flash attention, and a q8_0-quantized KV cache:
    OLLAMA_NEW_ESTIMATES="1" OLLAMA_FLASH_ATTENTION="1" OLLAMA_KV_CACHE_TYPE="q8_0" ollama serve

    Keep an eye open for the Ollama release changelogs, as the environment settings can change over time. Make sure to enable and experiment with new features.

  3. Download a model using

    ollama pull gpt-oss:20b

  4. Configure the model in Theia AI by adding it to the Ollama settings under Settings > AI Features > Ollama
  5. Finally, as described in my previous blog post, you need to add request settings for the Ollama models in the settings.json file to adjust the context window size (num_ctx), as the default context window in Ollama is not suitable for agentic usage. A rough sketch follows below.
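
As an illustration, such a request-settings entry could look like the sketch below. The exact property names are an assumption based on recent Theia AI versions; consult the referenced blog post for the authoritative keys:

    {
        // Assumed preference key and entry shape; check the Theia AI
        // documentation or the previous blog post for the exact names.
        "ai-features.modelSettings.requestSettings": [
            {
                "modelId": "gpt-oss:20b",
                "requestSettings": { "num_ctx": 131072 }
            }
        ]
    }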

Experiments

As a preparation for TheiaCon, I have conducted several non-scientific experiments on my MacBook Pro M1 Max with 64GB of RAM. Note that this is a 5-year-old processor.

The task I gave the LLM was to locate and fix a small bug: a few months ago, I created Ciddle - a Daily City Riddle, a daily geographical quiz, mostly written in NestJS and React using Theia AI. In this quiz, the user has to guess a city. After some initial guesses, the letters of the city name are partially revealed as a hint, while the remaining letters stay masked with underscores. As it turned out, this masking algorithm had a bug related to a regular expression not being Unicode-friendly: it matched only ASCII letters, but not special characters such as é. So special characters would never be masked with underscores.
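
To make the bug concrete, here is a minimal TypeScript reconstruction of the pattern; it is an illustration, not the actual Ciddle source:

    // Buggy version: [a-zA-Z] matches only ASCII letters, so "é" is never masked.
    const maskAscii = (city: string) => city.replace(/[a-zA-Z]/g, "_");
    console.log(maskAscii("Orléans")); // "___é___" leaks the é

    // Unicode-friendly fix: \p{L} with the "u" flag matches any letter.
    const maskUnicode = (city: string) => city.replace(/\p{L}/gu, "_");
    console.log(maskUnicode("Orléans")); // "_______"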

Therefore, I wrote a prompt explaining the issue and asked Theia AI to identify the bug and fix it. I followed the process described in this post:

  1. I asked the Architect agent to analyze the bug and plan for a fix
    • once without giving the agent the file containing the bug, so the agent needs to analyze and crawl the workspace to locate the bug
    • once with giving the agent the file containing the bug using the "add path to context" feature of Theia AI
  2. I asked Theia AI to summarize the chat into a task context
  3. I asked Coder to implement the task (in agent mode, so it directly changes files, runs tasks, writes tests, etc.)
    • once with the unedited summary (which contained instructions to create a test case)
    • once with the summary with all references to an automated unit test removed, so the agent would only fix the actual bug, but not write any tests for it

The table below shows the comparison of different models and settings:

Model                           | Architect | Architect (with file path provided) | Summarize | Coder (fix and create test) | Coder (fix only)
gpt-oss:20b, num_ctx = 16k      | 175s      | 33s                                 | 32s       | 2.5m (3)                    | 43s
gpt-oss:20b, num_ctx = 128k     | 70s       | 50s                                 | 32s       | 6m                          | 56s
qwen3:14b, num_ctx = 40k        | (1)       | 143s                                | 83s       | (4)                         | (4)
qwen3-coder:30b, num_ctx = 128k | (2)       | (2)                                 | 64s       | 21m (3)                     | 13m
gpt-oss:120b-cloud              | 39s       | 16s                                 | 10s       | 90s (5)                     | 38s

(1) without file path to fix, the wrong file and bugfix location is identified
(2) with or without provided path to fix, qwen3-coder "Architect" agent runs in circles trying to apply fixes instead of providing an implementation plan
(3) implemented fix correctly, but did not write a test case, although instructed to do so.
(4) stops in the middle of the process without any output
(5) in one test, gpt-oss:120b-cloud did not manage to get the test file right and failed when the hourly usage limit was exceeded

Observations

I have performed multiple experiments. The table reports more or less the best-case times. As usual when working with LLMs, the results are not always deterministic. But in general, if the output is similar for a given model, the processing time is also the same within a few seconds, so the table above shows more or less typical results for runs where the outcome was acceptable, where that was achievable at all.

In general, I have achieved the best results with gpt-oss:20b with a context window of 128k tokens (the maximum for this model). A smaller context window can result in faster response times, but at the risk of not performing the task completely; for example, when running with 16k context, the Coder agent would fix the bug, but not provide a test, even though the task context contained this instruction.

Also, in my first experiments, the TypeScript/Jest configuration contained an error which caused the model (even with 128k context) to run around in circles for 20 minutes and eventually delete the test again before finishing its process.

The other two local models I used in the tests, qwen3:14b and qwen3-coder:30b, were able to perform some of the agentic tasks, but usually with lower performance, and they failed entirely in some scenarios.

Besides the models listed in the table above, I have also tried a few other models that were popular in the Ollama model repository, such as granite4:small-h and gemma3:27b. But they either behaved similarly to qwen3:14b and just stopped at some point without any output, or they did not use the provided tools and just replied with a general answer.

Also note that some models (such as deepseek-r1) do not support tool calling in their local variants (yet...?). There are variants of common models, modified by users, that support tool calling in theory, but in practice the tool calls are either not properly detected by Ollama, or the provided tools are not used at all.

Finally, just for comparison, I have also used the recently released Ollama cloud model feature to run the same tasks with gpt-oss:120b-cloud. As expected, the performance is much better than with local models, but at the same time, the gpt-oss:120b-cloud model also began to run around in circles once. So even that is not perfect in some cases.

To summarize, the best model for local agentic development with Ollama is currently gpt-oss:20b. When everything works, it is surprisingly fast, even on my 5-year-old hardware. But if something goes wrong, it usually goes fatally wrong, and the model entangles itself in endless considerations and fruitless attempts to fix the situation.

Stay tuned for the second part of this article, where I will describe the conclusions I draw from my experiences and experiments, discuss consequences, and provide a look into the future of local LLMs in the context of agentic software development.


November 04, 2025 03:38 PM

The Active Ecosystem of Eclipse Theia Adopters

by Jonas, Maximilian & Philip at November 04, 2025 12:00 AM

We’re pleased to call attention to a compelling article by Thomas Froment at the Eclipse Foundation: “The Active Ecosystem of Eclipse Theia Adopters: A Tour of Diverse Tools and IDEs.” For those in …

The post The Active Ecosystem of Eclipse Theia Adopters appeared first on EclipseSource.


by Jonas, Maximilian & Philip at November 04, 2025 12:00 AM

What if Java had Symmetric Converter Methods on Collection?

by Donald Raab at November 02, 2025 05:03 PM

Comparing converter methods in Smalltalk, Java, and Eclipse Collections

Using converter methods in Pharo Smalltalk. Converter methods are prefixed with “as” in Smalltalk.

toBe(), or not toBe()?

Converter methods are more than a convenience in a programming language. They are a means of discovering additional collection types available to developers. When the number of available collection types is large, good discoverability becomes even more important. Smalltalk has mostly mutable collection types. Java and Eclipse Collections both have mutable and immutable implementations. Only Eclipse Collections has mutable and immutable types as separate interfaces. Eclipse Collections also has primitive collections, so converter methods help provide helpful symmetry and discoverability between Object and primitive collection types.

toSmalltalk

In Smalltalk, converter methods are prefixed with as. The Collection abstract class has eleven converter methods — asArray, asBag, asByteArray, asCharacterSet, asDictionary, asIdentitySet, asMultilineString, asOrderedCollection, asOrderedDictionary, asSet, asSortedCollection.

This is the code from the above image inlined.

|ordered sorted set bag|
ordered := OrderedCollection with: 'Apple' with: 'Pear' with: 'Banana' with: 'Apple'.
sorted := ordered asSortedCollection: #yourself descending.
set := ordered asSet.
bag := sorted asBag.

Transcript show: ordered printString; cr.
Transcript show: sorted printString; cr.
Transcript show: set printString; cr.
Transcript show: bag printString; cr.

This is the output:

an OrderedCollection('Apple' 'Pear' 'Banana' 'Apple')
a SortedCollection('Pear' 'Banana' 'Apple' 'Apple')
a Set('Pear' 'Banana' 'Apple')
a Bag('Pear' 'Banana' 'Apple' 'Apple')

IIRC, most of the Collection types available via converter methods in Smalltalk are mutable.

toJava

In Java, converter methods are prefixed with to. The Collection interface has two converter methods — toString and toArray. The Stream interface has three converter methods — toString, toArray, and toList. The Stream interface also has a collect method which takes a Collector as a parameter. The Collectors utility class has eight unique to methods (some are overloaded) — toCollection, toList, toSet, toMap, toConcurrentMap, toUnmodifiableList, toUnmodifiableSet, and toUnmodifiableMap.

To convert a List to a “sorted” List, a Set, and a Bag, we can use the following. There is no SortedList or Bag type in Java, but we’ll find an equivalent.

@Test
public void converterMethodsInJava()
{
    List<String> ordered =
            List.of("Apple", "Pear", "Banana", "Apple");
    List<String> sorted =
            ordered.stream()
                    .sorted(Comparator.reverseOrder())
                    .toList();
    Set<String> set =
            ordered.stream()
                    .collect(Collectors.toSet());
    Map<String, Long> bag =
            sorted.stream()
                    .collect(Collectors.groupingBy(
                            Function.identity(),
                            Collectors.counting()));

    assertEquals(
            List.of("Pear", "Banana", "Apple", "Apple"),
            sorted);
    assertEquals(
            Set.of("Pear", "Banana", "Apple"),
            set);
    assertEquals(
            Map.of("Pear", 1L, "Banana", 1L, "Apple", 2L),
            bag);
}

Most of the converter methods in Java are three steps away from Collection. I do not think it is likely we will see any more converter methods on Collection or Stream.

Note, in the code example above it is not easy to distinguish between mutable and immutable collection implementations. You have to read the Javadoc or code to understand the return types of different methods.

IntelliJ also recommends not using the converter method in the case of Collectors.toSet().

stream().collect(Collectors.toSet()) shows up highlighted in yellow

IntelliJ recommends writing it as follows, which can be accomplished by hitting Alt-Enter and choosing the recommended action above.

Set<String> set = new HashSet<>(ordered);

Using this approach is more concise and is probably more performant (measure, don’t guess), but it introduces more asymmetry and draws implementation details (java.util.HashSet class) into our example.

toEclipseCollections

In Eclipse Collections, like Java, we use the to prefix for converter methods that have a linear time cost. The Eclipse Collections RichIterable interface has twenty-six unique converter methods (some are overloaded). The converter methods can be found using the IntelliJ Structure view in a category named “Converting”.

Expanding the converter methods for Eclipse Collections in RichIterable in IntelliJ

To convert a List to a “sorted” List, a Set, and a Bag, we can use the following. There is no SortedList in Eclipse Collections, but we’ll find an equivalent. We will use mutable collections in these examples, and we can tell they are mutable based on the type names.

@Test
public void converterMethodsInEclipseCollections()
{
    MutableList<String> ordered =
            Lists.mutable.of("Apple", "Pear", "Banana", "Apple");
    MutableList<String> sorted =
            ordered.toSortedList(Comparator.reverseOrder());
    MutableSet<String> set =
            ordered.toSet();
    MutableBag<String> bag =
            sorted.toBag();

    assertEquals(
            List.of("Pear", "Banana", "Apple", "Apple"),
            sorted);
    assertEquals(
            Set.of("Pear", "Banana", "Apple"),
            set);
    assertEquals(
            Bags.mutable.withOccurrences("Apple", 2, "Pear", 1, "Banana", 1),
            bag);
}

If we want to use immutable collections in Eclipse Collections, the code would look like this.

@Test
public void immutableConverterMethodsInEclipseCollections()
{
    ImmutableList<String> ordered =
            Lists.immutable.of("Apple", "Pear", "Banana", "Apple");
    ImmutableList<String> sorted =
            ordered.toImmutableSortedList(Comparator.reverseOrder());
    ImmutableSet<String> set =
            ordered.toImmutableSet();
    ImmutableBag<String> bag =
            sorted.toImmutableBag();

    assertEquals(
            List.of("Pear", "Banana", "Apple", "Apple"),
            sorted);
    assertEquals(
            Set.of("Pear", "Banana", "Apple"),
            set);
    assertEquals(
            Bags.mutable.withOccurrences("Apple", 2, "Pear", 1, "Banana", 1),
            bag);
}

I did not have to change the assertions in the example. The equals and hashCode contract for mutable and immutable types of the same container type is the same.

Takeaways

Java is a great language. Java’s standard Collection library is usable but does not have great symmetry or convenience. Eclipse Collections brings the conveniences Smalltalk had thirty years ago back into Java today, extends them, and adds symmetry for mutable and immutable converter methods.

Think about the code you are writing today, and what it will be like maintaining it for the next 5, 10, 20, or 30 years. Writing code that communicates well is helpful in reducing the cost of understanding and maintenance. Well-written code will also help new developers learn how things work, without having to memorize a lot of asymmetric alternatives for converting between collection types.

If you want to learn more about converter methods in Eclipse Collections, I have blogged about them previously, and they are also covered in Chapter 4 of the book “Eclipse Collections Categorically.” Here is a table of most of the mutable and immutable converter methods described in Chapter 4.

Converting between RichIterable Types from Eclipse Collections Categorically.

Vladimir Zakharov and I also covered some converter methods in our “Refactoring to Eclipse Collections” talk at dev2next which I blogged about here.

Refactoring to Eclipse Collections with Java 25 at the dev2next Conference

If you don’t think this applies to you because you’ve moved to Kotlin, Python, Ruby, or some other language, take a look at the Kotlin Collections to methods. Once immutable collections become part of the Kotlin standard library, my educated guess is that this list will grow.

Kotlin Collections converter methods

Thanks for reading!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at November 02, 2025 05:03 PM

The Eclipse Foundation’s Theia AI wins 2025 CODiE Award for Best Open Source Development Tool

by Anonymous at October 29, 2025 08:45 AM


BRUSSELS – 29 October 2025 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced that Theia AI has been named the winner of the 2025 CODiE Award for Best Open Source Development Tool.

The CODiE Awards are the only peer-recognised program honouring excellence and innovation across the technology landscape. Each product undergoes a rigorous evaluation by expert judges and industry peers based on innovation, impact, and overall value.

“We are honoured to be recognised among such groundbreaking technologies and organisations,” said Jonas Helming, Project Lead for Eclipse Theia and CEO of EclipseSource. “This CODiE Award underscores our team’s commitment to advancing open source innovation and empowering the next generation of AI-native tools and IDEs.”

Theia AI: Giving developers full control over AI integration

Part of the Eclipse Theia tool platform, Theia AI is an open source framework that gives tool builders complete control over how AI is integrated into their products. It allows developers to manage every aspect of AI capabilities, from selecting the most suitable Large Language Model (LLM), whether cloud-based, self-hosted, or fully local, to orchestrating the entire prompt engineering flow, defining agentic behaviours, and choosing which data and knowledge sources to use. 

This flexibility ensures transparency, adaptability, and precision, enabling developers to fine-tune AI interactions to fit their specific use cases and strategic goals. Tool developers can design AI-driven user experiences exactly as they envision, whether through interactive chat interfaces, AI-assisted code editors, or fully customised user interfaces.

By simplifying complex AI integration challenges, Theia AI enables the creation of advanced, tailor-made AI capabilities that go beyond today’s state of the art and align with the unique demands of each domain. Following extensive beta testing and real-world adoption, Theia AI is now publicly available to empower developers and tool builders to bring intelligent, domain-specific AI capabilities to life. Learn more in the Theia AI release announcement.

“The CODiE Awards celebrate the visionaries shaping the future of technology,” said Jennifer Baranowski, President of the CODiE Awards. “This year’s winners exemplify how innovation, leadership, and purpose can come together to create solutions that move industries forward and make a lasting impact.”

A full list of 2025 CODiE Award winners can be found at www.codieawards.com/winners.

Explore the future of open source development at TheiaCon 2025, happening now (29–30 October). Registration is free and open to everyone. 

To connect with the growing global Eclipse Theia community, contribute, or learn more, visit: https://theia-ide.org/

 

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.

Third-party trademarks mentioned are the property of their respective owners.

###

Media contacts:

Schwartz Public Relations (Germany)

Julia Rauch/Marita Bäumer

Sendlinger Straße 42A

80331 Munich

EclipseFoundation@schwartzpr.de

+49 (89) 211 871 -70/ -62

 

514 Media Ltd (France, Italy, Spain)

Benoit Simoneau

benoit@514-media.com

M: +44 (0) 7891 920 370

 

Nichols Communications (Global Press Contact)   

Jay Nichols

jay@nicholscomm.com

+1 408-772-1551


by Anonymous at October 29, 2025 08:45 AM

Open Source MBSE at Scale: From Industry-Proven Tools to Web-Native SysML v2

by Cédric Brun (cedric.brun@obeo.fr) at October 29, 2025 12:00 AM

Cedric Brun, CEO of Obeo, and Asma Charfi, from CEA, look back on 15 years of open-source ecosystem development and share their vision for the next generation of Model-Based Systems Engineering (MBSE) tools.

Download the slides


Context


Summary

This joint presentation explored how open-source MBSE technologies have evolved over the past 15 years — from Eclipse-based industrial tools like Capella, Papyrus, and Sirius, to new web-native environments supporting SysML v2 and agent-assisted engineering.

Key messages included:

  • The power of open ecosystems for accelerating innovation in education, research, and industry.
  • Lessons learned from large-scale industrial adoption of MBSE tools.
  • The emergence of next-generation modeling environments — collaborative, extensible, and AI-augmented, bridging the gap between domain experts and software engineers.

The talk sparked lively discussions and a strong interest from the IEEE community regarding the convergence of open-source platforms and upcoming SysML v2 tooling.


Highlights

  • 15 years of open collaboration across the Eclipse ecosystem — from early Papyrus and Capella foundations to today’s vibrant MBSE community.
  • Industry-proven tools at scale, including Capella and its extensions (Team, Cloud, and Publication), showcasing how open-source can sustain mission-critical engineering.
  • A live proof of concept illustrating “Obeo Enterprise for SysON,” combining SysML v2 with Arcadia semantics and an AI agent assisting the creation of a logical architecture for the X-Wing spacecraft.
  • A forward-looking perspective on the transition to web-native, cloud-enabled, and AI-augmented modeling platforms built for openness and collaboration.

📥 Download the slides (PDF)

Open Source MBSE at Scale: From Industry-Proven Tools to Web-Native SysML v2 was originally published by Cédric Brun at CEO @ Obeo on October 29, 2025.


by Cédric Brun (cedric.brun@obeo.fr) at October 29, 2025 12:00 AM

Eclipse LMOS Redefines Agentic AI with Industry’s First Open Agent Definition Language (ADL) for Enterprises

by Anonymous at October 28, 2025 08:45 AM


BRUSSELS – 28 October 2025 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced the introduction of the Agent Definition Language (ADL) functionality to the Eclipse LMOS (Language Models Operating System) project. 

Eclipse LMOS is an open source platform for orchestrating intelligent AI agents that perform complex tasks at enterprise scale. It is composed of three core components:

  • Eclipse LMOS ADL (Agent Definition Language): A structured, model-neutral language and visual toolkit that lets domain experts define agent behavior reliably and collaborate seamlessly with engineers.
  • Eclipse LMOS ARC Agent Framework: A JVM-native framework with a Kotlin runtime for developing, testing, and extending AI agents; it comes with a built-in visual interface for quick iterations and debugging.
  • Eclipse LMOS Platform: An open, vendor-neutral orchestration layer for agent lifecycle management, discovery, semantic routing, and observability, built on the CNCF stack and currently in Alpha.

An industry-first innovation, ADL addresses the complexity of traditional prompt engineering by providing a structured, model-agnostic framework that allows business and engineering teams to co-define agent behaviour in a consistent, maintainable, and versionable way. This shared language increases the reliability and scalability of growing agentic use cases, enabling enterprises to design and govern complex agentic systems with confidence. This capability further distinguishes Eclipse LMOS from proprietary alternatives.

The goal of the LMOS project is to create a sovereign, open platform where AI agents can be developed, deployed, and integrated seamlessly across networks and ecosystems. Built on open standards such as Kubernetes, LMOS is already in production with one of the largest enterprise Agentic AI deployments in Europe.

“Agentic AI is redefining enterprise software, yet until now there have been no open source alternatives to proprietary offerings,” said Mike Milinkovich, executive director of the Eclipse Foundation. “With Eclipse LMOS and ADL, we’re delivering a powerful, open platform that any organisation can use to build scalable, intelligent, and transparent agentic systems.”

Empowering Enterprises to Build the Future of Agentic AI

Agentic AI represents a generational shift in how enterprises approach their technology stack. According to Gartner (June 2025), by 2028, 15% of daily business decisions will be made autonomously through agentic AI, and 33% of enterprise applications will include such capabilities, up from less than 1% in 2024.

Eclipse LMOS is uniquely designed to let enterprise IT teams leverage their existing infrastructure, skills, and DevOps practices. Running on technologies such as Kubernetes, Istio, and JVM-based applications, LMOS integrates naturally into enterprise environments, accelerating adoption while protecting prior investments.

The introduction of ADL builds on this foundation by empowering non-technical users to shape agent behavior. Business domain experts, not just engineers, can directly encode requirements into agents, accelerating time-to-market and ensuring that agent behavior accurately reflects real-world domain knowledge.

“With ADL, we wanted to make defining agent behaviour as intuitive as describing a business process, while retaining the rigor engineers expect,” said Arun Joseph, Eclipse LMOS project lead. “It eliminates the fragility of prompt-based design and gives enterprises a practical path to scale agentic AI using their existing teams and resources.”

Together, these two pillars, leveraging existing engineering investments and empowering business experts with ADL, make LMOS unique among agentic AI platforms.

Enterprise-Ready Advantages

Compared to proprietary solutions, Eclipse LMOS delivers:

  • Open architecture - Innovation thrives in an open environment. LMOS is part of an open ecosystem that invites developers, data scientists, and organisations to collaborate and shape the future of Multi-Agent Systems.
  • Collaboration - AI agent collaboration enhances problem-solving. LMOS orchestrates these interactions with advanced routing based on the user’s intent or goals,  allowing agents to work together seamlessly within a single, unified system.
  • Cloud native scalability - As your AI needs grow, LMOS grows with you. Its cloud-native architecture dynamically scales from a few agents to hundreds, ensuring seamless performance as your AI operations expand.
  • Modularity - LMOS is built with modularity at its core, allowing you to easily integrate new Agents in your preferred development language or framework.
  • Extensibility - Extensibility drives innovation. LMOS defines clear specifications, allowing you to quickly extend its ecosystem.
  • Multi-tenant capable - Built with enterprises in mind, LMOS is designed to be multi-tenant capable from the ground up. LMOS enables the efficient management of multiple tenants and agent groups within a single infrastructure.

Real-World Impact

At Deutsche Telekom, Eclipse LMOS powers the award-winning Frag Magenta OneBOT assistant and other customer-facing AI systems. This deployment, one of Europe’s largest multi-agent enterprise deployments, has processed millions of service and sales interactions across several countries, showcasing LMOS’s enterprise-grade scalability and reliability in production environments.

Get Involved

Developers, enterprises, and researchers are invited to join the community and contribute to the evolution of open source Agentic AI. Full details on the LMOS project, participating organisations, and ways to get involved are available here. To learn more about AI initiatives at the Eclipse Foundation, visit eclipse.org/ai

 

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.

Third-party trademarks mentioned are the property of their respective owners.

###

Media contacts:

Schwartz Public Relations (Germany)

Julia Rauch/Marita Bäumer

Sendlinger Straße 42A

80331 Munich

EclipseFoundation@schwartzpr.de

+49 (89) 211 871 -70/ -62

 

514 Media Ltd (France, Italy, Spain)

Benoit Simoneau

benoit@514-media.com

M: +44 (0) 7891 920 370

 

Nichols Communications (Global Press Contact)   

Jay Nichols

jay@nicholscomm.com

+1 408-772-1551


by Anonymous at October 28, 2025 08:45 AM

Why AI Coding Fails - and How to Fix It

by Jonas, Maximilian & Philip at October 28, 2025 12:00 AM

Many developers and teams are experimenting with AI coding — using tools like GitHub Copilot, Cursor, and other AI code assistants — but few manage to make it work reliably in real projects. At …

The post Why AI Coding Fails - and How to Fix It appeared first on EclipseSource.


by Jonas, Maximilian & Philip at October 28, 2025 12:00 AM

Open VSX security update - October 2025

by Anonymous at October 27, 2025 08:38 PM

Open VSX security update - October 2025 Anonymous (not verified)

Over the past few weeks, the Open VSX team and the Eclipse Foundation have been responding to reports of leaked tokens and related malicious activity involving certain extensions hosted on the Open VSX Registry.


by Anonymous at October 27, 2025 08:38 PM

Open VSX security update, October 2025

October 27, 2025 07:30 PM

Over the past few weeks, the Open VSX team and the Eclipse Foundation have been responding to reports of leaked tokens and related malicious activity involving certain extensions hosted on the Open VSX Registry. We want to share a clear summary of what happened, what actions we’ve taken, and what improvements we’re implementing to strengthen the security of the ecosystem.

Background

Earlier this month, our team was alerted to a report from Wiz identifying several extension publishing tokens inadvertently exposed by developers within public repositories. Some of these tokens were associated with Open VSX accounts.

Upon investigation, we confirmed that a small number of tokens had been leaked and could potentially be abused to publish or modify extensions. These exposures were caused by developer mistakes, not a compromise of the Open VSX infrastructure. All affected tokens were revoked immediately once identified.

To improve detection going forward, we introduced a token prefix format in collaboration with MSRC to enable easier and more accurate scanning for exposed tokens across public repositories.

The “GlassWorm” campaign

Around the same time, a separate report from Koi Security described a new malware campaign that leveraged some of these leaked tokens to publish malicious extensions. The report referred to this as a “self-propagating worm,” drawing comparisons to the ShaiHulud incident that impacted the npm registry in September.

While the report raises valid concerns, we want to clarify that this was not a self-replicating worm in the traditional sense. The malware in question was designed to steal developer credentials, which could then be used to extend the attacker’s reach, but it did not autonomously propagate through systems or user machines.

We also believe that the reported download count of 35,800 overstates the actual number of affected users, as it includes inflated downloads generated by bots and visibility-boosting tactics used by the threat actors.

All known malicious extensions were removed from Open VSX immediately upon notification, and associated tokens were rotated or revoked without delay.

Status of the incident

As of October 21, 2025, the Open VSX team considers this incident fully contained and closed. There is no indication of ongoing compromise or remaining malicious extensions on the platform.

We continue to collaborate closely with affected developers, ecosystem partners, and independent researchers to ensure transparency and reinforce preventive measures.

Strengthening the platform

This event has underscored the importance of proactive defense across the supply chain, particularly in community-driven ecosystems. To that end, we are implementing several improvements:

  1. Token lifetime limits: All tokens will have shorter validity periods by default, reducing the potential impact of accidental leaks.

  2. Simplified revocation: We are improving internal workflows and developer tooling to make token revocation faster and more seamless upon notification.

  3. Security scanning at publication: Automated scanning of extensions will now occur at the time of publication, helping us detect malicious code patterns or embedded secrets before an extension becomes available to users.

  4. Ecosystem collaboration: We are continuing to work with other marketplace operators, including VS Code and third-party forks, to share intelligence and best practices for extension security.

Help us build a more secure and sustainable open source future

We take this responsibility seriously, and the trust you place in us is paramount. Incidents like this remind us that supply chain security is a shared responsibility: from publishers managing their tokens carefully, to registry maintainers improving detection and response capabilities.

The Open VSX incident is now resolved, but our work on improving the resilience of the ecosystem is ongoing. We remain committed to transparency and to strengthening every part of our platform to ensure that open source innovation continues safely and securely.

Open VSX is built by and for the open source developer community. It needs your support to stay sustainable. Read more about this in our recent blog post.

If you believe you’ve discovered a security issue affecting Open VSX, please reach out to us at openvsx@eclipse-foundation.org.

Thank you for your vigilance, cooperation, and commitment to a safer open source community.


October 27, 2025 07:30 PM

Go Primitive in Java, or Go in a Box

by Donald Raab at October 25, 2025 08:05 PM

We can have our eight Java primitives and travel light in collections too.

Photo by I'M ZION on Unsplash

It’s hard to go fast when you’re in a box

Java has eight primitives. For better or worse, we’ve had them in Java for over 30 years. We use primitives all the time (e.g. loops, if-statements, math, etc.), even when we don’t use them directly (e.g. String).

Java has array type support for all eight primitives. Java has three primitive Stream types (IntStream, LongStream, DoubleStream). Java has zero primitive Collection types. You have to box primitives to use them in collections. This means wrapping boolean, byte, char, short, int, float, long, double in their object wrapper equivalents, Boolean, Byte, Character, Short, Integer, Float, Long, Double.
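To picture what that boxing means in code, here is a minimal sketch comparing a boxed JDK list with an Eclipse Collections primitive list (illustrative code, not from the original post):

import java.util.List;

import org.eclipse.collections.api.list.primitive.MutableIntList;
import org.eclipse.collections.impl.factory.primitive.IntLists;

public class BoxedVsPrimitive
{
    public static void main(String[] args)
    {
        // JDK: every int is boxed into an Integer wrapper object
        List<Integer> boxed = List.of(1, 2, 3);

        // Eclipse Collections: the ints are stored directly, with no wrappers
        MutableIntList primitive = IntLists.mutable.of(1, 2, 3);

        System.out.println(boxed.get(0) + primitive.get(0)); // prints 2
    }
}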

This is unfortunate. This is the nicest alternative I could come up with instead of saying what I really think, which is, this sucks.

I stopped caring about this “unfortunate situation” thirteen years ago. We added primitive collections to Eclipse Collections because we saw no near or distant future where primitive collection support would exist in Java natively.

We got to work and built solutions to travel light with Java collections a long time ago. You can travel light now as well if you want or need. If there’s something missing that you need, Eclipse Collections is open source and open for contributions. New contributors wanted!

It’s cheaper and faster to travel light

I could show you benchmarks and memory savings of using primitive collections instead of boxed collections. If you need to see these to be convinced of the benefits of primitive collection support in Java, then you probably don’t need support for primitive collections in Java. No need to read any further. Please accept this complimentary set of eight boxes for your collection travels.

If you understand and have a need for primitive collections, Eclipse Collections may have some solutions for you. Read on.

Eight is enough

Eclipse Collections has support for the following primitive collection types.

  • List (all eight primitives)
  • Set (all eight primitives)
  • Stack (all eight primitives)
  • Bag (all eight primitives)
  • LazyIterable (all eight primitives)
  • Map (all combinations except boolean as keys)
  • Interval (IntInterval and LongInterval)
  • String (CharAdapter and CodePointAdapter)

For List, Set, Stack, Bag, and Map, there are both Mutable and Immutable versions of the containers. There is only immutable primitive support for Interval and String types. LazyIterable for primitives is read-only.

Instantiate Them Using Factories

Symmetry and uniformity are very important design considerations in Eclipse Collections. While perfect symmetry is challenging to achieve, there is “good enough” symmetry for most types in the library. The following slide from the dev2next 2025 talk, “Refactoring to Eclipse Collections”, shows the combination of factories available. Credit to Vladimir Zakharov for creating this concise slide.

Slide 5 from the “Refactoring to Eclipse Collections” talk at dev2next 2025

As you might notice on this slide, some primitive collection types are currently missing. There are no BiMap, Multimap, SortedBag, SortedSet, or SortedMap types for primitives today. That can change over time if folks have a specific need. We only add types to Eclipse Collections when there is a real need.
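To make the factory combinations on the slide concrete, here is a small sketch using a couple of the primitive factory classes (illustrative; IntLists and LongSets are the Eclipse Collections factories):

import org.eclipse.collections.api.list.primitive.ImmutableIntList;
import org.eclipse.collections.api.list.primitive.MutableIntList;
import org.eclipse.collections.api.set.primitive.ImmutableLongSet;
import org.eclipse.collections.impl.factory.primitive.IntLists;
import org.eclipse.collections.impl.factory.primitive.LongSets;

public class PrimitiveFactoryExamples
{
    public static void main(String[] args)
    {
        // Mutable and immutable primitive lists from the same factory class
        MutableIntList mutableInts = IntLists.mutable.of(1, 2, 3);
        ImmutableIntList immutableInts = IntLists.immutable.of(1, 2, 3);

        // The same naming pattern repeats for the other primitive containers
        ImmutableLongSet immutableLongs = LongSets.immutable.of(1L, 2L, 3L);

        System.out.println(mutableInts.sum() + immutableInts.sum() + immutableLongs.sum());
    }
}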

Why no primitive Boolean<V>Map?

The type Map<Boolean, V> in Java has a particular design smell. We specifically designed the primitive collection types in Eclipse Collections so that there is no BooleanObjectMap<V> type, nor any Boolean<Primitive>Map types.

Disallowing this kind of type may seem like a poor design decision to folks who enjoy Map-Oriented Programming. After all, the Collectors.partitioningBy() method returns Map<Boolean, List<T>>, so it must be a good design, right? Not all questions have a simple answer, so some questions deserve an entire blog.

Map-Oriented Programming in Java

In modern versions of Java (Java 17+ for LTS users), you can use a Java record to create a concise strong type for what might be considered more generally as a Pair. Eclipse Collections also has Pair, and all combinations of primitive and object Pair types (e.g. IntObjectPair, ShortBytePair, LongBytePair, etc.). These are better, safer alternatives to using a Map<Boolean, V> type, as sketched below.
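Here is a small sketch contrasting those alternatives with a Map<Boolean, V> (illustrative only; the Approval record is a hypothetical domain type):

import org.eclipse.collections.api.tuple.primitive.IntObjectPair;
import org.eclipse.collections.impl.tuple.primitive.PrimitiveTuples;

public class StrongTypesInsteadOfMaps
{
    // Hypothetical record: the boolean has a name, unlike a Map<Boolean, V> key
    record Approval(boolean approved, String reviewer) {}

    public static void main(String[] args)
    {
        Approval approval = new Approval(true, "Alice");
        System.out.println(approval.approved() + " by " + approval.reviewer());

        // A primitive Pair keeps the int unboxed
        IntObjectPair<String> pair = PrimitiveTuples.pair(42, "answer");
        System.out.println(pair.getOne() + " -> " + pair.getTwo());
    }
}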

What about primitive support for lambdas?

Eclipse Collections has had primitive collection support since before Java had lambdas (around 2012). Just like the object collections in Eclipse Collections, the primitive collections were designed with a feature-rich API. I knew Java would get lambdas eventually; I just wasn’t sure when exactly.

My ten year quest for concise lambda expressions in Java

By the time we added primitive collection support to GS Collections, I believed Concise Lambda Expressions would be included in Java 8. The fundamental problem with lambda support for primitives is the same as collections support for primitives. There is no support for Generic Types over Primitives in Java today. This is a feature that may eventually arrive with Project Valhalla.

My ten year quest for lambda support in Java has been absolutely dwarfed by my twenty-one year wait for generic types over primitives.

I shared what I have been wishing and waiting for in Java in this blog.

What are you wishing or waiting for in Java?

TL;DR… This is what it looks like when you decide to stop wishing or waiting, and just get to work making a Functional Interface named Procedure/Procedure2 (aka Consumer/BiConsumer) work for the primitive types. This is only one of three Functional Interface type categories. There are also Function0/Function/Function2 and Predicate/Predicate2. The combinatorial explosion of these types is explained further in the blog and the “Eclipse Collections Categorically” book.

Functional “Procedure” Interfaces for primitive types in Eclipse Collections
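To give a taste of what these look like in use, here is a minimal sketch with the primitive IntProcedure (illustrative):

import org.eclipse.collections.api.block.procedure.primitive.IntProcedure;
import org.eclipse.collections.impl.factory.primitive.IntLists;

public class IntProcedureExample
{
    public static void main(String[] args)
    {
        // An IntProcedure consumes an int directly, so no boxing occurs
        IntProcedure printDoubled = each -> System.out.println(each * 2);

        IntLists.mutable.of(1, 2, 3).forEach(printDoubled);
    }
}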

Blogs, Code Katas, and other Resources

If you are interested in learning more about the primitive collection support in Eclipse Collections, the following resources can help.

Blogs

Code Katas

Lost and Found Kata in Eclipse Collections Kata repo. There is a solutions folder for this kata as well.

eclipse-collections-kata/lost-and-found-kata at master · eclipse-collections/eclipse-collections-kata

Book

The book “Eclipse Collections Categorically: Level up your programming game” was first published in March 2025. The book has excellent coverage of working with both object and primitive collections in Eclipse Collections. Various versions of the book are linked from the publisher here. The book is also currently available for free to Amazon Kindle Unlimited subscribers.

Reference Guide

There is an AsciiDoc Reference Guide for Eclipse Collections with a dedicated section on primitive collections here.

Final Thoughts

The extensive primitive collections support in Eclipse Collections has been one of its most popular features. The combination of primitive collections with lambda-enabled API and support for mutable and immutable types is unmatched in any other Java collections library. These are hard problems to solve, but they have been solved problems in Eclipse Collections for well over a decade.

It will be great when Project Valhalla is finally realized and released in Java. Maybe you can afford to wait for Project Valhalla to arrive and finally build the applications and libraries you really want to build. I’m glad we got to work on supporting primitive collections in Eclipse Collections when I was in my early forties. Now I’m in my mid-fifties, and I have decided I’m getting too old to wait for language miracles to arrive.

Java has been good enough since Java 8, and gets better with every release. I go primitive any time I need to. I don’t need to wait for anything.

You can either get to work using what’s available today, or wait and hope for someone to eventually unbox the box you’ve been travelling in. Go primitive in Java, or go in a box. Your choice.

Thanks for reading!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at October 25, 2025 08:05 PM

Before the Cloud: Eclipse Foundation’s Quiet Stewardship of Open Source Infrastructure

by Denis Roy at October 24, 2025 08:12 PM

Before the Cloud: Eclipse Foundation’s Quiet Stewardship of Open Source Infrastructure

Long before the cloud era, the Eclipse Foundation quietly served as the backbone of open source stewardship. Its software, frameworks, processes and infrastructure helped define and standardise developer workflows that are now core to modern engineering practices.

As early as 2005, the Eclipse IDE’s modular plugin architecture embodied what we now recognise as today's extension registry model. Developers no longer needed to manually download and configure artifacts; artifacts could be automatically ingested, at high volume, into the build and delivery pipelines known today as CI/CD.

Eclipse Foundation’s early success demanded infrastructure that could scale globally without the benefit of GitHub, Cloudflare, AWS, or GCP. Like many pioneering platforms of that time, we had to build performant and resilient systems from the ground up.

Fast forward two decades, and open source infrastructure has become the backbone of software delivery across every industry. Developer platforms now span continents and power everything from national infrastructure to consumer technology. In this landscape, software delivery is no longer just a technical process but a key driver of innovation, competition, and developer velocity. 

Today, the Eclipse Foundation continues its legacy of building dependable open source infrastructure, powering registries, frameworks, and delivery systems that enable millions of developers to innovate at scale. From open registries like Open VSX to enterprise-grade frameworks such as Jakarta EE, the Foundation provides the scaffolding for the next generation of AI-augmented development. Its vendor-neutral governance ensures that tools, and the innovations they enable, remain open, globally accessible and community-driven.

From IDEs to extension registries, the Eclipse Foundation continues to shape the digital backbone of modern innovation. It remains one of the world’s most trusted homes for open collaboration, enabling developers, communities, and organisations to build the technologies that define the future—at global scale.

Denis Roy

by Denis Roy at October 24, 2025 08:12 PM

AI Coding Training Now Available: Learn the Dibe Coding Methodology

by Jonas, Maximilian & Philip at October 23, 2025 12:00 AM

Over the past two years, AI coding has exploded — with tools and demos promising to transform how we build software. Yet many teams, especially in enterprise environments, still struggle to move …

The post AI Coding Training Now Available: Learn the Dibe Coding Methodology appeared first on EclipseSource.


by Jonas, Maximilian & Philip at October 23, 2025 12:00 AM

On-Demand AI Agent Delegation in Theia AI

by Jonas, Maximilian & Philip at October 21, 2025 12:00 AM

AI-powered development environments are evolving beyond single, monolithic agents. The next step is collaborative AI — a network of specialized agents that each excel at a certain task and can …

The post On-Demand AI Agent Delegation in Theia AI appeared first on EclipseSource.


by Jonas, Maximilian & Philip at October 21, 2025 12:00 AM

How we used Maven relocation for Xtend

by Lorenzo Bettini at October 17, 2025 01:45 PM

In Xtext release 2.40.0, we adopted Maven relocation to move Xtend Maven artifacts’ groupIds from org.eclipse.xtend to org.eclipse.xtext without breaking existing consumers. References: https://github.com/eclipse-xtext/xtext/pull/3461, https://github.com/eclipse-xtext/xtext/issues/3398, https://maven.apache.org/guides/mini/guide-relocation.html. Rationale: Xtend’s Maven coordinates were relocated to comply with Maven Central’s new publishing requirements after the OSSRH sunset. The new Maven Central publishing portal enforces namespace consistency: all artifacts […]

by Lorenzo Bettini at October 17, 2025 01:45 PM

Eclipse Theia 1.65 Release: News and Noteworthy

by Jonas, Maximilian & Philip at October 16, 2025 12:00 AM

We are happy to announce the Eclipse Theia 1.65 release! The release contains in total 78 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of …

The post Eclipse Theia 1.65 Release: News and Noteworthy appeared first on EclipseSource.


by Jonas, Maximilian & Philip at October 16, 2025 12:00 AM

Spliterating Hairs Results in Spliterating Deja Vu

by Donald Raab at October 15, 2025 04:41 AM

How a “Random” question led me down a Java Spliterator rabbit hole.

Photo by Addy Spartacus on Unsplash

A Better RandomAccess Default From Long Ago

I responded to a comment on a recent blog with a link to a blog that I had written a few years ago. The blog details the journey I went on, taking a discovery and an idea all the way to inclusion in OpenJDK. This blog and the links it contains are the only organized body of evidence that I am aware of that explains the creation, existence, and purpose of RandomAccessSpliterator. The following is the blog.

Traveling the road from Idea all the way to OpenJDK

This story happened a long time ago, but it got me wondering: whatever happened with RandomAccessSpliterator? I knew this class would probably be in Java forever, but I wondered how often it actually gets used today.

Note for the reader: This is the entrance to the rabbit hole. I have lost large amounts of personal time to questions like these. If you value your time and accept the universe as it is, then do not ask yourself questions like these. If you do find yourself asking these kinds of questions, you may learn more than you wanted to know.

Find Usages?

One does not simply create RandomAccessSpliterator. It is a default Spliterator, created for RandomAccess List types that have not defined their own spliterator() method. This means instances of this type can only be discovered at runtime. As evidence, I tried IntelliJ Find Usages on RandomAccessSpliterator on the Eclipse Collections code base. The only place it is created is in a default method on the List interface in the JDK.

The default method on List that creates RandomAccessSpliterator
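For readers who cannot see the screenshot, the default method looks approximately like this in recent JDKs (paraphrased from the OpenJDK sources; check your JDK for the exact code):

// java.util.List (OpenJDK), approximately
default Spliterator<E> spliterator() {
    if (this instanceof RandomAccess) {
        return new AbstractList.RandomAccessSpliterator<>(this);
    } else {
        return Spliterators.spliterator(this, Spliterator.ORDERED);
    }
}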

While I’ve known I cannot just find usages of this type, I thought there must be another way. So I decided to run the ~167K unit tests in the Eclipse Collections test suite and turn a breakpoint on. Then I learned something I had never tried before. You don’t have to suspend a breakpoint. You can just output a message when the breakpoint is hit. Woo hoo! Runtime usages!

I unchecked Suspend, and checked “Breakpoint hit” message

Now when I run the Eclipse Collections unit test suite, this is what I see in the console with this breakpoint set.

Breakpoints that are logged

Then I Googled to see if I could find a count of the number of times the breakpoint was hit, and StackOverflow had a question and answer. So I found the tab in IntelliJ, and sure enough, there was the same count I had tallied by hand a day earlier. 🤦‍♂️

Number of times this breakpoint is hit

Ok, this is where the story should end. You learned some cool stuff about IntelliJ and debugging and counting breakpoints, win!

I wonder if RandomAccessSpliterator is used anywhere in the JDK code base?

Note to reader: This is where I lose sight of the top of the rabbit hole and enter into free fall into the JDK code base and hours of running JMH Benchmarks.

The Ballad of List12 and ListN

I don’t have the code for running the JDK unit tests on my machine. I’ve never run them, and I’m not sure how I would. That’s a rabbit hole for me to fall down a different day, maybe when I’m retired.

I decided to just poke around and try things out with JDK types. I’ll make the story short. I discovered that RandomAccessSpliterator is used by the two classes created by calling List.of(). Yes, the immutable (or unmodifiable if you prefer) lists we’ve been creating since they were added to Java 9 use RandomAccessSpliterator, which was also added in Java 9. Instances of List.of() get RandomAccessSpliterator by default because they don’t define a spliterator() method.

Oh, shaving cream! RandomAccessSpliterator lives!

Now, this rabbit hole had a little detour. As it turns out, List12 did not define a spliterator() method in Java 21, but it defines one in Java 25. So it must have been added somewhere between Java 21 and 25. The method looks like this in Java 25.

List12 with one element gets Collections.singletonSpliterator() which is package private

I wrote a test to show a bunch of Spliterator types used by commonly used List types in Java.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class SpliteratorTest
{
    @Test
    public void listNSpliteratorType()
    {
        List<Integer> integers = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
        assertEquals(
                "RandomAccessSpliterator",
                integers.spliterator().getClass().getSimpleName());
    }

    @Test
    public void list12SpliteratorType()
    {
        List<Integer> list1Of12 = List.of(1);
        assertEquals(
                "",
                list1Of12.spliterator().getClass().getSimpleName());
        List<Integer> list2Of12 = List.of(1, 2);
        assertEquals(
                "RandomAccessSpliterator",
                list2Of12.spliterator().getClass().getSimpleName());
    }

    @Test
    public void ArraysAsListSpliteratorType()
    {
        List<Integer> arraysAsList =
                Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
        assertEquals(
                "ArraySpliterator",
                arraysAsList.spliterator().getClass().getSimpleName());
    }

    @Test
    public void ArrayListSpliteratorType()
    {
        List<Integer> arrayList =
                new ArrayList<>(List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10));
        assertEquals(
                "ArrayListSpliterator",
                arrayList.spliterator().getClass().getSimpleName());
    }
}

Now the simple name of the Spliterator used for single element instances of the List12 type is… empty. This is because it is defined as an anonymous inner class. Anonymous indeed!

I was initially surprised to see there are three named Spliterator types here, not two. I was expecting ArrayList to use ArraySpliterator. The reason it does not is that the Spliterator for ArrayList has to deal with potential ConcurrentModificationException exceptions being thrown if the modCount variable inherited from AbstractList changes. RandomAccessSpliterator has a dual mode which checks if a RandomAccess type extends from AbstractList; if so, it adds the modCount logic, and if not, it skips it.
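Here is a minimal sketch of the modCount fail-fast behavior that forces this dual mode (illustrative only):

import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.Spliterator;

public class ModCountFailFast
{
    public static void main(String[] args)
    {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3));
        Spliterator<Integer> split = list.spliterator();
        split.tryAdvance(System.out::println); // binds the spliterator to the list

        list.add(99); // structural modification bumps modCount

        try
        {
            split.forEachRemaining(System.out::println);
        }
        catch (ConcurrentModificationException e)
        {
            System.out.println("Fail-fast triggered: " + e);
        }
    }
}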

Ok, so while I think it is really cool to see an idea I had for a default Spliterator implementation 12 years ago is actively used by newer collection types, I found myself asking the question.

Why would ListN use RandomAccessSpliterator instead of ArraySpliterator, since it doesn’t extend AbstractList and doesn’t need to deal with modCount? List12 makes more sense to me, as it is not backed by an array, and is still RandomAccess.

Note to the reader: This is when I saw the same black cat walk by my doorway twice. This is also where the rabbit hole got very deep, as it caused me to spend 8–10 hours just writing and running JMH benchmarks. I am going to keep it short and sweet for you and just share one straightforward benchmark to consider.

Deja Vu

I’ve been here before. Twelve years ago I found myself discovering and proving that IteratorSpliterator was a terrible default for RandomAccess types when using parallelStream(). RandomAccessSpliterator eventually stepped in as a much better default implementation for folks who could not or did not want to provide their own spliterator() override.

While RandomAccessSpliterator is a good default alternative, I believe that ArraySpliterator must be a better performing alternative for a List type with a backing array that is immutable. The array and immutable aspects are the key. All of the complicated logic needed to check for modCount changes goes away. And with an array, we don’t have to use method calls like get() to look up elements at indexes. Win!

This must be measurable, right? I believe so. To the rabbit hole!

Photo by Sincerely Media on Unsplash

After much testing and trying different combinations of things, I decided I would settle on one benchmark to share. If folks want to try out their own benchmarks to prove whether this is worth it, I wish them luck and much patience. I am satisfied, and believe that having ListN be as fast to iterate with stream() and parallelStream() as Arrays.asList() would be a good thing, and beneficial to the entire Java community.

Note: We have used ArraySpliterator for all of our array backed List types for years in Eclipse Collections. This is for two reasons. One, it’s a simple and fast Spliterator that we didn’t have to write. Two, we don’t use modCount in our collection types, so don’t require the use of modCount sensitive Spliterators.

The Benchmark Results

I ran benchmarks calculating the combination of min and max using Stream.reduce(). I ran these benchmarks on my MacBook Pro M2 Max with 12 Cores and 96GB RAM.

Result output (minus the prefix long test name):

Benchmark                    Mode  Cnt     Score    Error  Units
minMaxArrayList             thrpt   20   158.375 ±  3.787  ops/s
minMaxArraysAsList          thrpt   20   208.214 ±  0.686  ops/s
minMaxListN                 thrpt   20    97.352 ±  0.717  ops/s
parallelMinMaxArrayList     thrpt   20  1149.748 ± 60.342  ops/s
parallelMinMaxArraysAsList  thrpt   20  1387.468 ±  8.229  ops/s
parallelMinMaxListN         thrpt   20  1062.055 ± 15.685  ops/s

The unit is operations per second, so bigger is better.

The results in a chart

The Code:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.eclipse.collections.api.tuple.primitive.IntIntPair;
import org.eclipse.collections.impl.list.Interval;
import org.eclipse.collections.impl.tuple.primitive.PrimitiveTuples;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Thread)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
@Fork(2)
@Warmup(iterations = 20, time = 2)
@Measurement(iterations = 10, time = 2)
public class RandomAccessVsArraySpliteratorBenchmark
{
    private final Interval interval = Interval.oneTo(1_000_000);
    private final List<Integer> listN = List.copyOf(interval);
    private final List<Integer> arrayList = new ArrayList<>(interval);
    private final List<Integer> arraysAsList = Arrays.asList(interval.toArray());

    @Benchmark
    public IntIntPair minMaxListN()
    {
        int min = this.listN.stream()
                .reduce(Math::min)
                .orElse(0);
        int max = this.listN.stream()
                .reduce(Math::max)
                .orElse(0);
        return PrimitiveTuples.pair(min, max);
    }

    @Benchmark
    public IntIntPair minMaxArrayList()
    {
        int min = this.arrayList.stream()
                .reduce(Math::min)
                .orElse(0);
        int max = this.arrayList.stream()
                .reduce(Math::max)
                .orElse(0);
        return PrimitiveTuples.pair(min, max);
    }

    @Benchmark
    public IntIntPair minMaxArraysAsList()
    {
        int min = this.arraysAsList.stream()
                .reduce(Math::min)
                .orElse(0);
        int max = this.arraysAsList.stream()
                .reduce(Math::max)
                .orElse(0);
        return PrimitiveTuples.pair(min, max);
    }

    @Benchmark
    public IntIntPair parallelMinMaxListN()
    {
        int min = this.listN.parallelStream()
                .reduce(Math::min)
                .orElse(0);
        int max = this.listN.parallelStream()
                .reduce(Math::max)
                .orElse(0);
        return PrimitiveTuples.pair(min, max);
    }

    @Benchmark
    public IntIntPair parallelMinMaxArrayList()
    {
        int min = this.arrayList.parallelStream()
                .reduce(Math::min)
                .orElse(0);
        int max = this.arrayList.parallelStream()
                .reduce(Math::max)
                .orElse(0);
        return PrimitiveTuples.pair(min, max);
    }

    @Benchmark
    public IntIntPair parallelMinMaxArraysAsList()
    {
        int min = this.arraysAsList.parallelStream()
                .reduce(Math::min)
                .orElse(0);
        int max = this.arraysAsList.parallelStream()
                .reduce(Math::max)
                .orElse(0);
        return PrimitiveTuples.pair(min, max);
    }
}

This is the limit of the time I am willing to spend on this. I think it shows that ListN could get a decent speedup for this test case by switching to ArraySpliterator. Other test cases may see different results. I’m fairly confident that switching ListN to use ArraySpliterator is unlikely to result in any degradation of performance, but I have also learned that measuring performance is really hard, especially when it comes to JIT compilers.

Final Thoughts

I learned some new things trying these experiments and diving into these all too familiar looking rabbit holes. I don’t know if this will cause any changes in Java, but I do hope it helps shine a light on a potentially useful performance improvement moving ListN from using RandomAccessSpliterator to using ArraySpliterator.

I also hope this provides some useful information that my readers may not have been aware of.

Thanks for reading!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at October 15, 2025 04:41 AM

It's Released: Your Native Claude Code IDE Integration in Theia

by Jonas, Maximilian & Philip at October 14, 2025 12:00 AM

Anthropic’s Claude Code is one of the most advanced AI coding agents available: powerful, autonomous, and loaded with well-designed tools. But until now, the experience always felt somewhat separated …

The post It's Released: Your Native Claude Code IDE Integration in Theia appeared first on EclipseSource.


by Jonas, Maximilian & Philip at October 14, 2025 12:00 AM

Announcing Eclipse Ditto Release 3.8.0

October 10, 2025 12:00 AM

The Eclipse Ditto team is excited to announce the availability of a new minor release with new features: Ditto 3.8.0.

Adoption

Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto

If you use Eclipse Ditto, it would be great to support the project by putting your logo there.

Changelog

The main improvements and additions of Ditto 3.8.0 are:

  • Diverting Ditto connection responses to other connections (e.g. to allow multi-protocol workflows)
  • Dynamically re-configuring WoT validation settings without restarting Ditto
  • Enforcing that WoT model based thing definitions are used and match a certain pattern when creating new things
  • Support for OAuth2 “password” grant type for authenticating outbound HTTP connections
  • Configure JWT claims to be added as information to command headers
  • Added support for client certificate based authentication for Kafka and AMQP 1.0 connections
  • Extend “Normalized” connection payload mapper to include deletion events
  • Support silent token refresh in the Ditto UI when using SSO via OAuth2/OIDC
  • Enhancing conditional updates for merge thing commands to support several conditions that dynamically decide which parts of a thing to update and which not

The following non-functional work is also included:

  • Improving WoT based validation performance for merge commands
  • Enhancing distributed tracing, e.g. with a span for the authentication step and by adding the error response for failed API requests
  • Updating dependencies to their latest versions
  • Providing additional configuration options to Helm values

The following notable fixes are included:

  • Fixing nginx CORS configuration which caused Safari / iOS browsers to fail with CORS errors
  • Fixing transitive resolving of Thing Models referenced with tm:ref
  • Fixing sorting on array fields in Ditto search
  • Fixing issues around “put-metadata” in combination with merge commands
  • Fixing that certificate chains for client certificate based authentication in Ditto connections were not fully parsed
  • Fixing deployment of Ditto on OpenShift

Please have a look at the 3.8.0 release notes for more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Ditto JavaScript client release was published on npmjs.com.

The Docker images have been pushed to Docker Hub.

The Ditto Helm chart has been published to Docker Hub.





The Eclipse Ditto team


October 10, 2025 12:00 AM

Response diversion - Multi-protocol workflows made easy

October 09, 2025 12:00 AM

Today we’re excited to announce a powerful new connectivity feature in Eclipse Ditto: Response Diversion. This feature enables sophisticated multi-protocol workflows by allowing responses from one connection to be redirected to another connection instead of being sent to the originally configured reply target.

With response diversion, Eclipse Ditto becomes even more versatile in bridging different IoT protocols and systems, enabling complex routing scenarios that were previously challenging or impossible to achieve.

The challenge: Multi-protocol IoT landscapes

Modern IoT deployments often involve multiple protocols and systems working together. Consider these common scenarios:

  • Cloud integration: Your devices use MQTT to communicate with AWS IoT Core, but your analytics pipeline consumes data via Kafka
  • Protocol translation: Legacy systems expect HTTP webhooks, but your devices communicate via AMQP
  • Response aggregation: You want to collect all device responses in a central monitoring system regardless of the original protocol

Until now, implementing such multi-protocol workflows required complex external routing logic or multiple intermediate systems. Response diversion brings this capability directly into Ditto’s connectivity layer.

How response diversion works

Response diversion is configured at the connection source level using a key in the specific config and special header mapping keys:

{
  "headerMapping": {
    "divert-response-to-connection": "target-connection-id",
    "divert-expected-response-types": "response,error,nack"
  },
  "specificConfig": {
    "is-diversion-source": "true"
  }
}

On the target connection, diversion is enabled by defining a target. When multiple sources divert to the same connection, either one target or exactly as many targets as sources are required; if multiple targets are configured, they are mapped to the sources in order. A target connection only accepts diverted responses from source connections whose IDs are listed in its specific config under the key ‘authorized-connections-as-sources’, in comma-separated format.

{
  "id": "target-connection-id-1",
  "targets": [
    {
      "address": "command/redirected/response",
      "topics": [],
      "qos": 1,
      "authorizationContext": [
        "pre:ditto"
      ],
      "headerMapping": {}
    }
  ],
  "specificConfig": {
    "is-diversion-target": "true"
  }
}

{
  "targets": [
    {
      "address": "command/redirected/response",
      "topics": [],
      "qos": 1,
      "authorizationContext": [
        "pre:ditto"
      ],
      "headerMapping": {}
    }
  ],
  "specificConfig": {
    "is-diversion-target": "true",
    "authorized-connections-as-sources": "target-connection-id-1,..."
  }
}


When a command is received through a source with response diversion configured, Ditto intercepts the response and routes it through the specified target connection instead of the original reply target.

Real-world use case: AWS IoT Core with Kafka

Let’s explore a practical scenario that demonstrates the power of response diversion. In this setup:

  • Devices communicate with AWS IoT Core via MQTT (bidirectional)
  • A bridge pushes device commands from AWS IoT Core to a Kafka topic
  • Device commands are consumed from Kafka topics
  • Responses must go back to AWS IoT Core via MQTT (since IoT Core doesn’t support Kafka consumers)
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  AWS IoT Core   │    │   Kafka Bridge  │    │  Apache Kafka   │    │  Eclipse Ditto  │
│    (MQTT)       │    │   /Analytics    │    │                 │    │                 │
│                 │    │                 │    │                 │    │                 │
│ ┌─────────────┐ │    │ ┌─────────────┐ │    │ ┌─────────────┐ │    │ ┌─────────────┐ │
│ │Device       │ │───▶│ │MQTT→Kafka   │ │───▶│ │device-      │ │───▶│ │Kafka Source │ │
│ │Commands     │ │    │ │Bridge       │ │    │ │commands     │ │    │ │Connection   │ │
│ │(MQTT topics)│ │    │ │             │ │    │ │topic        │ │    │ │             │ │
│ └─────────────┘ │    │ └─────────────┘ │    │ └─────────────┘ │    │ └─────────────┘ │
│        ▲        │    │                 │    │                 │    │        │        │
│        │        │    │                 │    │                 │    │        ▼        │
│        │        │    │                 │    │                 │    │ ┌─────────────┐ │
│        │        │    │                 │    │                 │    │ │Command      │ │
│        │        │    │                 │    │                 │    │ │Processing   │ │
│        │        │    │                 │    │                 │    │ │             │ │
│        │        │    │                 │    │                 │    │ └─────────────┘ │
│        │        │    │                 │    │                 │    │        │        │
│        │        │    │                 │    │                 │    │        ▼        │
│        │        │    │                 │    │                 │    │ ┌─────────────┐ │
│        │        │    │                 │    │                 │    │ │Response     │ │
│        │        │    │                 │    │                 │    │ │Diversion    │ │
│        │        │    │                 │    │                 │    │ │Interceptor  │ │
│        │        │    │                 │    │                 │    │ └─────────────┘ │
│        │        │    │                 │    │                 │    │        │        │
│        │        │    │                 │    │                 │    │        ▼        │
│ ┌─────────────┐ │    │                 │    │                 │    │ ┌─────────────┐ │
│ │Device       │ │◀───┼─────────────────┼────┼─────────────────┼────│ │MQTT Target  │ │
│ │Responses    │ │    │                 │    │                 │    │ │Connection   │ │
│ │(MQTT topics)│ │    │                 │    │                 │    │ │(AWS IoT)    │ │
│ └─────────────┘ │    │                 │    │                 │    │ └─────────────┘ │
└─────────────────┘    └─────────────────┘    └─────────────────┘    └─────────────────┘

Legend:
───▶ Command Flow (MQTT → Kafka → Ditto)
◀─── Response Flow (Ditto → MQTT, bypassing Kafka)

Example Configuration

First, create the Kafka connection that consumes device commands:

{
  "id": "kafka-commands-connection",
  "connectionType": "kafka",
  "connectionStatus": "open",
  "uri": "tcp://kafka-broker:9092",
  "specificConfig": {
    "bootstrapServers": "kafka-broker:9092",
    "saslMechanism": "plain"
  },
  "sources": [{
    "addresses": ["device-commands"],
    "authorizationContext": ["ditto:kafka-consumer"],
    "headerMapping": {
      "device-id": "{{ header:device-id }}",
      "divert-response-to-connection": "aws-iot-mqtt-connection",
      "divert-expected-response-types": "response,error"
    }
  }]
}

Next, create the MQTT connection that will handle diverted responses:

{
  "id": "aws-iot-mqtt-connection",
  "connectionType": "mqtt",
  "connectionStatus": "open",
  "uri": "ssl://your-iot-endpoint.amazonaws.com:8883",
  "sources": [],
  "targets": [
    {
      "address": "device/{{ header:device-id }}/response",
      "topics": [],
      "headerMapping": {
        "device-id": "{{ header:device-id }}",
        "correlation-id": "{{ header:correlation-id }}"
      }
    }
  ],
  "specificConfig": {
    "is-diversion-target": "true"
  }
}

Flow explanation

  1. Command ingestion: Kafka connection consumes device commands from the device-commands topic
  2. Response diversion: Commands are configured to divert responses to the aws-iot-mqtt-connection
  3. Response routing: Responses are automatically published to AWS IoT Core via MQTT on the device-specific response topic
  4. Device notification: Devices receive responses via their subscribed MQTT topics in AWS IoT Core

This setup enables a seamless flow from Kafka-based systems back to MQTT-based device communication without requiring external routing logic.

Try it out

Response diversion is available starting with Eclipse Ditto version 3.8.0. Update your deployment and start experimenting with multi-protocol workflows!

The feature documentation provides comprehensive configuration examples and troubleshooting guidance. We’d love to hear about your use cases and feedback.

Get started with response diversion today and unlock new possibilities for your IoT connectivity architecture.





The Eclipse Ditto team


October 09, 2025 12:00 AM

Refactoring to Eclipse Collections with Java 25 at the dev2next Conference

by Donald Raab at October 06, 2025 02:58 AM

Showing what makes Java great after 30 years is the vibrant OSS ecosystem

Vladimir Zakharov and Donald Raab presenting “Refactoring to Eclipse Collections” at the dev2next Conference 2025

This blog will show you how Vladimir Zakharov and I live-refactored a single test case with nine method category unit tests at dev2next 2025. The test starts off passing using the built-in JDK Collections and Streams. We refactored it live in front of an audience to use Eclipse Collections. I will be refactoring the same test case as I write this blog, and explaining different lessons learned along the way. You can follow along as I refactor the code here, or accomplish this on your own by starting with the pre-refactored code available on GitHub. Here are the slides we used for the talk, available on GitHub.

Note: A Decade as OSS at the Eclipse Foundation

The Eclipse Collections library has been available in open source since December 2015, managed as a project at the Eclipse Foundation. Prior to that the GS Collections library, which was the Eclipse Collections predecessor, was open sourced in January 2012. That will be 14 years total in open source at the end of this year.

I have been conditioned for the past decade to start all conversations about Eclipse Collections with a statement that should be obvious, but unfortunately isn’t. You do not need to use the Eclipse IDE or any other IDE to use Eclipse Collections. Eclipse Collections is a standalone open source collections library for Java. See the following blog for more details.

Explaining the Eclipse prefix in Eclipse Collections

Now that the preamble is out of the way, let’s continue.

The Idea of Refactoring to Eclipse Collections

The idea of “Refactoring to Eclipse Collections” started out as an article by Kristen O’Leary and Vladimir Zakharov in June 2018. The two Goldman Sachs alumni wrote the following article for InfoQ.

Refactoring to Eclipse Collections: Making Your Java Streams Leaner, Meaner, and Cleaner

Kristen and Vlad wouldn’t know it at the time, but they would recognize something fundamentally important in this article, which I would go on to leverage to organize the chapters of my book “Eclipse Collections Categorically”: Method Categories.

You can see where Vlad and Kristen organized the methods in Eclipse Collections into Method Categories in their article.

Slide with title “Methods [just some of] by Category”, extracted from the Refactoring to Eclipse Collections InfoQ article

Neither Vlad, Kristen, nor I understood at the time this article was written, or even over the past seven years, how important the idea of grouping methods by method category would become for me when I wrote “Eclipse Collections Categorically.” While writing the book, I didn’t appreciate that Kristen and Vlad had sketched a similar basic idea in their article. The book took this idea to its natural conclusion: Method Categories are a fundamentally missing feature in Java and most other file-based programming languages. This feature needs to be added to Java and other languages so developers can better organize their APIs, both in the IDE and in documentation (e.g. Javadoc).

Read on to learn more.

Refactoring to Eclipse Collections, Revisited

Vlad approached me with the idea of submitting a talk to dev2next on “Refactoring to Eclipse Collections”, and I agreed.

When the talk was accepted, I thought it would be good to revise the code examples with a familiar domain concept that I had used in my book — Generation. As Java 25 was released a couple weeks before the talk, I upgraded the code examples to use Java 25 with and without Compact Object Headers (JEP 519) enabled. You can find some memory comparison charts in the slide deck linked above.

All of the code examples for Refactoring to Eclipse Collections can be found in the following GitHub repo.

GitHub - vmzakharov/refactor-to-ec-remake: Refactoring Java 8+ streams to idiomatic Eclipse Collections.

Generation Alpha to the Rescue

Everything we have done in the past decade in Java has become a part of the history of Generation Alpha. We don’t hear much about Generation Alpha, because no one from this generation has graduated from high school yet. The beginning of Generation Alpha was 2013, which means no one in Generation Alpha will remember a time before Java had support for concise lambda expressions. Lambdas arrived in March 2014, with the release of Java 8.

Below is the full code for Generation enum that Vlad and I would use in our talk at dev2next 2025. This Java enum is somewhat similar to the Generation enum I use in my book, “Eclipse Collections Categorically.”

package refactortoec.generation;

import java.util.stream.IntStream;

import org.eclipse.collections.impl.list.primitive.IntInterval;

public enum Generation
{
    UNCLASSIFIED("Unclassified", 0, 1842),
    PROGRESSIVE("Progressive Generation", 1843, 1859),
    MISSIONARY("Missionary Generation", 1860, 1882),
    LOST("Lost Generation", 1883, 1900),
    GREATEST("Greatest Generation", 1901, 1927),
    SILENT("Silent Generation", 1928, 1945),
    BOOMER("Baby Boomers", 1946, 1964),
    X("Generation X", 1965, 1980),
    MILLENNIAL("Millennials", 1981, 1996),
    Z("Generation Z", 1997, 2012),
    ALPHA("Generation Alpha", 2013, 2029);

    private final String name;
    private final YearRange years;

    Generation(String name, int from, int to)
    {
        this.name = name;
        this.years = new YearRange(from, to);
    }

    public int numberOfYears()
    {
        return this.years.count();
    }

    public IntInterval yearsInterval()
    {
        return this.years.interval();
    }

    public IntStream yearsStream()
    {
        return this.years.stream();
    }

    public boolean yearsCountEqualsEc(int years)
    {
        return this.yearsInterval().size() == years;
    }

    public boolean yearsCountEqualsJdk(int years)
    {
        return this.yearsStream().count() == years;
    }

    public String getName()
    {
        return this.name;
    }

    public boolean contains(int year)
    {
        return this.years.contains(year);
    }
}

For our talk, we introduced a Java record called YearRange, which is used to store the start and end years for each Generation. This is different from the Generation in my book, which just stores an IntInterval. You will see that an IntInterval can be created from a YearRange by calling the method interval(). Similarly, an IntStream can be created from a YearRange by calling stream(). Both of these code paths look very similar. The difference between them is subtle. An instance of IntInterval can be used as many times as a developer needs. An instance of IntStream can only be used once before the IntStream becomes exhausted and you have to create a new one.
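A quick sketch of that subtle difference (illustrative):

import java.util.stream.IntStream;

import org.eclipse.collections.impl.list.primitive.IntInterval;

public class ReusableVsExhausted
{
    public static void main(String[] args)
    {
        IntInterval interval = IntInterval.fromTo(1981, 1996);
        System.out.println(interval.size()); // 16
        System.out.println(interval.size()); // 16 again: the interval is reusable

        IntStream stream = IntStream.rangeClosed(1981, 1996);
        System.out.println(stream.count()); // 16
        // Calling stream.count() again would throw IllegalStateException:
        // the stream has already been operated upon or closed
    }
}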

import java.util.stream.IntStream;

import org.eclipse.collections.impl.list.primitive.IntInterval;

public record YearRange(int from, int to)
{
    public int count()
    {
        return this.to - this.from + 1;
    }

    public boolean contains(int year)
    {
        return this.from <= year && year <= this.to;
    }

    public IntStream stream()
    {
        return IntStream.rangeClosed(this.from, this.to);
    }

    public IntInterval interval()
    {
        return IntInterval.fromTo(this.from, this.to);
    }
}

GenerationJdk

For our talk, we created a class called GenerationJdk that contains the JDK-specific elements of the code. GenerationJdk looks as follows.

package refactortoec.generation;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiFunction;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class GenerationJdk
{
    public static final Set<Generation> GENERATION_SET =
            Set.of(Generation.values());

    public static final Map<Integer, Generation> BY_YEAR =
            GenerationJdk.groupEachByYear();

    private static Map<Integer, Generation> groupEachByYear()
    {
        Map<Integer, Generation> map = new HashMap<>();
        GENERATION_SET.forEach(generation ->
                generation.yearsStream()
                        .forEach(year -> map.put(year, generation)));
        return Map.copyOf(map);
    }

    public static Generation find(int year)
    {
        return BY_YEAR.getOrDefault(year, Generation.UNCLASSIFIED);
    }

    public static Stream<List<Generation>> windowFixedGenerations(int size)
    {
        return Arrays.stream(Generation.values())
                .gather(Gatherers.windowFixed(size));
    }

    public static <IV> IV fold(IV value, BiFunction<IV, Generation, IV> function)
    {
        return GENERATION_SET.stream()
                .gather(Gatherers.fold(() -> value, function))
                .findFirst()
                .orElse(value);
    }
}

GenerationEc

There is an equivalent class that uses Eclipse Collections types and methods called GenerationEc, which looks as follows.

package refactortoec.generation;

import org.eclipse.collections.api.RichIterable;
import org.eclipse.collections.api.block.function.Function2;
import org.eclipse.collections.api.factory.Sets;
import org.eclipse.collections.api.map.primitive.ImmutableIntObjectMap;
import org.eclipse.collections.api.map.primitive.MutableIntObjectMap;
import org.eclipse.collections.api.set.ImmutableSet;
import org.eclipse.collections.impl.factory.primitive.IntObjectMaps;
import org.eclipse.collections.impl.list.fixed.ArrayAdapter;

public class GenerationEc
{
    public static final ImmutableSet<Generation> GENERATION_IMMUTABLE_SET =
            Sets.immutable.with(Generation.values());

    public static final ImmutableIntObjectMap<Generation> BY_YEAR =
            GenerationEc.groupEachByYear();

    private static ImmutableIntObjectMap<Generation> groupEachByYear()
    {
        MutableIntObjectMap<Generation> map = IntObjectMaps.mutable.empty();
        GENERATION_IMMUTABLE_SET.forEach(generation ->
                generation.yearsInterval()
                        .forEach(year -> map.put(year, generation)));
        return map.toImmutable();
    }

    public static Generation find(int year)
    {
        return BY_YEAR.getIfAbsent(year, () -> Generation.UNCLASSIFIED);
    }

    public static RichIterable<RichIterable<Generation>> chunkGenerations(int size)
    {
        return ArrayAdapter.adapt(Generation.values())
                .asLazy()
                .chunk(size);
    }

    public static <IV> IV fold(IV value, Function2<IV, Generation, IV> function)
    {
        return GENERATION_IMMUTABLE_SET.injectInto(value, function);
    }
}

Set vs. ImmutableSet

The primary differences between GenerationJdk and GenerationEc are the types used for GENERATION_SET and GENERATION_IMMUTABLE_SET. In the talk, the differences between Set and ImmutableSet are explained in the following slides. First, we explain the difference of type, and how to be explicit about whether a type is Mutable or Immutable. We show how Eclipse Collections types can be used as drop-in replacements for JDK types (Step 1 in slide), and how the types on the left can be migrated to more intention-revealing types once the types on the right have been refactored (Step 2 in slide).

Determining if a Set is Mutable or Immutable in the JDK and Eclipse Collections

Note: The squirrel at the bottom left of this slide is what I used to mark slides I was presenting during our talk. I couldn’t easily screenshot this squirrel out of the picture. I hope it is not too distracting. :)

An ImmutableSet conveys its intent much more clearly than Set. Set is a mutable interface whose mutability is effectively optional: an implementing type may throw exceptions from the mutating methods. This is a surprise better left exposed by a more explicit type like ImmutableSet, which has no mutating methods.

The biggest difference between Set and ImmutableSet is the number of methods available directly for developers on the collection types. The following Venn diagram shows the difference in the number of non-overloaded methods.

The number of non-overloaded methods on JDK Set and Eclipse Collections ImmutableSet

The large number of methods on ImmutableSet may seem daunting. This is where method categories help. Instead of sorting and scrolling through 158 methods, the methods can be grouped into just nine categories. The following slide shows how I accomplished this in IntelliJ using Custom Code Folding Regions to emulate Method Categories, which are available natively in Smalltalk IDEs.

Using Custom Code Folding Regions in IntelliJ to simulate Method Categories
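In source form, those folding regions are just paired comments that IntelliJ recognizes. A small sketch (the category names follow the ones used in the book):

public interface MethodCategoriesSketch
{
    //region Counting 🧮
    int count();
    //endregion

    //region Testing 🧪
    boolean anySatisfy();
    //endregion

    //region Filtering 🚰
    MethodCategoriesSketch select();
    //endregion
}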

What may be less obvious is that a developer has to look in five places to find all of the behaviors for JDK Set. There are methods in Set, Collections, Stream, Collectors, and Gatherers, for a total of 170 methods. Note, not all of the methods in the Collections utility class work for Set. Some are specific to List and Map. There is no organized way of viewing the 64 methods there. Just scroll.

Other Differences in GenerationJdk and GenerationEc

Another difference in these two classes is the groupEachByYear methods. We kept these methods equivalent in that they use nested forEach calls to build a Map. The keys in the map are individual years as int values, and the values are Generation instances corresponding to each year. In the case of the JDK, a Map<Integer, Generation> is used. In the case of EC, an ImmutableIntObjectMap<Generation> is used. The ImmutableIntObjectMap<Generation> reveals the intent that this map cannot be modified, where the Map<Integer, Generation> cannot do this, even though the Map.copyOf() call creates an immutable copy of the Map. The primitive IntObjectMap used by EC will generate a map that takes less memory than the Map used by the JDK because the int values will not be boxed as Integer objects.

The two other differences in these classes are the methods used for windowFixed/chunk and fold. The method chunk in Eclipse Collections can either be used directly by calling chunk on the collection (eager), or by calling asLazy first (lazy). The lazy version is arguably better in the example we use because we don’t hold onto the chunked results after computation is finished. Waste not, want not.

In Eclipse Collections, we categorize chunk as a grouping operation. It groups elements of a collection together based on an int size. So if you have a collection of 10 items and call chunk(3), you will wind up with a collection containing four collections of sizes 3, 3, 3, and 1.
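For example, using an IntInterval (the values are purely illustrative):

// 10 items chunked by 3 -> 4 inner collections of sizes 3, 3, 3, and 1
RichIterable<IntIterable> chunks = IntInterval.oneTo(10).chunk(3);
System.out.println(chunks); // [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]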

The method fold is useful for aggregating results. In the test class I will refactor in this blog, we will see how to use fold to calculate the max, min, and sum of items in a collection. In Eclipse Collections, the method that is the equivalent of fold in the JDK is named injectInto.

Refactoring to Eclipse Collections

There is a single test class in the GitHub repository that we leveraged for live refactoring from JDK to Eclipse Collections. The test class is linked below.

refactor-to-ec-remake/src/test/java/refactortoec/generation/GenerationJdkToEcRefactorTest.java at main · vmzakharov/refactor-to-ec-remake

The Javadoc for this class is intended to act as a guide for developers to refactor this class on their own. Check out the whole project from this GitHub repo and give it a try!

The class level Javadoc explains how the test is organized into method categories that will test multiple methods.

/**
 * In this test we will refactor from JDK patterns to Eclipse Collections
 * patterns. The categories of patterns we will cover in this refactoring are:
 *
 * <ul>
 * <li>Counting - 🧮</li>
 * <li>Testing - 🧪</li>
 * <li>Finding - 🔎</li>
 * <li>Filtering - 🚰</li>
 * <li>Grouping - 🏘️</li>
 * <li>Converting - 🔌</li>
 * <li>Transforming - 🦋</li>
 * <li>Chunking - 🖖</li>
 * <li>Folding - 🪭</li>
 * </ul>
 *
 * Note: We work with unit tests so we know the code works to start, and continues
 * to work after the refactoring is complete.
 */

Refactoring to use a drop-in replacement

The first refactoring we did during our talk was to replace all references in this class to GENERATION_SET, which is stored on GenerationJdk, with GENERATION_IMMUTABLE_SET, which is stored on GenerationEc.

As a small example, the following code shows the transformation:

// BEFORE
// Counting with Predicate -> Count of Generation instances that match
long count = GENERATION_SET.stream()
        .filter(generation -> generation.contains(1995))
        .count();

// AFTER
// Counting with Predicate -> Count of Generation instances that match
long count = GENERATION_IMMUTABLE_SET.stream()
        .filter(generation -> generation.contains(1995))
        .count();

After the search and replace in the test, we run all of the methods and see that they all still pass.

Now we will continue refactoring each of the method categories included in this test class.

Refactoring Counting 🧮

The first category of methods we will refactor are counting methods.

JDK Collections / Streams

/**
 * There are two use cases for counting we will explore.
 * <ol>
 * <li>Counting with a Predicate -> return is a primitive value</li>
 * <li>Counting by a Function -> return is a Map<Integer, Long></li>
 * </ol>
 */
@Test
public void counting() // 🧮
{
    // Counting with Predicate -> Count of Generation instances that match
    long count = GENERATION_IMMUTABLE_SET.stream()
            .filter(generation -> generation.contains(1995))
            .count();

    assertEquals(1L, count);

    // Counting by a Function -> Number of years in a Generation ->
    // Count of Generations
    Map<Integer, Long> generationCountByYears =
            GENERATION_IMMUTABLE_SET.stream()
                    .collect(Collectors.groupingBy(Generation::numberOfYears,
                            Collectors.counting()));

    var expected = new HashMap<>();
    expected.put(17, 2L);
    expected.put(16, 3L);
    expected.put(19, 1L);
    expected.put(18, 2L);
    expected.put(23, 1L);
    expected.put(27, 1L);
    expected.put(1843, 1L);
    assertEquals(expected, generationCountByYears);
    assertNull(generationCountByYears.get(30));
}

Refactoring Counting to Eclipse Collections

@Test
public void counting() // 🧮
{
    // Counting with Predicate -> Count of Generation instances that match
    int count = GENERATION_IMMUTABLE_SET
            .count(generation -> generation.contains(1995));

    assertEquals(1, count);

    // Counting by a Function -> Number of years in a Generation ->
    // Count of Generations
    ImmutableBag<Integer> generationCountByYears =
            GENERATION_IMMUTABLE_SET.countBy(Generation::numberOfYears);

    var expected = Bags.mutable.withOccurrences(17, 2)
            .withOccurrences(16, 3)
            .withOccurrences(19, 1)
            .withOccurrences(18, 2)
            .withOccurrences(23, 1)
            .withOccurrences(27, 1)
            .withOccurrences(1843, 1);
    assertEquals(expected, generationCountByYears);
    assertEquals(0, generationCountByYears.occurrencesOf(30));
}

Lessons Learned from Counting

Using Java Stream to count first requires you to learn how to use filter. The method count() on Stream returns a long and takes no parameters. It returns the size of the Stream.

With Eclipse Collections, the count method takes a Predicate as a parameter, and counts the elements that match the Predicate.

Notice that the bun methods disappear here. Eclipse Collections gets to the point immediately. We are using count or countBy. These are active verbs, not gerunds. They do not require bun methods like stream and collect. These methods are available directly on the collections themselves. Both of these methods are eager, not lazy. They have a specific terminal result at the end of computation (int or Bag).

A Stream will return a long for a count, because a Stream can be sourced from things other than collections (e.g., files). Collection types in Java have a max size of int. In the case of Eclipse Collections, the only things the library deals with are collections, so the result of count will never be bigger than the max size of a collection, which is int.

The less obvious thing that is happening here is the covariant nature of countBy and other methods on Eclipse Collections collection types. When a collection type is returned from a method, the source collection determines the result type. In the case of an ImmutableSet<Generation>, which is the type of GENERATION_IMMUTABLE_SET, the result type for countBy is an ImmutableBag<Integer>. The Map returned by the Stream version of the code is not immutable, but you wouldn’t know that from the interface named Map, because it can’t tell you.

Lastly, a Bag is a safer data structure to return than a Map for countBy. This is because a Map will return null for missing keys, whereas a Bag knows it is a counter, so it will return 0 for missing keys when occurrencesOf is used.

Refactoring Testing 🧪

The next category of methods we will refactor are testing methods. A testing method will always return a boolean result.

JDK Collections / Streams

/**
 * Testing methods return a boolean. We will explore three testing methods.
 * Testing methods are always eager, but can often short-circuit execution,
 * meaning they don't have to visit all elements of the collection if the
 * condition is met.
 * <ol>
 * <li>Stream.anyMatch(Predicate) -> RichIterable.anySatisfy(Predicate)</li>
 * <li>Stream.allMatch(Predicate) -> RichIterable.allSatisfy(Predicate)</li>
 * <li>Stream.noneMatch(Predicate) -> RichIterable.noneSatisfy(Predicate)</li>
 * </ol>
 */
@Test
public void testing() // 🧪
{
    assertTrue(GENERATION_IMMUTABLE_SET.stream()
            .anyMatch(generation -> generation.contains(1995)));
    assertFalse(GENERATION_IMMUTABLE_SET.stream()
            .allMatch(generation -> generation.contains(1995)));
    assertFalse(GENERATION_IMMUTABLE_SET.stream()
            .noneMatch(generation -> generation.contains(1995)));
}

Refactoring Testing to Eclipse Collections

@Test
public void testing() // 🧪
{
    assertTrue(GENERATION_IMMUTABLE_SET
            .anySatisfy(generation -> generation.contains(1995)));
    assertFalse(GENERATION_IMMUTABLE_SET
            .allSatisfy(generation -> generation.contains(1995)));
    assertFalse(GENERATION_IMMUTABLE_SET
            .noneSatisfy(generation -> generation.contains(1995)));
}

Lessons Learned from Testing

There are other methods for testing that we did not cover in this refactoring. Examples are contains, isEmpty, notEmpty, containsBy, containsAll, containsAny, containsNone.

The simple pattern to remember when refactoring anyMatch/allMatch/noneMatch is that the suffix Match in the JDK becomes Satisfy in Eclipse Collections. The biggest difference is that the call to stream is removed, as it is unnecessary. The methods are available directly on the collections themselves in Eclipse Collections.
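A few of the additional testing methods mentioned above can be sketched in a line each (the assertions are illustrative, not from the test class):

assertTrue(GENERATION_IMMUTABLE_SET.contains(MILLENNIAL));
assertTrue(GENERATION_IMMUTABLE_SET.notEmpty());
assertTrue(GENERATION_IMMUTABLE_SET.containsBy(Generation::getName, "Millennials"));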

Refactoring Finding 🔎

The next category of methods are finding methods. A finding method is one that returns an element of the collection. There are methods that can search for elements based on Predicate or Function.

JDK Collections / Streams

/**
 * Finding methods return some element of a collection. Finding methods are
 * always eager.
 * <ol>
 * <li>Stream.filter(Predicate).findFirst() -> RichIterable.detect(Predicate) / detectOptional(Predicate)</li>
 * <li>Collectors.maxBy(Comparator) -> RichIterable.maxBy(Function)</li>
 * <li>Collectors.minBy(Comparator) -> RichIterable.minBy(Function)</li>
 * <li>Stream.filter(Predicate.not()) -> RichIterable.reject(Predicate)</li>
 * </ol>
 */
@Test
public void finding() // 🔎
{
    Generation findFirst =
            GENERATION_IMMUTABLE_SET.stream()
                    .filter(generation -> generation.contains(1995))
                    .findFirst()
                    .orElse(null);

    assertEquals(MILLENNIAL, findFirst);

    Generation notFound =
            GENERATION_IMMUTABLE_SET.stream()
                    .filter(generation -> generation.contains(1795))
                    .findFirst()
                    .orElse(UNCLASSIFIED);

    assertEquals(UNCLASSIFIED, notFound);

    List<Generation> generationsNotUnclassified =
            Stream.of(Generation.values())
                    .filter(gen -> !gen.equals(UNCLASSIFIED))
                    .toList();

    Generation maxByYears =
            generationsNotUnclassified.stream()
                    .collect(Collectors.maxBy(
                            Comparator.comparing(Generation::numberOfYears)))
                    .orElse(null);
    assertEquals(GREATEST, maxByYears);

    Generation minByYears =
            generationsNotUnclassified.stream()
                    .collect(Collectors.minBy(
                            Comparator.comparing(Generation::numberOfYears)))
                    .orElse(null);
    assertEquals(X, minByYears);
}

Refactoring Finding to Eclipse Collections

@Test
public void finding() // 🔎
{
    Generation findFirst = GENERATION_IMMUTABLE_SET
            .detect(generation -> generation.contains(1995));

    assertEquals(MILLENNIAL, findFirst);

    Generation notFound = GENERATION_IMMUTABLE_SET
            .detectIfNone(
                    generation -> generation.contains(1795),
                    () -> UNCLASSIFIED);

    assertEquals(UNCLASSIFIED, notFound);

    MutableList<Generation> generationsNotUnclassified =
            ArrayAdapter.adapt(Generation.values())
                    .reject(gen -> gen.equals(UNCLASSIFIED));

    Generation maxByYears =
            generationsNotUnclassified.maxBy(Generation::numberOfYears);
    assertEquals(GREATEST, maxByYears);

    Generation minByYears =
            generationsNotUnclassified.minBy(Generation::numberOfYears);
    assertEquals(X, minByYears);
}

Lessons Learned from Finding

Again, we see that finding in the JDK is dependent on the method filter. The method findFirst is terminal in the JDK and takes no parameters. It returns an Optional, which we then have to query to see if something was actually returned from the call to filter. We write cases for when something is found, and for when something is not found.

The Eclipse Collections detect method takes a Predicate as a parameter, and returns a found element, or null if nothing is found. If we want to protect against the null return case, we can use detectIfNone, which takes a Predicate and a Function0 as parameters. The Function0 is evaluated in the case where nothing is found.

We see that the filter method has no equivalent of a filterNot. Instead, we have to negate a Predicate using an ! in the lambda, or we could wrap a Predicate in a call to Predicate.not().

Eclipse Collections has a method named reject that filters exclusively. As we will see in the next category (filtering), Eclipse Collections also has a method named select which filters inclusively.
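For completeness, here is a sketch of the two JDK negation styles just mentioned, side by side:

// Negating inside the lambda
List<Generation> withBang = Stream.of(Generation.values())
        .filter(gen -> !gen.equals(UNCLASSIFIED))
        .toList();

// Wrapping the Predicate in a call to Predicate.not()
List<Generation> withNot = Stream.of(Generation.values())
        .filter(Predicate.not(gen -> gen.equals(UNCLASSIFIED)))
        .toList();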

Refactoring Filtering 🚰

The filtering category includes methods like filter and partition. In Eclipse Collections, the method names are select (inclusive filter), reject (exclusive filter), and partition (one-pass select and reject).

JDK Collections / Streams

/**
 * Filtering methods return another Stream or Collection based on a Predicate.
 * Filtering can be eager or lazy. We will explore three filtering methods.
 * <ol>
 * <li>Stream.filter(Predicate) -> RichIterable.select(Predicate)</li>
 * <li>Stream.filter(Predicate.not()) -> RichIterable.reject(Predicate)</li>
 * <li>Collectors.partitioningBy(Predicate) -> RichIterable.partition(Predicate)</li>
 * </ol>
 */
@Test
public void filtering() // 🚰
{
    Set<Generation> filteredSelected =
            GENERATION_IMMUTABLE_SET.stream()
                    .filter(generation -> generation.yearsCountEqualsJdk(16))
                    .collect(Collectors.toUnmodifiableSet());

    var expectedSelected = Set.of(X, MILLENNIAL, Z);
    assertEquals(expectedSelected, filteredSelected);

    Set<Generation> filteredRejected =
            GENERATION_IMMUTABLE_SET.stream()
                    .filter(generation -> !generation.yearsCountEqualsJdk(16))
                    .collect(Collectors.toUnmodifiableSet());

    var expectedRejected = Sets.mutable.with(
            ALPHA, UNCLASSIFIED, BOOMER, GREATEST, LOST,
            MISSIONARY, PROGRESSIVE, SILENT);
    assertEquals(expectedRejected, filteredRejected);

    Map<Boolean, Set<Generation>> partition = GENERATION_IMMUTABLE_SET.stream()
            .collect(Collectors.partitioningBy(
                    generation -> generation.yearsCountEqualsJdk(16),
                    Collectors.toUnmodifiableSet()));

    assertEquals(expectedSelected, partition.get(Boolean.TRUE));
    assertEquals(expectedRejected, partition.get(Boolean.FALSE));
}

Refactoring Filtering to Eclipse Collections

@Test
public void filtering() // 🚰
{
    ImmutableSet<Generation> filteredSelected =
            GENERATION_IMMUTABLE_SET
                    .select(generation -> generation.yearsCountEqualsJdk(16));

    var expectedSelected = Set.of(X, MILLENNIAL, Z);
    assertEquals(expectedSelected, filteredSelected);

    ImmutableSet<Generation> filteredRejected =
            GENERATION_IMMUTABLE_SET
                    .reject(generation -> generation.yearsCountEqualsJdk(16));

    var expectedRejected = Sets.mutable.with(
            ALPHA, UNCLASSIFIED, BOOMER, GREATEST, LOST,
            MISSIONARY, PROGRESSIVE, SILENT);
    assertEquals(expectedRejected, filteredRejected);

    PartitionImmutableSet<Generation> partition = GENERATION_IMMUTABLE_SET
            .partition(generation -> generation.yearsCountEqualsJdk(16));

    assertEquals(expectedSelected, partition.getSelected());
    assertEquals(expectedRejected, partition.getRejected());
}

Lessons Learned from Filtering

While the name filtering makes sense for a method category, the name filter is ambiguous as a method. It is not clear by the name alone whether the method is meant to be an inclusive or exclusive filter. The methods select and reject in Eclipse Collections disambiguate through their names.

The method partition in Eclipse Collections returns a special type, in this case a PartitionImmutableSet. Again, we see that methods in EC are covariant, and return specialized types based on the source type.

The filtering methods on Eclipse Collections collection types are all eager. If we want lazy versions of the methods, we can call asLazy() first, and then we will have to do something similar to Java Stream and call a terminal method like toList(). There are many more methods available on LazyIterable than on Stream, as LazyIterable extends RichIterable.
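For example, the eager select from the test above could be made lazy as follows (a sketch; nothing is evaluated until the terminal toList() call):

MutableList<Generation> lazySelected =
        GENERATION_IMMUTABLE_SET.asLazy()
                .select(generation -> generation.yearsCountEqualsJdk(16))
                .toList();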

Now, to address the return type of Map<Boolean, Set<Generation>> from the Collectors.partitioningBy() method. It is difficult (although not impossible) to think of a worse return type for this method. A Map<Boolean, Anything> is a bad idea. I think it is so bad that Eclipse Collections primitive maps do not support BooleanToAnythingMaps. We explicitly decided not to support these types. There are much better alternatives, like using a Record with explicit names, or introducing a specific type as we did in Eclipse Collections with PartitionIterable. If you want me to explain more about why Map<Boolean, Anything> is bad, there is a blog for that, with the title “Map-Oriented Programming in Java.” Enjoy!

Map-Oriented Programming in Java

Refactoring Grouping 🏘️

The grouping category was limited to just groupBy in this talk. There are other methods that are categorized as grouping in Eclipse Collections. You can see the full list of EC methods included in the grouping category in the slide above with the Custom Code Folding Regions demonstrated in IntelliJ.

JDK Collections / Streams

/**
 * Grouping methods return a Map with some key calculated by a Function and
 * the values contained in a Collection. We will explore one grouping method.
 *
 * <ol>
 * <li>Collectors.groupingBy(Function) -> RichIterable.groupBy(Function)</li>
 * </ol>
 */
@Test
public void grouping() // 🏘️
{
    Map<Integer, Set<Generation>> generationByYears =
            GENERATION_IMMUTABLE_SET.stream()
                    .collect(Collectors.groupingBy(
                            Generation::numberOfYears,
                            Collectors.toSet()));

    var expected = new HashMap<>();
    expected.put(17, Set.of(ALPHA, PROGRESSIVE));
    expected.put(16, Set.of(X, MILLENNIAL, Z));
    expected.put(19, Set.of(BOOMER));
    expected.put(18, Set.of(SILENT, LOST));
    expected.put(23, Set.of(MISSIONARY));
    expected.put(27, Set.of(GREATEST));
    expected.put(1843, Set.of(UNCLASSIFIED));

    assertEquals(expected, generationByYears);
    assertNull(generationByYears.get(30));
}

Refactoring Grouping to Eclipse Collections

@Test
public void grouping() // 🏘️
{
    ImmutableSetMultimap<Integer, Generation> generationByYears =
            GENERATION_IMMUTABLE_SET.groupBy(Generation::numberOfYears);

    var expected = Multimaps.immutable.set.empty()
            .newWithAll(17, Set.of(ALPHA, PROGRESSIVE))
            .newWithAll(16, Set.of(X, MILLENNIAL, Z))
            .newWithAll(19, Set.of(BOOMER))
            .newWithAll(18, Set.of(SILENT, LOST))
            .newWithAll(23, Set.of(MISSIONARY))
            .newWithAll(27, Set.of(GREATEST))
            .newWithAll(1843, Set.of(UNCLASSIFIED));

    assertEquals(expected, generationByYears);
    assertTrue(generationByYears.get(30).isEmpty());
}

Lessons Learned from Grouping

I will refer you to the blog on Map-Oriented Programming in Java again. The groupBy method in Eclipse Collections returns a special type called Multimap. A Multimap is a collection type that knows its value types are some type of collection. A Multimap can gracefully handle a sparsely populated data set, by returning an empty collection when a key is missing. A Map will return null for missing keys. The test case illustrates this.

We see yet again that the groupBy method is covariant on Eclipse Collections types. An ImmutableSet returns an ImmutableSetMultimap when calling groupBy on it.

Creating a Multimap is more involved than creating other types. We use the Multimaps factory class here and choose immutable and set to refine the Multimap type we want down to an ImmutableSetMultimap. If you go to the first paragraph of this blog, you will find a link to the slides for our talk, which include a slide explaining all of the combinations of Eclipse Collections factories.

Refactoring Converting 🔌

The category of converting includes 29 methods in Eclipse Collections. We only cover the toList and toImmutableList converter methods in this talk. The converter methods in the JDK are limited to toList on Stream, and a bunch of toXyz methods on Collectors.
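To give a feel for the breadth in Eclipse Collections, here are a few converter methods available on any RichIterable (a sketch, not code from the talk):

MutableList<Generation> list = GENERATION_IMMUTABLE_SET.toList();
MutableSet<Generation> set = GENERATION_IMMUTABLE_SET.toSet();
MutableBag<Generation> bag = GENERATION_IMMUTABLE_SET.toBag();
MutableList<Generation> sortedByName =
        GENERATION_IMMUTABLE_SET.toSortedListBy(Generation::getName);
ImmutableList<Generation> immutableList = GENERATION_IMMUTABLE_SET.toImmutableList();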

JDK Collections / Streams

/**
 * Converting methods convert from a source Collection type to a target
 * Collection type. Converting methods in both Java and Eclipse Collections
 * usually have a prefix of "to". We'll explore a few converting methods
 * in this test.
 * <ol>
 * <li>Collectors.toList() -> RichIterable.toList()</li>
 * <li>Stream.toList() -> RichIterable.toImmutableList()</li>
 * </ol>
 */
@Test
public void converting() // 🔌
{
    List<Generation> mutableList =
            GENERATION_IMMUTABLE_SET.stream()
                    .collect(Collectors.toList());
    List<Generation> immutableList =
            GENERATION_IMMUTABLE_SET.stream()
                    .toList();

    List<Generation> sortedMutableList =
            mutableList.stream()
                    .sorted(Comparator.comparing(
                            gen -> gen.yearsStream().findFirst().getAsInt()))
                    .collect(Collectors.toList());

    var expected = Lists.mutable.with(values());
    assertEquals(expected, sortedMutableList);

    List<Generation> sortedImmutableList =
            immutableList.stream()
                    .sorted(Comparator.comparing(
                            gen -> gen.yearsStream().findFirst().getAsInt()))
                    .toList();
    assertEquals(expected, sortedImmutableList);
}

Refactoring Converting to Eclipse Collections

@Test
public void converting() // 🔌
{
    MutableList<Generation> mutableList =
            GENERATION_IMMUTABLE_SET.toList();
    ImmutableList<Generation> immutableList =
            GENERATION_IMMUTABLE_SET.toImmutableList();

    MutableList<Generation> sortedMutableList =
            mutableList.toSortedListBy(
                    gen -> gen.yearsInterval().getFirst());

    var expected = Lists.mutable.with(values());
    assertEquals(expected, sortedMutableList);

    ImmutableList<Generation> sortedImmutableList =
            immutableList.toImmutableSortedListBy(
                    gen -> gen.yearsInterval().getFirst());
    assertEquals(expected, sortedImmutableList);
}

Lessons Learned from Converting

The methods for converting from one collection type to another are extremely helpful. They are also extremely limited on the Stream interface. It is confusing that the method named toList on Collectors does not return the same type as the method named toList on Stream.

While we limited the converting category to methods for converting to mutable and immutable Lists, the following blog shows the large number of potential targets for converting methods prefixed with to in Eclipse Collections.

Converter methods in Eclipse Collections

Refactoring Transforming 🦋

The transforming category includes methods like JDK map and EC collect. These methods transform the element type of a collection to a different type (e.g. Generation -> String).

JDK Collections / Streams

/**
 * Transforming methods convert the elements of a collection to another type by
 * applying a Function to each element. We'll explore the following methods.
 *
 * <ol>
 * <li>Stream.map() -> RichIterable.collect()</li>
 * <li>Collectors.toUnmodifiableSet() -> ???</li>
 * </ol>
 *
 * Note: Certain methods on RichIterable are covariant, so return a type that
 * makes sense for the source type.
 * Hint: If we collect on an ImmutableSet, the return type is an ImmutableSet.
 */
@Test
public void transforming() // 🦋
{
    Set<String> names =
            GENERATION_IMMUTABLE_SET.stream()
                    .map(Generation::getName)
                    .collect(Collectors.toUnmodifiableSet());

    var expected = Sets.immutable.with(
            "Unclassified", "Greatest Generation", "Lost Generation", "Millennials",
            "Generation X", "Baby Boomers", "Generation Z", "Silent Generation",
            "Progressive Generation", "Generation Alpha", "Missionary Generation");
    assertEquals(expected, names);

    Set<String> mutableNames = names.stream()
            .collect(Collectors.toSet());
    assertEquals(expected, mutableNames);
}

Refactoring Transforming to Eclipse Collections

@Test
public void transforming() // 🦋
{
    ImmutableSet<String> names =
            GENERATION_IMMUTABLE_SET.collect(Generation::getName);

    var expected = Sets.immutable.with(
            "Unclassified", "Greatest Generation", "Lost Generation", "Millennials",
            "Generation X", "Baby Boomers", "Generation Z", "Silent Generation",
            "Progressive Generation", "Generation Alpha", "Missionary Generation");
    assertEquals(expected, names);

    MutableSet<String> mutableNames = names.toSet();
    assertEquals(expected, mutableNames);
}

Lessons Learned from Transforming

We see that the collect method in Eclipse Collections, like select, reject, partition, countBy, and groupBy, is covariant. Using collect on an ImmutableSet returns an ImmutableSet. The collect method is the equivalent of map on the JDK Stream type. It is not the same as the collect method on the Stream type.

The following section on collect from the “Eclipse Collections Categorically” book explains the difference between collect on Stream and collect in Eclipse Collections.

Explaining the difference between the method named collect in Eclipse Collections, and collect on Java Stream

Refactoring Chunking 🖖

The chunking category could also be folded into the grouping category. We differentiated it in our talk because the capability of chunking was added as a method named windowFixed on the new Gatherers type in Java. The method that provides the same behavior as windowFixed in Eclipse Collections is simply named chunk.

Note: The hand emoji above reminded me of taking a collection of five fingers and chunking them by two each. This leaves three chunks, with 2, 2, 1 fingers.

JDK Collections / Streams

/**
 * Chunking is a kind of grouping method, but for our purposes we will put
 * the methods in their own category. Chunking is great for breaking
 * collections into smaller collections based on a size parameter.
 * We'll explore the following methods.
 *
 * <ol>
 * <li>Stream.gather(Gatherers.windowFixed()) -> RichIterable.chunk()</li>
 * <li>Collectors.joining() -> RichIterable.makeString()</li>
 * </ol>
 */
@Test
public void chunking() // 🖖
{
    Stream<List<Generation>> windowFixedGenerations =
            GenerationJdk.windowFixedGenerations(3);
    String generationsAsString = windowFixedGenerations.map(Object::toString)
            .collect(Collectors.joining(", "));

    String expected = """
            [UNCLASSIFIED, PROGRESSIVE, MISSIONARY], [LOST, GREATEST, SILENT], \
            [BOOMER, X, MILLENNIAL], [Z, ALPHA]""";

    assertEquals(expected, generationsAsString);

    String yearsAsString = MILLENNIAL.yearsStream()
            .boxed()
            .gather(Gatherers.windowFixed(4))
            .map(Object::toString)
            .collect(Collectors.joining(", "));

    String expectedYears = """
            [1981, 1982, 1983, 1984], [1985, 1986, 1987, 1988], \
            [1989, 1990, 1991, 1992], [1993, 1994, 1995, 1996]""";
    assertEquals(expectedYears, yearsAsString);
}

The additional code to explore is in GenerationJdk.

public static Stream<List<Generation>> windowFixedGenerations(int size)
{
    return Arrays.stream(Generation.values())
            .gather(Gatherers.windowFixed(size));
}

Refactoring Chunking to Eclipse Collections

@Test
public void chunking() // 🖖
{
    RichIterable<RichIterable<Generation>> chunkedGenerations =
            GenerationEc.chunkGenerations(3);
    String generationsAsString = chunkedGenerations.makeString(", ");

    String expected = """
            [UNCLASSIFIED, PROGRESSIVE, MISSIONARY], [LOST, GREATEST, SILENT], \
            [BOOMER, X, MILLENNIAL], [Z, ALPHA]""";

    assertEquals(expected, generationsAsString);

    String yearsAsString = MILLENNIAL.yearsInterval()
            .chunk(4)
            .makeString(", ");

    String expectedYears = """
            [1981, 1982, 1983, 1984], [1985, 1986, 1987, 1988], \
            [1989, 1990, 1991, 1992], [1993, 1994, 1995, 1996]""";
    assertEquals(expectedYears, yearsAsString);
}

The additional code to explore is in GenerationEc.

public static RichIterable<RichIterable<Generation>> chunkGenerations(int size)
{
    return ArrayAdapter.adapt(Generation.values())
            .asLazy()
            .chunk(size);
}

Lessons Learned from Chunking

This is the first time we used Gatherers in this talk. The first thing we notice about the gather method on Stream is that there is no equivalent of gather on IntStream, LongStream, or DoubleStream. The chunk method, on the other hand, is available for both object and primitive collections in Eclipse Collections.

The method named chunk is available as an eager method directly on collections, and also lazily via a call to asLazy. The code could be changed to be eager as follows, but there would be a slight performance hit because a temporary collection would be created as a result.

public static RichIterable<RichIterable<Generation>> chunkGenerations(int size)
{
    return ArrayAdapter.adapt(Generation.values())
            .chunk(size);
}

Notice how the return type of chunk is still RichIterable<RichIterable<Generation>> when we remove the call to asLazy. This is because a LazyIterable is a RichIterable, and an ImmutableSet is also a RichIterable. They behave differently for certain methods, but have a consistent API.

Refactoring Folding 🪭

The folding category is actually called aggregating in Eclipse Collections. For this talk, we separated it out as a category to explain the fold method on the Gatherers class in the JDK. The method that is equivalent to fold in Eclipse Collections is called injectInto.

JDK Collections / Streams

/**
 * Folding is a mechanism for reducing a type to some new result type.
 * We'll explore folding to calculate a min, max, and sum.
 * Methods we'll cover:
 * <ol>
 * <li>Stream.gather(Gatherers.fold()) -> RichIterable.injectInto()</li>
 * </ol>
 */
@Test
public void folding() // 🪭
{
    Integer maxYears = GenerationJdk.fold(
            Integer.MIN_VALUE,
            (Integer value, Generation generation) ->
                    Math.max(value, generation.numberOfYears()));

    Integer minYears = GenerationJdk.fold(
            Integer.MAX_VALUE,
            (Integer value, Generation generation) ->
                    Math.min(value, generation.numberOfYears()));

    Integer sumYears = GenerationJdk.fold(
            Integer.valueOf(0),
            (Integer value, Generation generation) ->
                    Integer.sum(value, generation.numberOfYears()));

    assertEquals(1843, maxYears);
    assertEquals(16, minYears);
    assertEquals(2030, sumYears);
}

The additional code to explore is in GenerationJdk.fold().

public static <IV> IV fold(IV value, BiFunction<IV, Generation, IV> function)
{
    return GENERATION_SET.stream()
            .gather(Gatherers.fold(() -> value, function))
            .findFirst()
            .orElse(value);
}

Refactoring Folding to Eclipse Collections

@Test
public void folding() // 🪭
{
    Integer maxYears = GenerationEc.fold(
            Integer.MIN_VALUE,
            (Integer value, Generation generation) ->
                    Math.max(value, generation.numberOfYears()));

    Integer minYears = GenerationEc.fold(
            Integer.MAX_VALUE,
            (Integer value, Generation generation) ->
                    Math.min(value, generation.numberOfYears()));

    Integer sumYears = GenerationEc.fold(
            Integer.valueOf(0),
            (Integer value, Generation generation) ->
                    Integer.sum(value, generation.numberOfYears()));

    assertEquals(1843, maxYears);
    assertEquals(16, minYears);
    assertEquals(2030, sumYears);
}

The additional code to explore is in GenerationEc.fold().

public static <IV> IV fold(IV value, Function2<IV, Generation, IV> function)
{
    return GENERATION_IMMUTABLE_SET.injectInto(value, function);
}

Lessons Learned from Folding

The approach taken for folding in the JDK is unnecessarily convoluted. If we compare fold and injectInto next to each other, this will be clearer.

// JDK fold
public static <IV> IV fold(IV value, BiFunction<IV, Generation, IV> function)
{
    return GENERATION_SET.stream()
            .gather(Gatherers.fold(() -> value, function))
            .findFirst()
            .orElse(value);
}

// EC injectInto
public static <IV> IV fold(IV value, Function2<IV, Generation, IV> function)
{
    return GENERATION_IMMUTABLE_SET.injectInto(value, function);
}

The methods fold and injectInto are hard enough to explain, without adding the overhead of Stream, Gatherers, and Optional into the mix.

The following blog explains the method injectInto in more detail. I refer to injectInto as the “Continuum Transfunctioner.” Read the following blog to find out why.

Eclipse Collections by Example: InjectInto

Refactoring a Conclusion

After having given a 75-minute talk at dev2next, and then turning the talk into a blog where I repeat the live refactoring that Vlad and I did in front of an audience, there is very little left for me to say. There is a lot to digest in this blog. I dare say this is probably the longest blog I have ever written.

I will simply leave you with our takeaways slide from the talk, and an important section of the book “Eclipse Collections Categorically.”

You get what you settle for, so don’t settle for less than you expect

Note: The following is an excerpt from Chapter one of the book, “Eclipse Collections Categorically.” This section of Chapter one is available in the online reading sample for the book on Amazon.

Tell your collections what you want them to do for you. Don’t ask for their data and do it yourself.

I hope you enjoyed reliving the talk Vlad and I gave at dev2next, titled “Refactoring to Eclipse Collections.” I enjoyed writing it, and will see if I can go back and make improvements over time. This blog will hopefully be a good resource for folks seeking to build or reinforce a set of basic skills across several method categories for Eclipse Collections. This blog isn’t as comprehensive as the book I just wrote, but it should be a good starter for what you might have been missing just using Java Collections and Streams for the past 21 years.

Thanks for reading!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at October 06, 2025 02:58 AM

Key Highlights from the 2025 Jakarta EE Developer Survey Report

by Tatjana Obradovic at October 01, 2025 02:09 PM


The results are in! The State of Enterprise Java: 2025 Jakarta EE Developer Survey Report has just been released, offering the industry’s most comprehensive look at the state of enterprise Java. Now in its eighth year, the report captures the perspectives of more than 1700 developers, architects, and decision-makers, a 20% increase in participation compared to 2024.

The survey results give us insight into Jakarta EE’s role as the leading framework for building modern, cloud native Java applications. With the release of Jakarta EE 11, the community’s commitment to modernisation is clear, and adoption trends confirm its central role in shaping the future of enterprise Java. Here are a few of the major findings from this year’s report: 

Jakarta EE Adoption Surpasses Spring

For the first time, more developers reported using Jakarta EE (58%) than Spring (56%). This clearly indicates growing awareness that Jakarta EE provides the foundation for popular frameworks like Spring. This milestone underscores Jakarta EE’s momentum and the community’s confidence in its role as the foundation for enterprise Java in the cloud era. 

Rapid Uptake of Jakarta EE 11

Released earlier this year, Jakarta EE 11 has already been adopted by 18% of respondents. Thanks to its staged release model, with Core and Web Profiles first, followed by the full platform release, developers are migrating faster than ever from older versions.

Shifts in Java SE Versions

The community continues to embrace newer Java versions. Java 21 adoption leapt to 43%, up from 30% in 2024, while older versions like Java 8 and 17 declined. Interestingly, Java 11 showed a rebound at 37%, signaling that organisations continue to balance modernisation with stability.

Cloud Migration Strategies Evolve

While lift-and-shift (22%) remains the dominant approach, developers are increasingly exploring modernisation paths. Strategies include gradual migration with microservices (14%), modernising apps to leverage cloud-native features (14%), and full cloud-native builds (14%). At the same time, 20% remain uncertain, highlighting a need for clear guidance in this complex journey.

Community Priorities

Survey respondents reaffirmed priorities around cloud native readiness and faster specification adoption, while also emphasising innovation and strong alignment with Java SE.

Why This Matters

These findings highlight not only Jakarta EE’s accelerating momentum but also the vibrant role the community plays in steering its evolution. With enterprise Java powering mission-critical systems across industries, the insights from this survey provide a roadmap for organisations modernising their applications in an increasingly cloud native world.

A Call to the Community

The Jakarta EE Developer Survey continues to serve as a vital barometer of the ecosystem. Whether you’re a developer, architect, or enterprise decision-maker, now is the perfect time to get involved.

With the Jakarta EE Working Group already preparing for the next release, including new cloud native capabilities, the momentum is undeniable. Together, we are building the future of enterprise Java.

Tatjana Obradovic

by Tatjana Obradovic at October 01, 2025 02:09 PM

Welcome Sonnet 4.5 to Theia AI (and Theia IDE)!

by Jonas, Maximilian & Philip at October 01, 2025 12:00 AM

Developers and tool builders can use Anthropic’s Sonnet 4.5 directly in Theia AI and the AI-powered Theia IDE, without any additional glue code. Just add "sonnet-4.5" to your model list in your …

The post Welcome Sonnet 4.5 to Theia AI (and Theia IDE)! appeared first on EclipseSource.


by Jonas, Maximilian & Philip at October 01, 2025 12:00 AM

Testing and developing SWT on GTK

by Jonah Graham at September 30, 2025 03:21 PM

I have recently started working on improved support for GTK4 in SWT, and I have been trying to untangle the various options that affect SWT + GTK and how everything fits together.

Environment Variables

These are key environment variables that control where and how SWT draws in GTK land.

  • SWT_GTK4: If this is set to 1 then SWT will attempt to use GTK4 libraries
  • GDK_BACKEND: Which backend the GDK layer (a layer below GTK) uses to draw. Can be set to x11 or wayland.
  • DISPLAY: when GDK_BACKEND is x11, controls which display the program is drawn on.

If SWT_GTK4 or GDK_BACKEND is set to a value that is not supported, then generally the code gracefully falls back to the other value. For example, setting SWT_GTK4=1 without GTK4 libraries will attempt to load GTK3 libraries.

If DISPLAY is set to an invalid value, you will generally get a org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed] exception (although there are other reasons you can get that exception).

GDK_BACKEND is often set in unexpected places. For example, on my machine I often find GDK_BACKEND set to x11, even though I have not requested that. Other tools, such as VSCode, may force GDK_BACKEND to a particular value depending on the circumstances. Therefore I recommend being explicit/careful with GDK_BACKEND to ensure that SWT is using the backend you expect.
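When in doubt, it can help to print what the JVM actually sees before SWT initialises; a trivial sketch (my own, not part of SWT):

public final class EnvCheck
{
    public static void main(String[] args)
    {
        // Print the variables that influence SWT's GTK backend selection
        for (String name : new String[] {"SWT_GTK4", "GDK_BACKEND", "DISPLAY"})
        {
            System.out.println(name + "=" + System.getenv(name));
        }
    }
}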

X11 and Wayland

When Wayland is in use, and GDK_BACKEND=x11, then Xwayland is used to bridge the gap between an application written to use X11 and the user’s display. Sometimes the behaviour of Xwayland and its interactions can differ from using a machine with X as the real display. To test this you may want to install a machine (or VM) with a distro that uses X11 natively, such as Xubuntu. The alternative is to use a VNC server (see below section).

X11 VNC Server

Rather than installing a VM or otherwise setting up a different machine, you can use a VNC server running an X server. This gives a mostly accurate X11 experience, and has the added benefit of maintaining its own focus and drawing, allowing X11 tests to run without interrupting your development environment.

In the past I have recommended using Xvfb, as documented in CDT’s testing manual. However, for my current SWT development I have used TigerVNC so I can see and interact with the window under test.

When I was experimenting with setting this up, I seem to have accidentally changed my Ubuntu theme. I was doing a bunch of experimenting, so I’m not sure what exactly I did. I have included the steps I believe are necessary below, but I may have edited out an important step – if so, please comment below and I can update the document.

These are the steps to set up and configure TigerVNC that worked for me on my Ubuntu 25.04 machine:

  • sudo apt install tigervnc-standalone-server tigervnc-common Install the VNC tools
  • sudo apt install xfce4 xfce4-goodies Install an X11 based window manager and basic tools (there are probably some more minimal sets of things that could be installed here)
  • vncpasswd Configure VNC with a password access control
  • sudo vi /etc/X11/Xtigervnc-session Edit how the X11 session is started. I found that the default didn’t work well, probably because xfce4 was not the only thing installed on my machine and the Xsession script didn’t quite know what to do. The exec /etc/X11/Xsession "$@" line didn’t launch successfully, so I replaced it with these lines:
    unset SESSION_MANAGER
    unset DBUS_SESSION_BUS_ADDRESS
    exec startxfce4
    The SESSION_MANAGER and DBUS_SESSION_BUS_ADDRESS are unset because I wanted to keep this session independent of other things running on my machine and I was getting errors without them unset.
  • vncserver :99 Start the VNC server – adjust :99 for the display you want to use; set the DISPLAY environment variable to :99 in this case.
  • xtigervncviewer -SecurityTypes VncAuth -passwd /tmp/pathhere/passwd :99 Start the viewer, use the command that vncserver output as part of its startup

Wayland Remote connection

I have not had the opportunity to use this much yet, but recent Ubuntu machines come with desktop sharing over RDP, based on gnome-remote-desktop. This should allow connecting to an Ubuntu machine and using Wayland remotely. Enable it from Settings -> System -> Remote Desktop and connect to the machine using Remote Desktop.

What to test?

Now that I am developing SWT, specifically targeting GTK4 work, there are different configurations of the above to test. My primary focus is to test:

  • SWT_GTK4=0 with GDK_BACKEND=x11 running on the default DISPLAY that is connected to Xwayland
  • SWT_GTK4=1 with GDK_BACKEND=wayland (in this case DISPLAY is unused)

However these additional settings seem useful to test, especially as x11 backend sometimes seems to be used unexpectedly on wayland:

  • SWT_GTK4=0 with GDK_BACKEND=x11 running on the DISPLAY connected to my VNC. This is really useful for when I want to leave tests running in the background
  • SWT_GTK4=1 with GDK_BACKEND=x11 – the behaviour of various things (such as the Clipboard) differs between GTK4 on wayland and GTK4 on x11. I don’t know how important this use case is long term
  • SWT_GTK4=0 with GDK_BACKEND=wayland – I don’t know if this really adds anything and have hardly tried this combination.

Run Configurations

Here is what a few of my run configurations look like


by Jonah Graham at September 30, 2025 03:21 PM

The Eclipse Foundation Releases the 2025 Jakarta EE Developer Survey Report

by Jacob Harris at September 30, 2025 08:45 AM


BRUSSELS – 30 September 2025 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced the availability of The State of Enterprise Java: 2025 Jakarta EE Developer Survey Report, the industry’s most comprehensive resource for technical insights into enterprise Java. Now in its eighth year, the report highlights accelerating momentum for Jakarta EE adoption and its growing role in powering cloud native applications. The 2025 Jakarta EE Developer Survey Report is available for download in its entirety here.

“With the arrival of Jakarta EE 11, it’s clear the community is prioritizing modernization of their Java infrastructure,” said Mike Milinkovich, executive director of the Eclipse Foundation. “This reflects our commitment to establishing Jakarta EE as a world-class platform for enterprise cloud native development. It’s exciting to see the Java ecosystem embracing this community-led transition.”

Conducted from March 18 to June 5, 2025, the survey collected insights from more than 1700 participants, a 20% increase over 2024, making it one of the most comprehensive community-driven views into the enterprise Java ecosystem.

Key findings from the 2025 Jakarta EE Developer Survey Report:

  • Jakarta EE momentum grows: Jakarta EE adoption has surpassed Spring for the first time, with 58% of respondents using Jakarta EE compared to 56% for Spring. This marks a significant milestone and confirms Jakarta EE’s position as the leading Java framework for building cloud native applications. The data reflects the growing recognition of Jakarta EE’s foundational role in modern enterprise Java.
  • Jakarta EE 11 is being rapidly adopted by the community: Jakarta EE 11 has already been adopted by 18% of respondents. This early traction shows strong interest across regions and company sizes. The community’s flexible, staged release model, which provides early access to Core and Web Profiles, is helping developers move away from older Java EE versions and adopt new innovations more quickly.
  • Java version shifts: Adoption of Java 21 jumped to 43%, up from 30% in 2024. Java 17 and Java 8 both saw declines, while Java 11 experienced a rebound and now stands at 37%. The data suggests that developers are becoming more willing to adopt newer Java versions shortly after release.
  • Cloud migration strategies: Lift-and-shift remains the leading approach (22%), but teams are also weighing a variety of modernization paths. Some plan to gradually migrate with microservices (14%), others are modernizing applications to leverage cloud features (14%), while a portion are already fully cloud-based (14%). At the same time, uncertainty is growing, with 20% of developers still unsure about their strategy.
  • Community priorities: Cloud native readiness and faster specification adoption top the agenda, alongside steady interest in innovation and Java SE alignment.

A call to the community

The Jakarta EE Developer Survey remains a vital resource for developers, architects, and business leaders, offering a clear view into current trends and future directions for enterprise Java.

The Jakarta EE community welcomes contributions and participation from individuals and organisations alike. With the Jakarta EE Working Group hard at work on the next release, including innovative cloud native features, there’s never been a better time to get involved. Learn more and connect with the global community here.

For organisations that rely on enterprise Java, membership in the Jakarta EE Working Group offers a unique opportunity to shape its future, while benefiting from marketing initiatives and direct engagement with key contributors. Discover the benefits of membership here.

 

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.

 

Third-party trademarks mentioned are the property of their respective owners.

 

###

Media contacts:

Schwartz Public Relations (Germany)

Julia Rauch/Marita Bäumer

Sendlinger Straße 42A

80331 Munich

EclipseFoundation@schwartzpr.de

+49 (89) 211 871 -43 / -62

 

514 Media Ltd (France, Italy, Spain)

Benoit Simoneau

benoit@514-media.com

M: +44 (0) 7891 920 370

 

Nichols Communications (Global Press Contact)   

Jay Nichols

jay@nicholscomm.com

+1 408-772-1551


by Jacob Harris at September 30, 2025 08:45 AM

Member case study: Bloomberg’s shift to open source Java

by Jacob Harris at September 30, 2025 08:30 AM


By adopting Eclipse Temurin and joining the Adoptium Working Group, Bloomberg is strengthening their infrastructure, reducing costs, and leading open source innovation.

 


by Jacob Harris at September 30, 2025 08:30 AM

From Excess to Balance: The Collapse of All-You-Can-Eat

by Denis Roy at September 24, 2025 01:50 PM


A few years ago, I noticed that things were changing in the Eclipse Foundation's (EF) IT operations: we were adding servers, and lots of them.

Trays of 3U mega-machines, packing 14 compute units each, with on-board switches and immense fans, drawing much electrical power and providing our community with CPU cycles galore. Storage devices could not keep up, so in came the clustered mega-storage solution, nine massive machines with drives and drives and drives, coupled with expensive switching gear to link everything together.

And yet, it's still not enough. And it's unsustainable.

You may have heard a new buzzword that's been making inroads into the IT and Developer mainstreams: sustainability. There are a few articles floating about that mention it. The Eclipse Foundation is not immune to the unsustainable practice of unlimited consumption, and at the IT Desk, we're pivoting. We have to.

It's all about fairness. Responsible usage is a shared task to be supported by all, not just a few. In the following months, the engineers in the EF IT team will work towards measuring what matters and drawing baselines for reasonable consumption. Our systems will then be adapted to inform you if those reasonable consumption limits have been reached.

What does this mean? Well, that build that has been running continuously in the background may come to a stop, with an invitation to resume it -- tomorrow. The 275MB of the same dependencies that are downloaded 5x each day may fail after the third time, inviting you to resume -- later.  Those 40,000 files produced by each build may be acceptable -- once, but not continuously.

The EF is here to help. We’ll strive to provide visibility and predictability in our operations. We’ll start in observer-mode first. We’ll communicate and share our findings. We’ll help you adapt to the new sustainable environment.

The burden of responsible usage belongs to all of us -- for a fair, open and sustainable future.

Denis Roy

by Denis Roy at September 24, 2025 01:50 PM

Businesses built on open infrastructure have a responsibility to sustain it

by Mike Milinkovich at September 23, 2025 01:04 PM

The global software ecosystem runs on open source infrastructure. As demand grows, we invite the businesses who rely on it most to play a larger role in sustaining it.

Open source infrastructure is the backbone of the global digital economy. From registries to runtimes, open source underpins the tools, frameworks, and platforms that developers and enterprises rely on every day. Yet as demand for these systems grows, so too does the urgency for those who depend on them most to play a larger role in sustaining their future.

Today, the Eclipse Foundation, alongside Alpha-Omega, OpenJS Foundation, Open SSF, Packagist (Composer), the Python Software Foundation (PyPI), the Rust Foundation (crates.io), and Sonatype (Maven Central), released a joint open letter urging greater investment and support for open infrastructure. The letter calls on those who benefit most from these critical digital resources to take meaningful steps toward ensuring their long-term sustainability and responsible stewardship.

The scale of open source’s impact cannot be overstated: A 2024 Harvard study, The Value of Open Source Software, estimated that the supply-side value of widely used OSS is estimated to top $4.15 billion, while the demand-side value reached $8.8 trillion. Even more striking, 96% of that value came from the work of just 5% of OSS developers. The authors of the study estimate that without open source, organisations would need to spend more than 3.5 times their current software budgets to replicate the same capabilities.

This open ecosystem now powers much of the software industry worldwide, a sector worth trillions of dollars. Yet the investment required to sustain its underlying infrastructure has not kept pace. Running enterprise-grade infrastructure that provides zero downtime, continuous monitoring, traceability, and secure global distribution carries very real costs. The rapid rise of generative and agentic AI has only added to the strain, driving massive new workloads, many of them automated and inefficient.

The message is clear: with meaningful financial support and collaboration from industry, we can secure the long-term strength of the open infrastructure you rely on. Without that shared commitment, these vital resources are at risk.

Open VSX: Critical infrastructure worth investing in

The Eclipse Foundation stewards Open VSX, the world’s largest open source registry for VS Code extensions. Originally created to support Eclipse Foundation projects, it has grown into essential infrastructure for enterprises, serving millions of developers. Today it is the default marketplace for many VS Code forks and cloud environments, and as AI-native development and platform engineering accelerate, Open VSX is emerging as a backbone of extension infrastructure used by AI-driven development tools.

Open VSX currently handles over 100 million downloads each month, a nearly 4x increase since early 2024. This rapid growth underscores the accelerating demand across the ecosystem. Innovative, high-growth companies like Cursor, Windsurf, StackBlitz, and GitPod (now Ona), are just a few of the many organisations building on and benefiting from Open VSX. It is enterprise-class infrastructure that requires significant investment in security, staffing, maintenance, and operations. 

Yet there is a clear imbalance between consumption and contribution. 

Since its launch in September 2022:

  • Over 3,000 issues have been submitted by more than 2,500 individuals
  • Around 1,200 pull requests have been submitted, but only by 43 contributors

In a global ecosystem with tens of thousands of users, fewer than 50 people are doing the work to keep things running and improving. That gap between use and support is difficult to maintain over the long term.

A proven model for sustainability

The Eclipse Foundation also stewards Eclipse Temurin, the open source Java runtime provided by the Adoptium Working Group. With more than 700 million downloads and counting, Temurin has become a cornerstone of the Java ecosystem, offering enterprises a cost-effective, production-grade option.

To help maintain that momentum, the Adoptium Working Group launched the Eclipse Temurin Sustainer Program, designed to encourage reinvestment in the project and support faster releases, stronger security, and improved test infrastructure. The new Temurin ROI calculator shows that enterprises can save an average of $1.6 million annually by switching to open source Java.

Together, Open VSX and Temurin demonstrate what is possible when there is shared investment in critical open source infrastructure. But the current model of unlimited, no-cost use cannot continue indefinitely. The shared goal must be to create a sustainable and scalable model in which commercial consumers of these services provide the primary financial support. At the same time, it is essential to preserve free access for open source users, including individual developers, maintainers, and academic institutions.

We encourage all adopters and enterprises to get involved:

  • Contribute to the code: Review issues, submit patches, and help evolve the projects in the open under Eclipse Foundation governance.
  • Sustain what you use: Support hosting, testing, and security through membership, sponsorship, or other financial contributions, collaborating with peers to keep essential open infrastructure strong.

Investing now helps ensure the systems you depend on remain resilient, secure, and accessible for everyone.

Looking ahead

The growth of Open VSX and Eclipse Temurin underscores their value and importance. They have become cornerstones of modern development, serving a global community and fueling innovation across industries. But growth must be matched with sustainability. Because those who benefit most have not always stepped up to support these projects, we are implementing measures such as rate limiting. This is not about restricting access. It is about keeping the doors open in a way that is fair and responsible.

We are at a turning point. The future of open source infrastructure depends on more than goodwill. I remain optimistic that we can meet this challenge. By working together, industry and the open source community can ensure that these vital systems remain reliable, resilient, and accessible to all. I invite you to join us in honouring the spirit of open source by aligning responsibility with usage and helping to build a sustainable future for shared digital infrastructure.


by Mike Milinkovich at September 23, 2025 01:04 PM

Businesses built on open infrastructure have a responsibility to sustain it

by Jacob Harris at September 23, 2025 10:00 AM


The global software ecosystem runs on open source infrastructure. As demand grows, we invite the businesses who rely on it most to play a larger role in sustaining it.


 


by Jacob Harris at September 23, 2025 10:00 AM

Open infrastructure is not free: A joint statement on sustainable stewardship

by Jacob Harris at September 23, 2025 08:45 AM


An Open Letter from the Stewards of Public Open Source Infrastructure (originally published on openssf.org)

Over the past two decades, open source has revolutionized the way software is developed. Every modern application, whether written in Java, JavaScript, Python, Rust, PHP, or beyond, depends on public package registries like Maven Central, PyPI, crates.io, Packagist and open-vsx to retrieve, share, and validate dependencies. These registries have become foundational digital infrastructure – not just for open source, but for the global software supply chain. 

Beyond package registries, open source projects also rely on essential systems for building, testing, analyzing, deploying, and distributing software. These also include content delivery networks (CDNs) that offer global reach and performance at scale, along with donated (usually cloud) computing power and storage to support them. 

And yet, for all their importance, most of these systems operate under a dangerously fragile premise: They are often maintained, operated, and funded in ways that rely on goodwill, rather than mechanisms that align responsibility with usage. 

Despite serving billions (perhaps even trillions) of downloads each month (largely driven by commercial-scale consumption), many of these services are funded by a small group of benefactors. Sometimes they are supported by commercial vendors, such as Sonatype (Maven Central), GitHub (npm) or Microsoft (NuGet). At other times, they are supported by nonprofit foundations that rely on grants, donations, and sponsorships to cover their maintenance, operation, and staffing. 

Regardless of the operating model, the pattern remains the same: a small number of organizations absorb the majority of infrastructure costs, while the overwhelming majority of large-scale users, including commercial entities that generate demand and extract economic value, consume these services without contributing to their sustainability. 

Modern Expectations, Real Infrastructure 

Not long ago, maintaining an open source project meant uploading a tarball from your local machine to a website. Today, expectations are very different:

  • Dependency resolution and distribution must be fast, reliable, and global. 

  • Publishing must be verifiable, signed, and immutable. 

  • Continuous integration (CI) pipelines expect deterministic builds with zero downtime. 

  • Security tooling expects an immediate response from public registries. 

  • Governments and enterprises demand continuous monitoring, traceability, and auditability of systems. 

  • New regulatory requirements, such as the EU Cyber Resilience Act (CRA), are further increasing compliance obligations and documentation demands, adding overhead for already resource-constrained ecosystems. 

  • Infrastructure must be responsive to other types of attacks, such as spam and increased supply chain attacks involving malicious components that need to be removed. 

These expectations come with real costs in developer time, bandwidth, computing power, storage, CDN distribution, and operational and emergency response support. Yet, across ecosystems, most organizations that benefit from these services do not contribute financially, leaving a small group of stewards to carry the burden. 

Automated CI systems, large-scale dependency scanners, and ephemeral container builds, which are often operated by companies, place enormous strain on infrastructure. These commercial-scale workloads often run without caching, throttling, or even awareness of the strain they impose. The rise of Generative and Agentic AI is driving a further explosion of machine-driven, often wasteful automated usage, compounding the existing challenges. 

The illusion of “free and infinite” infrastructure encourages wasteful usage. 

Proprietary Software distribution 

In many cases, public registries are now used to distribute not only open source libraries but also proprietary software, often as binaries or software development kits (SDKs) packaged as dependencies. These projects may have an open source license, but they are not functional except as part of a paid product or platform. 

For the publisher, this model is efficient. It provides the reliability, performance, and global reach of public infrastructure without having to build or maintain it. In effect, public registries have become free global CDNs for commercial vendors. 

We don’t believe this is inherently wrong. In fact, it’s somewhat understandable and speaks to the power of the open source development model. Public registries offer speed, global availability, and a trusted distribution infrastructure already used by their target users, making it sensible for commercial publishers to gravitate toward them. However, it is essential to acknowledge that this was not the original intention of these systems. Open source packaging ecosystems were created to support the distribution of open, community-driven software, not as a general-purpose backend for proprietary product delivery. If these registries are now serving both roles, and doing so at a massive scale, that’s fine. But it also means it’s time to bring expectations and incentives into alignment. 

Commercial-scale use without commercial-scale support is unsustainable. 

Moving Towards Sustainability 

Open source infrastructure cannot be expected to operate indefinitely on unbalanced generosity. The real challenge is creating sustainable funding models that scale with usage, rather than relying on informal and inconsistent support. 

There is a difference between: 

  • Operating sustainably, and 

  • Functioning without guardrails, with no meaningful link between usage and responsibility. 

Today, that distinction is often blurred. Open source infrastructure, whether backed by companies or community-led foundations, faces rising demands, fueled by enterprise-scale consumption, without reliable mechanisms to scale funding accordingly. Documented examples demonstrate how this imbalance drives ecosystem costs, highlighting the real-world consequences of an illusion that all usage is free and unlimited. 

For foundations in particular, this challenge can be especially acute. Many are entrusted with running critical public services, yet must do so through donor funding, grants, and time-limited sponsorships. This makes long-term planning difficult and often limits their ability to invest proactively in staffing, supply chain security, availability, and scalability. Meanwhile, many of these repositories are experiencing exponential growth in demand, while the growth in sponsor support is at best linear, posing a challenge to the financial stability of the nonprofit organizations managing them. 

At the same time, the long-standing challenge of maintainer funding remains unresolved. Despite years of experiments and well-intentioned initiatives, most maintainers of critical projects still receive little or no sustained support, leaving them to shoulder enormous responsibility in their personal time. In many cases, these same underfunded projects are supported by the very foundations already carrying the burden of infrastructure costs. In others, scarce funds are diverted to cover the operational and staffing needs of the infrastructure itself. 

If we were able to bring greater balance and alignment between usage and funding of open source infrastructure, it would not only strengthen the resilience of the systems we all depend on, but it would also free up existing investments, giving foundations more room to directly support the maintainers who form the backbone of open source. 

Billion-dollar ecosystems cannot stand on foundations built of goodwill and unpaid weekends.

What Needs to Change 

It is time to adopt practical and sustainable approaches that better align usage with costs. While each ecosystem will adopt the approaches that make the most sense in its own context, the need for action is universal. These are the areas where action should be investigated: 

  • Commercial and institutional partnerships that help fund infrastructure in proportion to usage or in exchange for strategic benefits. 

  • Tiered access models that maintain openness for general and individual use while providing scaled performance or reliability options for high-volume consumers. 

  • Value-added capabilities that commercial entities might find valuable, such as usage statistics. 

These are not radical ideas. They are practical, commonsense measures already used in other shared systems, such as Internet bandwidth and cloud computing. They keep open infrastructure accessible while promoting responsibility at scale. 

Sustainability is not about closing access; it’s about keeping the doors open and investing for the future. 

This Is a Shared Resource and a Shared Responsibility 

We are proud to operate the infrastructure and systems that power the open source ecosystem and modern software development. These systems serve developers in every field, across every industry, and in every region of the world. 

But their sustainability cannot continue to rely solely on a small group of donors or silent benefactors. We must shift from a culture of invisible dependence to one of balanced and aligned investments. 

This is not (yet) a crisis. But it is a critical inflection point. 

If we act now to evolve our models, creating room for participation, partnership, and shared responsibility, we can maintain the strength, stability, and accessibility of these systems for everyone. 

Without action, the foundation beneath modern software will give way. With action -- shared, aligned, and sustained -- we can ensure these systems remain strong, secure, and open to all. 

How You Can Help 

While each ecosystem may adopt different approaches, there are clear ways for organizations and individuals to begin engaging now:

  • Show Up and Learn: Connect with the foundations and organizations that maintain the infrastructure you depend on. Understand their operational realities, funding models, and needs. 

  • Align Usage with Responsibility: If your organization is a high-volume consumer, review your practices. Implement caching, reduce redundant traffic, and engage with stewards on how you can contribute proportionally.  

  • Build With Care: If you create build tools, frameworks, or security products, consider how your defaults and behaviors impact public infrastructure. Reduce unnecessary requests, make proxy usage easier, and document best practices so your users can minimize their footprint. 

  • Become a Financial Partner: Support foundations and projects directly, through membership, sponsorship, or by employing maintainers. Predictable funding enables proactive investment in security and scalability. 

Awareness is important, but awareness alone is not enough. These systems will only remain sustainable if those who benefit most also share in their support. 

What’s Next 

This open letter serves as a starting point, not a finish. As stewards of this shared infrastructure, we will continue to work together with foundations, governments, and industry partners to turn principles into practice. Each ecosystem will pursue the models that make sense in its own context, but all share the same direction: aligning responsibility with usage to ensure resilience. 

Future changes may take various forms, ranging from new funding partnerships to revised usage policies to expanded collaboration with governments and enterprises. What matters most is that the status quo cannot hold. 

We invite you to engage with us in this work: learn from the communities that maintain your dependencies, bring forward ideas, and be prepared for a world where sustainability is not optional but expected. 


Signed by: 

Alpha-Omega 

Eclipse Foundation (Open VSX) 

OpenJS Foundation 

Open Source Security Foundation 

Packagist (Composer) 

Python Software Foundation (PyPI) 

Rust Foundation (crates.io) 

Sonatype (Maven Central)


Organizational signatures indicate endorsement by the listed entity. Additional organizations may be added over time. 

Acknowledgments: Thanks to contributors from the above organizations and the broader community for review and input.


by Jacob Harris at September 23, 2025 08:45 AM

The Eclipse Theia Community Release 2025-08

by Jonas, Maximilian & Philip at September 11, 2025 12:00 AM

We are happy to announce the eleventh Eclipse Theia community release, “2025-08,” incorporating the latest advances from Theia releases 1.62, 1.63, and 1.64. New to Eclipse Theia? It is the …

The post The Eclipse Theia Community Release 2025-08 appeared first on EclipseSource.


by Jonas, Maximilian & Philip at September 11, 2025 12:00 AM

Building MCP Servers: Tool Descriptions + Service Contracts = Dynamic Tool Groups

by Scott Lewis (noreply@blogger.com) at September 09, 2025 12:18 AM

The Model Context Protocol (MCP) can easily be used to expose APIs and services in the form of MCP tools...i.e. functions/methods that can take input, perform some actions based upon that input, and produce output, without specifying a particular language or runtime.

OSGi Services (and Remote Services) provide a dynamic, flexible, secure environment for microservices, with clear well-established mechanisms for separating service contracts from service implementations.

One way to think of a service contract for large language models (LLMs) is that the service contract can be enhanced to provide LLM-processable metadata for each tool/method/function.  Any service contract can still be used by human developers (API consumers), but with tool-specific meta-data/descriptions added, larger service contracts can also be used by any model.

Since service contracts in most languages are sets of functions/methods, the service contract can also be used to represent groupings of MCP tools, or Dynamic MCP ToolGroups.  The example on the MCPToolGroups page and in the Bndtools project templates shows a simple case of grouping a set of related functions/methods into a service contract that includes MCP tool metadata (tool and tool param text descriptions), as sketched below.
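To make the idea concrete, here is a minimal sketch of what such a contract might look like. The @Tool and @ToolParam annotations are illustrative stand-ins defined in the sketch itself, not the actual annotations from the mcp-annotations project; the point is that the tool group remains an ordinary Java interface that both developers and models can consume.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Illustrative stand-ins for the metadata annotations an MCP framework
    // might provide; the names here are hypothetical.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Tool { String description(); }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.PARAMETER)
    @interface ToolParam { String description(); }

    // The service contract: an ordinary Java interface whose methods form
    // one logical MCP tool group. Developers call it like any API; an LLM
    // reads the attached descriptions to decide when and how to invoke it.
    interface ArithmeticToolGroup
    {
        @Tool(description = "Adds two integers and returns the sum")
        int add(@ToolParam(description = "first addend") int a,
                @ToolParam(description = "second addend") int b);

        @Tool(description = "Multiplies two integers and returns the product")
        int multiply(@ToolParam(description = "first factor") int a,
                     @ToolParam(description = "second factor") int b);
    }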

by Scott Lewis (noreply@blogger.com) at September 09, 2025 12:18 AM

Task Engineering in AI Coding: How to Break Problems Into AI-Ready Pieces

by Jonas, Maximilian & Philip at September 09, 2025 12:00 AM

AI is changing how we code—but not what makes coding successful. Great software still depends on clarity, structure, and deliberate decision-making. Where many developers rush to feed an entire …

The post Task Engineering in AI Coding: How to Break Problems Into AI-Ready Pieces appeared first on EclipseSource.


by Jonas, Maximilian & Philip at September 09, 2025 12:00 AM

Eclipse Collections Categorically: Level up your programming game

September 09, 2025 12:00 AM

Eclipse Collections is a wonderful library for Java developers that provides a rich… uh… collection (set? list? bag?) of data structures and algorithms that will serve your every need. Eclipse Collections offers alternatives to the standard Java Collections Framework with a focus on memory efficiency and speed; using Eclipse Collections will improve the performance of your applications and increase your productivity. Eclipse Collections Categorically will tell you everything that you need to know about Eclipse Collections.
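As a small taste (a sketch, not an excerpt from the book), here is a primitive IntList, which stores ints directly and so avoids the boxing overhead of a java.util.List<Integer>:

    import org.eclipse.collections.api.list.primitive.MutableIntList;
    import org.eclipse.collections.impl.factory.primitive.IntLists;

    public class PrimitiveListTaste
    {
        public static void main(String[] args)
        {
            // A MutableIntList holds primitive ints directly.
            MutableIntList ints = IntLists.mutable.of(1, 2, 3);
            ints.add(4);
            System.out.println(ints.sum());                    // 10
            System.out.println(ints.select(i -> i % 2 == 0));  // [2, 4]
        }
    }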

September 09, 2025 12:00 AM

Explaining the Eclipse prefix in Eclipse Collections

by Donald Raab at September 01, 2025 07:16 PM

Eclipse Collections is a standalone open source collections library for Java

Photo by Erika Löwe on Unsplash

After a decade of Eclipse Collections existence as a project at the Eclipse Foundation, I still find myself having to explain the difference between Eclipse Collections, Eclipse IDE, and the Eclipse Foundation to developers who have the mistaken impression that Eclipse Collections is part of or requires the Eclipse IDE. Eclipse Collections is a standalone Java library which is a project managed at the Eclipse Foundation. It is not part of and does not require you to use the Eclipse IDE to use it.

The prefix Eclipse in Eclipse Collections comes from the Eclipse Foundation, not the Eclipse IDE. The first two bullets below should be enough to make it clear what Eclipse Collections is and that it has no dependencies on any IDE. The first five bullets explain the existence of the Eclipse Foundation and how it relates to Eclipse Collections and the Eclipse IDE. The remaining five bullets are there to help clear up any remaining doubts as to the existence of a dependent relationship between Eclipse Collections and the Eclipse IDE or any other IDE. There is no such dependency.

Clarifying the Eclipse prefix in Eclipse Collections

  • Eclipse Collections is a standalone open source Java collections library.
  • Eclipse Collections was formerly known as GS Collections.
  • Eclipse IDE is an open source Integrated Development Environment.
  • Eclipse Foundation is an open source foundation like Apache Software Foundation, Linux Foundation, etc.
  • Eclipse Collections and the Eclipse IDE are separate projects managed at the Eclipse Foundation.
  • Eclipse Collections isn’t dependent on the Eclipse IDE.
  • The Eclipse IDE isn’t dependent on Eclipse Collections.
  • Developers who use IntelliJ, NetBeans, VS Code and other Java IDEs can use Eclipse Collections.
  • Developers who use the Eclipse IDE can also use Eclipse Collections.
  • Developers can use Eclipse Collections without using any IDE, as Eclipse Collections is a standalone Java library and not part of any IDE.

The prefix Eclipse was first used with the Eclipse IDE in 2001. The prefix was later used to name the Eclipse Foundation in 2004. Eclipse Collections joined the Eclipse Foundation as a Java project at the end of 2015. All three share the prefix Eclipse in common, similar to many of the projects at Apache sharing the Apache prefix (e.g. Spark, Tomcat, Commons, etc.).

This is a public service that I provide to the open source development community for free in an attempt to clarify any lingering confusion caused by the Eclipse prefix.

Thank you for reading!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at September 01, 2025 07:16 PM

I‘m not Cattle

by Donald Raab at August 30, 2025 07:20 PM

I just enjoy being heard

Photo by Michael Starkie on Unsplash

I will keep writing.
I enjoy human feedback.
Thank you for reading.

All I needed to say fit in this haiku. 🙏

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at August 30, 2025 07:20 PM

Eclipse in Wayland (2025)

by Lorenzo Bettini at August 29, 2025 08:43 AM

Let’s see what Eclipse looks like in Wayland in 2025. I report some screenshots of a few Wayland Window Managers and Desktop Environments. Sway Eclipse looks good in Sway: Hyprland The same can be said for Hyprland, especially now that the infamous bug has been solved: GNOME No problems on GNOME either; I’d expect that […]

by Lorenzo Bettini at August 29, 2025 08:43 AM

OCX 2026: Six Communities. One Shared Experience

by Natalia Loungou at August 28, 2025 12:43 PM


The global open source community will gather in Brussels from 21 to 23 April 2026 for the Open Community Experience (OCX) 2026. It isn’t just another conference; it’s where six open source communities collide to shape the future of software, developer tools, mobility, AI, compliance, and research. With one pass, you get access to six experiences and endless opportunities.


by Natalia Loungou at August 28, 2025 12:43 PM

Building MCP Servers: Dynamic Tool Groups

by Scott Lewis (noreply@blogger.com) at August 26, 2025 12:04 AM

Currently, adding tools to MCP servers is a static process, i.e. a new tool is designed and implemented, MCP meta-data (descriptions) is added via annotations, decorators, or code, the new code is added to the MCP server, and things are compiled, started, tested, debugged, etc.

As well, there is currently no MCP concept of tool 'groups', i.e. multiple tools that are grouped together based upon function, common use case, organization, or discoverability.  Most current MCP servers have a flat namespace of tools.

I've created a repo with a small set of classes, based upon the mcp-java-sdk and the mcp-annotations projects, that supports the dynamic adding and removing of tool groups from mcp servers.

In environments with the OSGi service registry, this allows the easy, dynamic, and secure (type safe) adding and removing of OSGi services (and/or remote services) to MCP servers.


by Scott Lewis (noreply@blogger.com) at August 26, 2025 12:04 AM

How AI and MCP Supercharge GitHub Workflows in Theia IDE

by Jonas, Maximilian & Philip at August 26, 2025 12:00 AM

How can AI make your GitHub workflows faster, smarter, and less repetitive? In this new video, we show how the GitHub MCP server, connected to the AI-powered Theia IDE, can automate three common …

The post How AI and MCP Supercharge GitHub Workflows in Theia IDE appeared first on EclipseSource.


by Jonas, Maximilian & Philip at August 26, 2025 12:00 AM

Building MCP Servers: Alternative Transports

by Scott Lewis (noreply@blogger.com) at August 22, 2025 02:38 AM

Many developers are creating their own MCP Servers, to integrate their applications and APIs with AI models in a consistent way across multiple models.

The MCP SDKs currently provide two transports:  stdio (standard io), and http sse.  

I've created a new open source repo for alternative mcp transports.  

The intention is to provide alternative transports with other hardware, security, and trust properties.  

The first instance is based upon Unix Domain Sockets, which restricts the inter-process communication to processes running on the same operating system.
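For readers unfamiliar with the mechanism: the JDK has supported Unix domain sockets directly since Java 16. The sketch below shows only that underlying JDK API; it is not code from the transports repo, and the socket path and JSON-RPC framing are illustrative assumptions.

    import java.net.StandardProtocolFamily;
    import java.net.UnixDomainSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;

    public class UnixSocketTransportSketch
    {
        public static void main(String[] args) throws Exception
        {
            // Hypothetical socket file; a real transport would agree on a path.
            UnixDomainSocketAddress address =
                    UnixDomainSocketAddress.of(Path.of("/tmp/mcp-server.sock"));

            try (SocketChannel channel = SocketChannel.open(StandardProtocolFamily.UNIX))
            {
                channel.connect(address);
                // Send one JSON-RPC message; real framing is transport-specific.
                String request = "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"tools/list\"}";
                channel.write(ByteBuffer.wrap(request.getBytes(StandardCharsets.UTF_8)));
            }
        }
    }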

by Scott Lewis (noreply@blogger.com) at August 22, 2025 02:38 AM

Updated Eclipse Theia FAQ – Clearing Up the Most Common Misunderstandings

by Jonas, Maximilian & Philip at August 21, 2025 12:00 AM

We’ve significantly updated our FAQ for Eclipse Theia adopters and users. The rewritten FAQ addresses the questions we hear most often from the community and potential adopters — and clears up some of …

The post Updated Eclipse Theia FAQ – Clearing Up the Most Common Misunderstandings appeared first on EclipseSource.


by Jonas, Maximilian & Philip at August 21, 2025 12:00 AM

The Author’s Inside Guide to Reading Eclipse Collections Categorically

by Donald Raab at August 19, 2025 12:29 PM

TL;DR — Read Chapters 1, 2, 3. Jump to 11. Skim 4–10. Dive in as desired.

Eclipse Collections Categorically is written as a story, but organized to skim and jump around

The book, Eclipse Collections Categorically, will continue to be available to Kindle Unlimited subscribers until October 12, 2025.

The book is also available to purchase in print versions at Amazon and Barnes & Noble.

This blog can help readers determine the best options for reading the book given time constraints.

How to learn a feature-rich API

Eclipse Collections Categorically overcomes the challenge of learning and comprehending a feature-rich API by grouping the methods into method categories. This was an innovative information chunking technique I learned from the classic Smalltalk programming language in the 1990s. What the book doesn’t tell you is how to go about reading it to maximize your reading and learning style given constraints on your time.

The book can be read as a story or as a reference guide.

At 429 pages in paperback, and 377 pages in the larger hardcover book, it can take a while to read the whole book. The good news is that the book was designed to be read end-to-end or be picked up and read at any point as a reference. The decision of how best to read the book is up to the reader.

1. Read the Preface

The Preface is the story of where, why, and how Eclipse Collections was developed. It is an important backstory if you want to understand what drove me to create an open source Java collections library that needed lambdas a decade before lambdas arrived in Java, and then write a book about it two decades later.

The Preface is free to read in the online reading sample at Amazon.

2. Read the Introduction and THIS section

The Introduction tells you how the book is organized. This will help inform you as to how and where you want to spend your time. The rest of the Introduction tells you how to acquire Eclipse Collections and access the source. There is a new GitHub project that does that as well, and it has the added benefit of including the latest version of Eclipse Collections (13.0) that was released at the end of June, 2025.

The Introduction is free to read in the online reading sample at Amazon.

New GitHub Repository with additional resources

The following GitHub project can be used as a hands-on resource to follow along with the code examples in the book. Some folks learn best by doing. This repository was created after the book was published. There is a Maven project and a sample of the examples from the book that can be run, since they are executable tests.

GitHub - sensiblesymmetry/ec-categorically: Resources for Eclipse Collections Categorically book

There are two code examples per chapter shared in this repo (it is only a sample), but the project is set up and includes the dependencies needed to personally explore and try any of the examples in the book.

3. Decide how you want to read the book

The manner in which you decide to read the book depends on what you are looking to get out of it. Once you understand how the book is organized in the Introduction, your decision on how to approach reading the book should become clearer.

Now that there is a GitHub repo with resources to accompany the book, it will be easier to take a hands-on approach for some folks who like to experiment and see code run. The code examples in the book are effectively the solutions to a code kata, which is focused on learning the Eclipse Collections API in a comprehensive manner.

Option A: Read from Beginning to End

The chapters in Eclipse Collections Categorically are organized by method categories. The chapters are ordered specifically to help developers new to Eclipse Collections build skills and understanding in an incremental fashion. There is a story that builds upon previous chapters as the reader progresses.

I wanted to write a story that could be read from beginning to end. Depending on the speed you read and the time you have to focus, this can take the average reader a while.

If you are time-constrained and just want to learn some of the big ideas covered in the book, then I would suggest Option B to start your journey.

Option B: Read Chapters 1, 2, 3. Jump to 11. Skim 4–10.

Chapters 1, 2, and 3 give you all that you need to get started on a journey of learning Eclipse Collections, using method categories as an indexed guide (aka, Categorically). Chapter 11 is the summary chapter for the book. It shows you the symmetry that exists in the library, and how it can aid your learning as you use Eclipse Collections in your projects.

Chapter 3 takes you on a journey through a straightforward but surprising method category — counting. I recommend reading Chapter 3 from beginning to end as it will help you understand the symmetry of chapters 4 through 10.

Chapters 4–10 cover additional method categories (testing, finding, filtering, transforming, etc.). You can read them straight through or jump around them in any order. One approach that a reader shared with me, and that worked well for them, was to skim chapters 4 through 10 to see what is in them, and then go back and focus on particular sections when they wanted more detail on various methods. Chapters 3–10 will help you learn different techniques for accomplishing things with the Eclipse Collections API. They are an efficient index into the 134 methods you can see in the diagrams in the image above.
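To give a flavor of what a few of those categories look like in code (an illustrative sketch, not an example taken from the book):

    import org.eclipse.collections.api.list.ImmutableList;
    import org.eclipse.collections.impl.factory.Lists;

    public class MethodCategoryTaste
    {
        public static void main(String[] args)
        {
            ImmutableList<String> words = Lists.immutable.of("one", "two", "three");

            // Counting
            System.out.println(words.count(w -> w.length() == 3));        // 2
            // Testing
            System.out.println(words.anySatisfy(w -> w.startsWith("t"))); // true
            // Filtering
            System.out.println(words.select(w -> w.contains("e")));       // [one, three]
            // Transforming
            System.out.println(words.collect(String::toUpperCase));       // [ONE, TWO, THREE]
        }
    }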

4. The Appendices

There is a lot of content in the appendices. I would suggest reading them in any order that interests you. There is a lot of interesting data, background, and some advice on using collections effectively in object-oriented domains in Java.

The Introduction covers what is in the appendices, so I won’t repeat it here.

Enjoy the book!

I hope you can take advantage of and enjoy the limited time free book promotion. If you have Kindle Unlimited, you have until October 12th, 2025. If you enjoy the book, I hope you will consider purchasing a print or digital copy and making it a permanent part of your physical or virtual bookshelf.

Thanks for reading, and enjoy!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am also the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at August 19, 2025 12:29 PM

GPT-5 vs Sonnet-4: Side-by-Side on Real Coding Tasks

by Jonas, Maximilian & Philip at August 19, 2025 12:00 AM

Two of today’s most popular AI coding models—GPT-5 and Sonnet-4—are often compared using benchmarks or synthetic tasks. But how do they behave in real-world coding scenarios? In this video, we put …

The post GPT-5 vs Sonnet-4: Side-by-Side on Real Coding Tasks appeared first on EclipseSource.


by Jonas, Maximilian & Philip at August 19, 2025 12:00 AM

Eclipse Collections 12.0 and 13.0 Packed, Released and Ready to Go!

by Donald Raab at August 15, 2025 12:22 AM

Back to back releases delivered for twice the back to school fun!

Photo by Andrew Neel on Unsplash

Eclipse Collections 11.1 was released in July 2022. Eclipse Collections 12.0 and 13.0 were both released just a few days apart in June 2025. The 12.0 release had three years worth of work, so the release notes took us longer to put together and review.

The releases and release notes are ready to go

Eclipse Collections 12.0 has been in development on Java 11 for a couple of years now. The 12.0 release is packed with loads of features and improvements.

Read the 12.0 release notes to find out more.

Eclipse Collections 13.0 was released using Java 17. There were no new features added in Eclipse Collections 13.0. The release is compiled using Java 17, which means the end of development for Eclipse Collections with Java 11.

Read the 13.0 release notes to find out more.

Eclipse Collections 14.0 is in development now using Java 17. This means we can finally take advantage of some new language features in the Eclipse Collections project like Records, Text Blocks, and Pattern Matching for instanceof. This also means that to leverage Eclipse Collections 13.0 or 14.0 (when it is released), you will need to be on at least Java 17. Progress happens.
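For anyone who hasn't tried those features yet, here is a generic illustration of two of them (plain Java, not code from Eclipse Collections):

    public class Java17FeaturesTaste
    {
        // A record gives you a constructor, accessors, equals, hashCode,
        // and toString for free.
        record Point(int x, int y) { }

        public static void main(String[] args)
        {
            Point point = new Point(3, 4);
            System.out.println(point);       // Point[x=3, y=4]

            // A text block keeps multi-line strings readable.
            String json = """
                    {"x": %d, "y": %d}
                    """.formatted(point.x(), point.y());
            System.out.println(json);
        }
    }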

Thank you to the contributors

There were a total of 26 contributors who made contributions to Eclipse Collections 12.0. I want to thank all of the contributors for their time, effort, and contributions. It’s through our community of open source contributors that Eclipse Collections continues to evolve and adapt to new requirements and use cases.

Thank you to my co-project lead

Releases are the toughest and least recognized job in an open source project. Nikhil Nanivadekar is the unsung hero of both of these releases. Please join me in thanking Nikhil for all of his amazing hard work getting these releases over the finish line. Thank you!!!

Photo by Wilhelm Gunkel on Unsplash

Congratulations to our new committers

Eclipse Collections recently grew from six to eight committers on the project. Congratulations to Vladimir Zakharov and Desislav Petrov for being elected to committer status on the project! Welcome!

Project Leads and Committers for Eclipse Collections at the Eclipse Foundation

https://projects.eclipse.org/projects/technology.collections/who

Thank you to our users!

Eclipse Collections passed one million downloads for the first time in a single month in March 2025 from Maven Central. Thank you to all the projects out there that depend on Eclipse Collections!

Time for another five year wish list

It’s been almost five years since I wrote down everything I wished would eventually be done in Eclipse Collections. There has been some great progress, especially with the JUnit 4 -> 5 migration and the evolution of a Java Data Frame library built around Eclipse Collections called dataframe-ec.

GitHub - vmzakharov/dataframe-ec: A tabular data structure (aka a data frame) based on the Eclipse Collections framework

We’ve upgraded to Java 11, and then upgraded again to Java 17. I didn’t see that one coming five years ago. We got help from the community simplifying how we deal with Java Modules and OSGi. There have been a bunch of optimizations, and that work continues. I even finally got around to doing some cleanup of our performance tests.

What I hadn’t planned five years ago was writing a book about Eclipse Collections. That happened as well. While I had to use Eclipse Collections 11.1 to write the book, it is the most comprehensive documentation available for the library. The book is currently available to Kindle Unlimited Readers for free until mid-October. So please enjoy reading it there if you already pay for the service! More details about how to obtain the book in the blog below.

Book: Eclipse Collections Categorically

There is still a ton of work to do, and we’re always looking for new contributors who want to help make a difference, no matter how big or small. We got asked recently if we would take any more language translations of our website. Absolutely! The only challenge is that someone has to write the translation, and then someone else has to review it. We’ve done this eleven times so far, and it is one of my favorite contributions from the community, because Eclipse Collections is a global community and we want to be locally accessible.

I’ll write an end of year blog and opine on some of the big and small things we have left to do, in the hopes that folks from the community will see something that inspires and motivates them to make a contribution.

Thanks for reading!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am also the author of the book, Eclipse Collections Categorically: Level up your programming game.


by Donald Raab at August 15, 2025 12:22 AM

Eclipse Theia 1.64 Release: News and Noteworthy

by Jonas, Maximilian & Philip at August 13, 2025 12:00 AM

We are happy to announce the Eclipse Theia 1.64 release! The release contains in total 60 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of …

The post Eclipse Theia 1.64 Release: News and Noteworthy appeared first on EclipseSource.


by Jonas, Maximilian & Philip at August 13, 2025 12:00 AM

Theia AI and Theia IDE Now Support GPT-5—Out of the Box!

by Jonas, Maximilian & Philip at August 12, 2025 12:00 AM

Developers and tool builders can now use OpenAI’s GPT-5 directly in Theia AI and the AI-powered Theia IDE, without additional integration work. Just add "gpt-5" (or its variants like mini or nano) to …

The post Theia AI and Theia IDE Now Support GPT-5—Out of the Box! appeared first on EclipseSource.


by Jonas, Maximilian & Philip at August 12, 2025 12:00 AM

A Native Claude Code IDE? How It Could Look Like with Eclipse Theia

by Jonas, Maximilian & Philip at August 07, 2025 12:00 AM

What if Claude Code, Anthropic’s powerful AI coding agent, wasn’t just a terminal app but a truly native part of your IDE? That’s the question we explored in our latest side project at EclipseSource. …

The post A Native Claude Code IDE? How It Could Look Like with Eclipse Theia appeared first on EclipseSource.


by Jonas, Maximilian & Philip at August 07, 2025 12:00 AM

Agent-to-Agent Delegation in the AI-powered Theia IDE / Theia AI

by Jonas, Maximilian & Philip at August 05, 2025 12:00 AM

Automating workflows with AI just took a leap forward. Theia AI and the AI-powered Theia IDE now support agent-to-agent delegation, enabling one AI agent to delegate specific tasks - like reporting …

The post Agent-to-Agent Delegation in the AI-powered Theia IDE / Theia AI appeared first on EclipseSource.


by Jonas, Maximilian & Philip at August 05, 2025 12:00 AM

Building MCP Servers: Preventing AI Monopolies

by Scott Lewis (noreply@blogger.com) at August 02, 2025 09:21 PM

I recently read an insightful article about using open protocols (MCP in this case) to prevent user context/data lock-in at the AI application layer:

Open Protocols Can Prevent AI Monopolies

In the spirit of this article, I've decided to make an initial code contribution to the MCP java sdk project




by Scott Lewis (noreply@blogger.com) at August 02, 2025 09:21 PM

Langium 4.0 is released!

July 31, 2025 12:00 AM

Langium 4.0 is released! This release brings multi-reference support, infix operator rules, strict mode for Langium grammars, and more!

July 31, 2025 12:00 AM

Enhanced Image Support in the AI-powered Theia IDE / Theia AI

by Jonas, Maximilian & Philip at July 31, 2025 12:00 AM

They say a picture is worth a thousand words. When describing UI issues to an AI assistant, it’s worth even more. The AI-powered Theia IDE now features rich image support, allowing you to communicate …

The post Enhanced Image Support in the AI-powered Theia IDE / Theia AI appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 31, 2025 12:00 AM

Migrating Eclipse and RCP Tools to the Web

by Jonas, Maximilian & Philip at July 30, 2025 12:00 AM

Over almost two decades, the Eclipse Platform and Eclipse RCP have powered countless mission-critical tools and IDEs. But as outlined in our recent article on the future of Eclipse RCP, the technology …

The post Migrating Eclipse and RCP Tools to the Web appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 30, 2025 12:00 AM

Interactive AI Responses in Your Custom GitHub Copilot – New Theia AI Tutorial

by Jonas, Maximilian & Philip at July 29, 2025 12:00 AM

Ever wanted your AI assistant to do more than just produce text? Our new video tutorial shows how to make your custom Copilot, built on Eclipse Theia, more interactive and visual, tailored to your …

The post Interactive AI Responses in Your Custom GitHub Copilot – New Theia AI Tutorial appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 29, 2025 12:00 AM

AI Coding at Scale: Structure Your Workflow with Dibe Coding

by Jonas, Maximilian & Philip at July 24, 2025 12:00 AM

AI-powered development is everywhere. From YouTube tutorials to conference talks, from open-source demos to enterprise prototypes - coding with AI is the new frontier. One-shot prompts that generate …

The post AI Coding at Scale: Structure Your Workflow with Dibe Coding appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 24, 2025 12:00 AM

EclipseSource Ends Maintenance of the Eclipse Modeling Tools Package - Here's Why

by Jonas, Maximilian & Philip at July 22, 2025 12:00 AM

For almost a decade, EclipseSource has proudly maintained and contributed to the Eclipse Modeling Tools package - a curated edition of the Eclipse IDE tailored for modeling technologies. This Eclipse …

The post EclipseSource Ends Maintenance of the Eclipse Modeling Tools Package - Here's Why appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 22, 2025 12:00 AM

The Future of the Eclipse Platform and Eclipse RCP

by Jonas, Maximilian & Philip at July 16, 2025 12:00 AM

Over almost two decades, the Eclipse Platform and Rich Client Platform (RCP) have been foundational technologies for building extensible desktop applications, tools, and custom IDEs. From engineering …

The post The Future of the Eclipse Platform and Eclipse RCP appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 16, 2025 12:00 AM

What are you wishing or waiting for in Java?

by Donald Raab at July 13, 2025 06:28 PM

Java grants new wishes every six months in feature releases.

Photo by Louis Hansel on Unsplash

Wishing or Waiting?

I made a wish in 2004 for lambdas to find their way into Java. I waited for a decade for lambdas to arrive, and they eventually arrived. Java has been a much better programming language to work in ever since. Lambdas, method references, and default methods were all amazing features in Java 8 that library developers could use to develop even better products. Java 8 kind of gave me everything I ever wanted in Java, except for eight tiny little things with generics.

In the rest of this blog, I will explain a wish we are still patiently hoping will arrive in the JDK one day, but that we decided over a decade ago to not wait for. Waiting for the feature would have hamstrung development of Eclipse Collections primitive collection support. Not waiting was absolutely the right decision for the library. Eclipse Collections users have been able to leverage a plethora of Functional Interface type combinations with lambdas for the past decade. This combination of Functional Interfaces will likely not be found anywhere else. I’m sure many developers have had to create a smaller number of their own named Functional Interfaces when they needed them, just as we have in Eclipse Collections. Java Stream did exactly the same thing with IntStream, LongStream, and DoubleStream, just on a smaller scale than we did in Eclipse Collections.

Library first, Valhalla later

Lambdas implemented with nominal types simultaneously created a tremendous opportunity and an unfortunate deficiency for Java. Java has support for lambdas, but does not have support for Generics Over Primitive Types. We have Object and primitive collections in Eclipse Collections, and wanted our Functional Interfaces to work with all combinations of primitive types.

We decided to solve this problem in the library, as we couldn’t wait for the language to have support for primitives in generics. Sometimes library developers are faced with the difficult decision of incurring cost and implementing something today, that may be made much simpler in the future with improved language support. We made the decision to implement primitive collections with a feature-rich API around 2012 (pre-lambda days), so the cost of implementing the interfaces has long been paid off in my opinion. Code generation using StringTemplate made the problem much easier to implement.

Below is an image depicting one dimension of the resulting cost of supporting all primitive types in Functional Interface type combinations in Eclipse Collections. This mind map shows the explosion of types that occurred for one set of Functional Interfaces called Procedure. Procedure is the equivalent of Consumer in the JDK. Ideally, there should only be two Procedure Functional Interfaces in Eclipse Collections — Procedure and Procedure2. As the diagram below shows, there are a lot more than two Procedure types in Eclipse Collections.

Mind map drawn with my favorite UML/Mindmap tool — Astah UML

Note, Procedure has a void return type. The picture for Predicate would be equally and painfully annoying to draw, because Predicate has a boolean return type. The picture for Function, which has a variable return type, would be too painful for any human to draw by hand. You just need to imagine it is really bad. Yes, Eclipse Collections did the hard work of generating all of the Functional Interface types to support primitives for Procedure, Predicate, Function, and Comparator. Note: Click the links to see the actual code generated interfaces in JavaDoc.

I created a much abbreviated version of this picture in a table in the book Eclipse Collections Categorically. The table requires you to imagine replacing a wildcard (?) with combinations of boolean, byte, char, short, int, float, long, double and sometimes Object. This shows why Function is even worse. It has an extra ? to deal with in the type names for the return type (see row 2 and 4 below).

The bottom two rows show the abbreviated version of “Procedure” Functional Interfaces

The resulting explosion takes eight rows from this table and results in hundreds of named types being code generated to support primitives in Functional Interface types. If support for generics over primitives eventually arrives in Valhalla, we would only need eight Functional Interface types in Eclipse Collections.

The types probably have very little meaning for folks reading this blog, without knowledge of the Eclipse Collections API and how these types are encountered and/or used by developers. Below, Example 21 from the Kindle edition of Eclipse Collections Categorically shows two methods that use primitive functional interfaces that are code generated. The method select on an IntList takes an IntPredicate. As explained in comment 2, an inlined lambda is used here, so the method parameter type is never seen in the code by the reader. The method collect on IntList takes an IntToObjectFunction<Integer>. As explained but not visible in comment 4, an inlined method reference is used here, so the method parameter type is never seen in the code by the reader.
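Since the book’s Example 21 can’t be reproduced here, the following is a hedged approximation of those two calls with made-up values. The inlined lambda satisfies the IntPredicate, and the method reference satisfies the IntToObjectFunction<Integer>, so neither type name appears in the code:

    import org.eclipse.collections.api.list.ImmutableList;
    import org.eclipse.collections.api.list.primitive.ImmutableIntList;
    import org.eclipse.collections.impl.factory.primitive.IntLists;

    public class PrimitiveInterfaceSketch
    {
        public static void main(String[] args)
        {
            ImmutableIntList ints = IntLists.immutable.of(1, 2, 3, 4, 5);

            // select takes an IntPredicate; the inlined lambda hides the type name.
            ImmutableIntList evens = ints.select(i -> i % 2 == 0);

            // collect takes an IntToObjectFunction; the method reference hides it too.
            ImmutableList<Integer> boxed = ints.collect(Integer::valueOf);

            System.out.println(evens);  // [2, 4]
            System.out.println(boxed);  // [1, 2, 3, 4, 5]
        }
    }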

If we had waited for Valhalla to solve this type explosion problem for Eclipse Collections, we would unfortunately still be waiting. We decided to go library first, and worry about how we can possibly reduce the number of necessary types later, when Valhalla with Generics Over Primitive Types arrives in some form. The Valhalla strategy has progressed and evolved over the last decade. Thankfully, we committed to and have stuck with our approach, delivering much-appreciated primitive collection support to our user base. It was an expense to “bite the bullet” over a decade ago and code generate all of these Functional Interfaces, but it was a one-time expense. Since then we’ve been able to focus on API development and evolution on our Object and primitive container types. Lambdas and method references are available in so many magical combinations of types for Eclipse Collections.

So if we’re not waiting for Valhalla, what have we been waiting for in Java for Eclipse Collections?

We’ve been waiting for Java 17!

Wait… what? Why Java 17? What about Java 21 or the soon to be released Java 25?

Core Java library development necessarily lags behind application development in terms of language version usage. We’ve been developing using Java 11 for Eclipse Collections development for the past couple of years, but honestly there are not that many interesting language features for Eclipse Collections in 11 compared to 17. Eclipse Collections 11.x was compiled using Java 8, which is now eleven years old. Java 11 is now almost 7 years old. Java 17 is only 4 years old! Development in Eclipse Collections on Java 17 will now feel like driving in a certified pre-owned car! It might have 40K miles on it, but it is super reliable and still goes vroom!!! We are excited by the upgrade!

Eclipse Collections just went through a rapid upgrade cycle in order to adopt Java 17 for releases and development. Eclipse Collections 11.1 was released on Java 8, almost three years ago. Eclipse Collections 12.0 was just released on Java 11 a few weeks ago. Eclipse Collections 13.0, which is identical in features to 12.0, was released on Java 17 three days later. Development on Eclipse Collections 14.x is baselined to Java 17.

Note: The release notes and release blog for Eclipse Collections 12.0 and 13.0 are still forthcoming. Stay tuned!

We’ve been waiting for four years to upgrade Eclipse Collections development to Java 17. With the imminent release of Java 25 in September, and the completion of the Eclipse Collections 12.0 release, we decided to have another major Eclipse Collections release to just uptick the version of Java we use to compile. Upgrading to a new language version is straightforward for us in Eclipse Collections, as we continue testing against LTS versions of JDK, the current JDK release, and the current early access (EA) version of OpenJDK.

There are some features in the language in Java 17 that we can now use to simplify code in Eclipse Collections. Pattern matching for instanceof is one feature; Java Records and Sealed Types are some others. If Pattern Matching for Switch weren’t in preview in Java 17, we would use that in Iterate and other utility classes as well. I hope the upgrade is another reason for developers to consider using and contributing to Eclipse Collections. The library continues to evolve thanks to folks from the open source community.
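As a generic illustration of the first of those features (plain Java, not actual Eclipse Collections code), pattern matching for instanceof binds the cast to a variable in one step:

    public class InstanceofPatternTaste
    {
        static String describe(Object value)
        {
            // The instanceof test and the cast collapse into one construct.
            if (value instanceof String s)
            {
                return "a String of length " + s.length();
            }
            if (value instanceof Integer i)
            {
                return "an Integer with value " + i;
            }
            return "something else";
        }

        public static void main(String[] args)
        {
            System.out.println(describe("hello"));  // a String of length 5
            System.out.println(describe(42));       // an Integer with value 42
        }
    }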

Stop wishing and waiting for unicorns

Java continues to evolve at a rapid pace. If you are waiting for something specific and epic to arrive (like Valhalla), you may be waiting for a bit still. You may also get something slightly different than you originally wished for when it arrives. I feel like we are closer than ever to seeing some major changes in the language from Valhalla, but I’m glad that we did not wait for it for Eclipse Collections. I’m excited to see if it will eventually help us reduce the number of interfaces and classes we have to support in Eclipse Collections today.

A healthy horse you can ride today is better than the unicorn you are wishing might arrive one day.

Java is great today, no matter what version you use. Java has been great since Java 8 in my opinion, and continues to get better with each release. Eclipse Collections started out its existence developed using Java SE 1.4. I will save you the trial and tribulation stories of adding generics support to Eclipse Collections in its proprietary form when I worked at Goldman Sachs after Java 5 arrived. Java 8 was, for obvious reasons, my favorite release with my most important wish granted — lambdas. It even came with a few awesome extra wishes I didn’t make including default methods, method references, and Java Stream.

If you’re wishing you had a more feature-rich collections API available in Java today, I have great news for you. That already exists in third-party library form, just not in the JDK. Eclipse Collections has been evolving a feature-rich collections API for over twenty years solving mission critical problems for applications and libraries. If you want to learn more about the Eclipse Collections API, I can recommend my book, Eclipse Collections Categorically. There are other free learning options like blogs, katas, reference guide and the Eclipse Collections README as well.

On the Valhalla waiting front, I am happy for the work to get done well instead of fast. It is, after all, an “epic refactor”, and we need it to be both useful and not break all the apps we’ve written the past 25 years. Evolution is hard and takes time. When enough interesting functionality from Valhalla arrives, I will likely advocate to get Eclipse Collections development upgraded as soon as possible so we can begin leveraging it.

Thank you for reading!

I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am also the author of the book, Eclipse Collections Categorically: Level up your programming game.

Thank you for being a part of the community

Before you go:


What are you wishing or waiting for in Java? was originally published in Stackademic on Medium, where people are continuing the conversation by highlighting and responding to this story.


by Donald Raab at July 13, 2025 06:28 PM

Building MCP Servers - part 3: Security

by Scott Lewis (noreply@blogger.com) at July 11, 2025 10:46 PM

There have been recent reports of critical security vulnerabilities on the mcp-remote project, and the mcp inspector project.

I do not know all the technical details of the exploits, but it appears to me that in both cases they have to do with vulnerabilities introduced by the MCP Server implementation and the use of the stdio MCP transport.

I want to emphasize that the example described in these two posts

Integration via Remote Tools

Example Using Remote Services

is using mechanisms that are...through heavy usage by commercial server technologies over the past 10 years...not subject to the same sorts of remote vulnerabilities seen by the mcp-remote and mcp-inspector projects.

Also, the flexibility in discovery and distribution provided by the RSA Specification and the RSA implementation used allows for addressing MCP Server remote tool or protocol weaknesses quickly and easily, without having to update the MCP Server or tooling implementation code.


by Scott Lewis (noreply@blogger.com) at July 11, 2025 10:46 PM

Eclipse Theia 1.63 Release: News and Noteworthy

by Jonas, Maximilian & Philip at July 10, 2025 12:00 AM

We are happy to announce the Eclipse Theia 1.63 release! The release contains in total 99 merged pull requests. In this article, we will highlight some selected improvements and provide an overview of …

The post Eclipse Theia 1.63 Release: News and Noteworthy appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 10, 2025 12:00 AM

AI Driven E2E Testing with the Theia IDE

by Jonas, Maximilian & Philip at July 09, 2025 12:00 AM

Manual testing takes time. Writing end-to-end tests takes even more. What if you could automatically test your web app by simply talking to an AI agent? Now you can-with the App Tester Agent in the …

The post AI Driven E2E Testing with the Theia IDE appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 09, 2025 12:00 AM

Building MCP Servers: Integration via Remote Tools

by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:35 PM

It has become popular to build Model Context Protocol Servers.  This makes a lot of sense from the developer-as-integrator point of view, since the MCP specification and multi-language SDKs make it possible to easily integrate resources, prompts, and tools into multiple LLMs without having to use the model-and-language-specific model APIs directly.

The MCP tools spec provides a general way for LLMs to use tool meta-data (e.g. text descriptions) for the tool's required input data, behavior, and output data.  These text descriptions can then be used by the LLM...in combination with interaction with the user...to decide when and how to use the tool, i.e. to call the function and provide some output to the LLM and/or the user.

Building an MCP Server

When creating a new MCP Server, it's easiest to create the tool metadata and implement the tool functionality as part of a new MCP server implementation.   But this approach requires that every new tool (or integration with existing API/servers) results in a new MCP server or an update/new version of an existing MCP server.

Remote Tools

It's frequently better architecture to decouple the meta-data declaration and implementation of a given tool from the MCP Server itself, and to allow the MCP Server to dynamically add and remove tools at runtime. Tools can then be discovered, secured, imported, their meta-data made available to the model(s), called, evaluated, and potentially updated or removed, all without the creation of an entirely new MCP Server.

This approach is potentially more secure (as it allows tool-specific authentication and access control), more flexible, and more scalable, since remote tools can be distributed on multiple hosts over a network.  And it allows easy integration with existing APIs.
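As a toy sketch of the dynamic part only (discovery, security, and tool metadata handling in a real implementation, such as one built on the OSGi service registry, are omitted):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // A toy dynamic tool registry: tools can be added and removed at
    // runtime without rebuilding or restarting the server.
    public class DynamicToolRegistry
    {
        private final Map<String, Function<Map<String, Object>, Object>> tools =
                new ConcurrentHashMap<>();

        public void addTool(String name, Function<Map<String, Object>, Object> impl)
        {
            tools.put(name, impl);
        }

        public void removeTool(String name)
        {
            tools.remove(name);
        }

        public Object call(String name, Map<String, Object> args)
        {
            Function<Map<String, Object>, Object> tool = tools.get(name);
            if (tool == null)
            {
                throw new IllegalArgumentException("Unknown tool: " + name);
            }
            return tool.apply(args);
        }
    }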

In the next post I describe a working example that uses remote tools.


by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:35 PM

Building MCP Servers - part 2: Example Using Remote Services and Bndtools

by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:23 PM

In a previous post, I described how using dynamic remote tools could make building MCP Servers more flexible, more secure, and more scalable.  In this post, I show an example MCP Server that uses remote services and Bndtools to build.

To use/try this example yourself see Installing Bndtools with Remote Services Support below.

Create a Bndtools Workspace via File->New->Bndtools Workspace, and choose the ECF/bndtools.workspace template in the dialog that appears.


Choose Next->Finish

Choose File->New->Bnd OSGi Project to open the project template dialog.


There are four project templates.  Create each of them in your workspace in turn:

1. MCP ArithmeticTools API Project  (example name: org.test.api)

2. MCP ArithmeticTools Impl Server Project (ArithmeticTools RS Server) (example name: org.test.impl)

3. MCP ArithmeticTools Consumer Project (MCP Server Impl/ArithmeticTools Consumer) (example name: org.test.mcpserver)

4. MCP ArithmeticTools Test Client Project (MCP Client) (example name: org.test.mcpclient)

Note:  When creating the Impl, Consumer, and Test Client projects, you will be prompted for the name of the project that you specified for the API Project (1).


Click Finish for each of the first three (server) projects.

If you wish to use your own MCP Client, and start the MCP Server (MCP ArithmeticTools Consumer Project) from your own MCP Client (stdio transport), see the Readme.md in the MCP ArithmeticTools Consumer project.

MCP ArithmeticTools Example Test Client

There is also an MCP Example Test Client Project template, implemented via the MCP Java SDK, that makes a couple of test calls to the MCP Server/ArithmeticTools Consumer server.

Note:  When creating the test client, the project template will ask you to specify the names of the API project (1) and the MCP Server/ArithmeticTools Consumer project (3).


Click Finish

You should now have these four projects in the Bndtools Explorer.


You can open the source for any of the projects, set breakpoints, etc.

Launching the ArithmeticTools Remote Service Impl server

The ArithmeticTools Impl server (2) must be launched first.

To launch the ArithmeticTools Impl (2), see the Readme.md in that project.

Launching MCP Client and MCP Server (with stdio transport)

The MCP Client (4) may then be launched; it will launch the MCP Server (3) and use the MCP stdio transport.  The MCP Client (4) Readme.md has more info on the launch and the expected output of the MCP Client.

You can set breakpoints in the ArithmeticTools Impl Server (2) or the MCP Client (4) to examine the communication sequence when calling the ArithmeticTools add or multiply tools.
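
For orientation, the template names suggest the API project (1) declares a service interface along these lines. This is a guess at its shape, not the template's actual source:

// Hypothetical sketch of the service interface the API project template (1)
// presumably declares; the real template source may differ. The Impl server (2)
// exports an implementation as a remote service, and the MCP Server (3) imports
// it and exposes each method as an MCP tool.
public interface ArithmeticTools {

    int add(int a, int b);

    int multiply(int a, int b);
}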

Installing Bndtools with Remote Services Support

1. Install Bndtools 7.1 

2. Add ECF Latest Update site to Eclipse install with URL: https://download.eclipse.org/rt/ecf/latest/site.p2

3. Install Feature:  SDK for Bndtools 7.1+




by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:23 PM

Theia Coder Agent Mode: From AI Assistant to Autonomous Developer

by Jonas, Maximilian & Philip at July 08, 2025 12:00 AM

Tired of micromanaging every code suggestion your AI coding assistant makes—manually reviewing each file and fixing its mistakes yourself? There’s now a faster, more powerful alternative: Agent Mode …

The post Theia Coder Agent Mode: From AI Assistant to Autonomous Developer appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 08, 2025 12:00 AM

Collaborative coding in the browser: The OCT Playground

July 03, 2025 12:00 AM

OCT is the first open source framework to support live collaboration between IDEs and web apps—a strategic advantage for flexible cross-platform development.

July 03, 2025 12:00 AM

Eclipse Open VSX Registry Security Advisory

July 02, 2025 08:15 AM

This security advisory provides additional technical details following our initial statement and the corresponding CVE record.

TL;DR

A vulnerability in the Eclipse Open VSX Registry’s automated publishing system could have allowed unauthorized extension uploads. It did not affect existing extensions or admin functions.

The issue was reported on May 4, 2025, fully fixed by June 24, and followed by a complete audit. No evidence of compromise was found, but 81 extensions were proactively deactivated as a precaution.

The standard publishing process was not impacted. Recommendations have been issued to reduce future risk.

The Issue

On May 4, 2025, the Eclipse Foundation Security Team received a notification from Koi Security researchers about a potential vulnerability in the Eclipse Open VSX Registry extension publication process. The Security Team promptly contacted the Open VSX team, which confirmed the issue and began working on a fix. A first version of the fix was proposed within two weeks.

The Eclipse Open VSX Registry allows developers to publish extensions via CI/CD systems. To increase the availability of widely used extensions to its growing user base, it also includes a mechanism that automatically pulls, builds, and publishes a curated list of extensions. This list is publicly maintained in a configuration file. The vulnerability was found in this automated process.

Specifically, build scripts were executed without proper isolation, which could have inadvertently exposed a privileged token. This token allowed publishing of new extension versions under any namespace, including those not owned by an attacker. However, it did not allow deletion of existing extensions, overwriting of published versions, or access to administrative features of the registry.

To exploit this vulnerability, an attacker would need to either:

  • Take over an already accepted extension (e.g., by compromising the developer’s account) and inject malicious code to exfiltrate the token; or
  • Submit a new extension for inclusion in the auto-publish list, have it accepted (following a manual review of the pull request), and later push a new version with code designed to exfiltrate the token.

In both scenarios, any extension published using the token would appear to originate from the privileged user, which serves as a basis for the ongoing investigation into potential exploitation.

Eclipse Open VSX Registry

The Fix

The Eclipse Open VSX team implemented sandboxing for the extension build process to isolate builds and protect credentials. The fix underwent several iterations and was successfully deployed on June 24, 2025.
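
The advisory does not publish the implementation details, but the underlying principle is straightforward: the untrusted build step must run in an environment that never contains the publishing credential. Below is a minimal sketch of that principle in Java, assuming a shell-style build command; the actual Open VSX fix uses proper sandboxing and is more involved.

import java.io.File;
import java.io.IOException;
import java.util.Map;

// Minimal illustration of the isolation principle, not the actual Open VSX fix.
// The untrusted build script runs in a child process whose environment has been
// scrubbed, so even a malicious build step cannot read the publishing token.
public final class IsolatedBuild {

    public static int runBuild(File workDir, Map<String, String> parentEnv)
            throws IOException, InterruptedException {
        // "npm run build" stands in for whatever command builds the extension.
        ProcessBuilder pb = new ProcessBuilder("npm", "run", "build");
        pb.directory(workDir);
        pb.environment().clear();  // drop everything inherited from the parent
        pb.environment().put("PATH", parentEnv.getOrDefault("PATH", "/usr/bin"));
        // Deliberately never copy the publishing token or any other credential here.
        pb.inheritIO();
        return pb.start().waitFor();
    }
}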

Importantly, this vulnerability only affected the auto-publishing mechanism. An attacker would have needed to control an extension listed for automatic updates or get their own extension added to the list. The standard extension publishing workflow is not affected by this vulnerability.

Timeline Summary

  • May 4, 2025 – Vulnerability reported by Koi Security researchers
  • May 5–17, 2025 – Issue confirmed and first version of fix developed
  • May 17 – June 24, 2025 – Development and testing of updated versions of the fix
  • June 24, 2025 – Final fix approved and deployed; privileged token rotated
  • Post-deployment – Audit of all affected extensions completed
  • June 27, 2025 – CVE-2025-6705 published
  • July 2, 2025 – Security advisory published

Investigation Summary

As the root cause was the lack of build isolation, the implemented patch introduces sandboxing and separation between build processes. It was deployed on June 24, 2025, and the potentially exposed privileged token was rotated following the deployment.

To determine whether the vulnerability had been exploited, the Eclipse Security and Open VSX teams audited all extensions published using the privileged token. These were cross-referenced with all extensions listed for automatic publication, whether currently or previously included. The focus was first on extensions that were published using the privileged token but were never added to the auto-publish list.

Findings: 14 extensions, encompassing a total of 20 unique published versions, were identified as having been published by the privileged user without a clear link to the auto-publish list. Although this is atypical, there has long existed a one-off workflow allowing Open VSX Registry operators to publish new extensions manually. It is most likely that these 20 revisions were published using this workflow.

As automation had reached its limits, the team manually reviewed the suspicious extensions. All were deemed legitimate and did not display signs of compromise. Indicators of compromise considered included:

  • Mismatches between publication dates and repository tags/releases
  • Publication of unknown versions
  • Discrepancies between Eclipse Open VSX and the Microsoft Visual Studio Code Marketplace
  • Sudden change in publishing behavior (e.g., from the extension owner to the privileged user)
  • Other anomalous patterns

We then examined extensions legitimately published by the privileged user (by virtue of being listed for auto-publishing), searching for irregularities based on the indicators mentioned above. We identified 51 such extensions (61 unique extension versions), warranting further manual investigation. In all cases, however, the anomalies were ultimately ruled out. For example, while certain extension versions lacked a corresponding tag or formal release in their source repository, their version numbers had previously appeared in the build configuration history within the source repository. These versions were never officially released, but their publication dates aligned with the commit dates, strengthening the confidence that these were false positives rather than signs of compromise.

Conclusion: None of the 65 identified extensions (81 distinct published versions) showed evidence of being compromised. Nevertheless, as a precaution, all 81 versions have been deactivated while we contact their respective authors. Should evidence of compromise emerge, further advisories will be issued.

Recommendations

Based on our findings, we recommend the following actions to address the root causes of this vulnerability:

For Open VSX Registry operators:

  • Mitigate risk from untrusted code: enforce a documented vetting process for new extensions before adding them to the auto-publish list. This limits exposure from potentially malicious submissions.
  • Reduce exposure window: periodically review accepted extensions, especially after updates, to detect suspicious behavior that may emerge post-approval.
  • Contain credential blast radius: replace shared, privileged tokens with namespace-specific credentials. This enforces the principle of least privilege and prevents cross-namespace publishing (a minimal sketch of the idea follows this list).
  • Eliminate insecure workflows: consider disabling the auto-publishing mechanism entirely, or at a minimum, remove the one-off manual publishing feature, which bypasses the protections applied to the automated pipeline.
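
To illustrate the credential-scoping recommendation above: a token bound to a namespace at issuance can be rejected whenever the publish target does not match. The names below are invented for illustration, not Open VSX's actual code.

// Hypothetical sketch of a namespace-scoped publishing token, illustrating the
// least-privilege recommendation above; the type and method names are invented.
public record PublishToken(String tokenId, String namespace) {

    // A publish request is honored only if the token was issued for the
    // namespace it is trying to publish into.
    public void authorize(String targetNamespace) {
        if (!namespace.equals(targetNamespace)) {
            throw new SecurityException(
                    "Token " + tokenId + " is scoped to namespace '" + namespace
                    + "' and cannot publish to '" + targetNamespace + "'");
        }
    }
}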

For the Open VSX user community:

  • Exercise caution when installing or updating extensions. Extensions can access your development environment and may introduce risk if compromised. They are a critical part of the software supply chain and must be treated as such.

More Information

The Eclipse Open VSX Registry has grown significantly in popularity. We are grateful to Koi Security researchers for their responsible disclosure and encourage all users and vendors relying on Eclipse Open VSX to contribute to its continued security and sustainability. The Eclipse Foundation remains committed to ensuring the Open VSX Registry is a safe, trusted, and reliable platform for distributing and consuming secure, high-quality extensions.

For technical or security-related inquiries, please contact the Eclipse Foundation Security Team at security@eclipse-foundation.org.


July 02, 2025 08:15 AM

Structured AI Coding with Task Context: A Better Way to Work with AI Agents

by Jonas, Maximilian & Philip at July 01, 2025 12:00 AM

Are you using AI agents to assist with feature development or bug fixing—only to find yourself stuck in an unstructured back-and-forth chat session? Of course, you could wait for the next generation …

The post Structured AI Coding with Task Context: A Better Way to Work with AI Agents appeared first on EclipseSource.


by Jonas, Maximilian & Philip at July 01, 2025 12:00 AM

A Brazilian Dream: Otavio Santana’s Rise Through Open Source

by Tatjana Obradovic at June 30, 2025 05:12 PM

A Brazilian Dream: Otavio Santana’s Rise Through Open Source

“Sometimes, all someone needs is to believe it's possible”

Brazil stands at the heart of the Global South, shaping global technology and community movements. Follow us as we explore Otavio Santana’s inspiring journey from his beginnings in Salvador, Brazil, to the international stages of Java stardom. Discover how community, discipline, and an insatiable passion for learning empowered him to overcome social, economic, and linguistic barriers, and ultimately thrive in the world of open source software.

Growing Up in Salvador: The Start of a Remarkable Journey

Otavio Santana was born and raised in Salvador, Brazil’s sixth-largest city. His childhood was marked by economic hardship. “I came from a poor family, so I didn't have opportunities,” Otavio recalls. Raised primarily by his mother, he navigated a world of limitations with determination. Initially drawn to history and music, he couldn’t pursue those passions in academia and instead turned to computer science, a decision that would change his life.

Due to financial constraints, Otavio attended a private university that allowed him to work part-time and support his family. There, he learned C and C++, and secured his first job in the field. However, his interest in contributing to open source projects was hindered by a significant obstacle: he did not speak English.

Finding Belonging in the Java Community

Everything changed in 2010 when Otavio joined the Java community. Despite his limited English, he was welcomed with kindness and encouragement. “People tried to communicate with me, even though I didn’t speak a single word of English,” he says. This early support sparked a deep appreciation for the community and a growing interest in the language itself.

Though he initially doubted his ability to master Java, the welcoming spirit of the community helped him gain confidence. Java also promised better job prospects and higher salaries, making it a practical choice. In 2012, Otavio moved to São Paulo to pursue new opportunities and immerse himself further in the open source world. The more he contributed, the more awards he earned. To date, he has received numerous accolades, including several JCP Awards, the Java Champion title, and the Duke’s Choice Award. He has also authored several books.

Discipline, Dedication, and a Community of Support

Otavio’s progress was built on unwavering discipline. He has followed a routine of early mornings dedicated to study and open source contributions, with weekends often spent learning. “I consistently wake up at 5 a.m. to study or contribute to open source projects,” he says. This dedication led him to contribute to major projects such as integrating Java with Apache Cassandra, OpenJDK, and Adopt a JSR.

Through conferences like JavaOne, Otavio met other open source leaders, including fellow Brazilian Bruno Souza. Despite his language struggles, his work spoke volumes, soon leading to his first job opportunity. “My English was terrible, but my future boss trusted me because I was involved in open source.”

Chicken and Egg Problem: Challenges for Developers in the Global South

Otavio’s story also highlights systemic challenges faced by developers across Latin America. He refers to the chicken-and-egg problem of English proficiency: without good English, it is difficult to secure a job at an international company; without such a job, it is hard to improve one’s English. “Most people are just in survival mode – studying and working to support their families,” he explains.

The pandemic intensified these challenges. Many IT workers in Brazil were employed by government entities that were unprepared for remote work. As a result, layoffs increased. At the same time, the global shift to remote work raised the bar for both technical and linguistic qualifications.

Why Open Source Matters – Especially in Brazil

“Open source is a huge opportunity for poor people,” Otavio affirms. For developers in Brazil and similar contexts, open source offers a unique gateway to skill development, networking, and career growth. It is also one of the few avenues where economic background matters less than passion and contribution.

His first open source role did not come through traditional hiring channels, but from being noticed at a JavaOne conference. Open source became his credential, his classroom, and his bridge to the world.

Moving to Portugal: Expanding Horizons

Otavio eventually moved to Portugal, a decision driven by access to more tech conferences and a thriving open source scene. Events such as Devoxx and EclipseCon (now Open Community Experience) offered opportunities that were less accessible in Latin America. Being closer to these events made it easier for Otavio to stay engaged and connected with the global community.

How Open Source Foundations Can Help

Otavio believes that open source foundations can do more to include developers from Latin America. His recommendations include:

  • Showcasing diverse role models: Interviews like his can inspire others to see what’s possible.
  • Organising hackathons: Practical, hands-on experiences help new contributors break the ice.
  • Fostering community connections: Introductions to active contributors can help newcomers envision their own paths.
“Sometimes, all someone needs is to believe it's possible,” Otavio says.

Otavio Santana’s story is a compelling reminder of how open source can transform lives. For him, and for many who follow a similar path, it’s not just a way to write code, but a path to greater possibilities: to dream boldly, reach globally, and create a future once thought unattainable.

Let’s make sure we keep those doors open for the next Otavio.

Otavio has made significant contributions to the Java and open source ecosystems. Since Java 8, he has helped shape the direction and objectives of the Java platform as a member of the JCP Executive Committee. Additionally, he serves as a committer and leader in several open source projects and specifications, showcasing his dedication to the community.

Recognised for his impactful work, he has received numerous accolades, including all categories of the JCP Awards and the Duke’s Choice Award. He is also a distinguished Java Champion and a member of the Oracle ACE programme.

Beyond technology, he is an enthusiast for history and economics. He loves traveling, programming, and learning languages. He speaks Portuguese, English, Spanish, Italian, and French fluently and has a particular talent for dad jokes.

Tatjana Obradovic

by Tatjana Obradovic at June 30, 2025 05:12 PM

Vulnerability in Eclipse Open VSX Registry extension publication process

June 27, 2025 08:15 AM

On May 4th, the Eclipse Foundation (EF) Security Team received a notification from researchers at Koi Security regarding a potential issue in the Eclipse Open VSX marketplace extension publication process. The EF Security Team immediately contacted the Eclipse Open VSX team, and upon confirming the issue, work on a fix was promptly initiated.

Following several iterations and thorough testing (necessary due to the intrusive nature of the change to the extension build process), the fix was successfully deployed on June 24th.

Eclipse Open VSX Registry

We would like to thank the researchers for reporting the issue, reviewing the proposed fixes, and supporting the resolution process, as well as the members of the Eclipse Open VSX team who were involved.

The researchers have published their findings on Koi Security’s blog, providing further insight into the issue. Additionally, we have published CVE-2025-6705 to track and document this vulnerability.

A more detailed technical security advisory will be published in the coming days.

Eclipse Open VSX has grown in popularity in recent months, and we’re grateful to independent researchers for their investigation and responsible disclosure. We encourage all projects that depend on Eclipse Open VSX to consider contributing to or financially supporting the initiative.


June 27, 2025 08:15 AM
