March 08, 2026
Performance of Lazy and Eager Iteration Patterns on Small Lists in Java
by Donald Raab at March 08, 2026 09:56 PM
Exploring a blind spot in my Eclipse Collections performance benchmarks.
Performance matters, right?
I’ve written and run a lot of performance benchmarks over the years comparing various collections frameworks in Java to Eclipse Collections. My general feeling after years of doing this is that micro-benchmarks don’t really matter for most developers. The only performance benchmarks that matter are the ones developers can measure in their own applications with real data. If you see a performance problem, and it is due to your collections framework, then you can go fix it. Otherwise, ignorance is bliss.
For Eclipse Collections, the performance benchmarks we write and run are primarily intended for one purpose: we want to try to match the performance of a hand-written for-loop for the named general iteration patterns. When we do this effectively, developers can worry less about a potential performance bottleneck and focus on the readability of their code. Note: it is hard to match the performance of a hand-written for-loop for a specific use case. When performance is drop-dead critical, write the for-loop.
We have tried to make sure the performance of Stream when used with Eclipse Collections types is comparable to similar types in Java Collections. This is why, long ago and before we baselined development on Java 8 and could override the spliterator() method, I advocated for the introduction of RandomAccessSpliterator as the default Spliterator for RandomAccess lists, so parallel streams would work well with Eclipse Collections types. I have also been advocating more recently for ListN in Java Collections to use ArraySpliterator instead of RandomAccessSpliterator. You can read more about this in the following blog.
Spliterating Hairs Results in Spliterating Deja Vu
Blind Spots
I’ve worked in large memory spaces where Java heaps have millions of collections and collections sometimes have millions of objects in them. This is the space where Eclipse Collections was born. I faced in-memory scaling challenges two decades ago that many developers will never encounter in their careers. We’ve spent a lot of time optimizing two ends of the memory spectrum… small collections and large collections. Users of Eclipse Collections benefit from the library having faced these challenges, even if they never have to. We’ve focused primarily on static footprint of collections, not dynamic footprint of things like Java Stream and Eclipse Collections LazyIterable. I’ve measured the performance of both over the years, and have even reported bugs to the JDK for parallel streams that were eventually fixed and resulted in RandomAccessSpliterator being added to the JDK.
I’ve never looked at the memory footprint of a Stream or a LazyIterable before. These are transient short-lived objects so they never registered as being a particularly interesting place to focus. That was until this past week, when I was audibly surprised after taking a look.
Developers today may think memory is free. Garbage collection is fast. Creating objects is blindingly fast. While these may be true statements, there is another truth that becomes important in high performance applications. The fastest garbage to collect is the garbage you don’t create. The cost of a single small object is nothing. Throw object creation in a tight loop with millions of iterations, and all of a sudden super fast creations might register as something measurable.
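To make the tight-loop point concrete, here is a minimal, hypothetical sketch in plain JDK Java (names and sizes are illustrative, not from the benchmarks below): each call to the Stream version builds a new pipeline object before touching a single element, while the hand-written loop allocates nothing per call. In a hot path invoked millions of times, only one of these generates garbage.

```java
import java.util.List;

public class TightLoopGarbage
{
    // Hypothetical hot path, called millions of times.
    // Each call allocates a fresh Stream pipeline before any element
    // is processed.
    static long countEvensWithStream(List<Integer> list)
    {
        return list.stream().filter(i -> i % 2 == 0).count();
    }

    // The hand-written loop allocates nothing per call.
    static long countEvensWithLoop(List<Integer> list)
    {
        long count = 0;
        for (int i = 0; i < list.size(); i++)
        {
            if (list.get(i) % 2 == 0)
            {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args)
    {
        List<Integer> small = List.of(1, 2, 3, 4, 5);
        // Same answer either way; only the per-call garbage differs.
        System.out.println(countEvensWithStream(small)); // 2
        System.out.println(countEvensWithLoop(small));   // 2
    }
}
```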
The Memory
I wrote a blog in the past couple of days that measures the memory cost of a Stream and LazyIterable for solving the same Stream.filter().map().sum() problem.
TL;DR — A Stream.filter().map() instance will cost you 200 bytes. A LazyIterable.select().collect() instance will cost you 48 bytes. They solve the same problem. This measurement was taken for an empty list, but I believe the size of the collection doesn’t matter. The memory cost of a Stream pipeline will vary based on the number of steps in the pipeline.
The Performance for Small Lists
I decided to measure the performance of Lists with sizes 1, 5, 10, 50, and 100. What I was hoping to see was whether the performance cost of creating 200 bytes for a Stream vs. 48 bytes for a LazyIterable would register as significant, and at what size this cost would mostly disappear. I already knew that Stream and LazyIterable are comparable at larger sizes. Sometimes one does better than the other for various reasons and optimizations.
Unit Tests
I wrote some unit tests to capture the behavior and make sure the results are the same. I added them to the repo for the Refactoring to Eclipse Collections talk Vladimir Zakharov and I gave at the dev2next conference last year.
Benchmark Code
I added the JMH benchmarks, which match the unit tests, to the same repo.
I wrote two sets of tests (filter().count() and filter().map().sum()) with four variations — Stream, LazyIterable, Eager, and Eager Optimized.
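The shapes of the two problems can be sketched in plain JDK Java (a simplified illustration, not the actual benchmark code from the repo): a Stream pipeline vs. a single-pass eager loop, with the Eclipse Collections lazy and eager equivalents noted in comments.

```java
import java.util.List;

public class BenchmarkShapes
{
    // Stream variant of the filter().map().sum() problem.
    static long streamFilterMapSum(List<Integer> list)
    {
        return list.stream()
                .filter(i -> i % 2 == 0)
                .mapToLong(i -> i * 10L)
                .sum();
    }

    // Hand-written eager equivalent: one pass, no intermediate collections.
    // In Eclipse Collections, the lazy form is asLazy().select().collect()
    // followed by a sum, and the fused eager form folds the steps into a
    // single call such as sumOfLong() or count(predicate).
    static long eagerFilterMapSum(List<Integer> list)
    {
        long sum = 0;
        for (Integer i : list)
        {
            if (i % 2 == 0)
            {
                sum += i * 10L;
            }
        }
        return sum;
    }

    public static void main(String[] args)
    {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);
        System.out.println(streamFilterMapSum(numbers)); // 60
        System.out.println(eagerFilterMapSum(numbers));  // 60
    }
}
```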
The Memory Cost
I wrote memory benchmarks for Stream.filter() / LazyIterable.select() and Stream.filter().map() / LazyIterable.select().collect().
The Results
I’ve included two charts here for the two sets of tests, for all five list sizes, and the four variations. The results measure throughput, as operations per millisecond, so bigger is better. I ran these tests on my MacBook Pro, Apple M2 Max (12 core — 8 performance, 4 efficiency), 96GB RAM. As these are simple serial tests on small collections, the hardware was probably overkill. You can try the source on your own hardware if you like. Let me know if you see any glaring mistakes.
Performance for lazy and eager Stream.filter().count() equivalents
There was some error noise (± 2956.590) in the EC Eager case for 50 elements. My expectation is that it should be faster than the EC Eager case at 100 elements.
Performance for lazy and eager Stream.filter().map().sum() equivalents
The results in text form:
Benchmark (size) Mode Cnt Score Error Units
SmallStreamLazyIterableBenchmark.arrayListStreamFilterCount 1 thrpt 20 57831.926 ± 701.235 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterCount 5 thrpt 20 49273.490 ± 356.967 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterCount 10 thrpt 20 41765.612 ± 171.564 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterCount 50 thrpt 20 20407.347 ± 97.557 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterCount 100 thrpt 20 12079.253 ± 195.845 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterMapSum 1 thrpt 20 31434.719 ± 630.681 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterMapSum 5 thrpt 20 25511.358 ± 327.233 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterMapSum 10 thrpt 20 23800.233 ± 255.074 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterMapSum 50 thrpt 20 14596.441 ± 141.447 ops/ms
SmallStreamLazyIterableBenchmark.arrayListStreamFilterMapSum 100 thrpt 20 9389.245 ± 62.915 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerCount 1 thrpt 20 571495.288 ± 2672.093 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerCount 5 thrpt 20 213158.181 ± 680.434 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerCount 10 thrpt 20 118168.228 ± 335.752 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerCount 50 thrpt 20 25864.666 ± 81.437 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerCount 100 thrpt 20 12864.977 ± 101.273 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectCollectSum 1 thrpt 20 573216.374 ± 2188.233 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectCollectSum 5 thrpt 20 16134.291 ± 526.285 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectCollectSum 10 thrpt 20 11635.596 ± 1355.091 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectCollectSum 50 thrpt 20 8271.333 ± 26.414 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectCollectSum 100 thrpt 20 4357.413 ± 22.928 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectSize 1 thrpt 20 574042.258 ± 1822.863 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectSize 5 thrpt 20 118128.025 ± 411.691 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectSize 10 thrpt 20 64753.103 ± 115.509 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectSize 50 thrpt 20 4801.717 ± 2956.590 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSelectSize 100 thrpt 20 6265.404 ± 103.809 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSumOfLong 1 thrpt 20 573842.288 ± 2260.326 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSumOfLong 5 thrpt 20 235113.150 ± 8952.776 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSumOfLong 10 thrpt 20 137508.472 ± 752.269 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSumOfLong 50 thrpt 20 30764.535 ± 81.984 ops/ms
SmallStreamLazyIterableBenchmark.fastListEagerSumOfLong 100 thrpt 20 15468.126 ± 33.498 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectCollectSum 1 thrpt 20 575465.394 ± 1774.711 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectCollectSum 5 thrpt 20 234869.908 ± 650.197 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectCollectSum 10 thrpt 20 134467.837 ± 301.998 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectCollectSum 50 thrpt 20 30285.131 ± 98.387 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectCollectSum 100 thrpt 20 15318.721 ± 45.209 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectSize 1 thrpt 20 573569.432 ± 2303.505 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectSize 5 thrpt 20 216884.033 ± 3666.715 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectSize 10 thrpt 20 119465.163 ± 323.177 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectSize 50 thrpt 20 26293.923 ± 74.341 ops/ms
SmallStreamLazyIterableBenchmark.fastListLazyIterableSelectSize 100 thrpt 20 13202.057 ± 35.595 ops/ms
Lazy or Eager?
The answer is yes. I’ve written other blogs that give advice on when to use lazy vs. eager iteration patterns. Lazy is a nice optimization as the size of your collections grows, removing the need to create extra temporary collections in a multi-step pipeline. As you can see from the results, though, where there are eager patterns there are sometimes optimized eager/fused patterns that combine multiple steps of a pipeline into a single call. I’m not going to explain much more about this in this blog. For further reading, and if you’re interested in discussions of parallel and primitive iteration patterns, read the following blog.
The 4 am Jamestown-Scotland ferry and other optimization strategies
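The fusion idea can be sketched in plain JDK Java (a simplified illustration; the method names mirror, but are not, the Eclipse Collections API): the unfused form materializes a temporary list just to take its size, while the fused form filters and counts in one pass with no temporary collection.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class FusedVsUnfused
{
    // Unfused eager pattern: select() materializes a temporary list,
    // then size() is called on it. (Analogous to select(pred).size()
    // in Eclipse Collections.)
    static <T> int selectThenSize(List<T> list, Predicate<T> predicate)
    {
        List<T> selected = new ArrayList<>();
        for (T each : list)
        {
            if (predicate.test(each))
            {
                selected.add(each);
            }
        }
        return selected.size();
    }

    // Fused eager pattern: the filter and the count happen in one pass
    // with no temporary collection. (Analogous to count(pred).)
    static <T> int count(List<T> list, Predicate<T> predicate)
    {
        int count = 0;
        for (T each : list)
        {
            if (predicate.test(each))
            {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args)
    {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5);
        System.out.println(selectThenSize(numbers, i -> i > 2)); // 3
        System.out.println(count(numbers, i -> i > 2));          // 3
    }
}
```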
TL;DR? Here’s the summary for the folks who need to worry about performance.
From “The 4am Jamestown-Scotland ferry and other optimization strategies” blog
Final Thoughts
Performance only matters when you need it and don’t have it. It helps to know you have options. We all have blind spots when it comes to performance. I have focused most of my concerns in Eclipse Collections over the years on reducing memory, avoiding unnecessary garbage creation, and API readability. Good performance results usually happen as a side-effect. When we can reliably measure a significant memory or performance issue, we try and fix it. Generating 48 bytes for a LazyIterable on an empty list feels significant enough to me.
I did not measure the performance of empty here. I think we can extrapolate from the performance of single element collections that creating the garbage unnecessarily is going to far outweigh the performance cost of doing nothing on an empty collection.
As I said in the “Empty Should be Empty” blog, I will be looking to see if we can optimize for the empty case by returning a singleton LazyIterable that doesn’t generate new garbage on every call and does the right thing for an empty LazyIterable, which is nothing.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
March 07, 2026
Empty Should be Empty
by Donald Raab at March 07, 2026 04:05 AM
Why does empty cost so much in Java sometimes?
Back to the Future
In 2012 at the JVM Language Summit, I gave a talk on designing a collections framework in Java. One of the memory tuning tricks I shared with the audience in my talk was “Empty should be empty.”
JVMLS 2012 — Slide 10 of “A Java collections framework design”
I was not the only developer with this idea. The FastList optimization for empty served as confirmation for the core JDK team that optimizing ArrayList, HashSet, and HashMap for empty was a good idea. The optimization for lazily initializing ArrayList wound up being introduced in a point release of Java 7. This has been a great memory-savings strategy for Java developers for over a decade.
Sometimes Empty is not Empty
I came across a surprise this week, that is motivating me to look into an optimization for Eclipse Collections. I hope some clever optimization might be possible for Java Collections and Streams as well.
I have been writing blogs about some memory savings that are possible by enabling Compact Object Headers in Java 25 with Eclipse Collections.
One Positive Effect of Java 25 with Compact Object Headers Enabled
I randomly decided this week to test something I have never looked at before. I wanted to know how the memory footprint for a Java Stream compared to the footprint of a LazyIterable from Eclipse Collections. I’ve compared the performance of both before, mostly for large collections, but I’ve never looked at the memory footprint before. I used Java Object Layout (JOL) with Java 25 and Compact Object Headers (COH) enabled to measure a Stream from an empty ArrayList, and a LazyIterable from an empty FastList. An empty ArrayList costs 24 bytes with COH enabled. An empty FastList costs 16 bytes with COH enabled. Empty should be the ideal case for both Stream and LazyIterable.
Hold onto your bytes
Here is some code using JOL that shows the cost of an empty ArrayList compared to a Stream created by calling stream() on an empty ArrayList.
@Test
public void emptyArrayListAndStream()
{
    // An empty ArrayList costs 40 bytes:
    // 24 bytes for ArrayList + 16 bytes for the singleton empty array
    assertEquals(
            40, GraphLayout.parseInstance(new ArrayList<>()).totalSize());
    assertEquals(
            24, ClassLayout.parseInstance(new ArrayList<>()).instanceSize());

    Stream<?> stream =
            new ArrayList<>().stream()
                    .filter(each -> true)
                    .map(each -> each);

    // An empty filter/map Stream costs 240 bytes.
    // Cost includes the empty ArrayList and singleton empty array at 40 bytes
    assertEquals(
            240, GraphLayout.parseInstance(stream).totalSize());
}
Holy $h!t!
An empty Stream costs 240 bytes. If we subtract the ArrayList instance cost from the Stream, it still costs 200 bytes. That’s a lot of bytes for empty.
Here is some code using JOL that shows the cost of an empty FastList compared to a LazyIterable created by calling asLazy() on an empty FastList.
@Test
public void emptyFastListAndLazyIterable()
{
    // An empty FastList costs 32 bytes:
    // 16 bytes for FastList + 16 bytes for the singleton empty array
    assertEquals(
            32, GraphLayout.parseInstance(new FastList<>()).totalSize());
    assertEquals(
            16, ClassLayout.parseInstance(new FastList<>()).instanceSize());

    LazyIterable<?> lazyIterable =
            new FastList<>().asLazy()
                    .select(each -> true)
                    .collect(each -> each);

    // An empty select/collect LazyIterable costs 80 bytes.
    // Cost includes the empty FastList and singleton empty array at 32 bytes
    assertEquals(
            80, GraphLayout.parseInstance(lazyIterable).totalSize());
}
An empty LazyIterable costs 80 bytes. If we subtract the FastList instance from the LazyIterable cost, it costs 48 bytes. This is much more reasonable than Stream, but is still more expensive than the empty FastList.
Investigating the not so Empty
If we use JOL GraphLayout to look at what is in the empty Stream instance footprint, we see the following:
java.util.stream.ReferencePipeline$3@3e44f2a5d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
1 24 24 java.util.ArrayList$ArrayListSpliterator
1 56 56 java.util.stream.ReferencePipeline$2
1 56 56 java.util.stream.ReferencePipeline$3
1 48 48 java.util.stream.ReferencePipeline$Head
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00000f00012c3800
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00000f00012c3c00
8 240 (total)
If we compare this to LazyIterable, we see the following:
org.eclipse.collections.impl.lazy.CollectIterable@71e9ebaed footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 16 16 org.eclipse.collections.impl.lazy.CollectIterable
1 16 16 org.eclipse.collections.impl.lazy.SelectIterable
1 16 16 org.eclipse.collections.impl.list.mutable.FastList
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x000000ff01299400
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x000000ff0129ac00
6 80 (total)
I understand if you need to sit down now after seeing this. I needed to sit down after seeing this. How is it possible that I have missed this for the past 14 years?!?!?!
Both lists are empty, and the Stream instance takes an order of magnitude more memory. If you’re still not sitting, let me initialize both lists with a 10-element array, which is what would have happened prior to the lazy-initialization optimization I started this blog off with.
Here is the Stream for a pre-sized ArrayList of size 10. The ArrayList is 80 bytes. The Stream overhead, at 200 bytes, is still larger than the ArrayList itself.
java.util.stream.ReferencePipeline$3@3e44f2a5d footprint:
COUNT AVG SUM DESCRIPTION
1 56 56 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
1 24 24 java.util.ArrayList$ArrayListSpliterator
1 56 56 java.util.stream.ReferencePipeline$2
1 56 56 java.util.stream.ReferencePipeline$3
1 48 48 java.util.stream.ReferencePipeline$Head
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00001000012c3800
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00001000012c3c00
8 280 (total)
Here is the LazyIterable for a pre-sized FastList of size 10. The FastList is 72 bytes. The LazyIterable overhead, at 48 bytes, is now less expensive than the FastList itself.
org.eclipse.collections.impl.lazy.CollectIterable@71e9ebaed footprint:
COUNT AVG SUM DESCRIPTION
1 56 56 [Ljava.lang.Object;
1 16 16 org.eclipse.collections.impl.lazy.CollectIterable
1 16 16 org.eclipse.collections.impl.lazy.SelectIterable
1 16 16 org.eclipse.collections.impl.list.mutable.FastList
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00001ff801299400
1 8 8 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00001ff80129ac00
6 120 (total)
While I’m happy to see LazyIterable is a lot lighter than Stream, I am unhappy with both Stream and LazyIterable for empty collections.
Empty Should and Maybe Can be Empty
It should be possible for an empty FastList to return a singleton instance of an empty LazyIterable. This is what I would refer to as a no-brainer “empty should be empty” optimization.
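A minimal sketch of the idea in plain JDK Java (this is not the actual Eclipse Collections implementation, just the shape of the optimization): an empty list hands out one shared, stateless lazy view instead of allocating a new wrapper per call.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EmptyLazySketch
{
    // One immutable, stateless instance shared by every empty list.
    static final Iterable<Object> EMPTY_LAZY = Collections::emptyIterator;

    @SuppressWarnings("unchecked")
    static <T> Iterable<T> asLazy(List<T> list)
    {
        // Hypothetical fast path: no allocation when the source is empty.
        if (list.isEmpty())
        {
            return (Iterable<T>) EMPTY_LAZY;
        }
        return list; // real code would wrap the list in a lazy adapter here
    }

    public static void main(String[] args)
    {
        // Two asLazy() calls on empty lists return the same instance:
        // zero new garbage per call.
        System.out.println(
                asLazy(List.of()) == asLazy(new ArrayList<>())); // prints true
    }
}
```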
I am not sure a similar singleton approach would be possible for a Stream, because a Stream can only be used once. Some other optimization might be possible, however, to create a lighter-weight Stream for empty collections. Note: I updated this blog after I published it to reflect this, because I initially thought this would be a good optimization for empty Java Collections and Stream as well.
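The single-use constraint is easy to demonstrate: a terminal operation consumes the pipeline, so a shared singleton Stream would fail on its second use, even when the stream is empty.

```java
import java.util.List;
import java.util.stream.Stream;

public class StreamSingleUse
{
    public static void main(String[] args)
    {
        Stream<Integer> empty = List.<Integer>of().stream();
        empty.count(); // the first terminal operation consumes the stream

        try
        {
            empty.count(); // a second use fails, even on an empty stream
        }
        catch (IllegalStateException e)
        {
            // "stream has already been operated upon or closed"
            System.out.println("reuse rejected");
        }
    }
}
```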
Update: Here’s some good news for Stream and Compact Object Headers (COH). After I published this blog, I corrected my initial mistake of thinking an empty singleton Stream might be a possibility. Then I realized there is something else worth showing. This is the memory footprint reported by JOL without Compact Object Headers enabled. So if you’re wondering whether the COH feature will be helpful, this might be some more evidence for you.
java.util.stream.ReferencePipeline$3@3224a577d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
1 32 32 java.util.ArrayList$ArrayListSpliterator
1 56 56 java.util.stream.ReferencePipeline$2
1 56 56 java.util.stream.ReferencePipeline$3
1 56 56 java.util.stream.ReferencePipeline$Head
1 16 16 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00000ff8011a6330
1 16 16 refactortoec.generation.GenerationMemoryTest$$Lambda/0x00000ff8011a6588
8 272 (total)
The cost of a Stream compared to a LazyIterable is stark in comparison. I suspect this might show up in performance comparisons of small lists using Stream and LazyIterable. While I am not going to write any microbenchmarks for this here, I may explore this in a separate blog.
I will be looking to optimize this for asLazy() in Eclipse Collections for all empty List, Set, Stack, Bag types.
Thanks for reading!
March 05, 2026
Eclipse Foundation showcases open source innovation at embedded world 2026; Releases 2025 IoT and Embedded Survey Report
by Natalia Loungou at March 05, 2026 12:00 PM
BRUSSELS - March 5, 2026 - The Eclipse Foundation, one of the world’s largest open source software foundations, will highlight the accelerating role of open source in embedded systems and IoT at embedded world 2026, taking place March 10–12 in Nuremberg, Germany. Exhibiting in Hall 4, Booth 4-554, the Foundation will present a comprehensive portfolio of technologies spanning industrial IoT, automotive software, edge AI, real-time operating systems, and regulatory alignment for software development.
At the event, the Eclipse Foundation and its members will demonstrate how collaborative, vendor-neutral open source initiatives are enabling secure, scalable, and interoperable embedded solutions across industries.
“Open source is driving the next generation of embedded and IoT innovation,” said Mike Milinkovich, Executive Director of the Eclipse Foundation. “From silicon and real-time operating systems to AI-enabled development tools and software-defined vehicles, open collaboration enables secure, interoperable technologies that industries worldwide can adopt with confidence.”
Featured Innovations
Key initiatives on display include:
OpenHW Foundation and Eclipse ThreadX
Open source embedded innovation spanning silicon, processor architectures, and real-time operating systems.
AI-enabled development tools
Trusted open source development tools, IDEs, and extensible platforms that support modern embedded software development.
Software Defined Vehicle (SDV)
Open collaboration advancing next-generation automotive software architectures.
Open Regulatory Compliance (ORC)
A community-driven initiative advancing practical approaches to Cyber Resilience Act readiness and evolving regulatory requirements.
Joining the Eclipse Foundation at the booth, members Eurotech and Codethink will present live demonstrations and share domain expertise.
Enterprise-Grade Edge AIoT by Eurotech will feature advanced edge computing and AIoT solutions designed for industrial deployments.
"We’re excited to join the Eclipse Foundation booth at Embedded World 2026,” expressed Robert Andres, Head of Partner Ecosystem at Eurotech. “It provides the perfect occasion to demonstrate that ‘secure, enterprise-grade’ and ‘open source’ are not a contradiction. We do this by showing Eclipse IoT projects at the core of critical infrastructure projects in energy distribution and we are available to discuss how Eurotech, with its commercial offering based on the Eclipse Kura project, as an important aspect of achieving compliance with CRA, NIS2 or IEC 62443, is addressing cybersecurity requirements like SBOM and vulnerability management with fundamental support from the Eclipse Foundation.”
Codethink’s Trust Evidence initiative will demonstrate practical approaches to software assurance and trustworthiness in safety- and security-critical systems.
"Embedded systems don’t just need code that runs, they need software you can prove,” said John Ellis, President & Head of Product at Codethink. “Open collaboration and transparent workflows let teams ship not only features, but evidence: provenance, repeatability, and security that holds up over years, not just release cycles.”
2025 IoT and Embedded Developer Survey Report
In conjunction with embedded world 2026, the Eclipse Foundation will release the 2025 IoT and Embedded Developer Survey Report.
Developed in collaboration with the Eclipse IoT, Sparkplug, and Oniro Working Groups, as well as the Eclipse ThreadX project, the Software Defined Vehicle initiative, and the OpenHW Foundation, the report delivers data-driven insight into the trends and technologies shaping global IoT and embedded development.
This year’s findings examine:
- The increasingly complex regulatory environment
- Supply chain disruptions and their impact on product development
- Developer and engineering decision-maker technology preferences
- The growing role of open source in enabling security, compliance, and digital sovereignty
The complete report is available for download here.
Conference participation
Beyond the exhibition floor, the Eclipse Foundation and its members will contribute to the embedded world conference program.
On Tuesday, March 10, from 10:00 to 12:45, Frédéric Desbiens, Head of IoT and Embedded Solutions at the Eclipse Foundation, will moderate expert sessions in Track 5.4, Software Architectures, featuring speakers from Synaptrix Technologies, ETAS (Bosch), and Elektrobit Automotive. Detailed session information is available here.
Representatives from the OpenHW Foundation, Eclipse ThreadX, the Software Defined Vehicle initiative, and Eclipse IoT Working Group member Eurotech will be available at Booth 4-554 to discuss the survey findings and demonstrate the technologies shaping the future of IoT and embedded systems.
About the Eclipse Foundation
The Eclipse Foundation provides a global community of individuals and organisations with a vendor-neutral, business-friendly environment for open source collaboration and innovation. We host Adoptium, the Eclipse IDE, Jakarta EE, Open VSX, Software Defined Vehicle, and more than 400 high-impact open source projects. Headquartered in Brussels, Belgium, we are an international non-profit association supported by over 300 members. Our events, including Open Community Experience (OCX), bring together developers, industry leaders, and researchers from around the world. To learn more, follow us on X and LinkedIn, or visit eclipse.org.
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 -70/ -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
March 03, 2026
Open VSX Registry surpasses 300 million monthly downloads, as industry leaders back critical developer infrastructure
by Natalia Loungou at March 03, 2026 02:00 PM
BRUSSELS - March 3, 2026 - The Eclipse Foundation today announced major security and infrastructure advancements to the Open VSX Registry as it surpasses 300 million downloads per month, underscoring its role as critical infrastructure for AI-native and cloud-based development platforms.
The Open VSX Registry is the vendor-neutral extension registry for tools built on the VS Code™ extension API. It powers a growing ecosystem of AI-enabled and cloud-based developer platforms, including Amazon’s Kiro, Google’s Antigravity, Cursor, IBM’s Bob, VSCodium, Windsurf, Ona (formerly Gitpod), and others.
With peak daily traffic exceeding 50 million requests and more than 10,000 extensions from over 6,500 publishers, Open VSX has become a critical production dependency for developer platforms serving millions of users worldwide. As adoption deepens across production workflows, major commercial adopters are contributing to efforts that strengthen Open VSX’s security, reliability, and long-term sustainability.
“Open VSX has evolved into foundational infrastructure for the global developer ecosystem,” said Mike Milinkovich, Executive Director of the Eclipse Foundation. “As adoption accelerates across AI-native and cloud-based development platforms, we are investing to ensure the registry remains secure, resilient, and vendor-neutral. Support from leading commercial adopters reinforces Open VSX as trusted, shared infrastructure.”
Industry leaders invest in sustaining shared infrastructure
At this level of usage, reliability and shared accountability are no longer optional. As AI-native IDEs and cloud development environments scale rapidly, extension distribution has become mission-critical infrastructure rather than a background service.
Amazon Web Services (AWS) has made a strategic investment to strengthen the reliability, performance, and security of open infrastructure operated by the Eclipse Foundation, including the Open VSX Registry. AWS support accelerates improvements in traffic management, malware detection, and platform resilience.
Cursor, one of the fastest-growing AI-native developer environments, is also supporting Open VSX as its extension traffic continues to grow, underscoring the registry’s role in modern development workflows.
Together, these investments reflect a broader industry shift toward shared responsibility for the sustainability of critical open source infrastructure.
Proactive supply chain protection
Sustaining trust requires both technical safeguards and operational investment. The Open VSX Registry has introduced a new pre-publication verification framework to identify security risks before extensions are published.
The framework enables the registry to:
- Detect namespace impersonation and extension name spoofing
- Flag exposed credentials or embedded secrets
- Scan for known malicious patterns
- Quarantine suspicious uploads for review prior to publication
The system is designed to evolve alongside emerging threat models. The registry is also implementing responsible rate limiting and traffic management to ensure sustainable growth and consistent availability during periods of elevated demand. Rate limiting is targeted at sustained, high-volume automated traffic. For the vast majority of developers and open source projects, normal usage will remain unchanged.
Infrastructure designed for reliability at global scale
Security alone is not enough. Platform resilience must scale alongside demand. To support continued growth, the Open VSX Registry is transitioning to a hybrid, multi-region architecture. Core services will operate in AWS in Europe as the primary production environment, with a fully operational on-premises deployment in Canada maintained as an independent secondary environment.
All registry data, backups, and telemetry remain within these regions and are encrypted in transit and at rest, reinforcing vendor-neutral extension distribution aligned with global expectations for trust and operational independence.
This architecture reduces single points of failure and strengthens resilience for the expanding ecosystem of AI-native and cloud-based development platforms that rely on the Open VSX Registry.
Open VSX will be a Platinum Sponsor and active participant at OCX 2026, the Eclipse Foundation’s flagship developer conference, April 21–23, 2026, in Brussels. Additional updates and ecosystem developments will be shared at the event. Registration for OCX is now open. Developers, contributors, and ecosystem partners are encouraged to attend sessions and join discussions shaping the future of open developer tooling and infrastructure, including the latest advances in AI-integrated developer environments.
Trusted, open infrastructure requires both institutional support and community contribution. Organisations that depend on the Open VSX Registry are encouraged to participate through sponsorship or membership, and developers are invited to contribute to the project and help shape its future.
About the Open VSX Registry
The Open VSX Registry is the open, vendor-neutral extension registry for tools built on the VS Code™ extension API. Governed transparently under the Eclipse Foundation, it provides developers, publishers, and platform builders with a trusted open alternative to proprietary extension marketplaces. Because Eclipse Open VSX is open source and self-hostable, organisations may also deploy their own internal registry implementations as needed.
About the Eclipse Foundation
The Eclipse Foundation provides a global community of individuals and organisations with a vendor-neutral, business-friendly environment for open source collaboration and innovation. We host Adoptium, the Eclipse IDE, Jakarta EE, Open VSX, Software Defined Vehicle, and more than 400 high-impact open source projects. Headquartered in Brussels, Belgium, we are an international non-profit association supported by over 300 members. Our events, including Open Community Experience (OCX), bring together developers, industry leaders, and researchers from around the world. To learn more, follow us on X and LinkedIn, or visit eclipse.org.
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 -70/ -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
March 02, 2026
2026 Eclipse Foundation Board Election Results
by Gesine Freund at March 02, 2026 03:45 PM
The Eclipse Foundation would like to thank everyone who participated in this year’s election process and is pleased to announce the results of the 2026 Eclipse Foundation Contributing Member and Committer Member elections for representatives to the foundation’s board. These positions are a vitally important part of the Eclipse Foundation's governance.
Hendrik Ebbers and Johannes Matheis are returning, and Antonio J. Jara will be joining as the Contributing Member representatives. Ed Merks and Shelley Lambert will be returning, and Sebastian Schildt will be joining as the Committer Member representatives. Congratulations! We're looking forward to working with them on the Board, effective 1 April 2026.
We thank Christophe Biquard, Angelo Corsaro, Matthew Khouzam, Manoj Nalledathu Palat, and Carlo Piana for running in this year’s election.
We also thank Matthew Khouzam for his many years of service to the Eclipse Foundation Board.
February 26, 2026
Eclipse Foundation unveils full agenda for OCX 2026
by Natalia Loungou at February 26, 2026 12:00 PM
BRUSSELS - February 26, 2026 - The Eclipse Foundation, one of the world’s largest open source software organisations, has announced the full agenda for Open Community Experience (OCX 2026), taking place 21–23 April 2026 at The EGG conference centre in Brussels, Belgium.
OCX is the Foundation’s flagship conference and one of Europe’s leading gatherings for open source professionals, featuring nearly 150 sessions across six thematic tracks and five collocated communities. Over three dynamic days, developers, architects, researchers, and industry leaders will dive into the technologies, standards, and collaboration models shaping the next era of open innovation.
Following a successful debut in 2024, OCX 2026 returns with a significantly expanded technical program, growing from three to five collocated communities and increasing session volume by more than 35 percent. Each community offers deep domain expertise while connecting attendees to the broader open source ecosystem.
“Like the Eclipse Foundation itself, OCX 2026 continues to grow in scope, diversity, and depth,” said Mike Milinkovich, executive director of the Eclipse Foundation. “As AI reshapes software development, software-defined vehicles transform mobility, and new regulations redefine accountability, open collaboration has never been more essential. This year’s agenda reflects the real challenges and real opportunities facing our community.”
What developers will find at OCX 2026
OCX 2026 delivers three days of practical engineering insight, strategic perspective, and meaningful community-driven collaboration.
Main OCX program: Cross-domain innovation in action
The core program brings together leading practitioners to address software security, enterprise Java evolution, embedded and edge computing, cloud-native architectures, governance, and cross-industry innovation. Sessions are grounded in real-world implementation and built for professionals shipping and scaling software. Attendees leave with actionable techniques, architectural insight, and proven approaches they can apply immediately.
Five collocated events within a single conference experience
OCX 2026 features five collocated events within the conference, each offering a concentrated experience for specific technical communities while remaining fully accessible with a single pass.
- Open Community for Tooling
Formerly EclipseCon, this event continues the legacy of one of the industry’s most respected developer tooling communities. Topics include AI-assisted IDEs, modeling frameworks, language servers, next-generation workflows, and productivity engineering. If you build tools or depend on them to deliver high-performance software, this community is essential.
Speaker Highlights:
- Programming in Every Language: Building Cultural Tools with Langium - Malik Lanlokun
- AI in Action: The Ultimate Live Demo with Theia AI - Jonas Helming
- Open Community for Automotive
Open collaboration is accelerating the shift to software-defined vehicles. This event explores open standards, safety-conscious architectures, and real-world implementations. Engineers and platform architects will gain practical insight into the software foundations powering connected mobility.
Speaker Highlights:
- Diagnostics Reimagined: How Eclipse OpenSOVD Powers Open Collaboration and Standard Evolution - Thilo Schmitt & Alexander Mohr
- Fifty Shades of SDV: A Blueprint-Driven Roadmap for Orchestration Adoption - Naci Dai & Oliver Kral
- Open Community for AI
Open source is redefining how AI systems are built, validated, and governed. This event examines trustworthy AI frameworks, open model ecosystems, responsible governance strategies, data sharing and dataspaces, and production deployment lessons. Developers building production-grade AI systems will find both technical depth and forward-looking guidance.
Speaker Highlights:
- Commit to Quality: AI-Enhanced Testing in Open Source - Shelley Lambert & Longyu Zhang
- Understanding Machine Decisions - Haishi Bai
- Open Community for Compliance
With regulations such as the EU Cyber Resilience Act (CRA) raising expectations for secure software, compliance is now a core engineering concern. This event equips teams with practical strategies for security, licensing, and regulatory alignment without slowing innovation. If you deliver software into the European market, this content is critical.
Speaker Highlights:
- Taming the SBOM Chaos – A Legal Compass for the CRA and Open Source Compliance - Hendrik Schöttle
- Layered Compliance: Using the Swiss Cheese Model to Prevent Catastrophic Failure - Georg Link
- Open Community for Research
Presented in collaboration with the Apereo Foundation and academic partners, this event demonstrates how open source accelerates the path from research to scalable, real-world systems. Topics include reproducibility, open science practices, and production-ready implementations bridging academia and industry.
Speaker Highlights:
- VOStack open-source Software Stack for the virtualization of IoT devices - Anastasios Zafeiropoulos
- UniTime Overview: From Research to Practice - Tomas Muller
Featured keynote speakers
OCX 2026 brings together perspectives from high-performance sport, European digital policy, and global open source leadership.
- Ruth Buscombe - Formula 1 race strategist and F1TV analyst, Buscombe opens the conference with “The Winning Formula: What F1 Teaches Us About Marginal Gains, Teamwork, and Data-Driven Decision Making.” Her keynote connects the precision of elite motorsport with the collaborative performance culture of open source communities.
- Rolf Riemenschneider - Head of Sector IoT at the European Commission, Riemenschneider leads research and innovation under Horizon Europe, coordinating strategy for Cloud-Edge Computing and the Internet of Things. His keynote will explore Europe’s digital trajectory and the role of open technologies in strategic data spaces.
- Mike Milinkovich - Executive Director of the Eclipse Foundation since 2004, Milinkovich is a long-standing open source leader who has served on the boards of the Open Source Initiative, the OpenJDK community, and the Executive Committee of the Java Community Process. He will share insight on the evolving role of open collaboration in a rapidly shifting technology landscape.
Register now and be part of what’s next
OCX 2026 is built for developers, architects, researchers, and decision-makers driving modern software forward. One registration unlocks the entire experience, including the main program and all five collocated events.
- Review the agenda
- Plan your sessions
- Secure your place at OCX 2026 and register now.
Discounted registration rates are available until March 16. Capacity at The EGG conference centre is limited, and early registration is recommended.
Thanks to our sponsors
OCX 2026 is made possible through the generous support of our sponsors, including SAP, TypeFox, EclipseSource, Azul, Obeo, Equo Tech, ETAS, Eurotech, LG, Red Hat, KentYou, OSGi, and Mercedes-Benz Tech Innovation GmbH. Their leadership and commitment to open source innovation help drive collaboration across industries and strengthen the global open source ecosystem. We sincerely thank all of our sponsors for their partnership and commitment.
Organisations interested in supporting OCX 2026 and engaging with the global open source ecosystem are invited to contact sponsors@OCXconf.org to request sponsorship information and review the sponsorship prospectus.
February 25, 2026
Hardening the Open VSX Registry: Keeping it reliable at scale
by Denis Roy at February 25, 2026 02:45 PM
Denis Roy, Head of Information Technology, Eclipse Foundation
As the Open VSX ecosystem continues to grow, keeping the registry stable is a top priority. Behind the scenes, we are strengthening the infrastructure so that even during peak loads or major provider outages, developer workflows remain uninterrupted.
In recent posts, we shared how the Open VSX Registry is strengthening supply-chain security with pre-publish checks and introducing operational guardrails through rate limiting to scale responsibly. As adoption and usage increase, the underlying infrastructure behind those improvements becomes just as important. This post focuses on that work: improving availability, reducing single points of failure, and making recovery faster and more predictable when incidents occur.
A hybrid, fail-safe architecture
We are currently transitioning to a hybrid infrastructure model, moving core services to AWS as our primary environment, while keeping our on-premise infrastructure fully operational as a secondary site. Our primary AWS deployment is hosted in Europe, with our on-premises secondary environment located in Canada, reinforcing both operational resilience and regional diversity.
This is deliberate architectural diversity. AWS provides scale and flexibility. Our on-premise environment provides an independent fallback. If a cloud region experiences an outage, services can shift to infrastructure under our direct control.
The objective is simple: keep the registry online even when part of the underlying environment is not.
High-availability storage
Compute alone does not keep a registry running. The data must be available wherever the service is active.
As part of our infrastructure improvement plan, we are adding a dedicated fallback storage cluster and synchronizing extension binaries and metadata across locations. This reduces reliance on any single storage layer and prevents situations where one environment is healthy but lacks the data it needs.
If one storage layer becomes unreachable, the other is ready to step in.
Seeing issues before they become outages
Reducing downtime starts with visibility.
We are modernizing our observability stack across both cloud and on-prem environments, strengthening monitoring, centralized logging, and real-time alerting. This makes it easier to detect slowdowns, rising error rates, or unusual traffic patterns before they impact users.
Earlier detection leads to faster resolution and fewer user-visible incidents.
Faster recovery through clearer process
Technology improves reliability. Process makes it consistent.
We are formalizing incident response and recovery procedures for our multi-site architecture. Updated runbooks and rehearsed failover scenarios reduce mean time to recovery and remove uncertainty during high-pressure events.
When something does go wrong, clarity and speed make all the difference.
Why this work matters
The Open VSX Registry now supports a rapidly expanding ecosystem of developer platforms, CI systems, and AI-enabled tools. Growth brings higher expectations for uptime and reliability.
These infrastructure improvements are a long-term investment in keeping the Open VSX Registry stable, secure, and dependable as it scales.
Security builds trust. Operational guardrails support sustainability. Infrastructure upgrades ensure the service remains available when it matters most.
The Open VSX Registry is shared public infrastructure. Keeping it reliable requires continuous investment, thoughtful architecture, and disciplined operations. This work strengthens the registry so developers, publishers, and platform providers can rely on it with confidence, today and as the ecosystem continues to evolve.
It’s a team effort
This work reflects the effort of many people across the Eclipse Foundation and the broader Open VSX community. From the IT teams to Software Development, Security and beyond, including our community of users, developers, testers and integrators, all have contributed to making Open VSX a world‑class, high‑value extension registry that continues to grow through focused stewardship, open collaboration, and a commitment to empowering developers everywhere.
We also appreciate the collaboration of our cloud and infrastructure partners who continue to support the reliability and performance of the Open VSX Registry.
February 18, 2026
Eclipse RCP Book - Fourth Edition
by Lars at February 18, 2026 12:00 AM
I’m happy to announce the release of the fourth edition of the Eclipse RCP book, updated for Eclipse 2025-12.

Eclipse Rich Client Platform (RCP) powers some of the world’s most sophisticated desktop applications. This comprehensive guide takes you from first principles to professional application development, combining crystal-clear explanations with hands-on exercises that build a complete working application.
You’ll learn:
- Building modern UIs with SWT, JFace, and CSS styling
- OSGi modularity, dependency injection, and platform services
- Command-line builds with Maven and Tycho
- Migrating legacy Eclipse 3.x applications
Whether you’re building your first Eclipse application or modernizing existing systems, this book provides the complete roadmap. Each chapter pairs thorough explanations with detailed exercises.
This fourth edition is fully updated for Eclipse 2025-12 and reflects real-world best practices from hundreds of training sessions and production projects.
Get the book on Amazon:
February 16, 2026
What if the add and remove methods in java.util.Collection had fluent counterparts?
by Donald Raab at February 16, 2026 08:21 PM
Eclipse Collections has complements to add/remove that can be chained.
The truth behind add and remove
The add and remove methods on java.util.Collection return boolean. For add, the return value is useful for Set types but not very useful for List types: add reports whether an element was added to the collection, which is always true for a List but not necessarily for a Set. The remove method reports whether an element was removed, and behaves the same way for both Set and List.
One of the downsides of add and remove returning boolean is that they can’t be chained, and result in the need for multiple statements or static factory methods like of() on List, Set, and Map.
This is a truth that Java developers have dealt with for 28 years. So while we can’t chain add and remove methods in Java alone, Eclipse Collections provides alternatives named with and without that make chaining possible.
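To make the contrast concrete, here is a minimal sketch using only the JDK (class name is mine) showing what those boolean returns actually tell you:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AddRemoveReturns
{
    public static void main(String[] args)
    {
        List<String> list = new ArrayList<>();
        Set<String> set = new HashSet<>();

        // List.add always succeeds, so the boolean is always true
        System.out.println(list.add("Mary"));   // true
        System.out.println(list.add("Mary"));   // true (duplicates allowed)

        // Set.add reports whether the element was actually added
        System.out.println(set.add("Mary"));    // true
        System.out.println(set.add("Mary"));    // false (already present)

        // remove reports whether the element was present, for both types
        System.out.println(list.remove("Ted")); // false (never added)
        System.out.println(set.remove("Mary")); // true
    }
}
```

Because each call is a statement that yields a boolean, there is nothing to chain the next call onto.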
The fluent methods with and without
The with method in the Eclipse Collections MutableCollection interface adds the element passed as a parameter to the underlying collection, and returns the same collection. The without method in MutableCollection removes the element passed as a parameter, and also returns the same collection. Both methods mutate the underlying collection; they do not return copies of the collection.
The methods with and without are covariant. This means on MutableCollection, they return MutableCollection. On MutableList, they return MutableList. On MutableSet, they return MutableSet. Etc.
Here’s an example that compares the difference between using add and with on a java.util.List and MutableList.
@Test
public void addListVsWithMutableList()
{
var expected = Lists.immutable.of("Mary", "Ted", "Sally");
List<String> jdkList = new ArrayList<>();
jdkList.add("Mary");
jdkList.add("Ted");
jdkList.add("Sally");
Assertions.assertEquals(expected, jdkList);
MutableList<String> ecMutableList = Lists.mutable.empty();
ecMutableList.with("Mary").with("Ted").with("Sally");
Assertions.assertEquals(expected, ecMutableList);
}
Here’s an example that compares the difference between using remove and without on a java.util.List and MutableList.
@Test
public void removeListVsWithoutMutableList()
{
var expected = Lists.immutable.of("Mary", "Sally");
List<String> jdkList = new ArrayList<>();
jdkList.add("Mary");
jdkList.add("Ted");
jdkList.add("Sally");
jdkList.remove("Ted");
Assertions.assertEquals(expected, jdkList);
MutableList<String> ecMutableList = Lists.mutable.empty();
ecMutableList.with("Mary").with("Ted").with("Sally").without("Ted");
Assertions.assertEquals(expected, ecMutableList);
}
There are fluent equivalents to addAll and removeAll in the Eclipse Collections MutableCollection interface, named withAll and withoutAll.
The definitions of with, without, withAll, and withoutAll on MutableCollection are straightforward. You will notice a very subtle difference between addAll and withAll, and removeAll and withoutAll.
// Collection interface
boolean add(E e);
boolean remove(Object o);
boolean addAll(Collection<? extends E> c);
boolean removeAll(Collection<?> c);
// MutableCollection interface
MutableCollection<T> with(T element);
MutableCollection<T> without(T element);
MutableCollection<T> withAll(Iterable<? extends T> elements);
MutableCollection<T> withoutAll(Iterable<? extends T> elements);
The methods withAll and withoutAll take Iterable instead of Collection as a parameter, which makes them more useful with additional types. For instance, you can even use them with Stream, by using a little trick as shown in the following example.
@Test
public void addAllListVsWithAllMutableList()
{
Supplier<Stream<String>> one = () -> Stream.of("a", "b", "c");
Supplier<Stream<String>> two = () -> Stream.of("d", "e", "f");
Supplier<Stream<String>> three = () -> Stream.of("g", "h", "i");
var expected = List.of("a", "b", "c", "d", "e", "f", "g", "h", "i");
List<String> jdkList = new ArrayList<>();
jdkList.addAll(one.get().toList());
jdkList.addAll(two.get().toList());
jdkList.addAll(three.get().toList());
Assertions.assertEquals(expected, jdkList);
MutableList<String> ecMutableList = Lists.mutable.<String>empty()
.withAll(one.get()::iterator)
.withAll(two.get()::iterator)
.withAll(three.get()::iterator);
Assertions.assertEquals(expected, ecMutableList);
}
In the withAll examples, the Stream instances are not copied into a List before being added to the target List. Instead, each Stream is converted into an Iterable by using a method reference to Stream::iterator.
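The trick itself needs nothing beyond the JDK; a minimal, self-contained sketch (class name is mine):

```java
import java.util.stream.Stream;

public class StreamIteratorTrick
{
    public static void main(String[] args)
    {
        Stream<String> stream = Stream.of("a", "b", "c");

        // Iterable's single abstract method is iterator(), so a method
        // reference to Stream::iterator can stand in for an Iterable.
        // The stream is not copied; it is consumed as the loop iterates,
        // so the resulting Iterable can only be traversed once.
        Iterable<String> iterable = stream::iterator;

        for (String s : iterable)
        {
            System.out.println(s);
        }
    }
}
```

Note the one-shot caveat: a Stream can only be consumed once, which is why the earlier example wraps each Stream in a Supplier.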
Additional Resources
If you would like to see more examples of with, without, withAll, and withoutAll, there are a couple of blogs I would recommend. I wrote the following blog seven years ago, and it is as applicable today as it was back then.
As a matter of Factory — Part 3 (Method Chaining)
If we go back a couple of years more, I wrote a blog that explains why with and without were chosen as the preferred prepositions to pair with the factory method named of. Each person has their preposition preference, but of does not have a natural opposite the way with does.
I hope you found this short reference for the fluent with, without, withAll, and withoutAll methods in the Eclipse Collections MutableCollection interface helpful. There are more exhaustive and explanatory references in the blogs above if you are curious and want to learn more.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
February 11, 2026
Scaling the Open VSX Registry responsibly with rate limiting
February 11, 2026 08:05 PM
The Open VSX Registry has become widely used infrastructure for modern developer tools. That growth reflects strong trust from the ecosystem, and it brings a shared responsibility to keep the Registry reliable, predictable, and equitable for everyone who depends on it.
In a previous post, I shared an update on strengthening supply-chain security in the Open VSX Registry, including the introduction of pre-publish checks for extensions. This post focuses on the operational side of the same goal: ensuring the Registry remains resilient and sustainable as usage continues to grow.
The Open VSX Registry is free to use, but not free to operate
Operating a global extension registry requires sustained investment in:
- Compute and storage to serve and index extensions at scale
- Bandwidth to deliver downloads and metadata worldwide
- Security to protect users, publishers, and the service itself
- Staff to operate, monitor, secure, and support the Registry
These costs scale directly with usage.
AI-driven usage is scaling faster than ever
Demand on the Open VSX Registry is increasing rapidly, and AI-enabled development is accelerating that trend. A single developer can now orchestrate dozens of agents and automated workflows, generating traffic that previously would have required entire teams. In practical terms, that can mean the equivalent load of twenty or more traditional users, with direct impact on compute, bandwidth, storage, security capacity, and operational oversight.
This is not unique to the Open VSX Registry. It is an industry-wide challenge. Stewards of public package registries such as Maven Central, PyPI, crates.io, and Packagist have recently raised the same sustainability concerns in a joint statement on sustainable stewardship. Mike Milinkovich, Executive Director of the Eclipse Foundation, echoed that message in his post on aligning responsibility with usage.
As reliance on shared open infrastructure grows, sustaining it becomes a collective responsibility across the ecosystem.
Open VSX is critical, and often invisible, infrastructure
Many developers and organisations may not realise how often they rely on the Open VSX Registry. It provides the extension infrastructure behind a growing number of modern developer platforms and tools, including Amazon’s Kiro, Cursor, Google Antigravity, Windsurf, VSCodium, IBM’s Project Bob, Trae, Ona (formerly Gitpod), and others.
If you use one of these tools, you use the Open VSX Registry.
The Open VSX Registry remains a neutral, vendor-independent public service, operated in the open and governed by the Eclipse Foundation for the benefit of the entire ecosystem.
For developers, the expectation is simple: Open VSX should remain fast, stable, secure, and dependable as the ecosystem grows.
As more platforms and automated systems rely on the Registry, continuous machine-driven traffic can place sustained load on shared infrastructure. Without clear operational guardrails, that can affect performance and availability for everyone.
A practical step for sustainable and reliable operations
Usage has shifted from primarily human-driven access to continuous automation driven by CI systems, cloud-based tooling, and AI-enabled workflows. That shift requires operational controls that scale predictably.
Rate limiting provides a structured way to manage high-volume automated traffic while preserving the performance developers expect. It also ensures that operational decisions are based on real usage patterns and that expectations for large-scale consumption are clear and transparent.
Rate limits aren’t entirely new. Like most public infrastructure services, the Open VSX Registry has long had baseline protections in place to prevent sustained high-volume usage from degrading performance for everyone. What’s changing now is that we’re moving from a one-size-fits-all approach to defined tiers that more accurately reflect different usage patterns. This allows us to keep the Registry stable and responsive for developers and open source projects, while providing a clear, supported path for sustained at-scale consumption.
For individual developers and open source projects, day-to-day workflows remain unchanged. Publishing extensions, searching the registry, and installing tools will continue to work as they always have for typical usage.
A measured, transparent rollout
Rate limiting will be introduced incrementally, with an emphasis on platform health and operational stability.
The initial phase focuses on visibility and observation before any limits are adjusted. This includes improved insight into traffic patterns for registered consumers, baseline protections for anonymous high-volume usage, and a monitoring period before any limits are adjusted.
This work is being done in the open so the community can follow what is changing and why. Progress and discussion are tracked publicly in the Open VSX deployment issue:
https://github.com/EclipseFdn/open-vsx.org/issues/5970
What this means for the community
The goal is to keep the Open VSX Registry reliable and fair as it scales, while minimizing impact on normal use.
For most users, nothing should feel different. Developers should see little to no impact, and publishers should not experience disruption to normal publishing workflows. Sustained, high-volume automated consumers may need to coordinate with the Registry to ensure their usage can be supported reliably over time.
Organisations that depend on the Open VSX Registry for sustained or commercial-scale usage are encouraged to get in touch. Coordinating early helps us plan capacity, maintain reliability, and support the broader ecosystem. Please contact the Open VSX Registry team at infrastructure@eclipse-foundation.org.
The intent is not restriction, but clarity in support of fairness, stability, and long-term sustainability.
Looking ahead
Automation is reshaping how developer infrastructure is consumed. Responsible rate limiting is one step toward ensuring the Open VSX Registry can continue to serve the ecosystem reliably as those patterns evolve.
We will continue to adapt based on real-world usage and community input, with the goal of keeping the Open VSX Registry a dependable shared resource for the long term.
February 10, 2026
Eclipse JKube 1.19 is now available!
February 10, 2026 02:00 PM
On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.19.0 has been released and is now available from Maven Central 🎉.
Thanks to all of you who have contributed with issue reports, pull requests, feedback, and spreading the word with blogs, videos, comments, and so on. We really appreciate your help, keep it up!
What's new?
Without further ado, let's have a look at the most significant updates:
- Improved Spring Boot health probes configuration
- ECR registry authentication with AWS SDK v2
- IngressClassName support for Ingress resources
- Updated base images from UBI 8 to UBI 9
- Reduced dependencies (Guava removal)
- 🐛 Many other bug-fixes and minor improvements
Improved Spring Boot health probes configuration
This release brings several improvements for the Spring Boot health probes configuration:
- JKube now uses the correct management.endpoint.health.probes.enabled property. The previous property (management.health.probes.enabled) was deprecated in Spring Boot 2.3.2.
- Added support for the server.ssl.enabled and management.server.ssl.enabled properties to enable liveness/readiness probes for Spring Boot Actuator. This allows for easier environment-specific SSL configuration.
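For reference, a hypothetical application.properties fragment using the properties mentioned above might look like this (values are illustrative):

```properties
# Enables the Kubernetes-style liveness/readiness probe endpoints;
# replaces management.health.probes.enabled, deprecated in Spring Boot 2.3.2
management.endpoint.health.probes.enabled=true

# SSL settings that JKube can now take into account when generating probes
server.ssl.enabled=true
management.server.ssl.enabled=true
```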
ECR registry authentication with AWS SDK v2
JKube now supports Amazon ECR registry authentication using AWS SDK Java v2. This update ensures compatibility with the latest AWS SDK and provides a more robust authentication mechanism when pushing images to Amazon Elastic Container Registry.
IngressClassName support for Ingress resources
The IngressClassName field is now supported in the NetworkingV1IngressGenerator.
This is essential for Kubernetes environments with multiple ingress controllers, allowing you to specify which ingress controller should handle your Ingress resources.
Using this release
If your project is based on Maven, you just need to add the Kubernetes Maven plugin or the OpenShift Maven plugin to your plugin dependencies:
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>1.19.0</version>
</plugin>
If your project is based on Gradle, you just need to add the Kubernetes Gradle plugin or the OpenShift Gradle plugin to your plugin dependencies:
plugins {
    id 'org.eclipse.jkube.kubernetes' version '1.19.0'
}
How can you help?
If you're interested in helping out and are a first-time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.
If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag.
We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.
Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

February 05, 2026
We’re hiring: improving the services that support a global open source community
February 05, 2026 07:02 PM
The Eclipse Foundation supports a global open source community by providing trusted platforms, services, and governance. As a vendor-neutral organisation, we operate infrastructure that enables collaboration across projects, organisations, and industries.
This infrastructure supports project governance, developer tooling, and day-to-day operations across Eclipse open source projects. While much of it runs quietly in the background, it plays a critical role in the health, security, and sustainability of those projects.
We are expanding the Software Development team with two new roles. Both positions involve contributing to the design, development, and operation of services that are widely used, security-sensitive, and expected to operate reliably at scale.
Software engineer: security and detection
One of the roles is a Software Engineer position with a focus on security and detection engineering, alongside general development and operations.
This role will work on Open VSX Registry, an open source registry for VS Code extensions operated by the Eclipse Foundation. As adoption grows, maintaining the integrity and trustworthiness of the registry requires continuous analysis, detection, and operational safeguards.
In this role, you will:
- Analyse suspicious or malicious extensions and related artefacts
- Develop, test, and maintain YARA rules to detect malicious or policy-violating content
- Design, implement and contribute improvements to backend services, including new features, abuse prevention, rate-limiting, and operational safeguards
This is hands-on work that combines backend development with practical security analysis. The outcome directly improves the reliability, integrity, and operation of services that are part of the developer tooling supply chain.
For more context on this work, see my recent post on strengthening supply-chain security in Open VSX.
To apply:
https://eclipsefoundation.applytojob.com/apply/eXFgacP5SJ/Software-Engineer
Software developer: open source project tooling and services
The second role is a Software Developer position focused on improving the tools and services that support Eclipse open source projects.
This work centres on maintaining and evolving systems that our open source projects and contributors rely on every day. It includes:
- Maintaining and modernising project-facing applications such as projects.eclipse.org, built with Drupal and PHP
- Developing Python tooling to automate internal processes and improve project metrics
- Improving services written in Java or JavaScript that support project governance workflows
As with the Software Engineer role, this position involves contributing to production services. The focus is on incremental improvement, reducing technical debt, and ensuring systems remain maintainable, secure, and reliable as they evolve.
To apply:
https://eclipsefoundation.applytojob.com/apply/mvaSS7T8Ox/Software-Developer
What we are looking for
Across both roles, we are looking for people who:
- Take a pragmatic approach to problem solving
- Are comfortable working in a remote, open source environment
- Value clear documentation and thoughtful communication
- Enjoy understanding how systems work and how to improve them over time
If you are interested in working on open source infrastructure with real users and real impact, we would be happy to hear from you.
February 03, 2026
The Eclipse Foundation and ECSO formalise cooperation through new Memorandum of Understanding
by Shanda Giacomoni at February 03, 2026 02:58 PM
The European Cyber Security Organisation (ECSO) and the Eclipse Foundation have formalised a Memorandum of Understanding (MoU), establishing a framework for close cooperation between the two organisations.
February 02, 2026
WoT Tooling: Code Generation and OpenAPI from Thing Models
February 02, 2026 12:00 AM
Eclipse Ditto’s W3C WoT (Web of Things) integration lets you reference Thing Models in Thing Definitions, generate Thing Descriptions, and create Thing skeletons at runtime. Alongside that, the ditto-wot-tooling project provides build-time and CLI tools to generate Kotlin code and OpenAPI specifications from the same WoT Thing Models.
This post gives an overview of the available tools, their configuration, and some best practices so the Ditto community can use them effectively.
Overview of ditto-wot-tooling
The ditto-wot-tooling repository hosts two main tools:
| Tool | Purpose |
|---|---|
| WoT Kotlin Generator | Maven plugin that downloads a WoT Thing Model (JSON-LD) via HTTP and generates Kotlin data classes and path helpers for type-safe use in your application. |
| WoT to OpenAPI Generator | Converts WoT Thing Models into OpenAPI 3.1.0 specifications that describe Ditto’s HTTP API for Things conforming to that model. Usable as CLI or library. |
Both tools consume Thing Models from a URL (e.g. a deployed model registry). They complement Ditto’s runtime WoT support: Ditto fetches Thing Models (TMs) to build skeletons and Thing Descriptions; the tooling uses the same TMs at build time or in CI to generate client code and API docs.
WoT Kotlin Generator Maven plugin
The WoT Kotlin Generator produces Kotlin code (data classes, builders, path DSL) from a single Thing Model URL. The generated code aligns with Ditto’s API and can be used to build merge commands, RQL filters, and path references in a type-safe way.
Why use generated models?
Using the Kotlin generator gives you a single source of truth: your backend models are derived directly from the WoT Thing Models, which act as the schema for your Things. Ditto supports WoT-based validation of Things and Features against the referenced Thing Model. If you maintain models by hand, they can drift from the WoT schema and become incompatible—leading to validation errors at runtime, failed updates, or subtle bugs. Generated models stay in sync with the Thing Model you point the plugin at, so the payloads you build are valid by construction. That makes development easier, safer, and more predictable: you get compile-time safety and alignment with Ditto’s validation instead of discovering mismatches only when Ditto rejects a request.
Maven setup
Add the common-models dependency (required by the generated code) and the plugin to your pom.xml:
<dependency>
<groupId>org.eclipse.ditto</groupId>
<artifactId>wot-kotlin-generator-common-models</artifactId>
<version>1.0.0</version>
</dependency>
<plugin>
<groupId>org.eclipse.ditto</groupId>
<artifactId>wot-kotlin-generator-maven-plugin</artifactId>
<version>1.0.0</version>
<executions>
<execution>
<id>code-generator-my-model</id>
<phase>generate-sources</phase>
<goals>
<goal>codegen</goal>
</goals>
<configuration>
<thingModelUrl>${modelBaseUrl}/my-domain/my-model-${my-model.version}.tm.jsonld</thingModelUrl>
<packageName>com.example.wot.model.mymodel</packageName>
<classNamingStrategy>ORIGINAL_THEN_COMPOUND</classNamingStrategy>
</configuration>
</execution>
</executions>
</plugin>
Full plugin configuration options
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| thingModelUrl | String | Yes | - | Full HTTP(S) URL of the WoT Thing Model (JSON-LD). The plugin downloads it at build time. |
| packageName | String | No | org.eclipse.ditto.wot.kotlin.generator.model | Target package for generated Kotlin classes. |
| outputDir | String | No | target/generated-sources | Directory where generated sources are written (Maven adds it as a source root). |
| enumGenerationStrategy | String | No | INLINE | How to generate enums: INLINE (nested in the class that uses them) or SEPARATE_CLASS (standalone enum classes). |
| classNamingStrategy | String | No | COMPOUND_ALL | How to name generated classes: COMPOUND_ALL (e.g. RoomAttributes, BatteryProperties) or ORIGINAL_THEN_COMPOUND (use schema title when possible, compound only on conflict). |
| generateSuspendDsl | boolean | No | false | If true, generated DSL builder functions are suspend functions for Kotlin coroutines. |
Use Maven properties for the base URL and model version so you can switch environments and pin versions in one place:
<properties>
<modelBaseUrl>https://models.example.com</modelBaseUrl>
<my-model.version>1.0.0</my-model.version>
</properties>
Enum and class naming strategies
Enum generation
- INLINE (default): Enums are nested inside the class that uses them (e.g. Thermostat.ThermostatStatus). Keeps the number of files lower and is convenient for simple enums.
- SEPARATE_CLASS: Enums are generated as standalone classes in separate files. Better for IDE navigation and reuse across multiple classes.
Class naming
- COMPOUND_ALL (default): Class names always combine parent and child (e.g. SmartheatingThermostat, RoomAttributes). Guarantees unique names and makes the hierarchy clear.
- ORIGINAL_THEN_COMPOUND: Uses the schema title when there is no conflict (e.g. Thermostat); falls back to compound names (e.g. SmartheatingThermostat) when needed. Produces shorter, more readable names when possible.
DSL: regular vs suspend
By default, the plugin generates regular Kotlin DSL functions for building thing/feature/property objects. Set generateSuspendDsl=true to generate suspend DSL functions instead, so you can use them inside coroutines and call suspend code from within the DSL block.
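To make the difference concrete, here is a self-contained sketch of the kind of builder DSL the plugin generates. Lamp and lamp are invented stand-ins for illustration, not actual generated code:

```kotlin
// Invented stand-in for a generated type and its DSL builder.
data class Lamp(var on: Boolean = false, var dimmerLevel: Int = 0)

// Regular DSL builder, the default (generateSuspendDsl=false):
fun lamp(init: Lamp.() -> Unit): Lamp = Lamp().apply(init)

// With generateSuspendDsl=true, the generated builder would instead accept a
// suspend lambda, so suspend functions can be called inside the block:
//   suspend fun lamp(init: suspend Lamp.() -> Unit): Lamp

fun main() {
    val result = lamp {
        on = true
        dimmerLevel = 80
    }
    println(result) // Lamp(on=true, dimmerLevel=80)
}
```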
Generated code structure
The plugin generates a package structure aligned with your Thing Model:
- Main thing class – root class for the thing (e.g. FloorLamp, Device).
- attributes/ – interfaces/classes for thing-level attributes (e.g. Location, Room).
- features/ – one subpackage per feature with feature and property types (e.g. lamp/Lamp.kt, LampProperties.kt).
- DSL functions – fluent builders (e.g. floorLamp { ... }, features { lamp { ... } }).
- Path and RQL helpers – provided by the common-models dependency; the generated code uses them for type-safe paths and RQL expressions.
You must add the wot-kotlin-generator-common-models dependency to your project; the generated code extends interfaces and uses path builders from that artifact.
What the generator supports
The plugin follows the WoT Thing Model specification: it handles properties (read/write, various types), actions (with input/output schemas), events, and links (e.g. tm:extends, tm:submodel). Supported data types include primitives (string, number, integer, boolean), object, array, and custom types via $ref. Enums in the schema become Kotlin enums according to enumGenerationStrategy.
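As a rough illustration, an enum-typed property in a Thing Model might look like the following (an invented fragment, not one of the Ditto example models); the status enum would become a Kotlin enum according to enumGenerationStrategy:

```json
{
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  "@type": "tm:ThingModel",
  "title": "Thermostat",
  "properties": {
    "status": { "type": "string", "enum": ["HEATING", "IDLE"] },
    "targetTemperature": { "type": "number", "minimum": 5, "maximum": 30 }
  }
}
```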
Best practices
- Pin model versions in pom.xml (or a BOM) so builds are reproducible and you can upgrade TMs in a controlled way.
- One execution per “logical” model: use a separate <execution> for each Thing Model you need (e.g. device, room, building). Each execution has its own thingModelUrl and packageName.
- Align with runtime definitions: use the same base URL and versioning as the Thing Definition URLs you send to Ditto, so the generated types match what Ditto expects.
- Add the common-models dependency: the generated code depends on wot-kotlin-generator-common-models for path building and Ditto RQL helpers; do not omit it.
Running the plugin from the command line
You can invoke the plugin directly without a full POM execution by passing parameters with -D:
mvn org.eclipse.ditto:wot-kotlin-generator-maven-plugin:codegen \
-DthingModelUrl=https://models.example.com/device-1.0.0.tm.jsonld \
-DpackageName=com.example.wot.model.device \
-DoutputDir=target/generated-sources \
-DenumGenerationStrategy=SEPARATE_CLASS \
-DclassNamingStrategy=ORIGINAL_THEN_COMPOUND
Optional: -DgenerateSuspendDsl=true for suspend DSL. Useful for one-off generation or scripts.
Path generation and type-safe RQL
The common-models dependency provides a path builder API that works with the generated classes. You get compile-time–safe paths and RQL expressions instead of string concatenation.
Path builder API
- pathBuilder().from(start = SomeClass::property) – start a path from a property (e.g. Thing::features, Device::attributes).
- .add(NextClass::property) – append path segments (e.g. Features::thermostat, Attributes::location).
- .build() – finalize as a path string (e.g. for logging or custom use).
- .buildSearchProperty() – create a search property object for RQL comparisons (see below).
- .buildJsonPointer() – build a Ditto JSON Pointer (e.g. for MergeThing, DeleteAttribute). Useful when the generated model exposes a startPath (e.g. Location::startPath) for a nested type.
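The idea behind the path builder can be sketched in a few lines. This is a simplified, self-contained stand-in (the real API ships in wot-kotlin-generator-common-models, and the Device/Attributes/Location types below are invented):

```kotlin
import kotlin.reflect.KProperty1

// Simplified stand-in: collects property names from Kotlin property references.
class PathBuilder(private val segments: MutableList<String> = mutableListOf()) {
    fun <T, R> from(start: KProperty1<T, R>): PathBuilder { segments += start.name; return this }
    fun <T, R> add(next: KProperty1<T, R>): PathBuilder { segments += next.name; return this }
    fun build(): String = "/" + segments.joinToString("/")
}

// Invented model types, mimicking what the generator would produce.
data class Location(val mountedOn: String?)
data class Attributes(val location: Location)
data class Device(val attributes: Attributes)

fun main() {
    val path = PathBuilder()
        .from(Device::attributes)
        .add(Attributes::location)
        .add(Location::mountedOn)
        .build()
    println(path) // /attributes/location/mountedOn
}
```

Because the segments come from property references rather than string literals, a renamed or removed property becomes a compile error instead of a silently wrong path.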
RQL combinators
Use these from DittoRql.Companion to combine conditions:
- and(condition1, condition2, ...) – all conditions must hold.
- or(condition1, condition2, ...) – at least one condition must hold.
- not(condition) – negate a condition.
Each condition is often a search property expression (see below). Call .toString() on the result to get the RQL string for Ditto’s search API or conditional request headers.
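How the combinators compose into a single RQL filter string can be sketched with simplified string-building stand-ins (the real combinators live in the common-models artifact; these one-liners only mimic their output shape):

```kotlin
// Simplified stand-ins producing Ditto-style RQL strings.
fun and(vararg conds: String) = "and(${conds.joinToString(",")})"
fun or(vararg conds: String) = "or(${conds.joinToString(",")})"
fun not(cond: String) = "not($cond)"
fun eq(path: String, value: String) = "eq($path,\"$value\")"
fun exists(path: String) = "exists($path)"

fun main() {
    // type == "THERMOSTAT" AND location/mountedOn does not exist
    val filter = and(
        eq("attributes/type", "THERMOSTAT"),
        not(exists("attributes/location/mountedOn"))
    )
    println(filter)
    // and(eq(attributes/type,"THERMOSTAT"),not(exists(attributes/location/mountedOn)))
}
```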
Search property methods
After .buildSearchProperty() you can chain one of:
| Method | RQL | Example |
|---|---|---|
| exists() | property exists | .exists() |
| eq(value) | equals | .eq("THERMOSTAT") |
| ne(value) | not equal | .ne(0) |
| gt(value) | greater than | .gt(20.0) |
| ge(value) | greater or equal | .ge(timestamp) |
| lt(value) | less than | .lt(timestamp) |
| le(value) | less or equal | .le("2026-02-20T08:00:00Z") |
| like(pattern) | wildcard ? / * | .like("room-*") |
| ilike(pattern) | case-insensitive like | .ilike("*sensor*") |
| in(values) | value in collection | .in(listOf("A", "B")) |
Example: conditional merge (RQL for Ditto headers)
Typical use is building a condition for Ditto’s conditional request header (e.g. for merge or delete).
Below, we require that the thing has a given attribute type and either no mountedOn or mountedOn less than or equal to a timestamp:
import com.example.wot.model.path.DittoRql.Companion.and
import com.example.wot.model.path.DittoRql.Companion.or
import com.example.wot.model.path.DittoRql.Companion.not
import com.example.wot.model.path.DittoRql.Companion.pathBuilder
// Condition: type eq "THERMOSTAT" AND
// (NOT exists(location/mountedOn) OR location/mountedOn <= eventDate)
val updateCondition = and(
pathBuilder().from(Device::attributes)
.add(Attributes::type)
.buildSearchProperty()
.eq("THERMOSTAT"),
or(
not(
pathBuilder().from(Device::attributes)
.add(Attributes::location)
.add(Location::mountedOn)
.buildSearchProperty()
.exists()
),
pathBuilder().from(Device::attributes)
.add(Attributes::location)
.add(Location::mountedOn)
.buildSearchProperty()
.le(eventDate)
)
).toString()
// Use in Ditto merge headers
val mergeCmd = MergeThing.withThing(
thingId,
thing,
null,
null,
dittoHeaders.toBuilder().condition(updateCondition).build()
)
Example: JSON pointer for DeleteAttribute
For commands that take a JSON pointer (e.g. delete a nested attribute), use the generated startPath and buildJsonPointer():
val deleteLocationAttribute = DeleteAttribute.of(
thingId,
pathBuilder().from(Location::startPath).buildJsonPointer(),
dittoHeaders
)
Example: RQL for search
The same RQL string can be passed to Ditto’s search API (e.g. GET /search/things?filter=...):
val filter = pathBuilder().from(Device::attributes)
.add(Attributes::location)
.buildSearchProperty()
.exists()
dittoClient.searchThings(filter.toString(), ...)
The exact property names and types (Device, Attributes, Location, etc.) come from your Thing Model; the generator produces the matching classes and path helpers so that paths and RQL stay in sync with the model.
WoT to OpenAPI Generator
The WoT to OpenAPI Generator turns a WoT Thing Model into an OpenAPI 3.1.0 YAML (or JSON) that describes Ditto’s HTTP API for Things that follow that model: thing and attribute paths, feature properties, and actions (e.g. inbox messages).
Benefits for frontends and API consumers
The generated OpenAPI spec is a standard, tool-friendly contract. Frontend teams can feed it into code generators (e.g. OpenAPI Generator, Orval, or the OpenAPI TypeScript/JavaScript generators) to generate TypeScript or JavaScript models, typed HTTP client methods, and request/response types for thing, attribute, feature, and action endpoints. That keeps the UI in sync with the backend: API changes are reflected in the spec, and regenerating client code updates types and calls in one step. You get autocomplete, fewer manual typos, and consistent request shapes. The same spec can drive API documentation (e.g. Swagger UI or Redoc), integration tests, or other clients (mobile, scripts). One Thing Model (TM) thus drives both backend Kotlin models and frontend API usage from a single source of truth.
Usage
The generator is available as a CLI (run with java -jar) or as a library. You can get the JAR from Maven Central or build from source.
Command-line:
java -jar wot-to-openapi-generator-1.0.0.jar <model-base-url> <model-name> <model-version> [ditto-base-url]
| Argument | Description |
|---|---|
| model-base-url | Base URL where the TM is served (e.g. https://models.example.com). |
| model-name | Model name (e.g. dimmable-colored-lamp). The generator will load {model-base-url}/{model-name}-{model-version}.tm.jsonld. |
| model-version | Version (e.g. 1.0.0). |
| ditto-base-url | (Optional) Base URL of the Ditto API (e.g. https://ditto.example.com/api/2/things). Used in the generated servers section. |
Example:
java -jar wot-to-openapi-generator-1.0.0.jar \
https://eclipse-ditto.github.io/ditto-examples/wot/models/ \
dimmable-colored-lamp \
1.0.0 \
https://ditto.example.com/api/2/things
Generated specs are written under a generated/ directory (path may vary by version). The output includes thing-level and attribute-level endpoints, feature properties, and action endpoints with request/response schemas derived from the TM.
Best practices
- Run in CI: generate OpenAPI from your main Thing Models in your pipeline and publish the artifacts (e.g. to a docs site or S3) so API consumers always see up-to-date specs.
- Use the same model base URL and versions as in your Kotlin generator and Ditto Thing definitions, so docs and code stay in sync.
- Set ditto-base-url when you want the generated servers section to point at your Ditto instance; otherwise the generator may use a default.
Summary
- ditto-wot-tooling provides the WoT Kotlin Generator (Maven plugin) and the WoT to OpenAPI Generator (CLI/library). Both take a WoT Thing Model URL and produce artifacts you can use at build time or in CI.
- WoT Kotlin Generator: configure thingModelUrl, packageName, outputDir, enumGenerationStrategy (INLINE/SEPARATE_CLASS), classNamingStrategy (COMPOUND_ALL/ORIGINAL_THEN_COMPOUND), and optionally generateSuspendDsl; pin model versions; add one execution per model; depend on wot-kotlin-generator-common-models. The generated code plus common-models gives you type-safe path building (e.g. pathBuilder().from().add().buildSearchProperty()), RQL combinators (and, or, not), and comparison methods (exists, eq, gt, lt, like, in, etc.) for Ditto search and conditional requests.
- WoT to OpenAPI Generator: run with model base URL, model name, and version (and optionally Ditto base URL); integrate into CI to keep API docs aligned with your Thing Models.
For more on WoT in Ditto (definitions, skeleton generation, Thing Descriptions), see the WoT integration documentation and the WoT integration blog post. For the tools themselves, go to eclipse-ditto/ditto-wot-tooling.
Feedback?
Please get in touch if you have feedback or questions about the WoT tooling.
–
The Eclipse Ditto team
January 28, 2026
Strengthening supply-chain security in Open VSX
January 28, 2026 11:30 AM
The Open VSX Registry is core infrastructure in the developer supply chain, delivering extensions developers download, install, and rely on every day. As the ecosystem grows, maintaining that trust matters more than ever.
A quick note on terminology: Eclipse Open VSX is an open source project, and the Open VSX Registry is the hosted instance of that project, operated by the Eclipse Foundation. This post focuses primarily on security improvements being rolled out in the Open VSX Registry, while much of the underlying work is happening in the Open VSX project.
Up to now, the Open VSX Registry has relied primarily on post-publication response and investigation. When a bad extension is reported, we investigate and remove it. While this approach remains relevant and necessary, it does not scale as publication volume increases and threat models evolve.
To address this, we are taking a more proactive approach by adding security checks before extensions are published, rather than relying only on reports after the fact.
Why pre-publish security checks matter
Developer tooling ecosystems, including package registries and extension marketplaces, are a popular target, and we see the same types of issues repeatedly:
- Namespace impersonation designed to mislead users
- Secrets or credentials accidentally committed and published
- Malicious or misleading extensions
- Supply-chain attacks that spread quietly over time
Relying only on after-the-fact detection leaves a growing window of exposure. Pre-publish checks help narrow that window by catching the most obvious issues earlier. Similar pre-publication and monitoring approaches are increasingly standard across large extension marketplaces as developer tooling ecosystems mature.
How we’re approaching this work
To move faster and add specialized expertise, we are working alongside security consultants from Yeeth Security. These experts are helping us design and implement a new verification framework, while keeping overall direction, decision-making, and long-term stewardship firmly within the Open VSX project and the Eclipse Foundation as operators of the Open VSX Registry.
Most of this work is happening in the open. We are keeping a small set of security-sensitive details private to reduce the risk of abuse or circumvention.
What we’re building
Together, we’re introducing a new, extensible verification framework. Over time, it will enable Open VSX Registry to:
- Detect clear cases of extension name or namespace impersonation
- Flag accidentally published credentials or secrets
- Scan for known malicious patterns
- Quarantine suspicious uploads for review instead of publishing them immediately, with clear feedback provided to the publisher
If you want to follow along, the work is tracked here:
https://github.com/eclipse/openvsx/issues/1331
This framework is designed to grow with the ecosystem, so we can add new checks as threat models evolve.
A measured rollout
We’re approaching this effort with a focus on ecosystem health. The goal and intent is to raise the security floor, help publishers catch issues early, and keep the experience predictable and fair for good-faith publishers. Our current plan is to:
- Begin monitoring newly published extensions in February, without blocking publication
- Use this monitoring period to tune checks, reduce false positives, and improve feedback
- Move toward enforcement in March, once we’re confident the system behaves predictably and fairly
This staged rollout gives us room to get it right before it impacts publication flows.
What publishers and users should expect
Publishers may start seeing new messages when potential issues are detected. In most cases, these are meant to be helpful nudges, not roadblocks. We also want to thank the thousands of extension publishers who already act responsibly and help make the Open VSX Registry a trusted resource for the broader developer community. The goal is to:
- Catch accidental mistakes early
- Make expectations clearer
- Reduce the likelihood that risky content reaches users
Human review will remain part of the process, especially for edge cases. We will also continue to provide clear remediation guidance in our wikis and project documentation.
For users, the outcome is straightforward. Pre-publish checks reduce the likelihood that obviously malicious or unsafe extensions make it into the ecosystem, which increases confidence in the Open VSX Registry as shared infrastructure.
Security is ongoing work
Security isn’t a one-and-done project. It evolves alongside the ecosystem it protects. We see this work as ongoing, and we’ll continue to share what we’re doing, why we’re doing it, and what we learn along the way.
Looking ahead
Strengthening supply-chain security is a shared responsibility. As the Open VSX Registry continues to grow, investing in proactive safeguards is essential to protecting both publishers and users.
These changes are an important step forward, and part of a longer journey. We’ll keep iterating, learning from real-world use, and adapting as the ecosystem evolves. Community feedback plays a critical role in that process, and we encourage publishers and users to share their experiences as this work rolls out.
We would also like to thank Alpha-Omega for supporting this work, and for their broader support of the Eclipse Foundation’s security initiatives.
Our goal is simple: to keep the Open VSX Registry a resource developers can depend on with confidence.
Growing with the ecosystem
The security work outlined above is part of a broader effort to scale the Open VSX Registry responsibly as adoption continues to grow. That includes investing not just in tooling and processes, but in the people doing the work.
We’re actively expanding the Open VSX team, including roles focused on security, platform engineering, and ecosystem stewardship. If you’re interested in helping build and protect critical open source infrastructure, you can find our current openings on the Eclipse Foundation careers page!
January 26, 2026
The OpenHW Foundation unveils the first industry-ready RISC-V ecosystem to advance European digital sovereignty
by Natalia Loungou at January 26, 2026 12:00 PM
BRUSSELS - January 26, 2026 - The OpenHW Foundation, a global leader in developing RISC-V core IP, today announced the launch of the Unified RISC-V IP Access Platform (UAP) as part of the TRISTAN project. The UAP represents Europe’s first comprehensive collection of industry-ready RISC-V components. As interest in digital sovereignty continues to grow, particularly within the European Union, the UAP makes it significantly easier for technology organisations to innovate based on an open, sovereign foundation.
“The Unified RISC-V IP Access Platform is critical to supporting technological sovereignty in Europe, and the OpenHW Foundation is committed to developing it as a sustainable, interoperable, and community-driven resource for the broader RISC-V ecosystem,” said Florian Wohlrab, Head of OpenHW Foundation. “Open source collaboration is essential to maintaining a competitive playing field, and by working together, we can go further, faster.”
RISC-V is an open standard instruction set architecture used to develop custom processors for a wide range of applications, including embedded systems and consumer devices. The European Union considers RISC-V critical to achieving technological sovereignty and driving greater competition in the global semiconductor market, valued at more than USD 700 billion. While the EU currently accounts for roughly 10% of this market, the 2023 European Chips Act aims to double that share to 20% by 2030.
The UAP lowers key barriers to entry for EU-based organisations by providing a single, unified source of verified, industry-ready RISC-V IP. It consolidates both hardware and software components and provides clear visibility into each item’s maturity, usability, licensing, and integration workflow. This marks the first time such a comprehensive collection of verified EU RISC-V artifacts has been assembled, much of it fully open source, representing an important step toward European digital sovereignty. The platform also ensures that IP produced by multiple EU research projects is captured, maintained, and made accessible for driving sustainable, long-term collaboration.
To support its digital sovereignty goals, the European Union has invested heavily in cutting-edge RISC-V research and development through the Chips Joint Undertaking (CHIPS JU), which funds projects such as TRISTAN, of which the OpenHW Foundation is a member.
Launched in 2023, TRISTAN aims to industrialise RISC-V cores by moving them from research environments into real-world applications and creating a sustainable open source ecosystem to drive competitiveness and enable more agile innovation. The TRISTAN consortium includes 46 partners spanning large enterprises, SMEs, research organisations, universities, and industry associations connected to RISC-V. Together, they combine expertise and resources from across Europe and beyond to drive innovation and collaboration.
Originating within the TRISTAN project, the UAP brings together RISC-V IP under various licenses from TRISTAN and other Chips JU projects, providing end users with a centralized place to find verified, industry-ready RISC-V components.
The UAP acts as a unified access page, linking to repositories hosted on the OpenHW Foundation GitHub, automatically mirrored to a European-hosted GitLab instance and to other public forges where appropriate, or maintained as private assets. It provides documentation, status information, and an evolving structure designed to better support integration across toolchains, accelerators, and infrastructure components.
Oversight of the UAP is provided by the Virtual Repository Task Group, which includes representatives from TRISTAN and ISOLDE. Additional Chips JU and RISC-V-related EU projects, including Rebecca, RIGOLETTO, and Scale4Edge, are also joining the initiative. As more EU projects open source their IP, they will be added as maintainers so that each project can curate its own catalogue and ensure continuity beyond the end of TRISTAN.
“The Unified RISC-V IP Access Platform is one of the most important initiatives to come from the TRISTAN project, ensuring that the contributions from consortium partners continue to have an impact on the European stage long past the end of our funding,” said Rob Wullems, NXP Semiconductors GmbH, TRISTAN Project Lead. “Critically, it enables us to build and nurture a community around European RISC-V that will drive ongoing innovation and collaboration that supports European technological sovereignty.”
About The OpenHW Foundation
OpenHW Foundation, an industry collaboration with the Eclipse Foundation, is a global non-profit organisation dedicated to developing, verifying, and delivering high-quality, open source RISC-V processor cores and related IP for commercial and industrial applications.
With its extensive network of members and partners, the OpenHW Foundation is driving the advancement of open source RISC-V processor technology across cloud, mobile, IoT, AI, automotive, HPC, and other domains. Through its CORE-V Task Group, the organisation ensures industry-aligned, high-quality development, supporting cutting-edge SoC production worldwide.
OpenHW Foundation is supported by leading innovators such as Thales, CEA List, Siemens, Red Hat, ETH Zurich, Beijing Institute of Open Source Chip, and many more.
About TRISTAN
The TRISTAN project (no. 101095947) is supported by the Chips Joint Undertaking (CHIPS JU) and its members: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Germany, Denmark, Estonia, Greece, Spain, Finland, France, Hungary, Ireland, Iceland, Italy, Lithuania, Luxembourg, Latvia, Malta, Netherlands, Norway, Poland, Portugal, Romania, Sweden, Slovenia, Slovakia, and Turkey.
Learn more at tristan-project.eu
About the Eclipse Foundation
The Eclipse Foundation provides a global community of individuals and organisations with a vendor-neutral, business-friendly environment for open source collaboration and innovation. We host Adoptium, the Eclipse IDE, Jakarta EE, Open VSX, Software Defined Vehicle, and more than 400 high-impact open source projects. Headquartered in Brussels, Belgium, we are an international non-profit association supported by over 300 members. Our events, including Open Community Experience (OCX), bring together developers, industry leaders, and researchers from around the world. To learn more, follow us on X and LinkedIn, or visit eclipse.org.
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 -70/ -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
January 22, 2026
European Initiative for Data Sovereignty Released a Trust Framework
by Ben Linders at January 22, 2026 11:47 AM

The Danube release of the Gaia-X trust framework provides mechanisms for the automation of compliance and supports interoperability across sectors and geographies to ensure trusted data transactions and service interactions. The Gaia-X Summit 2025 hosted facilitated discussions on AI and data sovereignty, and presented data space solutions that support innovation across Europe and beyond.
By Ben Linders, January 21, 2026
This is not the Vibe Coding Book you are Looking for
by Donald Raab at January 21, 2026 06:12 AM
If you love programming, the book you are looking for may be in here.
The end of programming as I knew it
In February 2000, I switched gears from programming in Smalltalk to programming in Java. During my first five years of programming in Java, I suffered repetitive eye strain reading and writing duplicate for loops. This was the end of programming for me. Java was an absolute waste of my time, productivity, and ultimately, my creativity. The Java language was so pedestrian and duplicative in 2004 that I refused to accept it and decided to do something productive about it. I started building a Java collections library that would eventually become known as the open source Eclipse Collections library.
I could not have predicted this future for myself, but this is the future that happened to and for me. I had learned something vitally important while programming in Smalltalk professionally that I was unwilling to abandon — EVERY modern programming language should have support for concise lambda expressions and feature-rich collections. If either of these features is missing in a programming language, then the language is not worth programming in.
Note: I find it sadly ironic that many of the curly-brace-language crowd that said for years that Smalltalk syntax was just too weird to learn, now blindly embrace vibe coding in English. The irony? Smalltalk reads like English. Statements even end in a period, instead of a semi-colon, like a sentence. Smalltalk is still the best object-oriented programming language available, and most developers will never understand why. Read the following blog with a nice cup of soothing tea, and wonder what secrets this long dismissed language might be hiding from you.
A little Smalltalk for the soul
The beginning of the end of the beginning
Java was the beginning of the end of programming for me. It was also a new beginning. I have renewed my love of programming with Java. But, how? By programming in Java, and teaching programming in Java. I like to joke that I program in Smalltalk in every language I program in. Well, maybe it’s not really a joke. Eclipse Collections was the beginning of Smalltalk-inspired collection productivity in Java. It was also the end of the pedestrian programming language I suffered through in my early years of Java programming. Java was reborn as something new for me, and for all those that I worked with. But there was something missing in Java. I knew what it was, but I had to do something I wouldn’t have thought was possible. I’d have to help the Java language evolve.
I wrote the following story of my ten year quest to get concise lambda expression support in Java. TL;DR, there is a happy conclusion of Java finally getting support for concise lambda expressions in Java 8.
My ten year quest for concise lambda expressions in Java
By the time I wrote this blog, I knew I wanted to write a book about the Eclipse Collections library, but I hadn’t quite figured out how I would write a book about Eclipse Collections. I didn’t want to write a book about lambdas, data structures, and algorithms in Java that I wouldn’t want to read. This seemed like an impossible task for a long time. That is until I figured out how to approach the problem using information chunking.
I love programming, and open source
Programming brings me joy. Teaching others to program brings me even more joy. It has saddened me for twenty five years to see folks abandon their love of programming because programming just became too hard and painful. Programming does not have to be this way. I wanted to write a book that would bring back the joy of programming I experienced as a child. I wrote a blog about the joy of programming a few years ago.
I see programming through the eyes of a child who learned BASIC in the early 1980s. The same child grew up to love the creative power that Smalltalk showed him in the 1990s.
I have spent twenty five years showing Java developers what is possible, and I will continue to do so, for as long as there are developers out there willing to learn. I solved many difficult problems in Java while working at Goldman Sachs. In January 2012, we open sourced GS Collections on the Goldman Sachs GitHub account. Open sourcing GS Collections made it possible for me and others to show the entire world of Java developers what we were all missing for so many years. While it made it possible, it didn’t make it easy. There are millions of Java developers and a lot of inertia to overcome to get them all to take a look to find out what they are missing. I hope the book I wrote will help over time.
I’ve been sharing my joy of programming with other developers in open source now for fourteen years. Ten of those years have been spent maintaining the library known as Eclipse Collections at the Eclipse Foundation.
Here’s a link to a presentation that I gave to the Java Community Process Executive Committee on GS Collections in May 2014. After I gave this talk, the idea of moving GS Collections to the Eclipse Foundation was born. The slides from this talk have been available since May 2014, but I don’t know how many developers regularly read the meeting minutes of long-past meetings of the JCP Executive Committee.
Eclipse Collections has been available in open source for over a decade now. Anyone can contribute to Eclipse Collections at the Eclipse Foundation, so long as they sign the Eclipse Contributor Agreement. This was the primary problem that Eclipse Collections solved over GS Collections. GS Collections was “free as in beer.” Eclipse Collections is “free as in speech.”
Continuing my journey to improve Java coding
Not many developers get to work on software they believe in for twenty-two years. I am fortunate to be one of them. I finished writing and published my first Java programming book in March 2025, which was twenty-one years after I began my journey working on the Java library that would eventually become known as Eclipse Collections. This was the story I shared before publishing.
My Twenty-one Year Journey to Write and Publish My First Book
If you are not using Eclipse Collections, and you believe that the Java code you write today is the best code you could possibly write in Java, then I would encourage you to read this blog. It might help open your eyes to new possibilities.
Refactoring to Eclipse Collections with Java 25 at the dev2next Conference
After I gave this talk, Craig Motlin began sharing OpenRewrite recipes in open source to automate conversions to Eclipse Collections. The recipes hosted by the open source Liftwizard project are located at the link below.
liftwizard/liftwizard-utility/liftwizard-rewrite at main · liftwizard/liftwizard
While writing “Eclipse Collections Categorically: Level up your programming game”, I discovered and shared a way for all Java developers to reorganize their code so that any feature-rich APIs are more easily digestible, by grouping methods into method categories. This became the basis for how I organized my book, and is why the title is “Eclipse Collections Categorically.” Stealing my own thunder, I shared the discovery of how to simulate method categories in Java for free in a blog for all Java developers to benefit from. It may not come as a surprise that I learned this categorization approach from Smalltalk over thirty years ago.
Grouping Java methods using custom code folding regions with IntelliJ
If you don’t believe that Smalltalk could still be having an impact on the productivity potential for Java developers thirty years later, then read the blog above. One of the first things you will see is a Smalltalk Class browser with methods organized into method categories. I have shared my discovery with some of the developers at Oracle who work on the Core JDK team. I have suggested that having method categories as a standard feature in Javadoc would be amazing progress for this now thirty year old programming language. Don’t take my word for it though. Watch the following “Ask the Java Architects” video from Devoxx Belgium 2024. Thanks to Stuart Marks for the shout out. 🙏
The book: “Eclipse Collections Categorically”
Finally the book! The book, Eclipse Collections Categorically: Level up your programming game, has been available in print and digital versions since March 2025. You can find out more about the available versions of the book (with links to reading samples) at the link below. I encourage you to check out the reading samples before considering buying.
I don’t believe Generative AI can teach you the lessons in this book. I sincerely doubt that any Generative AI solution out there will tell you there is a better way to write and organize Java collections code. This book introduced the idea of adding method categories to the Java language. I have shown how to simulate method categories in IntelliJ and other IDEs by leveraging Custom Code Folding Regions. This is a short-term substitute while we wait for proper method category support in Java.
Generative AI has its place, and it’s not in here
While the whole world was focused on leapfrogging themselves learning new and shiny Generative AI tools, I was writing a book by hand the old-fashioned way, minus pen and paper or a typewriter. I used IntelliJ IDEA and AsciiDoc for the writing part. For the publishing I used AsciiDoctor and AsciiDoctorJ. Using AsciiDoctorJ, I was able to implement my publishing pipelines using good old Java, with a touch of Eclipse Collections. I mean, why would I possibly set out to write a book about Eclipse Collections and not actually use Eclipse Collections? That would be so unlike me.
I worked on the cover with my sister who is a designer. Credit to her for making the joy of programming and method categories come alive on the cover.
The only thing you will find in this book is one developer’s love of programming in print form.
Eclipse Collections Categorically code samples
There is a repo with code samples from the book available at the following link. I will be adding more samples over time. There is much more than these code samples in the book, but if you want to experiment with them on your own, then I hope they bring you some measure of joy.
GitHub - sensiblesymmetry/ec-categorically: Resources for Eclipse Collections Categorically book
The Joy of Smalltalk and Method Categories
Here’s a teaser from my book. If you’re not familiar with Smalltalk, Smalltalk collections, and where the idea of method categories comes from, these two pages from the appendix of my book should be a quick and fun read.
An appendix dedicated to Smalltalk Collections in Eclipse Collections Categorically
Thank you for taking the time to read
I write, hoping that the words I share may help make the world a slightly more enjoyable place for anyone who takes the time to read them. I will not use Generative AI to write my blogs or books. I write to communicate to the reader, using my words, in my voice, with all its imperfections. Your time is precious, and so is mine. While it should take you less time to read this blog than it did for me to write it, I am hopeful of the possibility that several people may read what I have written. Collectively, all of the readers who take the time to read something I write may eventually surpass the time spent by me. Even if this doesn’t happen, I would gladly spend many hours writing something that I am the only reader of, and that I can take pride in having written with my own words. If you’re reading this right now, thank you. I hope there was something of value in here for you.
If you wind up buying my first book, I hope it will bring you much joy and enlighten you to the possibilities of coding in Java. I did something to improve my own coding happiness twenty-two years ago, by creating a lambda-enabled collections library in Java. I’ve written quite a bit about Eclipse Collections in my blogs. If you can’t justify spending money on my book, then I hope you will enjoy many of my blogs. All of my blogs are free, and there are quite a few of them. They’re not as comprehensive, organized, or easy to navigate as the book, but they are free.
Thank you again for reading my words, and I hope the joy of programming finds you, and never leaves you.
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
January 14, 2026
Stop VS Code’s Java LSP from Rewriting Your Eclipse .classpath with m2e-apt Entries
by Lorenzo Bettini at January 14, 2026 12:40 PM
January 07, 2026
Automotive innovation through open collaboration: momentum builds around open source software as a key driver of efficiency and success
by Anonymous at January 07, 2026 11:45 AM
LAS VEGAS / CES 2026 – 7 January 2026 – The Eclipse Foundation, one of the world’s largest open source software foundations, and the Association of the Automotive Industry (VDA) today announced a significant expansion of the Memorandum of Understanding (MoU) for an Automotive-Grade Open Source Software Ecosystem, transforming this group into one of the most advanced collaborations on next-generation vehicle design in the world.
Launched in June 2025 with 11 signatories, the MoU now includes 32 global companies, representing leaders across the entire automotive value chain. This reflects significant momentum and strong industry alignment around an open, interoperable, and certifiable software foundation for next-generation mobility, implemented within the Eclipse SDV Working Group.
“The growing participation in this collaboration reflects a clear global shift toward open innovation in the automotive industry,” said Mike Milinkovich, Executive Director of the Eclipse Foundation. “Industry leaders recognise that trusted, open source foundations are essential to delivering the next generation of safe, intelligent, and connected vehicles.”
Aligning the industry around shared principles
The expansion of this global collaboration strengthens a shared commitment to addressing common industry challenges through collaborative open source development. By working within an open source software ecosystem with vendor neutral governance, participating organisations reduce fragmentation, improve interoperability and share the burden of developing complex, safety critical software.
The collaboration is designed to achieve:
- Up to 40% reduction in development, integration, and maintenance efforts for non-differentiating software, freeing up engineering capacity for innovative development
- Up to 30% faster time to market through shared, automotive-grade components
- Improved interoperability across suppliers and vehicle platforms
- Greater sustainability and long-term software maintainability
“Through joint development of non-differentiating software, manufacturers and suppliers can focus their resources on what truly matters: delivering unique, customer-centric experiences,” said Dr. Marcus Bollig, VDA Managing Director.
A united, global ecosystem
With today’s news, this MoU group expands to become one of the world’s largest and most advanced open source ecosystems focused on code-first solutions for next-generation mobility. New signatories include an array of industry leaders and innovators, such as 42dot, Accenture, AVL, Capgemini, Coretura, Cummins, ECARX, Elektrobit, Infineon, LEAR, LG Electronics, Michelin, MOBIS, QNX, Qualcomm, Red Hat, Schaeffler, Traton, Stellantis, T-Systems, and Useblocks joining founding participants Aumovio, BMW, Bosch, ETAS, Hella, Mercedes-Benz, Qorix, Valeo, Vector, Volkswagen, and ZF.
Together, this group makes up a global ecosystem that spans every facet of the automotive value chain, including manufacturers, suppliers, software companies, semiconductor providers, and cloud specialists.
From shared vision to working software
At the centre of this collaboration is Eclipse S-CORE, an open source, automotive-grade software stack developed within the Eclipse SDV Working Group. S-CORE brings together multiple SDV projects into a common reference stack and tooling environment designed to support certifiable, production-ready automotive software.
In November 2025, Eclipse S-CORE delivered its first public release (version 0.5), marking a key milestone and demonstrating the practical results of coordinated open source development. A full release is planned for 2026, targeting vehicle programs expected to reach the market at the latest by 2030.
A sustainable, global model for automotive software
Eclipse SDV provides a sustainable, transparent, and scalable model for industry-wide software innovation. By building shared software foundations in the open, participants are creating a common base for innovation that reduces redundancy, enhances safety, and accelerates time to market.
Together, these organisations are establishing a new benchmark for responsible, certifiable, and globally coordinated software innovation that will shape the future of mobility.
About the VDA
The Association of the Automotive Industry (VDA) represents the interests of the German automotive industry. Its members include manufacturers of passenger cars, commercial vehicles, trailers, and suppliers. The VDA promotes innovation, safety, and sustainability in mobility.
About Eclipse Software Defined Vehicle
Eclipse Software Defined Vehicle (SDV), a working group within the Eclipse Foundation, supports the open source development of cutting-edge automotive technologies that power the programmable vehicles of the future where software defines features, functionality, and operations. With over 50 members, including leading automotive manufacturers, global cloud providers, technology innovators, and key supply chain partners, the initiative has strong industry backing. The working group's mission is to provide a collaborative forum for developing and promoting open source solutions tailored to the global automotive industry. Adopting a “code first” approach, Eclipse SDV focuses on building the industry's first open source software stacks and associated tools that will support the core functionalities of next-generation vehicles.
About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organisations with a business-friendly environment for open source software collaboration and innovation. We host the Eclipse IDE, Adoptium, Software Defined Vehicle, Jakarta EE, Open VSX, and over 400 open source projects, including runtimes, tools, registries, specifications, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, open processor designs, and many others. Headquartered in Brussels, Belgium, the Eclipse Foundation is an international non-profit association supported by over 300 members. To learn more, follow us on social media @EclipseFdn, LinkedIn, or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.
###
Media contacts:
Schwartz Public Relations (Germany)
Julia Rauch/Marita Bäumer
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 -70/ -62
514 Media Ltd (France, Italy, Spain)
Benoit Simoneau
M: +44 (0) 7891 920 370
Nichols Communications (Global Press Contact)
Jay Nichols
+1 408-772-1551
VDA
Simon Schuetz
+49 160 95900967
January 02, 2026
Tell Don’t Ask and Be Specific About the Return Type
by Donald Raab at January 02, 2026 07:37 AM
Learn how Bag is a natural return type for a method named countBy.
Tell Don’t Ask
The following is an excerpt from the book, Eclipse Collections Categorically, about the Tell Don’t Ask principle.
Telling the collections what we want — Eclipse Collections Categorically
Eclipse Collections is an open source collections library for Java, with a feature-rich API on the collection types. Being feature-rich allows the library to offer developers more opportunities to “tell don’t ask” collections to accomplish tasks for them.
In this blog we’re going to explore the difference between grouping and counting using Java Stream and the Eclipse Collections LazyIterable type. We will also use LocalDate from the Java Time library and LocalDateRange from the threeten-extra library. Finally, we’ll explore how to integrate Java Stream with Eclipse Collections methods and types using special Collectors.
GroupingBy and Counting in Java
Since Java 8, we have had the ability to use Stream with the collect method to group and count things, using Collectors.groupingBy() and Collectors.counting().
The following example shows how to count all of the days between January 1, 2000 and December 31, 2025 by their DayOfWeek (e.g. SATURDAY, SUNDAY, MONDAY, etc.). I use Java Stream and the LocalDate type from the Java Time library in addition to the threeten-extra library which includes a LocalDateRange type.
@Test
public void makeTheDaysCountByJavaStream()
{
    LocalDateRange range =
            LocalDateRange.of(
                    LocalDate.of(2000, 1, 1),
                    LocalDate.of(2025, 12, 31));
    Stream<LocalDate> days = range.stream();
    Map<DayOfWeek, Long> countsByDayOfWeek =
            days.collect(
                    Collectors.groupingBy(
                            LocalDate::getDayOfWeek,
                            Collectors.counting()));
    long weekendDays = countsByDayOfWeek.get(DayOfWeek.SATURDAY)
            + countsByDayOfWeek.get(DayOfWeek.SUNDAY);
    long weekdays = countsByDayOfWeek.get(DayOfWeek.MONDAY)
            + countsByDayOfWeek.get(DayOfWeek.TUESDAY)
            + countsByDayOfWeek.get(DayOfWeek.WEDNESDAY)
            + countsByDayOfWeek.get(DayOfWeek.THURSDAY)
            + countsByDayOfWeek.get(DayOfWeek.FRIDAY);
    assertEquals(2714, weekendDays);
    assertEquals(6782, weekdays);
}
I create a LocalDateRange of all the dates between 2000–1–1 and 2025–12–31. Then I iterate over them using stream() and collect the results into a Map<DayOfWeek, Long> using Collectors.groupingBy(Function, Collectors.counting()). Then I sum the number of weekend days and weekdays observed during the date range.
We are limited in the standard Java Collections Framework to having List, Set, and Map as the basic interface types. A Bag type would be extremely useful, and would be the best choice as the return type for an algorithm that counts things.
Map is a useful data structure if you need to associate and efficiently look up values by corresponding keys, but it serves little purpose beyond this. I have blogged about Map-Oriented Programming in Java previously.
Map-Oriented Programming in Java
How to Make the Days Count
The days in the first example (Stream<LocalDate> days) does not count. I call collect() on days, and pass a composite Collector to do the grouping and counting. What collect returns in this instance is a Map. The counting becomes part of the fused composite Collector algorithm, which groups and counts.
In the example in the previous section, we composed a couple of Collector instances to create a Map that holds onto DayOfWeek instances and their corresponding Long counts. We wrapped Collectors.counting() in a Collectors.groupingBy(). This fuses two operations together. One operation groups the data, and the other counts.
If I were to give a name to these two fused operations, I would call them countBy. If I had to choose the return type for countBy, I would return a Bag. A Bag counts things, as we can see in the following blog by Nikhil Nanivadekar.
A Map does not count things. A Map associates keys with values. A Map does not include the definition of any algorithm (like counting) to maintain the values once they are associated with a key. All a Map provides is lookup by key to value, or iteration over keys, values, or entries (key/value pairs).
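To make this concrete, here is a hedged, JDK-only sketch (the class name and sample data are mine, not from the article) showing that when counting with a plain Map, the increment algorithm has to be supplied by the caller at every call site, for example via Map.merge:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MapCounting
{
    public static void main(String[] args)
    {
        List<String> words = List.of("a", "b", "a", "c", "a");
        Map<String, Long> counts = new HashMap<>();
        for (String word : words)
        {
            // The counting logic lives here, in caller code,
            // not in the Map itself.
            counts.merge(word, 1L, Long::sum);
        }
        System.out.println(counts.get("a")); // prints 3
    }
}
```

A Bag, by contrast, owns this algorithm: adding an element is counting it.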
CountBy in Eclipse Collections
If we switch to using Eclipse Collections, we can use a LazyIterable<LocalDate> to represent the days in the range. Then we can make the days count. In this case we will use the countBy method.
@Test
public void makeTheDaysCountByEclipseCollections()
{
    LocalDateRange range =
            LocalDateRange.of(
                    LocalDate.of(2000, 1, 1),
                    LocalDate.of(2025, 12, 31));
    Set<DayOfWeek> weekend =
            Set.of(DayOfWeek.SATURDAY, DayOfWeek.SUNDAY);
    LazyIterable<LocalDate> days =
            LazyIterate.adapt(range.stream()::iterator);
    Bag<DayOfWeek> countsByDayOfWeek =
            days.countBy(LocalDate::getDayOfWeek);
    PartitionBag<DayOfWeek> weekendOrWeekday =
            countsByDayOfWeek.partition(weekend::contains);
    Bag<DayOfWeek> weekendDays = weekendOrWeekday.getSelected();
    Bag<DayOfWeek> weekdays = weekendOrWeekday.getRejected();
    assertEquals(2714, weekendDays.size());
    assertEquals(6782, weekdays.size());
}
The LocalDateRange type does not implement Iterable<LocalDate>, but it does provide a Stream<LocalDate> via the stream() method, which I use to adapt to an Iterable<LocalDate> using range.stream()::iterator. Here I am defining a new Iterable using a method reference, since Iterable is effectively a FunctionalInterface, with one Single Abstract Method.
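Because Iterable declares only one abstract method, iterator(), any lambda or method reference of the right shape can implement it. Here is a small JDK-only sketch (the class and variable names are mine, for illustration):

```java
import java.util.List;

public class IterableAsSam
{
    public static void main(String[] args)
    {
        List<String> source = List.of("a", "b", "c");
        // Iterable's single abstract method is iterator(), so a bound
        // method reference to List.iterator() implements it directly.
        Iterable<String> iterable = source::iterator;
        StringBuilder sb = new StringBuilder();
        for (String s : iterable)
        {
            sb.append(s);
        }
        System.out.println(sb); // prints abc
    }
}
```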
The approach is similar to using a Stream, in that LazyIterable is lazy. The difference is that LazyIterable has a method named countBy, which returns a Bag. Bag in turn behaves externally like a Collection, even though internally it looks like a Map<K, Integer>.
Because Bag extends RichIterable in Eclipse Collections, we can tell the Bag to partition itself with a Predicate. In this case, I want to split weekend days from weekday days. I had no easy way of doing that with the Map<DayOfWeek, Long> in the first example. Since PartitionBag is returned, and holds Bag instances for both selected and rejected values, I can find the total number of weekend days and weekdays just by using the size of each Bag.
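The text above notes that a Bag behaves externally like a Collection while internally resembling a Map<K, Integer>. As a toy illustration only (this is not the Eclipse Collections implementation), a minimal Bag-like class could be sketched in plain JDK Java like this:

```java
import java.util.HashMap;
import java.util.Map;

// Toy Bag: backed by a Map<T, Integer> internally, but exposing
// Collection-like behavior (size() counts total occurrences).
public class ToyBag<T>
{
    private final Map<T, Integer> counts = new HashMap<>();

    public void add(T item)
    {
        this.counts.merge(item, 1, Integer::sum);
    }

    public int occurrencesOf(T item)
    {
        return this.counts.getOrDefault(item, 0);
    }

    // size() reports total occurrences, not distinct keys.
    public int size()
    {
        return this.counts.values().stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args)
    {
        ToyBag<String> bag = new ToyBag<>();
        bag.add("SATURDAY");
        bag.add("SATURDAY");
        bag.add("MONDAY");
        System.out.println(bag.occurrencesOf("SATURDAY")); // prints 2
        System.out.println(bag.size()); // prints 3
    }
}
```

This is why a Bag's size() can directly answer "how many days in total", which is exactly what the partition example above relies on.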
It is also possible to use a List, Set, or Bag type instead of LazyIterable. The initialization code and countBy call would look as follows using an ImmutableList.
ImmutableList<LocalDate> days =
        Lists.immutable.fromStream(range.stream());
Bag<DayOfWeek> countsByDayOfWeek =
        days.countBy(LocalDate::getDayOfWeek);
Replace Lists with Sets or Bags and the code will still work.
ImmutableSet<LocalDate> days =
        Sets.immutable.fromStream(range.stream());
Bag<DayOfWeek> countsByDayOfWeek =
        days.countBy(LocalDate::getDayOfWeek);
Integrating Stream and CountBy
If you want to use a Stream with countBy, you can use Collectors2.countBy(Function).
@Test
public void makeTheDaysCountByStreamAndCollectors2()
{
    LocalDateRange range =
            LocalDateRange.of(
                    LocalDate.of(2000, 1, 1),
                    LocalDate.of(2025, 12, 31));
    Set<DayOfWeek> weekend =
            Set.of(DayOfWeek.SATURDAY, DayOfWeek.SUNDAY);
    Stream<LocalDate> days = range.stream();
    Bag<DayOfWeek> countsByDayOfWeek =
            days.collect(
                    Collectors2.countBy(LocalDate::getDayOfWeek));
    PartitionBag<DayOfWeek> weekendOrWeekday =
            countsByDayOfWeek.partition(weekend::contains);
    Bag<DayOfWeek> weekendDays = weekendOrWeekday.getSelected();
    Bag<DayOfWeek> weekdays = weekendOrWeekday.getRejected();
    assertEquals(2714, weekendDays.size());
    assertEquals(6782, weekdays.size());
}
There are many more Collectors on Collectors2, which is part of Eclipse Collections. Collectors2 provides a natural integration path between Java Stream and Eclipse Collections intention revealing methods and types.
Intention Revealing Names Help
Naming things is hard. This is not a good excuse for avoiding naming things. Sometimes it’s helpful to have general methods and return types like Stream.collect() and Map, especially when we have yet to see common patterns emerge. The following pattern has been seen enough times to be given an intention revealing name.
Map<DayOfWeek, Long> countsByDayOfWeek =
        collection.stream()
                .collect(
                        Collectors.groupingBy(
                                LocalDate::getDayOfWeek,
                                Collectors.counting()));
The intention revealing name we have for this in Eclipse Collections is called countBy. The countBy method is concise, clear, and the return type of Bag packs a productivity punch.
Bag<DayOfWeek> countsByDayOfWeek =
        days.countBy(LocalDate::getDayOfWeek);
We can use the Collectors2.countBy() method with Java Stream, and it is only slightly less concise. The most important part is that the return type is Bag.
Bag<DayOfWeek> countsByDayOfWeek =
        days.collect(
                Collectors2.countBy(LocalDate::getDayOfWeek));
Final Thoughts
I wrote Eclipse Collections Categorically in the hopes that I could help more Java developers discover better ways of programming with collections in Java. I was motivated to write this blog after sharing one page from the Eclipse Collections Categorically book in a social media post. The following is that page.
Example 42 from Eclipse Collections Categorically — Using countBy with Presidents and Generations
This example is one of over 200 Java code examples in the book. For some more examples, there is a GitHub repository that can be used as a resource to accompany the book.
GitHub - sensiblesymmetry/ec-categorically: Resources for Eclipse Collections Categorically book
I am planning on including all of the examples in the book in this repository eventually.
I hope some of you will read the book and discover the extensive set of behaviors and types that are available to developers who use the open source Eclipse Collections library.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
December 27, 2025
My top three blogs of 2025
by Donald Raab at December 27, 2025 02:43 AM
After eight years, the blogging continues.
I will remember 2025 as the year I published my first book, “Eclipse Collections Categorically.” I will also remember that I continued my commitment to blogging. I wrote 36 blogs in 2025, including this one. My top three blogs for 2025 are linked below. I will keep this blog short, so you can enjoy any of the three blogs below that you might have missed, or want to read again. Enjoy!
#1–Go Primitive in Java, or Go in a Box
4.7K reads
Go Primitive in Java, or Go in a Box
#2–What if Java had Symmetric Converter Methods on Collection?
1.1K reads
What if Java had Symmetric Converter Methods on Collection?
#3–Book: Eclipse Collections Categorically
631 reads
Book: Eclipse Collections Categorically
Note: You can obtain a copy of the Kindle version of the “Eclipse Collections Categorically: Level up your programming game” book for $0 on January 1, 2026. Mark your calendar so you don’t miss this limited time offer.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
December 22, 2025
The Ten Year Anniversary and the Next Five Years for Eclipse Collections
by Donald Raab at December 22, 2025 03:08 AM
Celebrating ten years of Eclipse Collections at the Eclipse Foundation.
Lego your logo! This is a picture of a hand-built lego set my wife designed letter by letter.
All was quiet in December 2011
Late in December 2011, Craig Motlin and I were preparing to open source a proprietary library from Goldman Sachs named Caramel that would officially become GS Collections in January 2012. Four years later, in December 2015, this library would be moved to the Eclipse Foundation and officially become Eclipse Collections.
Note: Eclipse Collections is not related to the Eclipse IDE. See this blog to understand both independent open source projects at the Eclipse Foundation.
Earlier this year, Eclipse Collections passed 1,000,000 downloads per month from Maven Central. You can see various metrics about Eclipse Collections, along with the communication strategy I have used to help raise awareness of Eclipse Collections for the past decade in the following blog.
Now we’re on the final approach to landing at the ten year anniversary of Eclipse Collections at the Eclipse Foundation. In the rest of this blog, I’ll be sharing some thoughts about this journey and several of the folks who have contributed many hours of their valuable time over the years. I will also share my wishlist of potential contributions from the Eclipse Collections community for the next five years.
A decade feels like a very long month
Eclipse Collections has been a project at the Eclipse Foundation for ten years. Eclipse Collections 7.0 was released on Christmas 2015. If we look at Eclipse Collections on the release page at the Eclipse Foundation, it just feels like a very long month.
The Twenty-Five Releases of Eclipse Collections, since December 25, 2015
Eclipse Collections has come a long way in ten years, because we’ve had an amazing volunteer open source community.
Congratulations and thank you to all the committers, contributors, advocates, and supporters of the Eclipse Collections project. Thank you to the Eclipse Foundation for hosting and supporting this amazing project for the past decade.
Migrations to Java versions in Eclipse Collections
Since the first release of Eclipse Collections (7.0), there has been a slow migration to newer Java versions. That is until recently, when we rapidly jumped from Java 11 to Java 17.
- Eclipse Collections 7.0 (2015) -> Java 7
- Eclipse Collections 8.0 (2016) -> Java 8
- Eclipse Collections 12.0 (2025) -> Java 11
- Eclipse Collections 13.0 (2025) -> Java 17
Eclipse Collections spent most of the decade focused on supporting Java 8. Java 8 was the version of Java that brought us lambdas and method references. These were the two key features that Eclipse Collections needed to reach its potential as a feature-rich Java collections library.
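To illustrate why those two features mattered, here is a minimal sketch (assuming Eclipse Collections is on the classpath; the class and variable names are illustrative) of the same `collect` call written pre-Java-8 style with an anonymous inner class, and then with a method reference:

```java
import org.eclipse.collections.api.block.function.Function;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

public class LambdaExample
{
    public static void main(String[] args)
    {
        MutableList<String> names = Lists.mutable.with("Alice", "Bob", "Carol");

        // Java 7 style: an anonymous inner class implementing the library's Function type
        MutableList<Integer> lengthsOld = names.collect(new Function<String, Integer>()
        {
            @Override
            public Integer valueOf(String each)
            {
                return each.length();
            }
        });

        // Java 8 style: the same call site collapses to a method reference
        MutableList<Integer> lengthsNew = names.collect(String::length);

        System.out.println(lengthsOld.equals(lengthsNew)); // prints "true"
    }
}
```

The API shape was the same in both eras; lambdas and method references simply removed the anonymous-class ceremony that made the eager iteration methods verbose to call.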
Blogs, Articles, and Books
Over the past decade, several committers and contributors to Eclipse Collections started blogging, and submitting articles to various publications. The two current project leads for Eclipse Collections also became contributing authors to books.
Committer Blogs
We have eight committers on the Eclipse Collections project. Several of the committers on Eclipse Collections have blogs here on Medium.
Project Lead / Committer: Nikhil Nanivadekar
Project Lead / Committer: Donald Raab
Committer: Vladimir Zakharov
Committer: Sirisha Pratha
Committer: Craig Motlin
Committer: Desislav Petrov
Articles
A few Eclipse Collections articles have been written over the past decade.
Java Advent Calendar
Nikhil Nanivadekar has contributed to the Java Advent Calendar since 2018. That’s contributions for 7 out of the 10 years Eclipse Collections has existed. His most recent contribution to the Java Advent Calendar covered four “Hidden Treasures” in Eclipse Collections.
Hidden Treasures of Eclipse Collections 2025 Edition - JVM Advent
InfoQ Articles
- Eclipse Collections 11.0.0 Features New APIs and Functionality
- Refactoring to Eclipse Collections: Making Your Java Streams Leaner, Meaner, and Cleaner
- The Java Evolution of Eclipse Collections
Baeldung Articles
https://www.baeldung.com/eclipse-collections
https://www.baeldung.com/java-eclipse-primitive-collections
https://www.baeldung.com/jdk-collections-vs-eclipse-collections
Inside Java
Quality Outreach Heads-up - On The Importance of Testing With Early-Access Build
Stuart Marks Blog
Incompatibilities with JDK 15 CharSequence.isEmpty
Books
Nikhil Nanivadekar and I, the co-project leads for Eclipse Collections, were both contributing authors to “97 Things Every Java Programmer Should Know.” My article contribution is available in blog form at the 97 Things Medium account.
I also wrote the book, “Eclipse Collections Categorically: Level up your programming game”, which was published in March 2025. This book is the most comprehensive guide available for learning the Eclipse Collections API. I wrote a guide to reading the book for the time-constrained.
The Author's Inside Guide to Reading Eclipse Collections Categorically
Planning every five years
Planning for a community-driven open source project requires flexibility. There are no funded or dedicated resources, just contributors and committers driven by a desire to collaborate on a project they believe in. The following is the blog I wrote five years ago this month, describing my wishlist of potential contributions for Eclipse Collections.
The next 5 years for Eclipse Collections
I edited this blog recently, adding green checkmarks to the items that are either completed or in progress.
Now it’s time to revisit what’s left to be done from the last five-year plan and consider some of the important next steps we might take for the library.
The next five years of Eclipse Collections
Here’s where I’d like to see some focus for the next five years in Eclipse Collections. I hope to see more open source contributors contributing to Eclipse Collections. I also hope to see more open source projects using Eclipse Collections extensively, like dataframe-ec and liftwizard.
OpenRewrite Recipes
I was asked at dev2next 2025, in the “Refactoring to Eclipse Collections” talk I gave with Vladimir Zakharov, whether there were OpenRewrite recipes for automating the conversion from JDK Collections/Streams to Eclipse Collections. At the time, I said no, but that this would be a great contribution from the community. It turns out Craig Motlin, one of the Eclipse Collections committers, had already started working on some OpenRewrite recipes, which he open sourced in the LiftWizard project. We may eventually move these recipes to either Eclipse Collections or OpenRewrite hosted repos, but for now Craig has made them available under the Apache 2.0 license, consumable as a jar from Maven Central, and easy to run. We can begin applying these recipes to convert to Eclipse Collections today. Thanks Craig!
- 🏁 Migrate JDK Collections to Eclipse Collections
- Refactor from Java Streams to Eclipse Collections
- Refactor from for-loops to Eclipse Collections
- Refactor from Object collections to Primitive Collections
- Refactor from Verify to Assertions or AssertJ (as soon as available)
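As a hedged sketch of what the “Java Streams to Eclipse Collections” refactoring in the list above can look like (assuming Eclipse Collections is on the classpath; the class and variable names are illustrative and not taken from the recipes themselves), here is a before-and-after of a typical filter-and-map pipeline:

```java
import java.util.List;
import java.util.stream.Collectors;

import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

public class RefactorExample
{
    public static void main(String[] args)
    {
        List<String> source = List.of("red", "green", "blue");

        // Before: a JDK Streams pipeline
        List<String> longColorsJdk = source.stream()
                .filter(color -> color.length() > 3)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        // After: the equivalent eager Eclipse Collections iteration patterns
        MutableList<String> longColorsEc = Lists.mutable.withAll(source)
                .select(color -> color.length() > 3)
                .collect(String::toUpperCase);

        System.out.println(longColorsJdk); // prints "[GREEN, BLUE]"
        System.out.println(longColorsEc);  // prints "[GREEN, BLUE]"
    }
}
```

The eager form drops the `stream()`/`collect(Collectors.toList())` scaffolding entirely, which is exactly the kind of mechanical rewrite a recipe can automate.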
Testing
- 🏁 Add AssertJ Support
- Upgrade to JUnit 6
- Replace Scala Unit Tests with Java Equivalent Tests
- Move slow running unit tests to acceptance tests
- Combine Unit Tests and Unit Tests Java 8 modules
Technical Debt
- Deprecate Verify class
- Improve JavaDoc and Categorize Methods
- Update the Eclipse Collections Reference Guide
- Remove JMH Scala Benchmarks
Java & Library Upgrades
- Upgrade to Maven 4
- Baseline development on Java 25
- Test and integrate with Project Valhalla features
- Leverage Local Variable Type Inference sparingly to improve readability
New Containers
- New Optimized HashMaps
- Trees
- More primitive collection types
- Lazy Collections (specific types like LazyList)
- Off-heap Collections
- Persistent Collections (Functional)
New APIs
- Implement more Parallel APIs
Performance Tuning
- Optimize APIs from JDK
- Leverage Vector API for primitive collection API optimizations
The future is up to the contributors
Is there something missing from my wish list that you would like to see in Eclipse Collections? You have the power to impact and influence with your voice and your keyboard. Eclipse Collections is open for contributions, and we’d love to have more contributors helping shape the future of the library.
Have a Happy, Safe, and Healthy Holiday and New Year!
Christmas Day and 10th Anniversary of EC Update: My wife had something made for me that I was not expecting, and it would have been a nice addition to the five-year plan if I were better at predicting the future. This is what open source contributors do sometimes: they surprise you with contributions you least expect. Here’s a picture of the two Eclipse Collections hats my wife gave me today.

December 18, 2025
Jakarta EE 2025: a year of growth, innovation, and global engagement
by Tatjana Obradovic at December 18, 2025 02:07 PM
As 2025 comes to a close, it's a great moment to reflect on what we’ve achieved together as the Jakarta EE community. From major platform updates to refreshing the website and growing developer engagement, this year has been full of meaningful progress.
Celebrating Jakarta EE 11
One of our biggest milestones this year was Jakarta EE 11. This time we did the release in a different way: we shipped each part as soon as it was ready. The Core Profile was available in December 2024, the Web Profile in March 2025, and the Jakarta EE Platform was finalised in June 2025, reflecting the steady progress of the Jakarta EE community. Compatible products followed right away!
Jakarta EE 11 introduces the new Jakarta Data specification, delivers a modernised testing experience with updated TCK infrastructure based on JUnit 5 and Maven, and expands support for Java 21, including virtual threads. It also streamlines the platform by retiring older specifications such as Managed Beans, reinforcing Contexts and Dependency Injection (CDI) as the preferred programming model and continues to provide Java Records support.
This release marks a significant step forward in simplifying enterprise Java development, improving developer productivity, and supporting modern, cloud native applications. It's a true reflection of the community’s collaborative efforts and ongoing commitment to innovation.
Read the Jakarta EE 11 announcement
Introducing Jakarta Agentic AI: A New Standard for Running AI Agents on Jakarta EE
This year marked the introduction of the Jakarta Agentic AI specification project. Aimed at standardising how AI agents run within Jakarta EE runtimes, this new specification will be included in a future release. Much like Jakarta Servlet unified HTTP processing and Jakarta Batch defined batch workflows, Jakarta Agentic AI will provide a clear, annotation-driven API that defines how agents are created, managed, and executed.
Built on CDI as the core component model, the specification will establish consistent lifecycle patterns and usage semantics, making it easier for developers to implement and integrate a wide range of agent types. The project also anticipates deep integration with key Jakarta EE APIs, ensuring seamless interoperability across the platform.
Jakarta Agentic AI is being developed with broad industry collaboration in mind. The project is seeking input from subject-matter experts, vendors, and API consumers both inside and outside the Java ecosystem to build the most open, portable, and future-ready agent execution model possible. Visit the project page to learn more about the specification.
Listening and learning through the Jakarta EE Developer Survey
Our annual Jakarta EE Developer Survey remains one of the best ways to track how developers and organisations are using enterprise Java and shaping their cloud strategies. In 2025, we saw a 20% increase in participation, with over 1,700 participants sharing how they use Jakarta EE in practice.
The results show continued growth and confidence in Jakarta EE across the ecosystem. Notably, even before the full platform release was finalised, 18 percent of respondents were already using Jakarta EE 11, a strong signal of interest and early adoption.
These insights help us better understand where the community is focusing its energy, from modernising applications and adopting newer Java versions to evaluating cloud strategies and driving specification innovation. We're grateful to everyone who participated and shared their views.
Explore the 2025 developer survey findings
Learning and contributing: A growing developer ecosystem
The Jakarta EE Learn page expanded its resources to better support developers at all levels. As part of our broader effort to support community growth, we also introduced a new Contribute page, a dedicated space that outlines how individuals and organisations can get involved with Jakarta EE.
The Contribute page highlights the many ways to participate, from writing code and improving documentation to joining specification discussions or helping with community outreach. It also explains why contributing matters, what contributors gain, and how to get started.
To further support newcomers, we launched the Jakarta EE Mentorship Program, which pairs new contributors with experienced community mentors who can provide guidance, answer questions, and help them navigate the contribution process. Whether you're new to open source or simply new to Jakarta EE, the mentorship experience helps build skills, confidence, and deeper community connections.
Looking ahead: A refreshed web presence
Throughout 2025, our marketing team in collaboration with the Jakarta EE Marketing Committee worked on a major Jakarta EE website refresh to better reflect the clarity, maturity, and momentum of the community. While the full launch is now scheduled for early January, the homepage and navigation redesign is already complete and ready for rollout. The updated site features a bold new homepage, improved navigation through streamlined mega menus, and a new “Why Jakarta EE” section that helps visitors quickly understand the platform’s value.
This is just the beginning. Additional updates and structural improvements will continue rolling out through 2026, with a focus on enhancing messaging, navigation, and the overall user experience. Stay tuned for the official launch and more updates in the months ahead.
Global presence: virtual events, conferences, and community connections
Jakarta EE had a visible and impactful presence at face-to-face (F2F) conferences around the world, especially in the first half of the year. From Devnexus to JCON and beyond, Jakarta EE working group and community members presented talks, engaged with attendees at our sponsored booths, and built valuable relationships.
In 2025, JakartaOne Livestreams continued to grow with successful regional events in China and the annual JakartaOne Livestream, which attracted more than 6,000 viewers globally, with over 3,200 participants. With 20+ sessions, 15+ speakers, and 14+ hours of multilingual content, the JakartaOne Livestream series continued to drive strong community engagement across regions. Chinese JakartaOne Livestream recordings, as well as the annual JakartaOne Livestream recording, are available on our YouTube channel for anyone interested.
JakartaOne F2F Meetups further expanded the program’s regional footprint, with events in China and Japan drawing 170+ registered participants and 100+ in-person attendees, supported by high community approval and strong local participation.
With 17 Jakarta EE Tech Talks delivered in 2025, the program remains a vital channel for community learning, collaboration, and inspiration. Topics ranged from microservices and containers to security and observability. Recordings of these sessions are available on our YouTube channel.
Looking forward to 2026 and beyond
As we conclude an impactful 2025, it’s clear that Jakarta EE continues to strengthen its role as the open, vendor-neutral foundation for modern enterprise Java. The progress we’ve made this year, from delivering Jakarta EE 11 and introducing new specifications like Jakarta Agentic AI, to expanding our global events and deepening community engagement, reflects the dedication, collaboration, and passion of everyone involved.
2026 promises to be another exciting year of innovation and growth.
Thank you to all members, contributors, committers, and the wider community for your continued support. Together, we’re driving the platform forward and building a vibrant, open, and innovative ecosystem.
Here’s to another year of progress, collaboration, and innovation with Jakarta EE.
What’s in store for open source in 2026?
by Mike Milinkovich at December 18, 2025 11:25 AM
As 2025 draws to a close, many of us find ourselves reflecting on a year of remarkable change and looking ahead to what lies beyond the horizon. The end of the year often brings a mix of reflection and anticipation, a time when the open source ecosystem pauses to take stock and to imagine what the next chapter might bring.
In that spirit, I’d like to share a few thoughts on the forces shaping open source as we head into 2026. The past year has seen emerging trends poised to influence not only the open source ecosystem but also the broader technology industry and the many sectors that depend on it. From governance and sustainability to the evolving role of open collaboration in driving innovation, the ripples we saw in 2025 are likely to become powerful waves in the year ahead.
Prediction 1: As Agentic AI deployments accelerate, many enterprises will shift away from proprietary pilot solutions toward open source AI tooling that helps them integrate agentic workflows with their existing applications and data.
The promise of agentic AI is unmistakable. What enterprises are struggling with is the move from controlled pilots to real production environments that must operate within the constraints of their current systems. Many proprietary agentic platforms remain optimized for “green field” use cases, making them poorly matched to the complex mix of legacy data assets and workloads that are prevalent in enterprise environments.
For agentic AI to deliver real enterprise value, it must operate within existing operational, reliability, and performance constraints. For example, an agentic system that can’t talk to Java systems – the lingua franca of enterprise computing – is effectively cut off from the most critical operational data, workflows, and decision-making contexts. Forcing enterprises to adopt a parallel, Python-based infrastructure in order to deploy AI systems will delay adoption and significantly increase security, performance, and scalability risk.
Open source tooling will play an increasingly important role in solving these challenges. Eclipse LMOS and its Agent Definition Language (ADL) provide one model-neutral option for defining agent behaviour in a structured and maintainable way. LMOS is already in production at Deutsche Telekom, powering an award-winning bot and consumer-facing AI system that processes millions of service and sales interactions across several countries. At the same time, enterprises will have multiple viable open source choices that fit different architectural and operational needs.
Another highly visible growth area in 2026 will be AI-enabled developer tooling. The launch of the Eclipse Theia AI IDE shows how open collaboration can deliver powerful AI development environments without locking teams into proprietary toolchains. The Theia platform allows organisations to choose their preferred LLMs, integrate contextual data through MCP, and build agentic workflows that align with internal security and compliance requirements. For many enterprises, this flexibility will be essential as AI-assisted development becomes part of everyday engineering practice.
Complementary work on projects like Eclipse Adoptium will continue to strengthen the foundation on which AI systems depend. Verified builds, signed binaries, and rigorous QA increase confidence that AI-enabled enterprise applications can be deployed with traceability and accountability.
Jakarta Agentic AI will also begin defining standard patterns for agentic workflows in enterprise Java, giving organisations predictable and interoperable ways to bring agentic capabilities into mission-critical systems.
Prediction 2: Digital sovereignty will quickly rise in strategic importance for nation-states, and open standards will prove critical in making it achievable.
Over the past two decades, extraordinary advances in technology have reshaped global economics and trade. They have enabled entirely new markets, transformed industries, and created business models that were previously unimaginable. As digital infrastructure now underpins nearly every aspect of national competitiveness, governments across the globe are realising how their use of technology affects strategic autonomy, resilience, and digital sovereignty. Questions of who controls critical data, how it is shared, and where it is processed are now central to national policy and economic strategy.
As these pressures grow, open standards will become essential to the path forward. They provide a neutral foundation that allows organisations and nations to build digital capabilities without being locked into proprietary ecosystems or single-vendor dependencies. In 2026, this will matter more than ever. The Eclipse Foundation and the Eclipse Dataspace Working Group (EDWG) recently released two key protocol specifications, which are under review for international standardisation through the ISO/IEC JTC1 Publicly Available Specification (PAS) process. These new protocols represent a significant advancement in enabling open, interoperable, and sovereign dataspaces. They enable organisations, industries, and nations to share data securely while retaining full control over their information. Dataspaces also enable data owners to clearly outline the terms under which their data can be used to train AIs, accelerating the move toward ethical AI systems.
This work shows how open collaboration and open standards will serve as the foundation for trust, interoperability, and sovereignty in the global data economy.
Prediction 3: 2026 will lay the groundwork for the next era of open source silicon
Next year is a pivotal time for open source hardware as the immense efforts of academia gain traction in real-world applications. In 2026 and beyond, we’ll see open source hardware play an increasingly important role in academic and early-stage commercial products. New configurations of RISC-V CVA6 and CV-Wally cores will be especially influential in this early wave of adoption.
Research and innovation efforts will accelerate progress. European projects backed by the Chips JU, such as TRISTAN and Rigoletto, and projects including CHERIoT, will further strengthen the ecosystem by bringing academia and industry together for collaborative semiconductor R&D.
With major adopters and contributors such as Thales already demonstrating the benefits of building on open hardware, 2026 will mark a shift in how organisations approach hardware design and maintenance. More companies will explore open source silicon options. According to research firm Omdia, RISC-V processors are on track to account for almost a quarter of the global market by 2030, signalling that this shift is already well underway.
Prediction 4: 2026 will trigger alarm over the CRA as companies around the world realise they are behind on compliance.
The EU Cyber Resilience Act is the world’s first horizontal cybersecurity regulation, mandating secure-by-design and supply-chain security best practices. It comes with potential fines of up to €15 million or 2.5% of a company’s global annual turnover. In 2026, it will become impossible to ignore. As the deadline approaches, many organisations will scramble to understand and meet the CRA’s requirements, resulting in widespread urgency across global markets.
Beginning September 11, 2026, the CRA’s mandatory vulnerability reporting requirements take effect, and every manufacturer selling products in Europe will be required to comply. Yet awareness remains alarmingly low: just 12.3% of SMEs are aware of the CRA, compared to 83.5% of very large enterprises. The gap between expectations and preparedness will become painfully clear.
There is, however, a silver lining. As compliance pressures increase, policymakers could emerge as the greatest champions of open source sustainability. The CRA explicitly places security responsibility on manufacturers and not on maintainers of open source projects, which could provide long-overdue clarity and support for the open source ecosystem.
Initiatives like Open Regulatory Compliance (ORC) will help technology companies coordinate their CRA readiness, reducing duplicative efforts, mitigating risks, and protecting innovation. By working together on shared compliance frameworks, organisations can meet regulatory expectations while continuing to advance open source development.
Prediction 5: 2026 will be the year the industry reinvests in open source infrastructure.
The global software ecosystem runs on open source infrastructure, yet for years, many global enterprises have relied on it without meaningfully contributing back. In September, I, along with many other open source stewards, called on the businesses that benefit most to take a larger role in sustaining this critical infrastructure. Encouragingly, that call is already being answered.
One example is Amazon’s recent support for the Eclipse Foundation. This commitment strengthens multiple core services, including the Open VSX Registry, the vendor-neutral extension registry for the Visual Studio Code ecosystem that powers many AI-enabled development environments.
The Open VSX Registry is now one of the fastest-growing package registries in the world. It serves as the default registry for several leading AI developer tools, including Amazon’s Kiro, Cursor, Google Antigravity, Windsurf, IBM’s Project Bob, and others. In 2025, it averaged more than 110 million downloads each month. It now hosts more than 7,000 extensions from nearly 5,000 publishers. With strong enterprise engagement and open governance, the registry is becoming a central distribution hub for the next generation of AI software development tooling.
In 2026, we will also see open infrastructure providers, including the Eclipse Foundation, explore new ways to align funding with commercial and enterprise usage while maintaining openness for general and individual use. Each ecosystem will take its own path, and some experimentation will be needed to achieve the right balance, but the direction is clear. These efforts will strengthen open infrastructure and help ensure that essential shared services remain reliable and sustainable for everyone who relies on them.
December 16, 2025
Xtext, Langium, what next?
December 16, 2025 12:00 AM
December 15, 2025
Bosch holds "SDV Study Session" to explain its efforts to standardize software through open source and the latest information
by Amin Rasti at December 15, 2025 06:50 AM
Held on December 12, 2025
By: Hide Sakuma
Ansgar Lindwedel (left), Director of Ecosystem Development for Software-Defined Vehicles at the Eclipse Foundation, and Christian Mecker, President and CEO of Bosch Corporation (right)
On December 12, Bosch held an "SDV Study Session" for members of the press, inviting representatives from Eclipse SDV, an organization that manages open source software-related projects, to explain various initiatives and the latest information aimed at realizing SDV (Software-Defined Vehicles), for which collaboration across industry boundaries is key.
At the study session, three people took to the stage to give presentations: Christian Mecker, President and CEO of Bosch; Yasuhiro Morita, who is in charge of SDV technology at the Bosch Mobility Technology Headquarters for East and Southeast Asia; and Ansgar Lindwedel, Director of Ecosystem Development for Software-Defined Vehicles at the Eclipse Foundation.
The "SDV Study Group" hosted by Bosch
"Connected," "centralized architecture," and "SDV" are important for the spread of new technologies
Christian Mecker, President and CEO of Bosch Corporation
First up was President Mecker, who pointed out that the time span between the creation of a new technology and its release on the market is getting shorter every year, and that while various types of vehicle electrification are being developed, from pure BEVs (battery electric vehicles) to PHEVs (plug-in hybrid vehicles), the overall ultimate goal is to reduce CO2 emissions. He also noted that autonomous driving is a major trend, with various technologies being used to advance the automation level from 1 to 5, and that efforts continue to meet user needs.
User experience is also an important factor, and by connecting passengers to the vehicle through data utilization, the interior of the vehicle will become a place where passengers can spend time like their living room at home. Each of these technologies will be put into practical use individually, and in the future, it will be necessary to expand them horizontally. To achieve this, three points will be important: "Connected" to constantly update the vehicle, "Centralized Architecture" to centrally manage the data obtained through Connected, and "SDV" to reflect updates in the vehicle, which was the theme of this study session.
He said that the need for standard specifications to serve as a basis for data, such as updates, across the many automakers around the world is an issue for the automotive industry. He explained that while standard specifications exist, the technology each automaker develops is embedded with the company’s own DNA before being installed in vehicles, even though 70% of the software used becomes standardized. Bosch believes that making standard specifications shareable within the community will lead to cost reductions and improved quality, and is actively working to develop shareable standard specifications.
"Connected," "centralized architecture," and "SDV" will be important for horizontal deployment of practical technologies
Call for collaborative OS development using "open core"
Yasuhiro Morita, SDV Technology Officer, Technology Management Division, Bosch Mobility East and Southeast Asia, Bosch Corporation
Next on stage, Mr. Morita gave a presentation on what software development should look like to realize SDV.
In existing automobile development, hardware and software are closely related: the hardware is created first and then controlled by the software. With SDVs, the separation of hardware and software is important; in addition to the hardware evolving with new technologies, application software will also continue to evolve, improving vehicle functions.
To achieve this separation, an OS (operating system) is needed to mediate between hardware and software. Currently, each automaker is focusing on developing its own OS, but Morita points out that the demand for sophistication has led to increasing complexity, while at the same time the amount of software code has exploded, leading to fragmentation of technology. From the perspective of Bosch, a Tier 1 supplier, if this situation continues, then whenever one company wants to provide technology developed for its own environment to another company, new development work will be required to accommodate the different OS specifications, leading to increased costs.
In order to eliminate such losses and improve efficiency across the industry, efforts have begun to develop standard operating systems and other specifications, and the entire group, including Bosch and its affiliated company ETAS, is working on this.
To achieve SDV, hardware and software must be separated and an OS must be created to mediate between them.
Bosch has proposed consolidating multiple implementation efforts into a single "open core" to achieve the standardization required to realize SDVs. Each automaker will still inject its own DNA into the application layer to differentiate its vehicles, but the company is calling on automakers, suppliers, integrators, and others to join consortiums and study groups such as the Eclipse Foundation's "S-CORE," "AUTOSAR," "COVESA," "JasPar," and the "Open SDV Initiative" to standardize the OS and middleware that consolidate the implementation in each area as collaborative areas, and to advance collaborative development.
A proposal to standardize areas that are not suitable for differentiation, such as the OS and middleware, as an "open core," and allow each automaker to concentrate resources on the application part to differentiate their vehicles.
There are standards for various parts of a car, and the "open core" is an initiative aimed at standardizing the operating system used in cars.
The third option, "co-creation," is effective in software development
Ansgar Lindwedel, Director of Ecosystem Development for Software-Defined Vehicles, Eclipse Foundation
In his presentation, Lindwedel of the Eclipse Foundation explained that the Eclipse Foundation, founded in 2004 and now with a 21-year history, is the world's largest open source foundation. It works on a wide range of projects, with contributors from all over the world participating in its activities, and continues to work with the goal of being a trusted partner.
In response to the challenges facing the automotive industry, particularly in the software field, the Eclipse Foundation established the new "Eclipse SDV" working group with 11 members in 2022. The three specific challenges cited were a shortage of software engineers as automakers and suppliers face an increasing number of jobs requiring them; the lack of horizontally deployable solutions when software is developed in-house; and the rapid evolution of the automotive industry.
Until now, when automobile manufacturers and other companies needed software, they had two choices: make it themselves or buy it from somewhere else. But as President Mecker pointed out in his presentation, 70% of the software used will be standard content that does not lead to product differentiation. While it will be important to continue developing the remaining 30%, which creates each company's individuality, he said that for standard content a third option, the idea of "creating it collaboratively," will be effective.
The benefits of adopting open source in software development
Providing a new option for software procurement: "collaborative creation" rather than the traditional "make it yourself" or "buy it from somewhere" approach
Participating companies of "Eclipse SDV." When it was established in 2022, there were 11 members, mostly from Europe, but now the group also includes companies from the US and the Asia-Pacific region.
In the three years since its founding, Eclipse SDV has seen an increase in the number of participating members, and in addition to the initial European focus, companies from the United States and the Asia-Pacific region have also joined. However, no major Japanese companies have joined, and in order to change this situation, Eclipse SDV held its first explanatory event in Japan for corporate representatives with the cooperation of Bosch, a strategic member.
Also at the event, Eclipse announced version 0.5 of its middleware for high-performance architectures, Eclipse S-CORE. It explained that through collaborative activities in the development project, participating members have confirmed improvements in the software's efficiency and speed. It also said that Eclipse S-CORE is not positioned solely for Europe, but is planned to be developed as a global project. In fact, an official announcement is planned at CES 2026, to be held in Las Vegas, USA, three weeks from now, that 17 globally active companies will be participating in the project.
Eclipse S-CORE version 0.5, middleware for high-performance architectures, announced
It will also be announced at CES 2026 that 17 new companies will join the project.
The Eclipse Foundation explained that it does not intend to limit Eclipse S-CORE to SDV for the automotive industry, but rather positions it as a truly open source technology that can be widely shared, and that it would like to provide it covering hardware, RISC-V, RTOS, and other technologies. However, it also mentioned that in terms of standardization it is considering sharing technology widely within the community, and that one option is to do this on a regional or group basis, such as a Japanese standard, a European standard, or a US standard.
That said, the Eclipse Foundation stated that it is committed to avoiding effort spent on purely local, regional standardization, and that it hopes to help successfully integrate the local efforts underway in Japan into a global framework.
Eclipse S-CORE is positioned as a truly open source technology that can be widely shared.
The advantage of Eclipse SDV is that it can utilize a global ecosystem.
The study session also included a question and answer session.
During the question and answer session that followed the presentation, Lindwedel was asked about the relationship with organizations such as AUTOSAR, JasPar, and the Open SDV Initiative, which are working towards the standardization of in-vehicle software.
During his visit to Japan, he had the opportunity to speak with members of JasPar participating companies, who told him that JasPar is also working on open source initiatives. However, Eclipse SDV has no intention of competing with other organizations, and its emphasis on open source is intended to keep the door open and promote collaboration.
He added that one of the advantages of Eclipse SDV's activities is access to a global ecosystem. Japanese manufacturers such as Toyota Motor Corporation, Nissan Motor Corporation, and Honda Motor Co., Ltd. participate in the Linux Foundation, but he commented that they may be able to make further progress by also joining Eclipse SDV, which counts European and North American companies among its members.
Morita continued with his own views: Eclipse SDV originated in Europe and JasPar in Japan; AGL (Automotive Grade Linux) started as a standardization effort for the infotainment domain, while JasPar initially started in the body domain but has expanded over the years into areas such as functional safety.
He explained that the "SoDeV" architecture announced earlier, which utilizes the Linux platform, covers security up to the hypervisor, and that this content has some similarities to the Eclipse S-CORE architecture. As software development using open source has spread to all domains, he said, it is no longer the time for organizations to operate independently, but rather to cooperate with each other. It would be a shame to keep JasPar's activities within Japan only; he wants to expand them globally and act as a bridge between collaborating organizations to avoid duplication of effort.
December 10, 2025
MCP Joins the Linux Foundation
by Scott Lewis (noreply@blogger.com) at December 10, 2025 08:10 PM
December 09, 2025
Model Evolution Support - Santa Came Early for CDO Users!
by Eike Stepper (noreply@blogger.com) at December 09, 2025 05:15 PM
As the year draws to a close and the festive season approaches, the CDO team has delivered an early Christmas present to its community: the brand new Model Evolution Support feature! If you’ve ever wished for a smoother, safer way to evolve your EMF models in a CDO repository, your wish has just been granted.
Unwrapping the Gift: What Is Model Evolution Support?
Just like a carefully wrapped present under the tree, model evolution has long been a much-anticipated feature for CDO users. In the past, evolving your domain models—adding new features, refactoring classes, or updating enumerations—could be a daunting task, often requiring manual intervention, downtime, or even risky database migrations.
With the new Model Evolution Support, CDO introduces a phased, robust, and highly configurable process to handle model changes. This means you can now update your Ecore models and let CDO guide your repository through the necessary steps to keep your data and schema in sync—no more crossing your fingers and hoping for the best!
How Does It Work? A Phased Approach
Think of model evolution as a series of steps, much like unwrapping a present layer by layer. CDO’s approach is divided into clear phases:
- Change Detection: CDO first checks for differences between the stored models and your newly registered EPackages. If nothing has changed, the process stops here—no unnecessary work!
- Repository Export (Optional): Before making any changes, CDO can export your repository data, providing a safety net in case you need to roll back.
- Schema Migration: This is the heart of the process. CDO updates the database schema, adjusts feature IDs, and ensures enums are consistent with your new model.
- Store Processing (Optional): Any additional post-processing on the database can be handled here.
- Repository Processing (Optional): Final touches and clean-up on the repository itself.
Each phase is managed by a dedicated handler, and you can customize or extend these handlers to fit your project’s needs. Default implementations are provided for the most critical phases, so you’re ready to go out of the box.
Modes for Every Wish List
Santa knows that not everyone wants the same gift, and CDO’s Model Evolution Support is just as flexible. You can configure the evolution mode to:
- Migrate: Automatically evolve your models and database (the default and most magical option).
- Prevent: Detect changes but stop the process if any are found—perfect for production environments where stability is key.
- Disabled: Turn off model evolution entirely if you want to manage changes manually.
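These three modes amount to a simple three-way decision once change detection has run. The sketch below illustrates that decision logic in plain Java; the class, enum, and method names are mine for illustration, not CDO's actual API.

```java
// Illustrative sketch of CDO-style evolution modes (hypothetical names,
// not the real CDO API).
class EvolutionModeSketch
{
    enum EvolutionMode { MIGRATE, PREVENT, DISABLED }

    /**
     * Decide what happens after the change-detection phase.
     * Returns a short description of the action taken.
     */
    static String onModelChange(EvolutionMode mode, boolean changesDetected)
    {
        if (!changesDetected)
        {
            // Nothing changed: the process stops here, no unnecessary work.
            return "no-op";
        }
        return switch (mode)
        {
            // Evolve models and database schema automatically.
            case MIGRATE -> "migrate";
            // Stop hard if any change is found (for production stability).
            case PREVENT -> throw new IllegalStateException(
                    "Model changes detected; evolution prevented");
            // Leave changes for manual handling.
            case DISABLED -> "ignored";
        };
    }

    public static void main(String[] args)
    {
        System.out.println(onModelChange(EvolutionMode.MIGRATE, true));
    }
}
```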
Peace of Mind: Logging, Recovery, and Transparency
No Christmas present is complete without instructions and a warranty! CDO logs every step of the evolution process, stores context and state in dedicated folders, and allows you to resume interrupted evolutions. This transparency means you can always see what’s happening and recover gracefully from any hiccups.
Conclusion: The Gift That Keeps on Giving
With Model Evolution Support, CDO users can look forward to a new year of painless model updates, safer migrations, and more time to focus on building great applications. It’s a feature that truly feels like Santa came early—delivering exactly what the community has been wishing for.
The feature is ready for use in this p2 repository.
More information can be found in the help center.
Happy holidays, and happy evolving!
by Eike Stepper (noreply@blogger.com) at December 09, 2025 05:15 PM
The Tangent is the Opportunity
by Donald Raab at December 09, 2025 12:25 AM
Focus and flow are great until they hinder and hide creative discoveries.
Slow down to speed up.
We’re running 900mph on a hamster wheel. We exist in this constant state of feeling like we’re going super fast, with the end result of feeling like we haven’t gone very far at all. Don’t be afraid to slow down and get off the wheel once in a while, so you can take the time to assess if the direction you are headed in will lead anywhere meaningful and impactful in the long term.
It’s not the speed, it’s the direction.
Acknowledging our humanity is not a weakness. Neither is it a strength. Being human is just a fundamental truth, that if you continually deny, will be an immovable titanium wall thwarting any real progress. We kill our own creativity, and stunt our upper limits of productivity by not embracing our fundamental need to be human.
Write. Think. Repeat.
Writing is a creative endeavor. It helps us think deeply and discover things that might have been hiding in plain sight. Writing naturally slows us down, and that is good. We can’t discover things when we are heads down on the hamster wheel running 900mph, trying to squeeze as much productivity out of keyboard automation as we can.
Simple structure can guide you.
Writing is hard. Getting started can be surprisingly easy. Try five, seven, five. Writing haiku can provide a clear and easy path to start with. Choosing the words and syllables that evoke feeling is the bridge between the structure and creativity. Start simple. Progress will be quick and in small bursts.
Complexity is your enemy.
Don’t start with the PhD thesis on the meaning of all things if your writing muscles have gone unused due to the hamster wheel effects of constant productivity chasing. Write a simple haiku, and then another, and repeat until you get into the writing flow. Then try writing something different.
Consistency is the key.
After a month of an intense technical writing marathon a few years ago, I took a month break from technical writing to just write haiku. I had never written haiku before. I didn’t write the haiku with the hope of becoming a haiku guru. I wrote them as a forced detox from the hamster wheel of writing I put myself on. I needed to slow down, but it was important that I didn’t stop. I needed a change of direction.
The tangent is the opportunity.
Did I waste a month by just writing haiku? No. There was something magical there that I didn’t see at all. I turned the eleven haiku I wrote in one month into a Java Haiku Kata using Eclipse Collections and newer Java language features like Java Text Blocks. My Java Haiku Kata eventually became the basis for José Paumard’s JEP Café 9 on the Java YouTube Channel. His video has over 70,000 views on YouTube. It was in the top 10 of all videos on the Java YouTube channel by popularity for a couple of years, and is currently in the number 11 spot.
What if I didn’t slow down and follow the tangent?
I might have been completely burnt out and might have stopped writing blogs altogether. I wouldn’t have experimented with writing Haiku and wouldn’t have created a Java Kata that José was able to leverage as a basis for one of his two most popular JEP Café videos to date. As a side benefit, 70,000 developers have read my haiku in a Java Text Block. This was a “chef’s kiss” outcome.
Be well and be you
I wanted to keep this short and show a simple example of following a really outlandish tangent. I would have never predicted the final outcome in a million years. This outcome has brought me unexpected joy. It’s possible I may never write anything that gets as many reads as my random month of eleven haiku. Then again, if I take the time to slow down and chase tangents whenever I feel the need to recharge, I may eventually discover my greatest opportunity. I hope you find yours as well!
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
December 08, 2025
AI and an Architecture of Open Source Participation
by Scott Lewis (noreply@blogger.com) at December 08, 2025 07:34 PM
A perspective on open source participation and AI:
by Scott Lewis (noreply@blogger.com) at December 08, 2025 07:34 PM
December 05, 2025
One Positive Effect of Java 25 with Compact Object Headers Enabled
by Donald Raab at December 05, 2025 04:48 AM
Measuring the memory cost of ArrayList and Eclipse Collections FastList
Donald Walker left a comment on my “What if Java didn’t have modCount?” blog that got me thinking.
Some thoughts: the real overhead of modCount may be that it is sitting there eating 4 bytes (minimum…) on each instance of Java fundamental collection classes.
I’ve thought for many years, apparently incorrectly, that java.util.ArrayList required 8 bytes more than Eclipse Collections FastList, because of the 4 byte int field named modCount. After reading the comment above, I wrote some code in Java 25 and proved myself wrong. Then I changed a property, ran the code again, and proved myself right. Thank you Project Lilliput and JEP 519!
I’m wondering if it’s possible that my faulty memory of the 8 byte savings was from the days of 32-bit Java or 64-bit Java before compressed oops. I didn’t blog back then, so I can’t find out. Thankfully, I am writing this down now so I don’t forget.
Update: The Progression of Memory Compression
Over the years, two important things have been added to aid in the reduction of memory consumption for objects in a 64-bit JVM. The first thing, and the most impactful was CompressedOops. The second was Compact Object Headers.
CompressedOops was added sometime in Java 6, and became enabled by default in Java 7. I learned today that it can be disabled. CompressedOops compresses object references from 8 bytes down to 4 bytes. This results in tremendous memory savings on 64-bit JVMs, because every object reference saves 4 bytes, give or take, given an 8 byte object alignment.
Compact Object Headers are now available via a property in Java 25. This saves essentially 4 bytes in each object's header. It will offer a much smaller, but measurable, savings in some heaps. Instead of saving on every reference, there is a 4 byte savings on every instance of an object, and there are more object references than object instances in a heap.
The following chart is a simplified view I wanted to update this blog with so folks could see the progression of memory compression for ArrayList and FastList since 64-bit JVMs were first introduced.
The progression of compression using CompressedOops and Compact Object Headers with empty ArrayList and FastList
This section was an update to the original blog so you could see the progression in a simple chart. The rest of the blog shows the default memory layout of ArrayList and FastList using three different options in Java 25 for compressing memory.
Note: I did not try to see if it is possible to disable CompressedOops and enable Compact Object Headers, as this potential combination did not make sense to me as a desirable combination.
What is the memory cost of modCount?
An int is 4 bytes, so modCount should cost 4 bytes. With 8 byte object alignment, those 4 bytes can cost you 8 if they leave behind 4 bytes of padding.
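The alignment rule is easy to reproduce with a few lines of arithmetic. The sketch below is only an illustration of 8-byte alignment, not JOL itself; the field sizes match the default-layout JOL output later in this post (a 12-byte header plus 4-byte int and compressed reference fields).

```java
class AlignmentSketch
{
    // Round a raw instance size up to the JVM's default 8-byte object alignment.
    static long align8(long bytes)
    {
        return (bytes + 7) & ~7L;
    }

    public static void main(String[] args)
    {
        long header = 12; // 8-byte mark word + 4-byte compressed class pointer
        // ArrayList: modCount + size + elementData reference, 4 bytes each
        long arrayList = align8(header + 4 + 4 + 4); // 24 raw, no padding needed
        // FastList: size + items reference; 20 raw bytes pad up to 24
        long fastList = align8(header + 4 + 4);
        System.out.println(arrayList + " vs " + fastList);
    }
}
```

This is why dropping the 4-byte modCount alone buys nothing under the default layout: the 4 bytes saved are given right back as an alignment gap.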
Memory Layout of ArrayList and FastList
I ran a test using Java 25, Eclipse Collections 13.0, and Java Object Layout (JOL) 0.17, which is an OpenJDK project.
<dependency>
<groupId>org.eclipse.collections</groupId>
<artifactId>eclipse-collections</artifactId>
<version>13.0.0</version>
</dependency>
<dependency>
<groupId>org.openjdk.jol</groupId>
<artifactId>jol-core</artifactId>
<version>0.17</version>
</dependency>
The following is the code I ran with vanilla Java 25, measuring the memory layout and cost of an empty ArrayList and empty FastList.
@Test
public void emptyArrayListVsFastList()
{
this.outputMemory(new ArrayList());
this.outputMemory(new FastList());
}
private void outputMemory(Object instance)
{
System.out.println(ClassLayout.parseInstance(instance).toPrintable());
System.out.println(GraphLayout.parseInstance(instance).toFootprint());
}
The output is as follows:
java.util.ArrayList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0000000000000001 (non-biasable; age: 0)
8 4 (object header: class) 0x00218308
12 4 int AbstractList.modCount 0
16 4 int ArrayList.size 0
20 4 java.lang.Object[] ArrayList.elementData []
Instance size: 24 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
java.util.ArrayList@4bd31064d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
2 40 (total)
org.eclipse.collections.impl.list.mutable.FastList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0000000000000001 (non-biasable; age: 0)
8 4 (object header: class) 0x011a8000
12 4 int FastList.size 0
16 4 java.lang.Object[] FastList.items []
20 4 (object alignment gap)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
org.eclipse.collections.impl.list.mutable.FastList@7cbd9d24d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 org.eclipse.collections.impl.list.mutable.FastList
2 40 (total)
The surprise here for me is that for some reason I was mistakenly expecting FastList to be 8 bytes smaller than ArrayList. As JOL reports, there is a 4 byte “object alignment gap” in FastList where modCount would be, if FastList extended AbstractList, which it does not.
Enabling Compact Object Headers
I’ll add the following property with Java 25 to enable Compact Object Headers and run again.
-XX:+UseCompactObjectHeaders
Now the output is as follows:
java.util.ArrayList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0023c40000000001 (Lilliput)
8 4 int AbstractList.modCount 0
12 4 int ArrayList.size 0
16 4 java.lang.Object[] ArrayList.elementData []
20 4 (object alignment gap)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
java.util.ArrayList@7354b8c5d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
2 40 (total)
org.eclipse.collections.impl.list.mutable.FastList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x012c800000000001 (Lilliput)
8 4 int FastList.size 0
12 4 java.lang.Object[] FastList.items []
Instance size: 16 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
org.eclipse.collections.impl.list.mutable.FastList@10f7f7ded footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 16 16 org.eclipse.collections.impl.list.mutable.FastList
2 32 (total)
With Compact Object Headers (COH) enabled in Java 25, I finally see the 8 byte difference between ArrayList and FastList that I have been mistakenly expecting all these years. Yay!
Just to confirm, I will pre-size the lists to a size of 10, and expect to see the same 8 byte savings with COH enabled.
@Test
public void presizedArrayListVsFastList()
{
this.outputMemory(new ArrayList(10));
this.outputMemory(new FastList(10));
}
This is the output. Still an 8 byte difference. Woo hoo!
java.util.ArrayList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0023c40000000001 (Lilliput)
8 4 int AbstractList.modCount 0
12 4 int ArrayList.size 0
16 4 java.lang.Object[] ArrayList.elementData [null, null, null, null, null, null, null, null, null, null]
20 4 (object alignment gap)
Instance size: 24 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total
java.util.ArrayList@7354b8c5d footprint:
COUNT AVG SUM DESCRIPTION
1 56 56 [Ljava.lang.Object;
1 24 24 java.util.ArrayList
2 80 (total)
org.eclipse.collections.impl.list.mutable.FastList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x012c800000000001 (Lilliput)
8 4 int FastList.size 0
12 4 java.lang.Object[] FastList.items [null, null, null, null, null, null, null, null, null, null]
Instance size: 16 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
org.eclipse.collections.impl.list.mutable.FastList@10f7f7ded footprint:
COUNT AVG SUM DESCRIPTION
1 56 56 [Ljava.lang.Object;
1 16 16 org.eclipse.collections.impl.list.mutable.FastList
2 72 (total)
Update: Disabling CompressedOops
While responding to a comment from Donald Walker on this blog, I decided to see what the cost of ArrayList and FastList would have been on 64-bit with CompressedOops disabled. I searched for a way to disable CompressedOops, and there is one. The following is the property I used.
-XX:-UseCompressedOops
When I run the empty list test with ArrayList and FastList, with CompressedOops disabled, this is the result.
java.util.ArrayList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0000000000000009 (non-biasable; age: 1)
8 4 (object header: class) 0x00218300
12 4 int AbstractList.modCount 0
16 4 int ArrayList.size 0
20 4 (alignment/padding gap)
24 8 java.lang.Object[] ArrayList.elementData []
Instance size: 32 bytes
Space losses: 4 bytes internal + 0 bytes external = 4 bytes total
java.util.ArrayList@12299890d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 32 32 java.util.ArrayList
2 48 (total)
org.eclipse.collections.impl.list.mutable.FastList object internals:
OFF SZ TYPE DESCRIPTION VALUE
0 8 (object header: mark) 0x0000000000000001 (non-biasable; age: 0)
8 4 (object header: class) 0x011a8000
12 4 int FastList.size 0
16 8 java.lang.Object[] FastList.items []
Instance size: 24 bytes
Space losses: 0 bytes internal + 0 bytes external = 0 bytes total
org.eclipse.collections.impl.list.mutable.FastList@1fb19a0d footprint:
COUNT AVG SUM DESCRIPTION
1 16 16 [Ljava.lang.Object;
1 24 24 org.eclipse.collections.impl.list.mutable.FastList
2 40 (total)
Here the cost of ArrayList went from 24 bytes to 32 bytes, and FastList stayed steady at 24 bytes. Why did ArrayList jump to 32 bytes? It’s because now the reference cost for objects goes from 4 bytes to 8 bytes which impacts the ArrayList.elementData and creates an “alignment/padding gap” with the 8 byte alignment.
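The whole progression reduces to one formula: header bytes, plus 4 bytes per int field, plus the reference size per reference field, rounded up to 8-byte alignment. (Note that even with CompressedOops disabled, the JOL output above shows the class pointer staying at 4 bytes, so the header remains 12 bytes.) The back-of-the-envelope sketch below is my own illustration, not a JVM-accurate layouter, but it reproduces all six measurements in this post.

```java
class CompressionProgression
{
    static long align8(long bytes)
    {
        return (bytes + 7) & ~7L;
    }

    // Raw size = header + int fields + reference fields, aligned to 8 bytes.
    // (A real layouter may also insert internal padding; for these two
    // classes the simple sum happens to match the JOL measurements.)
    static long size(long headerBytes, int intFields, int refFields, int refBytes)
    {
        return align8(headerBytes + 4L * intFields + (long) refBytes * refFields);
    }

    public static void main(String[] args)
    {
        // ArrayList: 2 int fields + 1 reference; FastList: 1 int + 1 reference.
        // -XX:-UseCompressedOops (12-byte header, 8-byte refs) -> 32 / 24
        System.out.println(size(12, 2, 1, 8) + " / " + size(12, 1, 1, 8));
        // Default CompressedOops (12-byte header, 4-byte refs) -> 24 / 24
        System.out.println(size(12, 2, 1, 4) + " / " + size(12, 1, 1, 4));
        // -XX:+UseCompactObjectHeaders (8-byte header, 4-byte refs) -> 24 / 16
        System.out.println(size(8, 2, 1, 4) + " / " + size(8, 1, 1, 4));
    }
}
```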
This is probably why my expectation all these years was that FastList would save 8 bytes. I may have tested with either a 32-bit JVM or with a 64-bit JVM without CompressedOops enabled. With Compact Object Headers enabled, that expectation is finally met again.
Lessons Learned
When you learn something, and test something, write it down and share it somewhere it can be recalled, like a blog. Memory fades and fails. Something that may or may not have been true 15-20 years ago, should be validated and confirmed with tests. Things change. We’ve come a long way since the days of 32-bit Java and even 64-bit Java before compressed oops.
I’m happy to see and confirm the 8 byte savings of not having modCount in FastList will arrive with Java 25 and Compact Object Headers enabled. Now I’ve written it down.
Thanks for reading!
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
December 04, 2025
2025 Open Source Congress Report
by Jacob Harris at December 04, 2025 10:00 AM
Uncover the growing challenges facing open source today, from rising regulatory demands to the complexities of operating as a global, mature ecosystem.
December 01, 2025
LaTeX listings: Eclipse colors
by Lorenzo Bettini at December 01, 2025 11:58 AM
November 30, 2025
What if a Developer can Help Improve Software Development in Java?
by Donald Raab at November 30, 2025 08:26 PM
My journey to help improve the Java ecosystem continues.
My journey has been greater than the destination. I’ve travelled a lot of miles and made a lot of friends along the way.
Things happen sometimes when you ask “What if?” It’s not enough to just ask the question. You have to invest yourself in exploring the answer and convince others that the thing you are thinking or talking about is important to consider. Sometimes you should write a blog. Sometimes you should write some code. Sometimes you’ll need to engage in open community discussions. Sometimes you should write a book. Regardless, any change to a programming language or library requires a commitment of patience and persistence. If you believe strongly in your “what if”, then just do it. Even if you have to go it alone at first, the view might be very pleasant when you get there. If and when others see what you see, they may enjoy the view as well, and commit themselves to the cause. It’s ok if no one else sees what you see, or if the value is deemed to not meet the cost, or if it simply is not a priority. Everyone needs an incentive to do something. Sometimes incentives won’t align. If you learned something, then take that as the win and move on. There’s plenty of work to do.
I shared this post originally on LinkedIn. The blogs I referenced are all on Medium, so I am sharing it here as well.
When I decided at the end of 2023 to take time off to travel and to write a book about the open source Eclipse Collections library, I didn’t start by travelling or writing the book. I started by blogging.
In the first fifteen days of 2024, I wrote three “What if Java…” blogs. These blogs created the necessary distance and space for me to think about what I might want to write about in a book about a Java collections library I had created and worked on for twenty years. I wanted to recall some of what motivated me to create Eclipse Collections twenty years earlier. The simple answer was “because Smalltalk”, but there were many more nuanced answers, that most Java developers would not immediately appreciate or understand.
I’ve written a couple more “What if Java…” style blogs since writing the first three. If you want to see some of how I see the world of software development, through my former Smalltalk developer lens, then check out these blogs.
📔 What if null was an Object in Java?
📔 What if Java had Symmetric Converter Methods on Collection?
📔 What if Java didn’t have modCount?
If you want to understand why I believed that Java would get lambdas all the way back in 2004, then this is the blog to read. [This was really my first “What if Java got support for lambdas?” blog.]
📔 My ten year quest for concise lambda expressions in Java
Twenty years after starting this quest, I began the journey writing the book about an improbable open source Java library. While I have written and published the first edition of the book, my journey hasn’t finished yet. I continue to convey the message to all Java developers, that there are different, sometimes better, ways to approach solving problems in Java. Java is a great programming language. Developers who program in Java can and should learn a lot from classic programming languages, like Smalltalk.
The following is the story I wrote after completing the book writing portion of my journey. There is an appendix dedicated to Smalltalk, and the whole organization of the book owes much to the idea of message categories I learned from Smalltalk over thirty years ago.
📙 My Twenty-one Year Journey to Write and Publish My First Book
Thanks for reading! 🙏
I am the creator of and committer for the Eclipse Collections OSS project, which is managed at the Eclipse Foundation. Eclipse Collections is open for contributions. I am the author of the book, Eclipse Collections Categorically: Level up your programming game.
November 29, 2025
Easily running SWT on different Linux Distros
by Jonah Graham at November 29, 2025 03:13 AM
As a follow-up to my earlier article on testing and developing SWT on GTK, I’ve recently run into a set of challenges that require running SWT tests, samples, and even the full IDE across multiple Linux distributions and different GTK minor versions.
Historically, whenever I needed to test Eclipse on another distro, I’d either dual-boot my machine, using a shared data partition to keep setup simple, or, for quick one-off experiments, spin up a VM. Occasionally I’d throw together a Docker image just for a single test run.
But as I found myself switching between environments more frequently, those ad-hoc approaches grew cumbersome. I needed something faster and more repeatable. So I formalized the Docker approach and built a workflow that lets me launch Java applications, including SWT under X11 or Wayland, with just a click or two.
The result is swt-dev-linux (https://github.com/jonahgraham/swt-dev-linux), a GitHub repository where I’m collecting and documenting the scripts that make this workflow straightforward and reliable. If you need to test SWT across different Linux/GTK configurations, I hope it makes your life easier too.
Demo
Here is a screenshot where I have the SWT ControlExample running on 4 different distros simultaneously. It makes it easy to compare and contrast behaviours:

In a process tree this looks like:

The above examples were all running on GTK4 + X11. The next example is also GTK4, with one instance on X11 and one on Wayland, both on Fedora 43, with my host distro being Ubuntu 25.10:

Recursive SDKs Demo
Here is another screenshot showing (from top left):
- Eclipse SDK setup with my main SWT development environment, launching:
- a child Eclipse SDK running on my Ubuntu docker container, launching:
- a child Eclipse SDK also running on my Ubuntu docker container, launching:
- a hello world Java application

Here is what the process tree looks like for the nested SDKs:

Intrigued?
Come visit my GitHub repo at https://github.com/jonahgraham/swt-dev-linux and try it out and let me know what you think. File an issue or a PR!
November 27, 2025
Next generation AI IDEs - sponsored by Eclipse and Advantest
by Andrey Loskutov (noreply@blogger.com) at November 27, 2025 04:06 PM
"Next generation AI" IDEs
AI is the current buzzword, everyone seems to use it, so why not write something about it?
Disclaimer: the following blog is written wearing my "hat" as a proud Advantest team member, from the company that provides both hardware and software for the "AI".
Because of my job as Java/Eclipse platform owner at Advantest, I happened to stumble upon two "AI based" (and hyped) IDEs recently - Cursor and Google Antigravity (surely there could be more of that stuff).
If you have never heard about Cursor or Antigravity and wonder what they are, let's give them a chance to introduce themselves (just a few selected headlines from the product pages):
Cursor
Cursor is the best way to code with AI
The best way to build software
Develop enduring software at scale
The new way to build software
Antigravity
Experience liftoff with the next-generation IDE
For developers - Achieve new heights
For organizations - Level up your entire team
New Era in AI-Assisted Software Development
So it looks like this should be really cool and surely "cutting edge" stuff, doesn't it?
Quest for Essence
However, looking at the product pictures, it seems that they both are ... "AI pimped" forks of VS Code (and on second look, they are forks of VS Code).
I will not say much about AI here (my personal "I" is enough for me so far), but as a Java and Eclipse developer, I was curious how different they are compared to "old school" IDEs.
Following our Advantest corporate "quest for essence" mantra, I couldn't resist looking under the hood to search for the "essence" of these IDEs' Java support.
It was a short but amazing journey; this is a short illustrated report about it.
TL;DR
You can skip reading the entire blog post; here is one picture that should explain everything*:
* If you wonder why everything is based on Advantest: ~70% of semiconductor tests are done on equipment from Advantest, so you basically can't have any modern electronic device without parts of it having been tested on our testers.
Full Story
(Disclaimer: I've checked Cursor and pure VS Code only, but not Antigravity, because the latter required root rights for installation; the findings presented below should, however, be the same for every VS Code-based/forked IDE.)
So let's start Cursor, click away all the popups and suggestions to buy / log in / enable AI, open a simple Java file, and try to debug it.
First surprise: Cursor can't build or run Java by default!
By default, Cursor only provides a basic text editor with syntax highlighting!
Cursor – depends on Microsoft?
To build, run & debug Java code, Cursor recommends installing the VS Code Extension Pack for Java (from Microsoft).
Cursor – depends on Red Hat?
OK, let's install Extension Pack for Java (from Microsoft)...
But wait, Extension Pack for Java from Microsoft is based on ... Language Support for Java (from Red Hat)
Cursor – depends on JDT!
OK, let's install Language Support for Java, but wait, it is based on … Eclipse JDT (Java Development Tools)…
Cursor – Eclipse AI inside
Now, I also happen to be an Eclipse Platform and JDT project lead, and I was interested in how much Eclipse and JDT code is actually used by Cursor.
It is not difficult to find where and what is installed, so a quick check revealed ...
... a lot of Eclipse libraries stored at ~/.cursor/extensions/redhat.java-1.47.0-linux-x64/server/plugins/.
Seeing this, it was obvious to me that it is not just a single ECJ library (Eclipse Compiler for Java, from JDT) being used, but the entire Eclipse platform with a lot of dependencies, and that could only work if Eclipse was started by Cursor as a separate Java/OSGi application (because VS Code runs in a native Chromium browser process).
Quick check with jps revealed the headless JVM process started by Cursor and running full featured Eclipse product under the hood:
The truth: Cursor - sponsored by Eclipse and Advantest!
To make it clear: to provide full Java support, "the next generation AI tooling" starts a regular Eclipse application (similar to the regular Eclipse IDE) as a language server, but "headless", without any UI components.
Why should Cursor start Eclipse? Well, because "AI" (in the sense most people understand it) can't compile, build and debug anything, for this work "AI" is simply not intelligent enough!
"The next generation AI" still needs the "old boring AI" provided by the JDT project, and the JDT code runs inside an Eclipse process. Of course, VS Code-based IDEs can use other language servers for Java, but as of today, the most popular one is the extension based on Eclipse, and it uses Eclipse / JDT tooling simply because it provides "for free" things which are not easy to implement from scratch. In fact, ECJ (Eclipse Compiler for Java) is the only alternative Java compiler implementation (besides javac from Oracle). Many projects also use ECJ for compilation on CI because ECJ is faster than javac.
Advantest supports open source and has been helping with Eclipse Platform maintenance for many years.
In particular, ECJ (Eclipse Compiler for Java), a major part of JDT, is maintained by Srikanth Sankaran, who has taken a leading role in Eclipse Java compiler development for the past two years.
As of today, Srikanth is the most active JDT core contributor. He reviewed, redesigned, reimplemented, and refactored support for all Java language enhancements from Java 10 through Java 25, in an intense effort spanning 20 months.
These improvements were delivered over several Eclipse releases in the past 20 months and are available in the upcoming Eclipse 4.38 release (… and also in Cursor, VS Code, …).
Let's summarize what we've learned about Cursor
- Cursor can’t compile, build, run or debug any Java code by default.
- Cursor Java support is based on VS Code extensions.
- Most popular VS Code Java extension is backed by headless Eclipse product.
- Both the Eclipse IDE and Cursor's Java support depend on the same JDT "AI"
With that, we can proudly say that the "AI" in Cursor / Antigravity / VS Code (for Java development) is sponsored by Eclipse and Advantest.
November 19, 2025
Language execution with Langium and LLVM
November 19, 2025 12:00 AM
November 13, 2025
OCX 2026: Let’s build the future of open collaboration together
by Clark Roundy at November 13, 2025 04:41 PM
TL;DR - Registration for OCX26 is officially open! Join us in Brussels from 21–23 April 2026, and grab your early bird discount before 6 January. Don’t miss the chance to be part of the future of open collaboration.
The heartbeat of open source
At the Eclipse Foundation, openness is more than a value. It’s who we are. Each year, the Open Community Experience (OCX) brings that spirit to life by connecting developers, innovators, researchers, and industry leaders from around the world.
OCX 2026 is shaping up to be our biggest and most inspiring event yet. And we’re doing it in the heart of Europe: Brussels, a city known for innovation, collaboration, and great waffles.
One pass. Six experiences. Endless opportunities.
Your OCX26 pass gives you full access to the Main OCX Track plus five collocated events, each focused on the technologies and communities shaping the future of open source:
- Open Community for Tooling: IDEs, modeling tools, and developer platforms driving innovation.
- Open Community for Automotive: The hub for software-defined vehicles and next-gen mobility.
- Open Community for AI: Exploring responsible, transparent, and open AI frameworks.
- Open Community for Compliance: Tackling security, regulation, and the Cyber Resilience Act.
- Open Community for Research: Where academia meets industry to turn ideas into impact.
Whether you write code, design smarter cars, research AI, navigate compliance, or just love open source, OCX26 is where you belong.
Why register early?
Because it saves you over €100! Register before 6 January 2026 to lock in early bird pricing.
Our program teams are now putting together an unmissable lineup filled with fresh ideas, bold conversations, and practical insights. You can expect sessions on everything from secure software practices and CRA compliance to AI-powered development tools and next-generation mobility platforms shaping the future of open source.
Who should attend?
If you care about open source, OCX26 is the place to be:
- Developers and maintainers shaping open tools and frameworks
- Innovators in automotive, embedded, and edge systems
- AI researchers advancing ethical, open AI
- Compliance and security professionals navigating new regulations
- Academics and industry partners turning research into real-world impact
- Tech leaders connecting innovation to industry needs
In short, YOU!
Got something to share?
There’s still time to submit your talk, but not much: the call for proposals closes on 19 November.
We’re looking for stories, insights, and breakthroughs from across the open source ecosystem: Java, AI, automotive, embedded, compliance, and research. Whether it’s a new project, an interesting idea, or a collaboration success story, your voice belongs on the OCX stage.
Don’t miss the chance to share your expertise and connect with hundreds of passionate community members from across the world.
Sponsor the future
OCX exists because of the organizations that believe in open collaboration and community-driven innovation.
Now’s your chance to join them as a sponsor of OCX. Our flexible Sponsorship packages put your brand in front of developers, innovators, and leaders who are shaping the next generation of open technology.
From AI and automotive to tooling and compliance, OCX26 connects your brand with the communities shaping tomorrow’s technology.
Be part of the experience
Mark your calendars, grab your early bird pass, and get ready to join over 600 open source innovators in Brussels this April for three days of collaboration, connection, and creativity.
👉 Register now.
👉 Submit your talk by 13 November.
👉 Explore sponsorship opportunities.
November 06, 2025
Foundation Announces Maintainers Fund
by Scott Lewis (noreply@blogger.com) at November 06, 2025 01:22 AM
November 05, 2025
AWS invests in strengthening open source infrastructure at the Eclipse Foundation
by Mike Milinkovich at November 05, 2025 02:29 PM
In our recent open letter and blog post on sustainable stewardship of open source infrastructure, we called on the industry to take a more active role in supporting the systems and services that drive today’s software innovation. Today, we’re excited to share a powerful example of what that kind of leadership looks like in action.
The Eclipse Foundation is pleased to announce that Amazon Web Services (AWS) has made a significant investment to strengthen the reliability, performance, and security of the open infrastructure that supports millions of developers around the world. This commitment will benefit multiple core services, including Open VSX Registry, the open source registry for Visual Studio Code extensions that powers AI-enabled development environments such as Kiro and other leading tools.
Sustaining the backbone of open source innovation
For more than two decades, the Eclipse Foundation has quietly maintained open infrastructure that forms the foundation of modern software creation for millions of software developers worldwide. Its privately hosted systems deliver more than 500 million downloads each month across services such as download.eclipse.org, the Eclipse Marketplace, and Open VSX. These platforms serve as the backbone for individuals, organisations, and communities that rely on open collaboration to build the technologies of the future.
AWS’s investment will help improve performance, reliability, and security across this infrastructure. The collaboration reflects a shared commitment to keeping open source systems resilient, transparent, and sustainable at global scale.
Open VSX: a model for sustainable open infrastructure
Open VSX is a vendor-neutral, open source (EPL-2.0) registry for Visual Studio Code extensions. It serves as the default registry for Kiro, Amazon’s AI IDE platform, and is relied upon by a growing global community of developers. The registry now hosts over 7,000 extensions from nearly 5,000 publishers and delivers in excess of 110 million downloads per month. As a leading registry serving developer communities worldwide, including JavaScript and AI development communities, Open VSX has become a vital piece of open source infrastructure that supports thousands of development teams worldwide.
By supporting Open VSX, AWS is helping to strengthen the foundations of this essential service and reinforcing the Eclipse Foundation’s ability to provide secure, reliable, and globally accessible infrastructure. Their contribution reflects the importance of collective investment in maintaining the resilience, openness, and security of the tools developers use every day.
This sponsorship highlights the shared responsibility that all organisations have in sustaining the technologies they depend on. It also sets a strong example of how industry leaders can contribute to ensuring that the services we all rely on remain trustworthy, scalable, and sustainable for the future.
Improving reliability, security, and trust
The AWS investment is helping strengthen security, ensuring fair access, and improving long-term service reliability. Ongoing work focuses on enhancing malware detection, improving traffic management, and expanding operational monitoring to ensure a stable and trusted experience for developers around the world.
As part of this collaboration, AWS is providing infrastructure and services that will improve availability, performance, and scalability across these systems. This support will accelerate key roadmap initiatives and help ensure that the platforms developers rely on remain secure, scalable, and trustworthy well into the future.
A shared commitment to open source sustainability
AWS’s contribution demonstrates how industry leaders can make strategic investments in sustaining the shared infrastructure their businesses depend on every day. By investing in the services that support open source development, AWS is helping to ensure that critical technologies remain open, reliable, and accessible to everyone.
The Eclipse Foundation continues to serve as an independent steward of open source infrastructure, maintaining the tools and systems that enable software innovation across industries. Together with supporters like AWS, we are building a stronger foundation for the future of open collaboration.
But this is only the beginning. The long-term health of open source infrastructure depends on collective action and shared responsibility. We encourage other organisations to follow AWS’s example and take an active role in sustaining the technologies that make modern development possible.
Learn how your organisation can make a difference through Eclipse Foundation membership or direct sponsorship opportunities. The future of open innovation depends on all of us; and together, we can keep it strong, secure, and sustainable.
November 04, 2025
Self-Brewed Beer is (Almost) Free - Experiences using Ollama in Theia AI - Part 2
November 04, 2025 06:12 PM
This is part two of an extended version of a talk I gave at TheiaCon 2025. That talk covered my experiences with Ollama and Theia AI in the previous months. In part one I provided an overview of Ollama and how to use it to drive Theia AI agents, and presented the results of my experiments with different local large language models.
In this part, I will draw conclusions from these results and provide a look into the future of local LLM usage.
Considerations Regarding Performance
The experiment described in part one of this article showed that working with local LLMs is already possible, but still limited due to relatively slow performance.
Technical Measures: Context
The first observation is that the LLM becomes slower as the context grows, because the LLM needs to process the entire context for each message. At the same time, too small a context window leads to the LLM forgetting parts of the conversation. In fact, as soon as the context window is filled, the LLM engine will start discarding the first messages in the conversation, while retaining the system prompt. So, if an agent seems to forget the initial instructions you gave it in the chat, this most likely means that the context window has been exceeded. In this case the agent might become unusable, so it is a good idea to use a context window that is large enough to fit the system prompt, the instructions, and the tool calls during processing. On the other hand, at a certain point in long conversations or reasoning chains, the context can become so large that each message takes more than a minute to process.
Consequently, as users, we need to develop an intuition for the necessary context length: long enough for the task, but not excessive.
Also, it is a good idea to reduce the necessary context by
- adding paths in the workspace to the context beforehand, so that instead of letting the agent browse and search the workspace for the files to modify via tool calls, we already provide that information. In my experiments, this reduced token consumption from about 60,000 tokens to about 20,000 for the bug analysis task. (Plus, this also speeds up the analysis process as a whole, because the initial steps of searching and browsing the workspace do not need to be performed by the agent).
- keeping conversations and tasks short. Theia AI recommends this even for non-local LLMs and provides tools such as Task Context and Chat Summary. So, it is a good idea to follow Theia AI's advice and use these features regularly.
- defining specialized agents. It is very easy to define a new agent with its custom prompt and tools in Theia AI. If we can identify a repeating task that needs several specialized tools, it is a good idea to define a specialized agent with this specialized toolset. In particular regarding the support for MCP servers, it might be tempting to start five or more MCP servers and just throw all the available tools into the Architect or Coder agent's prompt. This is a bad idea, though, because each tool's definition is added to the system prompt and thus, consumes a part of the context window.
Note that unloading/loading models is rather expensive as well and usually takes up to several seconds. And in Ollama, even changing the context window size causes a model reload. Therefore, as VRAM is usually limited, it is a good idea to stick to one or two models that can fit into the available memory, and not change context window sizes too often.
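The discarding behaviour described above can be illustrated with a small sketch. This is illustrative code only, not Ollama's or Theia AI's actual implementation: once a crude token estimate exceeds the context window, the oldest non-system messages are dropped first, while the system prompt is retained.

```typescript
// Illustrative sketch of sliding-window context trimming (not actual
// Ollama/Theia AI code): when the context window is exceeded, the oldest
// non-system messages are discarded while the system prompt is retained.
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Crude token estimate: ~4 characters per token (a common rule of thumb).
function estimateTokens(messages: Message[]): number {
  return messages.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);
}

function trimToContextWindow(messages: Message[], numCtx: number): Message[] {
  const result = [...messages];
  while (estimateTokens(result) > numCtx) {
    // Find the oldest message that is not the system prompt and drop it.
    const idx = result.findIndex(m => m.role !== 'system');
    if (idx === -1) break; // nothing left to drop
    result.splice(idx, 1);
  }
  return result;
}
```

This also makes the symptom visible: the system prompt survives, but the earliest user instructions silently disappear once the window is full.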
Organizational Measures
Even with these considerations regarding context length, local LLMs will always be slower than their cloud counterparts.
Therefore, we should compensate for this at the organizational level by adjusting the way we work; for example, while waiting for the LLM to complete a task,
- we could start to write and edit prompts for the next features
- we can review the previous task contexts or code modifications and adjust them
- we can do other things in parallel, like go to lunch, grab a coffee, go to a meeting, etc., and let the LLM finish its work while we are away.
Considerations Regarding Accuracy
As mentioned in part 1, local LLMs are usually quantized (which basically means: rounded) so that the weights (or parameters) consume less memory. Therefore, a quantized model can have lower accuracy. The symptom of this is that the agent does not do the correct thing, or does not use the correct arguments when calling a tool.
In my experience, analyzing the reasoning/thinking content and checking the actual tool calls an agent makes is a good way to determine what goes wrong. Depending on the results of such an analysis:
- we can modify the prompt; for example by giving more details, more examples, or by emphasizing important things the model needs to consider
- we can modify the implementation of the provided tools. This, of course, requires building a custom version of the Theia IDE or the affected MCP server. But if a tool call regularly fails because the LLM does not get the arguments 100% correct, and we could compensate for these errors in the tool implementation, it might be beneficial to invest in making the tool implementation more robust.
- we can provide more specific tools; for example, Theia AI only provides general file modification tools, such as writeFileReplacements. If you work mostly with TypeScript code, for example, it might be a better approach to implement and use a specialized TypeScript file modification tool that can automatically take care of linting, formatting, etc. on the fly.
Considerations Regarding Complexity
During my experiments, I have tried to give the agent more complex tasks to work on and let it run overnight. This failed, however, because sooner or later the agent becomes unable to continue due to the limited context size; it starts forgetting the beginning of the conversation and thus its primary objective.
One way to overcome this limitation is to split complex tasks into several smaller, lower-level ones. Starting with version 1.63.0, Theia AI supports agent-to-agent delegation. Based on this idea, we could implement a special Orchestrator agent (or a more programmatic workflow) that is capable of splitting complex tasks into a series of simpler ones. These simpler tasks could then be delegated to specialized agents (refined versions of Coder, AppTester, etc.) one by one. This would have the advantage that each step could start with a fresh, empty context window, in line with the considerations regarding context discussed above.
This is something that would need to be implemented and experimented with.
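The orchestration idea can be sketched as follows. This is a hypothetical sketch, not an existing Theia AI API; `Subtask`, `DelegateFn`, and `orchestrate` are illustrative names I made up for this example.

```typescript
// Hypothetical orchestration sketch (not an existing Theia AI API):
// split a complex task into smaller subtasks and delegate each one to a
// specialized agent, so that every step starts with a fresh context window.
interface Subtask {
  agent: 'Architect' | 'Coder' | 'AppTester';
  instruction: string;
}

// Each delegation would start a new conversation with the target agent.
type DelegateFn = (subtask: Subtask) => Promise<string>;

async function orchestrate(subtasks: Subtask[], delegate: DelegateFn): Promise<string[]> {
  const results: string[] = [];
  for (const subtask of subtasks) {
    // Sequential delegation: each step runs with an empty context window
    // and only sees its own instruction, not the whole conversation history.
    results.push(await delegate(subtask));
  }
  return results;
}
```

The key design point is that the splitting happens up front, so no single agent ever has to hold the entire complex task in its context.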
Odds and Ends
This blog article has presented my experiences and considerations about using local LLMs with Theia AI.
Several topics have only been touched slightly, or not at all, and are subject of further inspection and experimentation:
- Until recently, I had considered Ollama too slow for code completion, mostly because the TTFT (time to first token) is usually rather high. But recently, I have found that at least with the model zdolny/qwen3-coder58k-tools:latest, the response time feels okay. So, I will start experimenting with this and some other models for code completion.
- Also, Ollama supports fill-in-the-middle completion. This means that the completion API supports providing not only a prefix but also a suffix as input. This API is currently not supported by Theia AI directly; the Code Completion Agent in Theia usually provides the prefix and suffix context as part of the user prompt. So Theia AI would have to be enhanced to support the fill-in-the-middle completion feature natively, and it remains to be determined whether this would also help to improve performance and accuracy.
- Next, there are multiple approaches regarding optimizing and fine-tuning models for better accuracy and performance. There are several strategies, such as Quantization, Knowledge Distillation, Reinforcement Learning, and Model Fine Tuning which can be used to make models more accurate and performant for one's personal use cases. The Unsloth and MLX projects, for example, aim at providing optimized, local options to perform these tasks.
- Finally, regarding Apple Silicon Processors in particular, there are two alternatives to boost performance, if they were supported:
- CoreML is a proprietary Apple framework to use the native Apple Neural Engine (which would provide another performance boost if an LLM could run fully on it). The bad news is that using the Apple Neural Engine currently seems to be limited by several factors, so there are no prospects of running a heavier LLM, such as gpt-oss:20b, on the ANE at the moment.
- MLX is an open framework, also developed by Apple, that runs very efficiently on Apple Silicon processors using a hybrid approach to combine CPU, GPU, and Apple Neural Engine resources. Yet, there is still very limited support available to run LLMs in MLX format. But at least, there are several projects and enhancements in development:
- there is a Pull Request in development to add MLX support to Ollama, which is the basis for using the Neural Engine
- other projects, such as LM Studio, swama, mlx-lm and others support models in the optimized MLX format, but in my experiments, tool call processing was unstable, unfortunately.
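To illustrate the fill-in-the-middle API mentioned in the list above: Ollama's /api/generate endpoint accepts an optional `suffix` field in addition to the `prompt`. The helper function and model name below are illustrative, not part of Theia AI, and the actual HTTP call is omitted.

```typescript
// Sketch of a fill-in-the-middle request body for Ollama's /api/generate
// endpoint, which accepts an optional `suffix` field alongside `prompt`.
// Helper and model name are illustrative, not part of Theia AI.
interface FimRequest {
  model: string;
  prompt: string; // code before the cursor
  suffix: string; // code after the cursor
  stream: boolean;
}

function buildFimRequest(model: string, before: string, after: string): FimRequest {
  return { model, prompt: before, suffix: after, stream: false };
}

// This object would be POSTed to http://localhost:11434/api/generate;
// the model then generates only the code between `prompt` and `suffix`.
const request = buildFimRequest(
  'qwen2.5-coder',
  'function add(a: number, b: number): number {\n  return ',
  ';\n}'
);
```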
Outlook
The evolution of running LLMs locally and using them for agentic development in Theia AI has been moving fast recently. The progress made in 2025 alone suggests that LLMs running locally will continue to get better and better over time:
- better models keep appearing: from deepseek-r1 to qwen3 and gpt-oss, we can be excited about what will come next
- context management is getting better: every other week, we can observe discussions around enhancing or using the context window more effectively in one way or another: the Model Context Protocol, giving LLMs some form of persistent memory, choosing more optimal representations of data (for example by using TOON), and utilizing more intelligent context compression techniques, to name just a few.
- hardware is becoming better, cheaper, and more available: I performed my experiments with a 5-year-old processor (Apple M1 Max) and already achieved acceptable results. Today's processors are already much better, and there is more to come in the future
- software is becoming better: Ollama is being actively developed and enhanced, and Microsoft has recently published BitNet, an engine to support 1-bit LLMs, etc.
We can be excited to see what 2026 will bring…
Self-Brewed Beer is (Almost) Free - Experiences using Ollama in Theia AI - Part 1
November 04, 2025 03:38 PM
This blog article is an extended version of a talk I gave at TheiaCon 2025. The talk covered my experiences with Ollama and Theia AI over the previous months.
What is Ollama?
Ollama is an open source project that aims to make it possible to run Large Language Models (LLMs) locally on your own hardware with a Docker-like experience. This means that, as long as your hardware is supported, it is detected and used with no further configuration.
Advantages
Running LLMs locally has several advantages:
- Unlimited tokens: you only pay for the power you consume, and for the hardware if you do not already own it.
- Full confidentiality and privacy: the data (code, prompts, etc.) never leaves your network. You do not have to worry about providers using your confidential data to train their models.
- Custom models: You have the option to choose from a large number of pre-configured models, or you can download and import new models, for example, from huggingface. Or you can take a model and tweak it or fine-tune it to your specific needs.
- Vendor neutrality: It does not matter who wins the AI race in a few months, you will always be able to run the model you are used to locally.
- Offline: You can use a local LLM on a suitable laptop even when traveling, for example by train or on the plane. No Internet connection required. (A power outlet might be good, though...)
Disadvantages
Of course, all of this also comes at a cost. The most important disadvantages are:
- Size limitations: Both the model size (number of parameters) and context size are heavily limited by the available VRAM.
- Quantization: As a compromise to allow for larger models or contexts, quantization is used to sacrifice weight precision. In other words, a model with quantized parameters can fit more parameters into the same amount of memory. This comes at the cost of lower inference accuracy, as we will see further below.
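The memory effect of quantization is easy to see with back-of-the-envelope arithmetic. The numbers below are illustrative; real model files also contain overhead beyond the raw weights.

```typescript
// Back-of-the-envelope memory estimate for model weights (illustrative only;
// real model files include additional overhead beyond the raw weights).
function weightMemoryGiB(parameters: number, bitsPerWeight: number): number {
  const bytes = (parameters * bitsPerWeight) / 8;
  return bytes / 1024 ** 3;
}

// A 20-billion-parameter model:
const fp16 = weightMemoryGiB(20e9, 16); // ~37 GiB: too big for most consumer GPUs
const q4 = weightMemoryGiB(20e9, 4);    // ~9 GiB: fits into typical VRAM budgets
```

This is why 4-bit quantization is so common: it cuts the weight memory by a factor of four compared to 16-bit precision, at the cost of rounding the weights.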
Until recently, the list of disadvantages also included that there was no support for local multimodal models, so reasoning about images, video, audio, etc. was not possible. But that changed last week, when Ollama 0.12.7 was released along with locally runnable qwen3-vl model variants.
Development in 2025
A lot has happened in 2025 alone. At the beginning of 2025, there was neither a good local LLM for agentic use (especially reasoning and tool calling were not really usable) nor mature support for Ollama in Theia AI.
But since then, in the last nine months:
- Ollama 0.9.0 has added support for reasoning/thinking and streaming tool calling
- More powerful models have been released (deepseek-r1, qwen3, gpt-oss, etc.)
- Ollama support in Theia AI has seen a major improvement
With the combination of these changes, it is now very well possible to use Theia AI agents backed by local models.
Getting Started
To get started with Ollama, you need to follow these steps:
- Download and install the most recent version of Ollama. Be sure to regularly check for updates, as with every release of Ollama, new models, new features, and performance improvements are implemented.
- Start Ollama using a command line like this:
OLLAMA_NEW_ESTIMATES="1" OLLAMA_FLASH_ATTENTION="1" OLLAMA_KV_CACHE_TYPE="q8_0" ollama serve
Keep an eye open for the Ollama release changelogs, as the environment settings can change over time. Make sure to enable and experiment with new features.
- Download a model using:
ollama pull gpt-oss:20b
- Configure the model in Theia AI by adding it to the Ollama settings under Settings > AI Features > Ollama
- Finally, as described in my previous blog post, you need to add request settings for the Ollama models in the settings.json file to adjust the context window size (num_ctx), as the default context window in Ollama is not suitable for agentic usage.
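As a sketch only: such a request-settings entry in settings.json might look roughly like the following. The property names and the num_ctx value are assumptions for illustration; they vary between Theia versions, so check the linked blog post for the authoritative settings.

```json
{
  "ai-features.modelSettings.requestSettings": [
    {
      "modelId": "gpt-oss:20b",
      "requestSettings": { "num_ctx": 131072 }
    }
  ]
}
```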
Experiments
As a preparation for TheiaCon, I have conducted several non-scientific experiments on my MacBook Pro M1 Max with 64GB of RAM. Note that this is a 5-year-old processor.
The task I gave the LLM was to locate and fix a small bug: A few months ago, I had created Ciddle - a Daily City Riddle, a daily geographical quiz, mostly written in NestJS and React using Theia AI. In this quiz, the user has to guess a city. After some initial guesses, the letters of the city name are partially revealed as a hint, while keeping some letters masked with underscores. As it turned out, this masking algorithm had a bug related to a regular expression not being Unicode-friendly: it matched only ASCII letters, but not special characters, such as é. So special characters would never be masked with underscores.
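This class of bug is easy to reproduce: a character class like [A-Za-z] matches only ASCII letters, while a Unicode property escape with the u flag also matches letters such as é. Below is a minimal sketch of such a masking function; the actual Ciddle code is not shown here, so the function names are illustrative.

```typescript
// Masking a city name by replacing its letters with underscores.
// A naive ASCII-only regex fails to mask non-ASCII letters such as 'é'.
function maskNaive(name: string): string {
  return name.replace(/[A-Za-z]/g, '_'); // 'é' is NOT matched, so it leaks through
}

// Unicode-aware fix: \p{L} with the 'u' flag matches any Unicode letter.
function maskUnicode(name: string): string {
  return name.replace(/\p{L}/gu, '_');
}

maskNaive('Orléans');   // leaks the 'é' between the underscores
maskUnicode('Orléans'); // masks every letter, including 'é'
```

The fix is one character class: switching from [A-Za-z] to \p{L}u makes the masking work for any script, not just ASCII.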
Therefore, I wrote a prompt explaining the issue and asked Theia AI to identify the bug and fix it. I followed the process described in this post:
- I asked the Architect agent to analyze the bug and plan a fix
  - once without giving the agent the file containing the bug, so the agent needs to analyze and crawl the workspace to locate the bug
  - once with giving the agent the file containing the bug, using the "add path to context" feature of Theia AI
- I asked Theia AI to summarize the chat into a task context
- I asked Coder to implement the task (in agent mode, so it directly changes files, runs tasks, writes tests, etc.)
  - once with the unedited summary (which contained instructions to create a test case)
  - once with the summary with all references to an automated unit test removed, so the agent would only fix the actual bug, but not write any tests for it
The table below shows the comparison of different models and settings:
| Model | Architect | Architect (with file path provided) | Summarize | Coder (fix and create test) | Coder (fix only) |
| --- | --- | --- | --- | --- | --- |
| gpt-oss:20b, num_ctx = 16k | 175s | 33s | 32s | 2.5m (3) | 43s |
| gpt-oss:20b, num_ctx = 128k | 70s | 50s | 32s | 6m | 56s |
| qwen3-14b, num_ctx = 40k | (1) | 143s | 83s | (4) | (4) |
| qwen3-coder:30b, num_ctx = 128k | (2) | (2) | 64s | 21m (3) | 13m |
| gpt-oss:120b-cloud | 39s | 16s | 10s | 90s (5) | 38s |
(1) without file path to fix, the wrong file and bugfix location is identified
(2) with or without provided path to fix, qwen3-coder "Architect" agent runs in circles trying to apply fixes instead of providing an implementation plan
(3) implemented the fix correctly, but did not write a test case, although instructed to do so.
(4) stops in the middle of the process without any output
(5) in one test, gpt-oss:120b-cloud did not manage to get the test file right and failed when the hourly usage limit was exceeded
Observations
I performed each experiment multiple times; the table reports roughly the best-case times. As usual when working with LLMs, the results are not deterministic. In general, though, when a model produced similar output across runs, the processing time was also stable to within a few seconds, so the table above shows typical results for runs where the outcome was acceptable, where that was possible at all.
In general, I have achieved the best results with gpt-oss:20b with a context window of 128k tokens (the maximum for this model). A smaller context window can result in faster response times, but at the risk of not performing the task completely; for example, when running with 16k context, the Coder agent would fix the bug, but not provide a test, even though the task context contained this instruction.
Also, in my first experiments, the TypeScript/Jest configuration contained an error that caused the model (even with 128k context) to run in circles for 20 minutes and eventually delete the test again before finishing its process.
The other two local models I used in the tests, qwen3:14b and qwen3-coder:30b, were able to perform some of the agentic tasks, but usually with worse performance, and they failed outright in some scenarios.
Besides the models listed in the table above, I also tried a few other models that are popular in the Ollama model repository, such as granite4:small-h and gemma3:27b. They either behaved like qwen3:14b and simply stopped at some point without any output, or they did not use the provided tools and just replied with a general answer.
Also note that some models (such as deepseek-r1) do not support tool calling in their local variants (yet...?). There are variants of common models, modified by users to support tool calling in theory, but in practice the tool calls are either not properly detected by Ollama, or the provided tools are not used at all.
Finally, just for comparison, I have also used the recently released Ollama cloud model feature to run the same tasks with gpt-oss:120b-cloud. As expected, the performance is much better than with local models, but at the same time, the gpt-oss:120b-cloud model also began to run around in circles once. So even that is not perfect in some cases.
To summarize, the best model for local agentic development with Ollama is currently gpt-oss:20b. When everything works, it is surprisingly fast, even on my 5-year-old hardware. But when something goes wrong, it usually goes fatally wrong: the model entangles itself in endless considerations and fruitless attempts to fix the situation.
Stay tuned for the second part of this article, where I will describe the conclusions I draw from my experiences and experiments, discuss consequences, and provide a look into the future of local LLMs in the context of agentic software development.
October 29, 2025
Open Source MBSE at Scale: From Industry-Proven Tools to Web-Native SysML v2
by Cédric Brun (cedric.brun@obeo.fr) at October 29, 2025 12:00 AM
Cedric Brun, CEO of Obeo, and Asma Charfi, from CEA, look back on 15 years of open-source ecosystem development and share their vision for the next generation of Model-Based Systems Engineering (MBSE) tools.
Context
- Event: 2025 IEEE International Symposium on Systems Engineering (ISSE)
- Location: ENSTA, Paris
- Date: October 2025
Summary
This joint presentation explored how open-source MBSE technologies have evolved over the past 15 years — from Eclipse-based industrial tools like Capella, Papyrus, and Sirius, to new web-native environments supporting SysML v2 and agent-assisted engineering.
Key messages included:
- The power of open ecosystems for accelerating innovation in education, research, and industry.
- Lessons learned from large-scale industrial adoption of MBSE tools.
- The emergence of next-generation modeling environments — collaborative, extensible, and AI-augmented, bridging the gap between domain experts and software engineers.
The talk sparked lively discussions and a strong interest from the IEEE community regarding the convergence of open-source platforms and upcoming SysML v2 tooling.
Highlights
- 15 years of open collaboration across the Eclipse ecosystem — from early Papyrus and Capella foundations to today’s vibrant MBSE community.
- Industry-proven tools at scale, including Capella and its extensions (Team, Cloud, and Publication), showcasing how open-source can sustain mission-critical engineering.
- A live proof of concept illustrating “Obeo Enterprise for SysON,” combining SysML v2 with Arcadia semantics and an AI agent assisting the creation of a logical architecture for the X-Wing spacecraft.
- A forward-looking perspective on the transition to web-native, cloud-enabled, and AI-augmented modeling platforms built for openness and collaboration.
Open Source MBSE at Scale: From Industry-Proven Tools to Web-Native SysML v2 was originally published by Cédric Brun at CEO @ Obeo on October 29, 2025.
by Cédric Brun (cedric.brun@obeo.fr) at October 29, 2025 12:00 AM
October 27, 2025
Open VSX security update, October 2025
October 27, 2025 07:30 PM
Over the past few weeks, the Open VSX team and the Eclipse Foundation have been responding to reports of leaked tokens and related malicious activity involving certain extensions hosted on the Open VSX Registry. We want to share a clear summary of what happened, what actions we’ve taken, and what improvements we’re implementing to strengthen the security of the ecosystem.
Background
Earlier this month, our team was alerted to a report from Wiz identifying several extension publishing tokens inadvertently exposed by developers within public repositories. Some of these tokens were associated with Open VSX accounts.
Upon investigation, we confirmed that a small number of tokens had been leaked and could potentially be abused to publish or modify extensions. These exposures were caused by developer mistakes, not a compromise of the Open VSX infrastructure. All affected tokens were revoked immediately once identified.
To improve detection going forward, we introduced a token prefix format in collaboration with MSRC to enable easier and more accurate scanning for exposed tokens across public repositories.
The “GlassWorm” campaign
Around the same time, a separate report from Koi Security described a new malware campaign that leveraged some of these leaked tokens to publish malicious extensions. The report referred to this as a "self-propagating worm," drawing comparisons to the ShaiHulud incident that impacted the npm registry in September.
While the report raises valid concerns, we want to clarify that this was not a self-replicating worm in the traditional sense. The malware in question was designed to steal developer credentials, which could then be used to extend the attacker’s reach, but it did not autonomously propagate through systems or user machines.
We also believe that the reported download count of 35,800 overstates the actual number of affected users, as it includes inflated downloads generated by bots and visibility-boosting tactics used by the threat actors.
All known malicious extensions were removed from Open VSX immediately upon notification, and associated tokens were rotated or revoked without delay.
Status of the incident
As of October 21, 2025, the Open VSX team considers this incident fully contained and closed. There is no indication of ongoing compromise or remaining malicious extensions on the platform.
We continue to collaborate closely with affected developers, ecosystem partners, and independent researchers to ensure transparency and reinforce preventive measures.
Strengthening the platform
This event has underscored the importance of proactive defense across the supply chain, particularly in community-driven ecosystems. To that end, we are implementing several improvements:
- Token lifetime limits: All tokens will have shorter validity periods by default, reducing the potential impact of accidental leaks.
- Simplified revocation: We are improving internal workflows and developer tooling to make token revocation faster and more seamless upon notification.
- Security scanning at publication: Automated scanning of extensions will now occur at the time of publication, helping us detect malicious code patterns or embedded secrets before an extension becomes available to users.
- Ecosystem collaboration: We are continuing to work with other marketplace operators, including VS Code and third-party forks, to share intelligence and best practices for extension security.
Help us build a more secure and sustainable open source future
We take this responsibility seriously, and the trust you place in us is paramount. Incidents like this remind us that supply chain security is a shared responsibility: from publishers managing their tokens carefully, to registry maintainers improving detection and response capabilities.
The Open VSX incident is now resolved, but our work on improving the resilience of the ecosystem is ongoing. We remain committed to transparency and to strengthening every part of our platform to ensure that open source innovation continues safely and securely.
Open VSX is built by and for the open source developer community. It needs your support to stay sustainable. Read more about this in our recent blog post.
If you believe you’ve discovered a security issue affecting Open VSX, please reach out to us at openvsx@eclipse-foundation.org.
Thank you for your vigilance, cooperation, and commitment to a safer open source community.
October 24, 2025
Before the Cloud: Eclipse Foundation’s Quiet Stewardship of Open Source Infrastructure
by Denis Roy at October 24, 2025 08:12 PM
Long before the cloud era, the Eclipse Foundation quietly served as the backbone of open source stewardship. Its software, frameworks, processes and infrastructure helped define and standardise developer workflows that are now core to modern engineering practices.
As early as 2005, the Eclipse IDE’s modular plugin architecture embodied what we now recognise as today's extension registry model. Developers no longer needed to manually download and configure artifacts; they could be automatically ingested, at high volume, into build and delivery pipelines known today as CI/CD.
Eclipse Foundation’s early success demanded infrastructure that could scale globally without the benefit of GitHub, Cloudflare, AWS, or GCP. Like many pioneering platforms of that time, we had to build performant and resilient systems from the ground up.
Fast forward two decades, and open source infrastructure has become the backbone of software delivery across every industry. Developer platforms now span continents and power everything from national infrastructure to consumer technology. In this landscape, software delivery is no longer just a technical process but a key driver of innovation, competition, and developer velocity.
Today, the Eclipse Foundation continues its legacy of building dependable open source infrastructure, powering registries, frameworks, and delivery systems that enable millions of developers to innovate at scale. From open registries like Open VSX to enterprise-grade frameworks such as Jakarta EE, the Foundation provides the scaffolding for the next generation of AI-augmented development. Its vendor-neutral governance ensures that tools, and the innovations they enable, remain open, globally accessible and community-driven.
From IDEs to extension registries, the Eclipse Foundation continues to shape the digital backbone of modern innovation. It remains one of the world’s most trusted homes for open collaboration, enabling developers, communities, and organisations to build the technologies that define the future—at global scale.
October 17, 2025
How we used Maven relocation for Xtend
by Lorenzo Bettini at October 17, 2025 01:45 PM
October 10, 2025
Announcing Eclipse Ditto Release 3.8.0
October 10, 2025 12:00 AM
Eclipse Ditto team is excited to announce the availability of a new minor release, including new features: Ditto 3.8.0.
Adoption
Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto
If you use Eclipse Ditto, it would be great if you supported the project by adding your logo there.
Changelog
The main improvements and additions of Ditto 3.8.0 are:
- Diverting Ditto connection responses to other connections (e.g. to allow multi-protocol workflows)
- Dynamically re-configuring WoT validation settings without restarting Ditto
- Enforcing that WoT model based thing definitions are used and match a certain pattern when creating new things
- Support for OAuth2 “password” grant type for authenticating outbound HTTP connections
- Configure JWT claims to be added as information to command headers
- Added support for client certificate based authentication for Kafka and AMQP 1.0 connections
- Extend “Normalized” connection payload mapper to include deletion events
- Support silent token refresh in the Ditto UI when using SSO via OAuth2/OIDC
- Enhance conditional updates for merge thing commands to contain several conditions to dynamically decide which parts of a thing to update and which not
The following non-functional work is also included:
- Improving WoT based validation performance for merge commands
- Enhancing distributed tracing, e.g. with a span for the authentication step and by adding the error response for failed API requests
- Updating dependencies to their latest versions
- Providing additional configuration options to Helm values
The following notable fixes are included:
- Fixing nginx CORS configuration which caused Safari / iOS browsers to fail with CORS errors
- Fixing transitive resolving of Thing Models referenced with tm:ref
- Fixing sorting on array fields in Ditto search
- Fixing issues around “put-metadata” in combination with merge commands
- Fixing that certificate chains for client certificate based authentication in Ditto connections were not fully parsed
- Fixing deployment of Ditto on OpenShift
Please have a look at the 3.8.0 release notes for more detailed information on the release.
Artifacts
The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.
The Ditto JavaScript client release was published on npmjs.com:
The Docker images have been pushed to Docker Hub:
- eclipse/ditto-policies
- eclipse/ditto-things
- eclipse/ditto-things-search
- eclipse/ditto-gateway
- eclipse/ditto-connectivity
The Ditto Helm chart has been published to Docker Hub:
–
The Eclipse Ditto team
October 09, 2025
Response diversion - Multi-protocol workflows made easy
October 09, 2025 12:00 AM
Today we’re excited to announce a powerful new connectivity feature in Eclipse Ditto: Response Diversion. This feature enables sophisticated multiprotocol workflows by allowing responses from one connection to be redirected to another connection instead of being sent to the originally configured reply target.
With response diversion, Eclipse Ditto becomes even more versatile in bridging different IoT protocols and systems, enabling complex routing scenarios that were previously challenging or impossible to achieve.
The challenge: Multi-protocol IoT landscapes
Modern IoT deployments often involve multiple protocols and systems working together. Consider these common scenarios:
- Cloud integration: Your devices use MQTT to communicate with AWS IoT Core, but your analytics pipeline consumes data via Kafka
- Protocol translation: Legacy systems expect HTTP webhooks, but your devices communicate via AMQP
- Response aggregation: You want to collect all device responses in a central monitoring system regardless of the original protocol
Until now, implementing such multiprotocol workflows required complex external routing logic or multiple intermediate systems. Response diversion brings this capability directly into Ditto’s connectivity layer.
How response diversion works
Response diversion is configured at the connection source level using a key in the specific config and special header mapping keys:
{
"headerMapping": {
"divert-response-to-connection": "target-connection-id",
"divert-expected-response-types": "response,error,nack"
},
"specificConfig": {
"is-diversion-source": "true"
}
}
The diversion is completed in the target connection by defining a target. If the diverting connection has multiple sources, either one target or exactly as many targets as sources is required; when multiple targets are configured, they are mapped to the sources in order. A target connection only accepts diverted responses from source connections whose IDs are listed, comma-separated, in its specific config under the key ‘authorized-connections-as-sources’.
{
"id": "target-connection-id-1",
"targets": [
{
"address": "command/redirected/response",
"topics": [],
"qos": 1,
"authorizationContext": [
"pre:ditto"
],
"headerMapping": {}
}
],
"specificConfig": {
"is-diversion-target": "true"
}
}
{
"targets": [
{
"address": "command/redirected/response",
"topics": [],
"qos": 1,
"authorizationContext": [
"pre:ditto"
],
"headerMapping": {}
}
],
"specificConfig": {
"is-diversion-target": "true",
"authorized-connections-as-sources": "target-connection-id-1,..."
}
}
When a command is received through a source with response diversion configured, Ditto intercepts the response and routes it through the specified target connection instead of the original reply target.
Real-world use case: AWS IoT Core with Kafka
Let’s explore a practical scenario that demonstrates the power of response diversion. In this setup:
- Devices communicate with AWS IoT Core via MQTT (bidirectional)
- An MQTT-to-Kafka bridge pushes device commands from AWS IoT Core to a Kafka topic
- Ditto consumes the device commands from that Kafka topic
- Responses must go back to AWS IoT Core via MQTT (since IoT Core doesn’t support Kafka consumers)
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ AWS IoT Core │ │ Kafka Bridge │ │ Apache Kafka │ │ Eclipse Ditto │
│ (MQTT) │ │ /Analytics │ │ │ │ │
│ │ │ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │Device │ │───▶│ │MQTT→Kafka │ │───▶│ │device- │ │───▶│ │Kafka Source │ │
│ │Commands │ │ │ │Bridge │ │ │ │commands │ │ │ │Connection │ │
│ │(MQTT topics)│ │ │ │ │ │ │ │topic │ │ │ │ │ │
│ └─────────────┘ │ │ └─────────────┘ │ │ └─────────────┘ │ │ └─────────────┘ │
│ ▲ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ ▼ │
│ │ │ │ │ │ │ │ ┌─────────────┐ │
│ │ │ │ │ │ │ │ │Command │ │
│ │ │ │ │ │ │ │ │Processing │ │
│ │ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ └─────────────┘ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ ▼ │
│ │ │ │ │ │ │ │ ┌─────────────┐ │
│ │ │ │ │ │ │ │ │Response │ │
│ │ │ │ │ │ │ │ │Diversion │ │
│ │ │ │ │ │ │ │ │Interceptor │ │
│ │ │ │ │ │ │ │ └─────────────┘ │
│ │ │ │ │ │ │ │ │ │
│ │ │ │ │ │ │ │ ▼ │
│ ┌─────────────┐ │ │ │ │ │ │ ┌─────────────┐ │
│ │Device │ │◀───┼─────────────────┼────┼─────────────────┼────│ │MQTT Target │ │
│ │Responses │ │ │ │ │ │ │ │Connection │ │
│ │(MQTT topics)│ │ │ │ │ │ │ │(AWS IoT) │ │
│ └─────────────┘ │ │ │ │ │ │ └─────────────┘ │
└─────────────────┘ └─────────────────┘ └─────────────────┘ └─────────────────┘
Legend:
───▶ Command Flow (MQTT → Kafka → Ditto)
◀─── Response Flow (Ditto → MQTT, bypassing Kafka)
Example Configuration
First, create the Kafka connection that consumes device commands:
{
"id": "kafka-commands-connection",
"connectionType": "kafka",
"connectionStatus": "open",
"uri": "tcp://kafka-broker:9092",
"specificConfig": {
"bootstrapServers": "kafka-broker:9092",
"saslMechanism": "plain"
},
"sources": [{
"addresses": ["device-commands"],
"authorizationContext": ["ditto:kafka-consumer"],
"headerMapping": {
"device-id": "{{ header:device-id }}",
"divert-response-to-connection": "aws-iot-mqtt-connection",
"divert-expected-response-types": "response,error"
}
}]
}
Next, create the MQTT connection that will handle diverted responses:
{
"id": "aws-iot-mqtt-connection",
"connectionType": "mqtt",
"connectionStatus": "open",
"uri": "ssl://your-iot-endpoint.amazonaws.com:8883",
"sources": [],
"targets": [
{
"address": "device/{{ header:device-id }}/response",
"topics": [],
"headerMapping": {
"device-id": "{{ header:device-id }}",
"correlation-id": "{{ header:correlation-id }}"
}
}
],
"specificConfig": {
"is-diversion-target": "true"
}
}
Flow explanation
- Command ingestion: The Kafka connection consumes device commands from the device-commands topic
- Response diversion: Commands are configured to divert responses to the aws-iot-mqtt-connection
- Response routing: Responses are automatically published to AWS IoT Core via MQTT on the device-specific response topic
- Device notification: Devices receive responses via their subscribed MQTT topics in AWS IoT Core
This setup enables a seamless flow from Kafka-based systems back to MQTT-based device communication without requiring external routing logic.
Try it out
Response diversion is available starting with Eclipse Ditto version 3.8.0. Update your deployment and start experimenting with multi-protocol workflows!
The feature documentation provides comprehensive configuration examples and troubleshooting guidance. We’d love to hear about your use cases and feedback.
Get started with response diversion today and unlock new possibilities for your IoT connectivity architecture.
–
The Eclipse Ditto team
October 01, 2025
Key Highlights from the 2025 Jakarta EE Developer Survey Report
by Tatjana Obradovic at October 01, 2025 02:09 PM
The results are in! The State of Enterprise Java: 2025 Jakarta EE Developer Survey Report has just been released, offering the industry’s most comprehensive look at the state of enterprise Java. Now in its eighth year, the report captures the perspectives of more than 1700 developers, architects, and decision-makers, a 20% increase in participation compared to 2024.
The survey results give us insight into Jakarta EE’s role as the leading framework for building modern, cloud native Java applications. With the release of Jakarta EE 11, the community’s commitment to modernisation is clear, and adoption trends confirm its central role in shaping the future of enterprise Java. Here are a few of the major findings from this year’s report:
Jakarta EE Adoption Surpasses Spring
For the first time, more developers reported using Jakarta EE (58%) than Spring (56%). This clearly indicates growing awareness that Jakarta EE provides the foundation for popular frameworks like Spring. This milestone underscores Jakarta EE’s momentum and the community’s confidence in its role as the foundation for enterprise Java in the cloud era.
Rapid Uptake of Jakarta EE 11
Released earlier this year, Jakarta EE 11 has already been adopted by 18% of respondents. Thanks to its staged release model, with Core and Web Profiles first, followed by the full platform release, developers are migrating faster than ever from older versions.
Shifts in Java SE Versions
The community continues to embrace newer Java versions. Java 21 adoption leapt to 43%, up from 30% in 2024, while older versions like Java 8 and 17 declined. Interestingly, Java 11 showed a rebound at 37%, signaling that organisations continue to balance modernisation with stability.
Cloud Migration Strategies Evolve
While lift-and-shift (22%) remains the dominant approach, developers are increasingly exploring modernisation paths. Strategies include gradual migration with microservices (14%), modernising apps to leverage cloud-native features (14%), and full cloud-native builds (14%). At the same time, 20% remain uncertain, highlighting a need for clear guidance in this complex journey.
Community Priorities
Survey respondents reaffirmed priorities around cloud native readiness and faster specification adoption, while also emphasising innovation and strong alignment with Java SE.
Why This Matters
These findings highlight not only Jakarta EE’s accelerating momentum but also the vibrant role the community plays in steering its evolution. With enterprise Java powering mission-critical systems across industries, the insights from this survey provide a roadmap for organisations modernising their applications in an increasingly cloud native world.
A Call to the Community
The Jakarta EE Developer Survey continues to serve as a vital barometer of the ecosystem. With the Jakarta EE Working Group hard at work on the next release, including innovative features, there’s never been a better time to get involved. Whether you’re a developer, architect, or enterprise decision-maker, here is how:
- Explore the full report
- Join the Jakarta EE Working Group: Shape the platform’s future while engaging directly with the community.
- Contribute: Your feedback, participation, and innovations help Jakarta EE evolve faster.
With the Jakarta EE Working Group already preparing for the next release, including new cloud native capabilities, the momentum is undeniable. Together, we are building the future of enterprise Java.
September 30, 2025
Testing and developing SWT on GTK
by Jonah Graham at September 30, 2025 03:21 PM
I have recently started working on improved support of GTK4 in SWT and I have been trying to untangle the various options that affect SWT + GTK and how everything goes together.
Environment Variables
These are key environment variables that control where and how SWT draws in GTK land.
- SWT_GTK4: If this is set to 1 then SWT will attempt to use GTK4 libraries
- GDK_BACKEND: Which backend the GDK layer (a layer below GTK) uses to draw. Can be set to x11 or wayland.
- DISPLAY: when GDK_BACKEND is x11, controls which display the program is drawn on.
If SWT_GTK4 or GDK_BACKEND is set to a value that is not supported, the code generally falls back gracefully to the other value. For example, setting SWT_GTK4=1 without GTK4 libraries available makes SWT fall back to loading the GTK3 libraries.
If DISPLAY is set to an invalid value, you will generally get a org.eclipse.swt.SWTError: No more handles [gtk_init_check() failed] exception (although there are other reasons you can get that exception).
GDK_BACKEND is often set from unexpected places. For example, on my machine I often find GDK_BACKEND set to x11 even though I have not requested that. Other tools, such as VS Code, may force GDK_BACKEND depending on the circumstances. I therefore recommend being explicit and careful with GDK_BACKEND to ensure that SWT is using the backend you expect.
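Being explicit can look like the following minimal sketch, which pins the variables before launching rather than inheriting whatever the parent process exported (the jar name is a placeholder):

```shell
# Pin the GTK/GDK setup explicitly before launching an SWT application.
export SWT_GTK4=1          # request GTK4 (SWT falls back to GTK3 if unavailable)
export GDK_BACKEND=wayland # draw via Wayland, not X11/Xwayland
unset DISPLAY              # unused with the wayland backend
echo "SWT_GTK4=$SWT_GTK4 GDK_BACKEND=$GDK_BACKEND"
# ...then start the application, e.g.: java -jar your-swt-app.jar
```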
X11 and Wayland
When Wayland is in use, and GDK_BACKEND=x11, then Xwayland is used to bridge the gap between an application written to use X11 and the user’s display. Sometimes the behaviour of Xwayland and its interactions can differ from using a machine with X as the real display. To test this you may want to install a machine (or VM) with a distro that uses X11 natively, such as Xubuntu. The alternative is to use a VNC server (see below section).
X11 VNC Server
Rather than installing a VM or otherwise setting up a different machine, you can use a VNC server running an X server. This gives a mostly accurate X11 experience and has the added benefit of maintaining its own focus and drawing, allowing X11 tests to run without interrupting your development environment.
In the past I have recommended using Xvfb as documented in CDT’s testing manual. However, for my current SWT development I have used TigerVNC so I can see and interact with the window under test.
When I was experimenting with this setup, I seem to have accidentally changed my Ubuntu theme. I was doing a lot of experimenting, so I’m not sure exactly what I did. I have included the steps I believe are necessary below, but I may have edited out an important step – if so, please comment below and I can update the document.
These are the steps to setup and configure tiger vnc that worked for me on my Ubuntu 25.04 machine:
- sudo apt install tigervnc-standalone-server tigervnc-common – Install the VNC tools
- sudo apt install xfce4 xfce4-goodies – Install an X11-based window manager and basic tools (there are probably more minimal sets of things that could be installed here)
- vncpasswd – Configure VNC with password access control
- sudo vi /etc/X11/Xtigervnc-session – Edit how the X11 session is started. I found that the default didn’t work well, probably because xfce4 was not the only thing installed on my machine and the Xsession script didn’t quite know what to do. The exec /etc/X11/Xsession "$@" line didn’t launch successfully, so I replaced it with these lines:

unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startxfce4

SESSION_MANAGER and DBUS_SESSION_BUS_ADDRESS are unset because I wanted to keep this session independent of other things running on my machine, and I was getting errors without them unset.
- vncserver :99 – Start the VNC server. Adjust :99 for the display you want to use; set the DISPLAY environment variable to :99 in this case.
- xtigervncviewer -SecurityTypes VncAuth -passwd /tmp/pathhere/passwd :99 – Start the viewer, using the command that vncserver printed as part of its startup
Wayland Remote connection
I have not had the opportunity to use this much yet, but recent Ubuntu machines come with desktop sharing using RDP based on gnome-remote-desktop. This should allow connecting to an Ubuntu machine and using Wayland remotely. Enable it from Settings -> System -> Remote Desktop and connect to the machine using Remote Desktop.
What to test?
Now that I am developing SWT, specifically targeting GTK4 work, there are different configurations of the above to test. My primary focus is to test:
- SWT_GTK4=0 with GDK_BACKEND=x11, running on the default DISPLAY that is connected to Xwayland
- SWT_GTK4=1 with GDK_BACKEND=wayland (in this case DISPLAY is unused)
However these additional settings seem useful to test, especially as x11 backend sometimes seems to be used unexpectedly on wayland:
- SWT_GTK4=0 with GDK_BACKEND=x11, running on the DISPLAY connected to my VNC. This is really useful when I want to leave tests running in the background
- SWT_GTK4=1 with GDK_BACKEND=x11: the behaviour of various things (such as the Clipboard) is different when using GTK4 with wayland. I don’t know how important this use case is long term
- SWT_GTK4=0 with GDK_BACKEND=wayland: I don’t know if this really adds anything and have hardly tried this combination
Run Configurations
Here is what a few of my run configurations look like:

September 28, 2025
History
by Scott Lewis (noreply@blogger.com) at September 28, 2025 08:39 PM
September 24, 2025
From Excess to Balance: The Collapse of All-You-Can-Eat
by Denis Roy at September 24, 2025 01:50 PM
A few years ago, I noticed that things were changing in the Eclipse Foundation's (EF) IT operations: we were adding servers, and lots of them.
Trays of 3U mega-machines, packing 14 compute units each, with on-board switches, immense fans and drawing much electrical power, providing our community with CPU cycles galore. Storage devices could not keep up, so in came the clustered mega-storage solution, nine massive machines with drives and drives and drives, coupled with expensive switching gear to link everything together.
And yet, it's still not enough. And it's unsustainable.
You may have heard a new buzzword that's been making inroads into the IT and Developer mainstreams: sustainability. There are a few articles floating about that mention it. The Eclipse Foundation is not immune to the unsustainable practice of unlimited consumption, and at the IT Desk, we're pivoting. We have to.
It's all about fairness. Responsible usage is a shared task to be supported by all, not just a few. In the following months, the engineers in the EF IT team will work towards measuring what matters and drawing baselines for reasonable consumption. Our systems will then be adapted to inform you if those reasonable consumption limits have been reached.
What does this mean? Well, that build that has been running continuously in the background may come to a stop, with an invitation to resume it -- tomorrow. The 275MB of the same dependencies that are downloaded 5x each day may fail after the third time, inviting you to resume -- later. Those 40,000 files produced by each build may be acceptable -- once, but not continuously.
The EF is here to help. We'll strive to provide visibility and predictability in our operations. We'll start in observer-mode first. We'll communicate and share our findings. We'll help you adapt to the new sustainable environment.
The burden of responsible usage belongs to all of us -- for a fair, open and sustainable future.
September 23, 2025
Businesses built on open infrastructure have a responsibility to sustain it
by Mike Milinkovich at September 23, 2025 01:04 PM
The global software ecosystem runs on open source infrastructure. As demand grows, we invite the businesses who rely on it most to play a larger role in sustaining it.
Open source infrastructure is the backbone of the global digital economy. From registries to runtimes, open source underpins the tools, frameworks, and platforms that developers and enterprises rely on every day. Yet as demand for these systems grows, so too does the urgency for those who depend on them most to play a larger role in sustaining their future.
Today, the Eclipse Foundation, alongside Alpha-Omega, OpenJS Foundation, Open SSF, Packagist (Composer), the Python Software Foundation (PyPI), the Rust Foundation (crates.io), and Sonatype (Maven Central), released a joint open letter urging greater investment and support for open infrastructure. The letter calls on those who benefit most from these critical digital resources to take meaningful steps toward ensuring their long-term sustainability and responsible stewardship.
The scale of open source’s impact cannot be overstated: a 2024 Harvard study, The Value of Open Source Software, estimated that the supply-side value of widely used OSS tops $4.15 billion, while the demand-side value reaches $8.8 trillion. Even more striking, 96% of that value came from the work of just 5% of OSS developers. The authors of the study estimate that without open source, organisations would need to spend more than 3.5 times their current software budgets to replicate the same capabilities.
This open ecosystem now powers much of the software industry worldwide, a sector worth trillions of dollars. Yet the investment required to sustain its underlying infrastructure has not kept pace. Running enterprise-grade infrastructure that provides zero downtime, continuous monitoring, traceability, and secure global distribution carries very real costs. The rapid rise of generative and agentic AI has only added to the strain, driving massive new workloads, many of them automated and inefficient.
The message is clear: with meaningful financial support and collaboration from industry, we can secure the long-term strength of the open infrastructure you rely on. Without that shared commitment, these vital resources are at risk.
Open VSX: Critical infrastructure worth investing in
The Eclipse Foundation stewards Open VSX, the world’s largest open source registry for VS Code extensions. Originally created to support Eclipse Foundation projects, it has grown into essential infrastructure for enterprises, serving millions of developers. Today it is the default marketplace for many VS Code forks and cloud environments, and as AI-native development and platform engineering accelerate, Open VSX is emerging as a backbone of extension infrastructure used by AI-driven development tools.
Open VSX currently handles over 100 million downloads each month, a nearly 4x increase since early 2024. This rapid growth underscores the accelerating demand across the ecosystem. Innovative, high-growth companies like Cursor, Windsurf, StackBlitz, and GitPod (now Ona), are just a few of the many organisations building on and benefiting from Open VSX. It is enterprise-class infrastructure that requires significant investment in security, staffing, maintenance, and operations.
Yet there is a clear imbalance between consumption and contribution.
Since its launch in September 2022:
- Over 3,000 issues have been submitted by more than 2,500 individuals
- Around 1,200 pull requests have been submitted, but only by 43 contributors
In a global ecosystem with tens of thousands of users, fewer than 50 people are doing the work to keep things running and improving. That gap between use and support is difficult to maintain over the long term.
A proven model for sustainability
The Eclipse Foundation also stewards Eclipse Temurin, the open source Java runtime provided by the Adoptium Working Group. With more than 700 million downloads and counting, Temurin has become a cornerstone of the Java ecosystem, offering enterprises a cost-effective, production-grade option.
To help maintain that momentum, the Adoptium Working Group launched the Eclipse Temurin Sustainer Program, designed to encourage reinvestment in the project and support faster releases, stronger security, and improved test infrastructure. The new Temurin ROI calculator shows that enterprises can save an average of $1.6 million annually by switching to open source Java.
Together, Open VSX and Temurin demonstrate what is possible when there is shared investment in critical open source infrastructure. But the current model of unlimited, no-cost use cannot continue indefinitely. The shared goal must be to create a sustainable and scalable model in which commercial consumers of these services provide the primary financial support. At the same time, it is essential to preserve free access for open source users, including individual developers, maintainers, and academic institutions.
We encourage all adopters and enterprises to get involved:
- Contribute to the code: Review issues, submit patches, and help evolve the projects in the open under Eclipse Foundation governance.
- Sustain what you use: Support hosting, testing, and security through membership, sponsorship, or other financial contributions, collaborating with peers to keep essential open infrastructure strong.
Investing now helps ensure the systems you depend on remain resilient, secure, and accessible for everyone.
Looking ahead
The growth of Open VSX and Eclipse Temurin underscores their value and importance. They have become cornerstones of modern development, serving a global community and fueling innovation across industries. But growth must be matched with sustainability. Because those who benefit most have not always stepped up to support these projects, we are implementing measures such as rate limiting. This is not about restricting access. It is about keeping the doors open in a way that is fair and responsible.
We are at a turning point. The future of open source infrastructure depends on more than goodwill. I remain optimistic that we can meet this challenge. By working together, industry and the open source community can ensure that these vital systems remain reliable, resilient, and accessible to all. I invite you to join us in honouring the spirit of open source by aligning responsibility with usage and helping to build a sustainable future for shared digital infrastructure.
September 09, 2025
Building MCP Servers: Tool Descriptions + Service Contracts = Dynamic Tool Groups
by Scott Lewis (noreply@blogger.com) at September 09, 2025 12:18 AM
The Model Context Protocol (MCP) can easily be used to expose APIs and services in the form of MCP tools...i.e. functions/methods that can take input, perform some actions based upon that input, and produce output, without specifying a particular language or runtime.
OSGi Services (and Remote Services) provide a dynamic, flexible, secure environment for microservices, with clear well-established mechanisms for separating service contracts from service implementations.
One way to think of a service contract for large language models (LLMs) is that the service contract can be enhanced to provide LLM-processable metadata for each tool/method/function. Any service contract can still be used by human developers (API consumers), but with tool-specific meta-data/descriptions added, larger service contracts can also be used by any model.
Since service contracts in most languages are sets of functions/methods, the service contract can also be used to represent groupings of MCP tools, or Dynamic MCP ToolGroups. The example on the MCPToolGroups page and in the Bndtools project templates is a simple example of grouping a set of related functions/methods into a service contract and including MCP tool meta-data (tool and tool param text descriptions).
by Scott Lewis (noreply@blogger.com) at September 09, 2025 12:18 AM
Eclipse Collections Categorically: Level up your programming game
September 09, 2025 12:00 AM
August 29, 2025
Eclipse in Wayland (2025)
by Lorenzo Bettini at August 29, 2025 08:43 AM
August 26, 2025
Building MCP Servers: Dynamic Tool Groups
by Scott Lewis (noreply@blogger.com) at August 26, 2025 12:04 AM
Currently, adding tools to MCP servers is a static process: a new tool is designed and implemented, MCP meta-data (descriptions) is added via annotations, decorators, or code, the new code is added to the MCP server, and everything is compiled, started, tested, debugged, etc.
As well, there is currently no MCP concept of tool 'groups'...i.e. multiple tools that are grouped together based upon function, common use case, organization, or discoverability. Most current MCP servers have a flat namespace of tools.
I've created a repo with a small set of classes, based upon the mcp-java-sdk and the mcp-annotations projects, that supports the dynamic adding and removing of tool groups from mcp servers.
In environments with the OSGi service registry, this allows the easy, dynamic, and secure (type safe) adding and removing of OSGi services (and/or remote services) to MCP servers.
by Scott Lewis (noreply@blogger.com) at August 26, 2025 12:04 AM
August 22, 2025
Building MCP Servers: Alternative Transports
by Scott Lewis (noreply@blogger.com) at August 22, 2025 02:38 AM
August 02, 2025
Building MCP Servers: Preventing AI Monopolies
by Scott Lewis (noreply@blogger.com) at August 02, 2025 09:21 PM
I recently read an insightful article about using open protocols (MCP in this case) to prevent user context/data lock-in at the AI application layer:
Open Protocols Can Prevent AI Monopolies
In the spirit of this article, I've decided to make an initial code contribution to the MCP Java SDK project.
by Scott Lewis (noreply@blogger.com) at August 02, 2025 09:21 PM
July 31, 2025
Langium 4.0 is released!
July 31, 2025 12:00 AM
July 11, 2025
Building MCP Servers - part 3: Security
by Scott Lewis (noreply@blogger.com) at July 11, 2025 10:46 PM
There have been recent reports of critical security vulnerabilities in the mcp-remote project and the mcp-inspector project.
I do not know all the technical details of the exploits, but it appears to me that in both cases they stem from vulnerabilities introduced by the MCP Server implementation and its use of the stdio MCP transport.
I want to emphasize that the example described in these two posts uses mechanisms that, through heavy usage by commercial server technologies over the past 10 years, are not subject to the same sorts of remote vulnerabilities seen in the mcp-remote and mcp-inspector projects.
Also, the flexibility in discovery and distribution provided by the RSA Specification and the RSA implementation used allows MCP Server remote tool or protocol weaknesses to be addressed quickly and easily, without having to update the MCP Server or tooling implementation code.
by Scott Lewis (noreply@blogger.com) at July 11, 2025 10:46 PM
July 08, 2025
Building MCP Servers: Integration via Remote Tools
by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:35 PM
It has become popular to build Model Context Protocol Servers. This makes a lot of sense from the developer-as-integrator point of view, since the MCP specification and multi-language SDKs make it possible to easily integrate resources, prompts, and tools into multiple LLMs without having to use the model-and-language-specific model APIs directly.
The MCP tools spec provides a general way for LLMs to use tool meta-data (e.g. text descriptions) for the tool's required input data, behavior, and output data. These text descriptions can then be used by the LLM...in combination with interaction with the user...to decide when and how to use the tool...i.e. to call the function and provide some output to the LLM and/or the user.
Building an MCP Server
When creating a new MCP Server, it's easiest to create the tool metadata and implement the tool functionality as part of a new MCP server implementation. But this approach requires that every new tool (or integration with existing API/servers) results in a new MCP server or an update/new version of an existing MCP server.
Remote Tools
It's frequently better architecture to decouple the meta-data declaration and implementation of a given tool from the MCP Server itself, and to allow the MCP Server to add and remove tools dynamically at runtime. Tools can then be discovered, secured, imported, used, evaluated, updated, and removed without creating an entirely new MCP Server, and their meta-data can be made available to the model(s) as they come and go.
This approach is potentially more secure (as it allows tool-specific authentication and access control), more flexible, and more scalable, since remote tools can be distributed on multiple hosts over a network. And it allows easy integration with existing APIs.
In the next post I describe a working example that uses remote tools.
by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:35 PM
Building MCP Servers - part 2: Example Using Remote Services and Bndtools
by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:23 PM
In a previous post, I described how using dynamic remote tools could make building MCP Servers more flexible, more secure, and more scalable. In this post, I show an example MCP Server that uses remote services and Bndtools to build.
To use/try this example yourself see Installing Bndtools with Remote Services Support below.
Create a Bndtools Workspace via File->New->Bndtools Workspace, and choose the ECF/bndtools.workspace template from this dialog
Choose Next->Finish
Choose File->New->Bnd OSGi Project to show this dialog
There are 4 project templates. They should each be created in your workspace in turn:
1. MCP ArithmeticTools API Project (example name: org.test.api)
2. MCP ArithmeticTools Impl Server Project (ArithmeticTools RS Server) (example name: org.test.impl)
3. MCP ArithmeticTools Consumer Project (MCP Server Impl/ArithmeticTools Consumer) (example name: org.test.mcpserver)
4. MCP ArithmeticTools Test Client Project (MCP Client) (example name: org.test.mcpclient)
Note: For the Impl/Consumer/Test Client project creations you will be prompted to provide the name of the project that you specified for the API Project (1)
Click Finish for each of the first 3 (server) projects.
If you wish to use your own MCP Client, and start the MCP Server (MCP ArithmeticTools Consumer Project) from your own MCP Client (stdio transport), see the Readme.md in the MCP ArithmeticTools Consumer project.
MCP ArithmeticTools Example Test Client
There is also an MCP Example Test Client Project template, implemented via the MCP Java SDK, that makes a couple of test calls to the MCP Server/ArithmeticTools Consumer server.
Note: When creating the test client, the project template will ask you to specify the names of the API project (1) and the MCP Server/ArithmeticTools Consumer project (3). For example:
Click Finish
You should have these four projects in the Bndtools Explorer
You can open the source for any/all of the projects, set breakpoints, etc.
Launching the ArithmeticTools Remote Service Impl server
The ArithmeticTools Impl server (2) must be launched first
To launch the ArithmeticTools Impl (2), see the Readme.md in that project
Launching MCP Client and MCP Server (with stdio transport)
The MCP Client (4) may then be launched, and it will launch the MCP Server (3) and use the MCP stdio transport. The MCP Client (4) Readme.md has more info on launching and the expected output of the MCP Client.
You may examine or set breakpoints in the ArithmeticTools Impl Server (2) or the MCP Client (4) to examine the communication sequence for calling the ArithmeticTools add or multiply tools.
Installing Bndtools with Remote Services Support
2. Add ECF Latest Update site to Eclipse install with URL: https://download.eclipse.org/rt/ecf/latest/site.p2
3. Install Feature: SDK for Bndtools 7.1+
by Scott Lewis (noreply@blogger.com) at July 08, 2025 07:23 PM
July 03, 2025
Collaborative coding in the browser: The OCT Playground
July 03, 2025 12:00 AM
July 02, 2025
Eclipse Open VSX Registry Security Advisory
July 02, 2025 08:15 AM
This security advisory provides additional technical details following our initial statement and the corresponding CVE record.
TL;DR
A vulnerability in the Eclipse Open VSX Registry’s automated publishing system could have allowed unauthorized extension uploads. It did not affect existing extensions or admin functions.
The issue was reported on May 4, 2025, fully fixed by June 24, and followed by a complete audit. No evidence of compromise was found, but 81 extensions were proactively deactivated as a precaution.
The standard publishing process was not impacted. Recommendations have been issued to reduce future risk.
The Issue
On May 4, 2025, the Eclipse Foundation Security Team received a notification from Koi Security researchers about a potential vulnerability in the Eclipse Open VSX Registry extension publication process. The Security Team promptly contacted the Open VSX team, which confirmed the issue and began working on a fix. A first version of the fix was proposed within two weeks.
The Eclipse Open VSX Registry allows developers to publish extensions via CI/CD systems. To increase the availability of widely used extensions to its growing user base, it also includes a mechanism that automatically pulls, builds, and publishes a curated list of extensions. This list is publicly maintained in a configuration file. The vulnerability was found in this automated process.
Specifically, build scripts were executed without proper isolation, which could have inadvertently exposed a privileged token. This token allowed publishing of new extension versions under any namespace, including those not owned by an attacker. However, it did not allow deletion of existing extensions, overwriting of published versions, or access to administrative features of the registry.
To exploit this vulnerability, an attacker would need to either:
- Take over an already accepted extension (e.g., by compromising the developer’s account) and inject malicious code to exfiltrate the token; or
- Submit a new extension for inclusion in the auto-publish list, have it accepted (following a manual review of the pull request), and later push a new version with code designed to exfiltrate the token.
In both scenarios, any extension published using the token would appear to originate from the privileged user, which serves as a basis for the ongoing investigation into potential exploitation.
The Fix
The Eclipse Open VSX team implemented sandboxing for the extension build process to isolate builds and protect credentials. The fix underwent several iterations and was successfully deployed on June 24, 2025.
Importantly, this vulnerability only affected the auto-publishing mechanism. An attacker would have needed to control an extension listed for automatic updates or get their own extension added to the list. The standard extension publishing workflow is not affected by this vulnerability.
Timeline Summary
- May 4, 2025 – Vulnerability reported by Koi Security researchers
- May 5–17, 2025 – Issue confirmed and first version of fix developed
- May 17–June 24, 2025 – Development and testing of updated versions of the fix
- June 24, 2025 – Final fix approved and deployed; privileged token rotated
- Post-deployment – Audit of all affected extensions completed
- June 27, 2025 – CVE-2025-6705 published
- July 2, 2025 – Security advisory published
Investigation Summary
As the root cause was the lack of build isolation, the implemented patch introduces sandboxing and separation between build processes. It was deployed on June 24, 2025, and the potentially exposed privileged token was rotated following the deployment.
To determine whether the vulnerability had been exploited, the Eclipse Security and Open VSX teams audited all extensions published using the privileged token. These were cross-referenced with all extensions listed for automatic publication, whether currently or previously included. The focus was first on extensions that were published using the privileged token but were never added to the auto-publish list.
Findings: 14 extensions, encompassing a total of 20 unique published versions, were identified as having been published by the privileged user without a clear link to the auto-publish list. Although this is atypical, there has long existed a one-off workflow allowing Open VSX Registry operators to publish new extensions manually. It is most likely that these 20 revisions were published using this workflow.
As automation had reached its limits, the team manually reviewed the suspicious extensions. All were deemed legitimate and did not display signs of compromise. Indicators of compromise considered included:
- Mismatches between publication dates and repository tags/releases
- Publication of unknown versions
- Discrepancies between Eclipse Open VSX and the Microsoft Visual Studio Code Marketplace
- Sudden change in publishing behavior (e.g., from the extension owner to the privileged user)
- Other anomalous patterns
We then examined extensions legitimately published by the privileged user (by virtue of being listed for auto-publishing), searching for irregularities based on the indicators mentioned above. We identified 51 such extensions (61 unique extension versions), warranting further manual investigation. In all cases, however, the anomalies were ultimately ruled out. For example, while certain extension versions lacked a corresponding tag or formal release in their source repository, their version numbers had previously appeared in the build configuration history within the source repository. These versions were never officially released, but their publication dates aligned with the commit dates, strengthening the confidence that these were false positives rather than signs of compromise.
Conclusion: None of the 65 identified extensions (81 distinct published versions) showed evidence of being compromised. Nevertheless, as a precaution, all 81 versions have been deactivated while we contact their respective authors. Should evidence of compromise emerge, further advisories will be issued.
Recommendations
Based on our findings, we recommend the following actions to address the root causes of this vulnerability:
For Open VSX Registry operators:
- Mitigate risk from untrusted code: enforce a documented vetting process for new extensions before adding them to the auto-publish list. This limits exposure from potentially malicious submissions.
- Reduce exposure window: periodically review accepted extensions, especially after updates, to detect suspicious behavior that may emerge post-approval.
- Contain credential blast radius: replace shared, privileged tokens with namespace-specific credentials. This enforces the principle of least privilege and prevents cross-namespace publishing.
- Eliminate insecure workflows: consider disabling the auto-publishing mechanism entirely, or at a minimum, remove the one-off manual publishing feature, which bypasses the protections applied to the automated pipeline.
For the Open VSX user community:
- Exercise caution when installing or updating extensions. Extensions can access your development environment and may introduce risk if compromised. They are a critical part of the software supply chain and must be treated as such.
More Information
The Eclipse Open VSX Registry has grown significantly in popularity. We are grateful to Koi Security researchers for their responsible disclosure and encourage all users and vendors relying on Eclipse Open VSX to contribute to its continued security and sustainability. The Eclipse Foundation remains committed to ensuring the Open VSX Registry is a safe, trusted, and reliable platform for distributing and consuming secure, high-quality extensions.
For technical or security-related inquiries, please contact the Eclipse Foundation Security Team at security@eclipse-foundation.org.
June 30, 2025
A Brazilian Dream: Otavio Santana’s Rise Through Open Source
by Tatjana Obradovic at June 30, 2025 05:12 PM
“Sometimes, all someone needs is to believe it's possible”
Brazil stands at the heart of the Global South, shaping global technology and community movements. Follow us as we explore Otavio Santana’s inspiring journey from his beginnings in Salvador, Brazil, to the international stages of Java stardom. Discover how community, discipline, and an insatiable passion for learning empowered him to overcome social, economic, and linguistic barriers, and ultimately thrive in the world of open source software.
Growing Up in Salvador: The Start of a Remarkable Journey
Otavio Santana was born and raised in Salvador, Brazil’s sixth-largest city. His childhood was marked by economic hardship. “I came from a poor family, so I didn't have opportunities,” Otavio recalls. Raised primarily by his mother, he navigated a world of limitations with determination. Initially drawn to history and music, he couldn’t pursue those passions in academia and instead turned to computer science, a decision that would change his life.
Due to financial constraints, Otavio attended a private university that allowed him to work part-time and support his family. There, he learned C and C++, and secured his first job in the field. However, his interest in contributing to open source projects was hindered by a significant obstacle: he did not speak English.
Finding Belonging in the Java Community
Everything changed in 2010 when Otavio joined the Java community. Despite his limited English, he was welcomed with kindness and encouragement. “People tried to communicate with me, even though I didn’t speak a single word of English,” he says. This early support sparked a deep appreciation for the community and a growing interest in the language itself.
Though he initially doubted his ability to master Java, the welcoming spirit of the community helped him gain confidence. Java also promised better job prospects and higher salaries, making it a practical choice. In 2012, Otavio moved to São Paulo to pursue new opportunities and immerse himself further in the open source world. The more he contributed, the more awards he earned. To date, he has received numerous accolades, including several JCP Awards, the Java Champion title, and the Duke’s Choice Award. He has also authored several books.
Discipline, Dedication, and a Community of Support
Otavio’s progress was built on unwavering discipline. He has followed a routine of early mornings dedicated to study and open source contributions, with weekends often spent learning. “I consistently wake up at 5 a.m. to study or contribute to open source projects,” he says. This dedication led him to contribute to major projects such as integrating Java with Apache Cassandra, OpenJDK, and Adopt a JSR.
Through conferences like JavaOne, Otavio met other open source leaders, including fellow Brazilian Bruno Souza. Despite his language struggles, his work spoke volumes, soon leading to his first job opportunity. “My English was terrible, but my future boss trusted me because I was involved in open source.”
Chicken and Egg Problem: Challenges for Developers in the Global South
Otavio’s story also highlights systemic challenges faced by developers across Latin America. He refers to the chicken-and-egg problem of English proficiency: without good English, it is difficult to secure a job at an international company; without such a job, it is hard to improve one’s English. “Most people are just in survival mode – studying and working to support their families,” he explains.
The pandemic intensified these challenges. Many IT workers in Brazil were employed by government entities that were unprepared for remote work. As a result, layoffs increased. At the same time, the global shift to remote work raised the bar for both technical and linguistic qualifications.
Why Open Source Matters – Especially in Brazil
“Open source is a huge opportunity for poor people,” Otavio affirms. For developers in Brazil and similar contexts, open source offers a unique gateway to skill development, networking, and career growth. It is also one of the few avenues where economic background matters less than passion and contribution.
His first open source role did not come through traditional hiring channels, but from being noticed at a JavaOne conference. Open source became his credential, his classroom, and his bridge to the world.
Moving to Portugal: Expanding Horizons
Otavio eventually moved to Portugal, a decision driven by access to more tech conferences and a thriving open source scene. Events such as Devoxx and EclipseCon (now Open Community Experience) offered opportunities that were less accessible in Latin America. Being closer to these events made it easier for Otavio to stay engaged and connected with the global community.
How Open Source Foundations Can Help
Otavio believes that open source foundations can do more to include developers from Latin America. His recommendations include:
- Showcasing diverse role models: Interviews like his can inspire others to see what’s possible.
- Organising hackathons: Practical, hands-on experiences help new contributors break the ice.
- Fostering community connections: Introductions to active contributors can help newcomers envision their own paths.
“Sometimes, all someone needs is to believe it's possible,” Otavio says.
Otavio Santana’s story is a compelling reminder of how open source can transform lives. For him, and for many who follow a similar path, it’s not just a way to write code, but a path to greater possibilities: to dream boldly, reach globally, and create a future once thought unattainable.
Let’s make sure we keep those doors open for the next Otavio.
Otavio has made significant contributions to the Java and open source ecosystems. Since Java 8, he has helped shape the direction and objectives of the Java platform as a member of the JCP Executive Committee. Additionally, he serves as a committer and leader in several open source projects and specifications, showcasing his dedication to the community.
Recognised for his impactful work, he has received numerous accolades, including all categories of the JCP Awards and the Duke’s Choice Award. He is also a distinguished Java Champion and a member of the Oracle ACE programme.
Beyond technology, he is an enthusiast for history and economics. He loves travelling, programming, and learning languages. He speaks Portuguese, English, Spanish, Italian, and French fluently and has a particular talent for dad jokes.
June 27, 2025
Vulnerability in Eclipse Open VSX Registry extension publication process
June 27, 2025 08:15 AM
On May 4th, the Eclipse Foundation (EF) Security Team received a notification from researchers at Koi Security regarding a potential issue in the Eclipse Open VSX marketplace extension publication process. The EF Security Team immediately contacted the Eclipse Open VSX team, and upon confirming the issue, work on a fix was promptly initiated.
Following several iterations and thorough testing (necessary due to the intrusive nature of the change to the extension build process), the fix was successfully deployed on June 24th.
We would like to thank the researchers for reporting the issue, reviewing the proposed fixes, and supporting the resolution process, as well as the members of the Eclipse Open VSX team who were involved.
The researchers have published their findings at Koi Security’s blog, providing further insight into the issue. Additionally, we have published CVE-2025-6705 to track and document this vulnerability.
A more detailed technical security advisory will be published in the coming days.
Eclipse Open VSX has grown in popularity in recent months, and we’re grateful to independent researchers for their investigation and responsible disclosure. We encourage all projects that depend on Eclipse Open VSX to consider contributing to or financially supporting the initiative.
June 25, 2025
Jakarta EE 11: Empowering Enterprise Java Developers with Enhanced Productivity and Performance
by Tatjana Obradovic at June 25, 2025 06:42 PM
The Eclipse Foundation has announced the release of the Jakarta EE 11 Platform, which builds on the previously released Core Profile (December 2024) and Web Profile (March 2025).
This release marks an advancement in simplifying enterprise Java, emphasising developer productivity and overall performance. Key highlights include modernised Test Compatibility Kits (TCKs), the introduction of the new Jakarta Data specification, major updates to existing specifications, and support for the latest Java LTS release, enabling developers to leverage the enhancements in Java 21, including Virtual Threads.
Key Features of Jakarta EE 11
Since the release of Jakarta EE 10, the enterprise Java renaissance has kept accelerating. Jakarta EE 11 builds on this progress, enhancing performance and increasing developer productivity, led by the introduction of the Jakarta Data specification.
Jakarta Data is a major step forward in simplifying persistence logic within enterprise applications. Key features include:
- BasicRepository: A foundational repository interface that provides out-of-the-box support for essential data operations, reducing boilerplate and setup time.
- CrudRepository: Builds on BasicRepository to offer full Create, Read, Update, and Delete (CRUD) functionality, enabling clean and intuitive database interactions.
- Pagination: Includes support for both offset-based and cursor-based pagination, giving developers flexible tools to efficiently manage large data sets.
- Query Language: Introduces a concise, purpose-built query language that simplifies method-level query definitions directly within Jakarta Data repositories.
These capabilities make Jakarta Data a valuable addition for teams looking to build robust, data-centric applications with less code and greater clarity.
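As a plain-Java illustration of the pagination mechanics (this is not the Jakarta Data API itself, whose repository interfaces do the equivalent work for you), offset-based pagination reduces to slicing a result set by page number and size:

```java
import java.util.List;

// Illustrative sketch only: offset-based pagination over an in-memory list.
// A Jakarta Data repository would perform the same slicing at the database level.
class OffsetPageSketch {

    // Return the 1-based page of the given size, or an empty list past the end.
    static <T> List<T> page(List<T> items, int pageNumber, int pageSize) {
        int from = (pageNumber - 1) * pageSize;
        if (from >= items.size()) {
            return List.of();
        }
        int to = Math.min(from + pageSize, items.size());
        return items.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(1, 2, 3, 4, 5, 6, 7);
        System.out.println(page(data, 1, 3)); // first page of three
        System.out.println(page(data, 3, 3)); // last, partial page
    }
}
```

Cursor-based pagination differs in that the "from" position is derived from the last element of the previous page rather than from an offset, which keeps results stable when rows are inserted or deleted between requests.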
Modernising the TCK
A major part of the release was the modernisation and enhancement of the Jakarta EE Platform Test Compatibility Kit (TCK). This initiative improves maintainability and flexibility by incorporating modern testing tools like JUnit 5 and Maven, making it easier to evolve the TCK alongside the platform. These updates streamline compatibility testing and lower the barrier to contributing new tests, helping to drive future innovation across the Jakarta EE ecosystem. As a result, contributors can get started more easily, and new vendors or developers interested in participating will find it significantly more accessible to join and contribute. Those interested in getting involved can learn more and engage through the Jakarta EE TCK project page.
Streamlining Specifications
Jakarta EE 11 continues its evolution toward a more modern and efficient development model by refining and simplifying its specification set. A notable change is the removal of the deprecated Managed Beans specification, which had long been superseded by more flexible and powerful alternatives. This cleanup helps reduce legacy complexity and clarifies the recommended programming model going forward.
In line with this, the platform places a strong emphasis on Contexts and Dependency Injection (CDI) as the central programming model. CDI has been further enhanced in Jakarta EE 11, now serving as the standard replacement for Managed Beans, promoting consistency and a more declarative development style throughout the ecosystem.
The release also reinforces its commitment to modern Java features by embracing Java Records across various specifications, such as Jakarta Bean Validation. This allows developers to leverage concise, immutable data carriers while still benefiting from validation and integration features, helping to reduce boilerplate and improve readability.
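Java Records pair naturally with validation because a record's invariants can be stated once, at construction. The self-contained sketch below enforces an invariant in a compact constructor; with Jakarta Bean Validation in EE 11 you would instead annotate the record components with constraint annotations and let the validator enforce them:

```java
// Illustrative sketch: a record as a concise, immutable data carrier whose
// compact constructor rejects invalid values. (Hand-rolled check here; Bean
// Validation would express this declaratively on the record components.)
record EmailAddress(String value) {
    EmailAddress {
        if (value == null || !value.contains("@")) {
            throw new IllegalArgumentException("not a valid email: " + value);
        }
    }
}
```

Because records are immutable, a value validated once at construction stays valid for its whole lifetime, which is part of what reduces boilerplate compared to mutable bean-style classes.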
Finally, Jakarta EE 11 aligns with the direction of the Java platform by removing all references to the Java SE SecurityManager, following the deprecation outlined in JEP 411. This shift enables the adoption of more contemporary, fine-grained security mechanisms that better suit today’s cloud-native and modular application architectures.
Leveraging Java 21 Enhancements
Jakarta EE 11 supports Java 17 or higher, with unique enhancements for Java 21 users. One of the most notable features is the updated Concurrency specification, which enables developers to leverage Virtual Threads in Java 21. This results in significant performance gains by allowing efficient handling of concurrent tasks without the overhead of traditional thread management.
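As a minimal sketch of the mechanism the updated Concurrency support builds on (requires Java 21; the class and method names here are illustrative, not part of any Jakarta API), each submitted task gets its own cheap virtual thread rather than a pooled platform thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch (Java 21): run many concurrent tasks without sizing a
// thread pool, because each task is carried by a lightweight virtual thread.
class VirtualThreadSketch {

    static int runTasks(int n) {
        AtomicInteger done = new AtomicInteger();
        // close() at the end of try-with-resources waits for submitted tasks.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                executor.submit(() -> {
                    done.incrementAndGet();
                });
            }
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(10_000));
    }
}
```

The practical gain is that blocking operations (I/O, waiting on a database) no longer monopolise a scarce platform thread, which is why the Jakarta Concurrency update can translate into significant throughput improvements.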
Coming Next
Development on Jakarta EE 12 is already in progress, with a planned release in 2026. This next version aims to elevate the platform’s API source level to Java SE 21, while targeting Java SE 25 for runtime support. The community is actively working on enhancements across many specifications, including possible introductions like Jakarta Query and Jakarta MVC, and continued evolution of Jakarta NoSQL. Staying committed to its established two-year release cycle, Jakarta EE continues to prioritize long-term planning and sustained innovation.
Growing Ecosystem and Community Participation
The Jakarta EE ecosystem continues to expand, with many working group members, including Fujitsu, IBM, Oracle, Payara, and Tomitribe, certifying Jakarta EE 11 compatible products. The list of compatible implementations and products is expected to grow rapidly post-release.
The Jakarta EE community actively welcomes contributions and participation from all interested parties. To connect with the global community and participate in ongoing discussions and development efforts, visit the Jakarta EE website.
Anyone interested in contributing or staying informed is encouraged to connect with the community through jakarta.ee/connect/.
June 18, 2025
Using local LLMs in Theia AI with Ollama
June 18, 2025 01:24 PM
With the inclusion of PR 15795, the Theia 1.63.0 release, built today, now supports Ollama 0.9.0, which introduced several improvements that are now also available in Theia AI. This post highlights those improvements and gives a few general hints on how to improve your experience with Ollama in Theia AI.
Streaming Tool Calling
One of the most interesting features of the Ollama 0.9.0 release is support for streaming tool calling in the API. This makes working with the LLM much more convenient, as you can follow the output and tool calls during the generation. As a side effect, this also boosts output quality especially for tool calls for the models that support them. See this article for more information on this.
Starting with Theia 1.63.0, Theia AI uses the new streaming API including tool calling so that Ollama models now behave more like models from other providers, such as OpenAI, Google, and Anthropic.
Explicit Thinking Markup
Thinking, or reasoning, in LLMs has received much attention in recent months, especially since deepseek-r1 was announced. While Ollama has supported reasoning models for several months now, the reasoning steps have always been part of the generated response itself and could only be extracted by looking for <think>...</think> or similar markup produced by the LLM.
Ollama 0.9.0 has added support for distinguishing reasoning output from actual output at the API level. The new Ollama provider in Theia propagates this feature, so collapsible reasoning blocks are rendered at the UI level.

Token Usage Reporting for Ollama
Of course, as you are most likely running Ollama on your own hardware, the token usage report is not as crucial as with paid third-party services. But it might still be interesting to know how many tokens you are using with your own Ollama server.
The new Ollama provider released in Theia 1.63.0 now reports token usage statistics in the AI Configuration View in the tab Token Usage.
Support for Images in the Chat Context
Theia 1.63.0 has added preliminary support for images in the Theia AI chat. Using the + button in the AI Chat, it is now possible to add images which are passed to the LLM as part of the context. This is also supported for Ollama LLMs; but note that not many models support images. One example is the llava model.
If unsure, you can use the ollama show command in a terminal to check for the image capability of a model.
Configuration and Usage Hints
An Ollama server running on typically limited local hardware is most likely no match for paid third-party services. But the absence of token limits and of privacy concerns may sometimes matter more than raw performance. To make Ollama models more usable in practice, it is important to tweak the configuration to match your hardware and needs. This section gives some hints about possible optimizations. Note: these settings have been tested on an Apple Silicon M1 Max system; for other hardware, please refer to other blog articles that discuss Ollama optimizations.
Ollama Server Settings
To reduce the memory footprint of the request context, set these environment variables before executing ollama serve:
- OLLAMA_FLASH_ATTENTION=1
- OLLAMA_KV_CACHE_TYPE="q8_0"
See the Ollama FAQ for more details.
Theia AI Considerations
We might be tempted to download and use different models for different Theia AI agents, e.g. a universal model such as qwen3 for general tasks, and more specialized models such as codellama for code-centric tasks like the Coder or Code Completion agents. However, each model that is loaded at the same time uses a lot of memory, so Ollama might decide to unload models that are not currently in use. And since unloading and loading models takes time, performance decreases if you use too many models.
Therefore, it is usually a better idea to use fewer models, for example one smaller model for tasks that should not take too long (such as Code Completion and Chat Session Naming), and one larger model for complex tasks (such as Architect or Coder agents). The best results can be achieved if both models fit in the available VRAM so that no model loading/unloading occurs.
Theia AI Request Settings
One important thing to note is that Ollama by default uses a context window of just 2048 tokens. Most tasks in Theia AI will not produce satisfactory results with this default, so it is important to adjust the request settings and increase the num_ctx parameter. As with the model size discussed above, the context size should match the agent and task: a larger context yields higher-quality results but longer runtimes. It is therefore worth experimenting to find the settings that best match your models and use case.
These hints provide some initial help:
- Check the Ollama server logs and keep an eye out for messages like this:
"truncating input prompt" limit=2048 prompt=4313 keep=4 new=1024
This message indicates that the input prompt exceeds the configured context length, so it will be truncated (which usually leads to bad answers from the model, as it forgets half of the question you were asking...)
- Also, while a model is being loaded, if you see a message like
llama_context: n_ctx_per_seq (2048) < n_ctx_train (40960)
this indicates that the current num_ctx (2048) setting is smaller than the context window the model was trained with (40960).
Alternatively, you can use ollama show to check the context length property of the model. (Do not confuse the context length property, which is the theoretical maximum context length for the model, with the num_ctx parameter, which is the actual context length used in requests.)
To adjust the num_ctx parameter in Theia, go to Settings > AI Features > Model Settings and follow the link to open the settings.json file in the editor.
Then create a new setting like this:
"ai-features.modelSettings.requestSettings": [
{
"scope": {
"agentId": "Coder",
"modelId": "ollama/qwen3:14b",
"providerId": "ollama"
},
"requestSettings": { "num_ctx": 40960 }
}]
In the scope part, you declare the provider, model, and agent to which the settings apply. You can leave out any of these keys to apply the setting to all providers, agents, and/or models, respectively.
In the same way you can adjust and play around with the other Ollama parameters, such as temperature, top_k, etc.
June 13, 2025
Visual Studio Adding Support for MCP Servers
by Scott Lewis (noreply@blogger.com) at June 13, 2025 07:20 PM
June 11, 2025
CDO: New Release, New Homepage
by Eike Stepper (noreply@blogger.com) at June 11, 2025 03:37 PM
CDO 4.31 is now available, as usual, together with the Eclipse Simultaneous Release 2025-06.
This CDO release contains no new features because most of my time was absorbed by working on the new build system and the new CDO Homepage:
June 10, 2025
Improving ECA Renewals with Automated Notifications
June 10, 2025 01:48 PM
To make it easier for our community to maintain an active contributor status, we're introducing a new notification service for the Eclipse Contributor Agreement (ECA).
Starting June 11, 2025, we will begin sending email reminders before a standalone ECA is set to expire. For those who need to take action, the email will have a subject line of "Action Required: Your Eclipse Contributor Agreement (ECA) is Expiring Soon" and will contain a link to renew the agreement.
If you are an Eclipse committer who has signed an Individual Committer Agreement (ICA), or an employee of a member organization that has signed the Member Company Committer Agreement (MCCA), you do not need to renew the standalone ECA, as both agreements already include it. If you are covered by one of these agreements, an expiring standalone ECA will not affect your ability to contribute. In this case, you will receive a separate informational email with the subject: "No Action Required: Your Eclipse Contributor Agreement (ECA) is Expiring Soon" to confirm your status.
For those covered only by a standalone ECA, if it expires, you won't be allowed to contribute to open source projects at Eclipse until you sign it again. Specifically:
- You will no longer be able to submit a merge request to Eclipse project repositories hosted on the Eclipse Foundation GitLab.
- Your commits included in a GitHub Pull Request will fail our automated ECA validation check.
If this happens, you can always restore your ability to contribute by visiting https://accounts.eclipse.org/user/eca and signing the ECA. Your contributor status will be restored once the new agreement is processed, which may take 5 to 15 minutes for our system caches to update.
For any questions or feedback, please join the discussion on our HelpDesk issue.
June 03, 2025
Remote Tools for Model Context Protocol (MCP) Servers
by Scott Lewis (noreply@blogger.com) at June 03, 2025 12:10 AM
The Model Context Protocol (MCP) is a new protocol for integrating AI/LLMs with existing software services. MCP server tools allow LLMs to get additional context-relevant data, to write or change remote data, and to take actions.
Most current MCP servers declare their tools statically. When the MCP server starts up, its available tools and any tool metadata (such as text descriptions of tool behavior provided in decorators or annotations) are made available to MCP clients that connect to the server. The MCP client (LLM) can then call an available tool at the appropriate time, providing tool-specific input data, and the tool can take actions, get additional data, and provide those data to the client.
OSGi Remote Services/Remote Service Admin provides an open, standardized, multi-protocol, modular, and extensible way to discover, dynamically export and import, and secure inter-process communication between services. Combining Remote Services with MCP tool metadata allows the creation of dynamic remote tools.
Remote Tools for MCP Servers
This README.md shows an example 'Arithmetic' service, with 'add' and 'multiply' tools defined and described via Java annotations on an ArithmeticTools service. The Python MCP server communicates with the Java server (at startup and afterwards) to dynamically add to and update the set of tools that it exposes to MCP clients.
Here is a simple diagram showing the communication between an MCP client, the Python MCP server, and a Java Arithmetic service server.
MCP Client (LLM) <- MCP -> Python MCP Server <- Arithmetic Service -> Java Server
The ArithmeticTools service is a simple example, but it exposes a powerful and general capability: arbitrary remote tool services may be declared, provided with the appropriate tool-description metadata, and then made dynamically available to any MCP server created in Python, Java, or other languages. Both MCP and RS/RSA are transport agnostic, allowing the service developer and service provider to choose a communication protocol that is appropriate for, and able to secure, the remote tool.
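As a hypothetical sketch of the general idea (the @Tool annotation and describe helper below are invented for illustration and are not the actual ECF or MCP APIs), tool metadata declared via annotations can be read reflectively by a server and exposed to clients:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// Hypothetical sketch: annotation-described tools whose metadata a server
// could discover at runtime and advertise to MCP clients.
class ToolMetadataSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Tool {
        String description();
    }

    public static class ArithmeticTools {
        @Tool(description = "Adds two integers")
        public int add(int a, int b) { return a + b; }

        @Tool(description = "Multiplies two integers")
        public int multiply(int a, int b) { return a * b; }
    }

    // Look up the advertised description for a named tool, as a server might.
    static String describe(Class<?> service, String toolName) throws NoSuchMethodException {
        for (Method m : service.getDeclaredMethods()) {
            Tool tool = m.getAnnotation(Tool.class);
            if (tool != null && m.getName().equals(toolName)) {
                return tool.description();
            }
        }
        throw new NoSuchMethodException(toolName);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(describe(ArithmeticTools.class, "add"));
    }
}
```

In the real setup described above, this discovery happens across process boundaries: the Java side exports the annotated service via Remote Services, and the Python MCP server imports it and registers the tools dynamically.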
April 17, 2025
Langium AI: The fusion of DSLs and LLMs
April 17, 2025 12:00 AM
April 09, 2025
Security Incident Review: API Endpoint Exposure on accounts.eclipse.org
April 09, 2025 11:31 PM
In late March 2025, a security researcher in our community reported a security concern about a publicly accessible API endpoint containing user information on accounts.eclipse.org. After reviewing the issue, we determined this API endpoint was unnecessary and have since disabled it.
We looked through our access logs for the past few months and confirmed that the only requests were from the security researcher and the Eclipse Foundation staff who verified the report.
Background
The issue was introduced when the Drupal core JSON:API module was enabled in production without the recommended field access restrictions. This module exposes data such as user entities and fields based on Drupal’s default permissions. While email addresses were not visible, other custom fields were publicly accessible for some users and staff:
- City
- Province/State
- Country
- Matrix ID
We did not intend for this information to be publicly accessible by this module. While we do ask our committers to provide their full postal address, that additional information is not stored in Drupal and was not exposed.
Timeline of Events
- January 19, 2025: Endpoint became accessible after going live with our Drupal 10 migration of accounts.eclipse.org
- March 26, 2025 (08:16 AM EDT): Received initial security report from ethical security researcher in the security@eclipse.org inbox.
- April 3, 2025 (11:08 AM EDT): Security Team confirmed the issue, and notified the WebDev team.
- April 3, 2025 (12:39 PM EDT): Endpoint was disabled.
Resolution and Next Steps
To resolve the issue and prevent similar incidents in the future, we have:
- Disabled Drupal's JSON:API module on accounts.eclipse.org
- Installed and configured the Field Permissions module to enforce field-level access control.
- Initiated a full audit of all our Drupal-based sites to ensure the JSON:API module is disabled and the Field Permissions module is enabled and properly configured.
We remain committed to continuously strengthening our security practices and protecting user data. We would like to thank security researcher Gaurang Maheta for promptly reporting this issue. If you have any questions or concerns, please contact us at privacy@eclipse.org or security@eclipse.org.
March 26, 2025
Use tracing in Eclipse plugins
by Andrey Loskutov (noreply@blogger.com) at March 26, 2025 09:34 AM
Many Eclipse developers (mis)use logging for debug messages. As a result, the Error Log view in Eclipse, which is supposed to show important errors and warnings, quickly fills up with debug information, hiding the relevant warnings or errors:
But there is an advanced tracing capability built into Eclipse that provides exactly the required functionality (reporting low-level, debug-related information) and doesn't spam the Error Log view by default.
Here is the "classic" introduction to tracing that briefly explains the idea of debug tracing. Note that tracing is switched off by default but can be switched on via the command line, or by a user via the General -> Tracing preferences, so the tracing functionality is available not only during a debug session.
It is pretty straightforward to enable tracing for a plug-in.
Steps:
- Remove the current code that reads any debug option flags from your plug-in (everything you did after reading the article above :-))
- Your plug-in's activator must implement org.eclipse.osgi.service.debug.DebugOptionsListener
- In the plug-in activator's start(BundleContext context) method, register the debug options listener with the OSGi framework
- Create boolean fields to cache the debug flags state
- Update the flag state in the optionsChanged(DebugOptions options) method of the DebugOptionsListener interface (added in step 2)
Here is simple code that does that:
import java.util.Hashtable;

import org.eclipse.core.runtime.Plugin;
import org.eclipse.osgi.service.debug.DebugOptions;
import org.eclipse.osgi.service.debug.DebugOptionsListener;
import org.osgi.framework.BundleContext;

public class MyPlugin extends Plugin implements DebugOptionsListener {

    public static final String ID = "your_bundle_symbolic_name_as_in_the_manifest";

    private static MyPlugin plugin;

    private boolean debug;
    private boolean traceExtraData;

    public MyPlugin() {
        super();
        if (plugin != null) {
            throw new IllegalStateException("MyPlugin is a singleton");
        }
        plugin = this;
    }

    @Override
    public void start(BundleContext context) throws Exception {
        super.start(context);
        // Register this plug-in as a listener for its own debug options.
        Hashtable<String, String> props = new Hashtable<>(4);
        props.put(DebugOptions.LISTENER_SYMBOLICNAME, ID);
        context.registerService(DebugOptionsListener.class.getName(), this, props);
    }

    @Override
    public void optionsChanged(DebugOptions options) {
        // Called by the framework on startup and whenever tracing options change;
        // cache the flag state in plain boolean fields for cheap checks.
        debug = options.getBooleanOption(ID + "/debug", false);
        traceExtraData = debug && options.getBooleanOption(ID + "/debug/traceExtraData", false);
    }

    /**
     * @return true in case we are in debug; false otherwise
     */
    public static boolean isDebug() {
        return plugin.debug;
    }

    /**
     * @return true in case we are in debug and want to trace extra data; false otherwise
     */
    public static boolean isTracingExtraData() {
        return plugin.traceExtraData;
    }
}
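As a minimal, self-contained sketch (the class and field names are illustrative, standing in for MyPlugin.isDebug() above) of how such cached flags are typically consumed, trace output is built only when the flag is on, keeping the hot path cheap:

```java
// Illustrative sketch: guard trace-message construction behind the cached
// debug flag so that disabled tracing costs only a boolean check.
class TraceGuardSketch {

    static boolean debug; // in the real plug-in: MyPlugin.isDebug()

    static String trace(String message) {
        if (!debug) {
            return null; // cheap early exit; no string concatenation happens
        }
        return "[trace] " + message;
    }

    public static void main(String[] args) {
        debug = true;
        System.out.println(trace("options changed"));
    }
}
```

The point of caching the flags in plain boolean fields is exactly this: call sites can check them on every invocation without querying the DebugOptions service each time.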
To make your options show up in the preferences dialog:
- Contribute to the org.eclipse.ui.trace.traceComponents extension point in the plugin.xml and add your bundle entry there:
<extension point="org.eclipse.ui.trace.traceComponents">
  <component id="some_unique_id" label="The label shown to the user in the tracing dialog">
    <bundle name="your_bundle_name" />
  </component>
</extension>
- Create a new .options file in the plug-in directory with the following content:
your_bundle_symbolic_name_as_in_the_manifest/debug=false
your_bundle_symbolic_name_as_in_the_manifest/debug/traceExtraData=false
- and add it to the build.properties (so it is shipped with the bundle):
bin.includes = .options,\
March 21, 2025
A first look at Copilot in Eclipse
by Lorenzo Bettini at March 21, 2025 03:21 PM
March 09, 2025
Eclipse CDT 12 New Features – Eclipse IDE 2025-03
by Jonah Graham at March 09, 2025 03:05 AM
I am delighted to announce the release of Eclipse CDT 12 and CDT LSP 3, which will be generally available this Wednesday as part of the Eclipse IDE 2025-03 release.
The preferred way to get CDT is to install Eclipse CDT as part of the Eclipse IDE for C/C++ Developers using the installer. Further release information on how to get CDT will be published on Wednesday on the GitHub CDT 12 release page.
The two themes for the CDT 12 release are highlighting the new C/C++ editing experience based on the CDT LSP project by leveraging clangd and improved CMake integration, especially for CDT extenders. But there are many other new and noteworthy items, along with over 200 issues and PRs closed across CDT and CDT LSP repos.
Here is a video highlight of the new C/C++ Editing Experience based on the Language Server Protocol (LSP) using clangd and showing the improved CMake integration.