Eclipse Collections 11.0 Released

by Donald Raab at November 30, 2021 03:07 AM

Features you want with the collections you need.

Hampton Court Palace, England, 2004, Photo taken by Donald Raab

Eclipse Collections 11.0 is here

Eclipse Collections 11.0 in Maven Central

I’m excited to share that the Eclipse Collections 11.0 release is now available in Maven Central. It has been over a year since we released Eclipse Collections 10.4 in August 2020. Since then there have been three new JDK releases (15, 16, 17)! Eclipse Collections continues to participate in the OpenJDK Quality Outreach Program and tests against the latest releases of the JDK as they become available. We are currently building and testing actively against JDK 8, 11, 17, and 18 EA.

10 years as OSS, 6 years at Eclipse Foundation

GS Collections was released in January 2012, and was migrated to the Eclipse Foundation in December 2015. We celebrate 6 years of Eclipse Collections at the Eclipse Foundation in December 2021, and 10 years of open source development and public releases in January 2022.

Thank you to everyone who worked to make Eclipse Collections possible, and to all of the contributors who have invested time and energy into this wonderfully feature-rich library over the past 6 years at the Eclipse Foundation.

Here’s a look back at the article in InfoQ detailing GS Collections migrating to the Eclipse Foundation to become Eclipse Collections six years ago.

GS Collections Moves to the Eclipse Foundation

Thank you to the release team

The Eclipse Collections 11.0 release would not have been possible without the efforts of Eclipse Collections committer Sirisha Pratha and Eclipse Collections Project Lead Nikhil Nanivadekar. Thank you for all of your hard work in delivering this release!

Thank you to the community

The 11.0 release has a lot of new features submitted by our outstanding community of contributors. Thank you so much to all of the contributors who donated their valuable time to making Eclipse Collections more feature-rich and even higher quality. Your efforts are very much appreciated.

New Features with Contributor Blogs

I’ve been encouraging Eclipse Collections contributors to write blogs about the features they contribute to the project. I do my best to set a good example and try to regularly blog about any features I add to Eclipse Collections, or new katas I add to the Eclipse Collections Kata repository.

Following are a few of the blogs written by contributors about features they have contributed to the Eclipse Collections 11.0 release.

  • Added containsAny and containsNone on primitive iterables.

Primitive containsAny & containsNone in Eclipse Collections

Blog by Rinat Gatyatullin

  • Added union, intersect, difference, symmetric difference, cartesianProduct, isSubsetOf, isProperSubsetOf to primitive sets.

Blogs by Sirisha Pratha

  • Added anySatisfyWithOccurrences, allSatisfyWithOccurrences, noneSatisfyWithOccurrences, detectWithOccurrences to Bag.

EC by Example: new features in the Bag API

Blog by Alex Goldberg

  • Added putAllMapIterable method to MutableMap.
  • Added withMapIterable to MutableMap.
  • Added newWithMap and newWithMapIterable to ImmutableMap.

New ways to create Maps in Eclipse Collections Map API

Blog by Neha Sardana

  • Added toImmutableList/Set/Bag/Map/BiMap to RichIterable.
  • Added toImmutableSortedList/Set/Bag to RichIterable.
  • Added toImmutableSortedBag/List/Set with Comparator to RichIterable.
  • Added toImmutableSortedBagBy/ListBy/SetBy with Function to RichIterable.

Improving the symmetry of converter methods in Eclipse Collections

Blog by Donald Raab

  • Added ClassComparer utility.

How to introspect and find conceptual symmetry between classes in Java

Blog by Donald Raab

More new Features w/ Examples

The following is the remaining list of new features. Some of the features have examples immediately following.

  • Added selectWithIndex and rejectWithIndex to OrderedIterable and ListIterable.
@Test
public void selectWithIndex()
{
    var list = Lists.mutable.with(1, 2, 3, 4);
    var actual = list.selectWithIndex((each, i) -> each + i > 3);
    var expected = Lists.mutable.with(3, 4);
    Assertions.assertEquals(expected, actual);
}

@Test
public void rejectWithIndex()
{
    var list = Lists.mutable.with(1, 2, 3, 4);
    var actual = list.rejectWithIndex((each, i) -> each + i > 3);
    var expected = Lists.mutable.with(1, 2);
    Assertions.assertEquals(expected, actual);
}
  • Added covariant overrides for sortThis().
  • Added covariant return types to methods in MultiReaderList that return this.
  • Added primitive singleton iterator.
  • Added toSortedList(Comparator) and toSortedListBy(Function) to primitive iterables.
@Test
public void toSortedListWithComparator()
{
    var set = IntSets.immutable.with(1, 2, 3, 4, 5);
    var list = set.toSortedList((i1, i2) -> i2 - i1);
    var expected = IntLists.immutable.with(5, 4, 3, 2, 1);
    Assertions.assertEquals(expected, list);
}

@Test
public void toSortedListByWithFunction()
{
    var set = IntSets.immutable.with(1, 2, 3, 4, 5);
    var list = set.toSortedListBy(Math::negateExact);
    var expected = IntLists.immutable.with(5, 4, 3, 2, 1);
    Assertions.assertEquals(expected, list);
}
  • Added isEqual and isSame to Pair and Triple as default methods.
@Test
public void isEqual()
{
    Twin<String> pair1 = Tuples.twin("1", "1");
    Assertions.assertTrue(pair1.isEqual());
    Twin<String> pair2 = Tuples.twin("1", "2");
    Assertions.assertFalse(pair2.isEqual());

    Triplet<String> triple1 = Tuples.triplet("1", "1", "1");
    Assertions.assertTrue(triple1.isEqual());
    Triplet<String> triple2 = Tuples.triplet("1", "2", "1");
    Assertions.assertFalse(triple2.isEqual());
}

@Test
public void isSame()
{
    Twin<String> pair1 = Tuples.identicalTwin("1");
    Assertions.assertTrue(pair1.isSame());
    Twin<String> pair2 = Tuples.twin("1", new String("1"));
    Assertions.assertFalse(pair2.isSame());

    Triplet<String> triple1 = Tuples.identicalTriplet("1");
    Assertions.assertTrue(triple1.isSame());
    Triplet<String> triple2 = Tuples.triplet("1", new String("1"), "1");
    Assertions.assertFalse(triple2.isSame());
}
  • Added converters from Pair and Triple to List types.
@Test
public void pairToList()
{
    Twin<String> twin = Tuples.twin("1", "2");
    var mutableList = Tuples.pairToList(twin);
    var fixedSizeList = Tuples.pairToFixedSizeList(twin);
    var immutableList = Tuples.pairToImmutableList(twin);
    var expected = Lists.mutable.with("1", "2");
    Assertions.assertEquals(expected, mutableList);
    Assertions.assertEquals(expected, fixedSizeList);
    Assertions.assertEquals(expected, immutableList);
}

@Test
public void tripleToList()
{
    Triplet<String> triplet = Tuples.identicalTriplet("1");
    var mutableList = Tuples.tripleToList(triplet);
    var fixedSizeList = Tuples.tripleToFixedSizeList(triplet);
    var immutableList = Tuples.tripleToImmutableList(triplet);
    var expected = Lists.mutable.with("1", "1", "1");
    Assertions.assertEquals(expected, mutableList);
    Assertions.assertEquals(expected, fixedSizeList);
    Assertions.assertEquals(expected, immutableList);
}
  • Added toImmutableSortedBagBy to Collectors2.
@Test
public void collectors2toImmutableSortedBagBy()
{
    List<Integer> list = List.of(1, 2, 2, 3, 3, 3);
    ImmutableSortedBag<Integer> bag =
            list.stream().collect(
                    Collectors2.toImmutableSortedBagBy(Math::negateExact));
    Comparator<Integer> c =
            Functions.toIntComparator(Math::negateExact);
    var expected = SortedBags.mutable.with(c, 1, 2, 2, 3, 3, 3);
    Assertions.assertEquals(expected, bag);
}
  • Added toImmutableSortedMap and toImmutableSortedMapBy to Collectors2.
@Test
public void collectors2toImmutableSortedMap()
{
    List<Integer> list = List.of(1, 2, 3);
    Comparator<Integer> c =
            Functions.toIntComparator(Math::negateExact);
    ImmutableSortedMap<Integer, String> map =
            list.stream().collect(
                    Collectors2.toImmutableSortedMap(c, e -> e, String::valueOf));
    var expected = SortedMaps.mutable.with(c, 1, "1", 2, "2", 3, "3");
    Assertions.assertEquals(expected, map);
}

@Test
public void collectors2toImmutableSortedMapBy()
{
    List<Integer> list = List.of(1, 2, 3);
    Function<Integer, Integer> negate = Math::negateExact;
    ImmutableSortedMap<Integer, String> map =
            list.stream().collect(
                    Collectors2.toImmutableSortedMapBy(negate, e -> e, String::valueOf));
    Comparator<Integer> c = Comparator.comparing(negate);
    var expected = SortedMaps.mutable.with(c, 1, "1", 2, "2", 3, "3");
    Assertions.assertEquals(expected, map);
}
  • Added toSortedMap and toSortedMapBy to Collectors2.
@Test
public void collectors2toSortedMap()
{
    List<Integer> list = List.of(1, 2, 3);
    Comparator<Integer> c =
            Functions.toIntComparator(Math::negateExact);
    MutableSortedMap<Integer, String> map =
            list.stream().collect(
                    Collectors2.toSortedMap(c, e -> e, String::valueOf));
    var expected = SortedMaps.mutable.with(c, 1, "1", 2, "2", 3, "3");
    Assertions.assertEquals(expected, map);
}

@Test
public void collectors2toSortedMapBy()
{
    List<Integer> list = List.of(1, 2, 3);
    Function<Integer, Integer> negate = Math::negateExact;
    MutableSortedMap<Integer, String> map =
            list.stream().collect(
                    Collectors2.toSortedMapBy(negate, e -> e, String::valueOf));
    Comparator<Integer> c = Comparator.comparing(negate);
    var expected = SortedMaps.mutable.with(c, 1, "1", 2, "2", 3, "3");
    Assertions.assertEquals(expected, map);
}

Norwegian Website Translation

We now have a Norwegian translation of the Eclipse Collections web site.

Norwegian Translation of the Eclipse Collections website

Thank you to Rustam Mehmandarov for the contribution and to Mads Opheim for reviewing the translation!

Three new Eclipse Collections Katas in 2021

There were three new code katas created and added to the Eclipse Collections Kata repository in 2021. I wrote the following blog about the six Eclipse Collections Katas. The Eclipse Collections katas are the best hands-on resource for learning the Eclipse Collections library.

The Eclipse Collections Code Katas

And there’s more!

Please refer to the 11.0 release notes for a more comprehensive set of changes made available in the 11.0 release. In there you will find details of optimizations, tech debt reduction, removed functionality, build changes, and a list of breaking changes.

Thank you

We continue to see an upward trend of downloads from Maven Central each month. From all the contributors and committers to the entire Eclipse Collections community… thank you for using Eclipse Collections!

Downloads of eclipse-collections for the past 12 months

We hope you enjoy all of the new features and improvements in the 11.0 release!

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.



CDT.cloud? C/C++ tooling in the web/cloud

by Jonas Helming, Maximilian Koegel and Philip Langer at November 26, 2021 11:08 AM

Are you looking to build a custom C/C++ tool using modern technologies? Do you have an existing Eclipse CDT-based toolchain and...

The post CDT.cloud? C/C++ tooling in the web/cloud appeared first on EclipseSource.



The Eclipse Collections Code Katas

by Donald Raab at November 23, 2021 08:42 PM

Learn the Eclipse Collections library by completing hands-on Java code katas.

Photo by Carl Heyerdahl on Unsplash
We learn best by doing

What is Eclipse Collections?

Eclipse Collections is an open source Java collections library. Eclipse Collections has been evolving as a Java library since 2004. It was first released into open source as GS Collections in January 2012. It was later moved to the Eclipse Foundation in December 2015 and became Eclipse Collections.

You can read more about the history of GS Collections and its move to the Eclipse Foundation in the following article in InfoQ.

GS Collections Moves to the Eclipse Foundation

What are the Eclipse Collections Katas?

The Eclipse Collections Katas are structured exercises organized into individual kata modules that will help you learn the Eclipse Collections library. There are currently six distinct code katas that make up the Eclipse Collections Kata Project on GitHub.

GitHub - eclipse/eclipse-collections-kata: Eclipse Collections Katas

The katas are the best way to learn the Eclipse Collections library. The Eclipse Collections project started out at the Eclipse Foundation with two supporting code katas — the Company Kata and the Pet Kata. These two katas have been used over the past decade to teach thousands of Java developers how to use the Eclipse Collections library. They are still the best katas to start learning Eclipse Collections by doing.

I blogged about the two original Eclipse Collections katas a few years ago.

A Tale of Two Katas

More doing, more learning

Over the past four years, four more code katas have been added to the Eclipse Collections Kata project. Each kata focuses on a different area of Eclipse Collections. The more Eclipse Collections katas you do, the more you will learn about Eclipse Collections.

I will briefly describe each of the four new Eclipse Collections katas, and what motivated their creation.

Candy Kata

The Candy Kata was originally developed to teach Java developers about the Bag data structure available in Eclipse Collections.

Halloween is a popular holiday in the U.S. Many school-aged kids get dressed up in costumes and go around their neighborhoods collecting candy in bags. In my neighborhood, the school kids usually make the rounds collecting bags of candy in different groups. The youngest kids start with their parents early in the afternoon, followed by the middle school kids before sunset, and then finally the high school kids who usually go out in the evening.

This kata has a single test class with two tests that focus on using methods on Bag and Set classes.

I blogged about the Candy Kata on Halloween back in 2018.

Trick or Treat: A Halloween Kata

Converter Method Kata

The Converter Method Kata started out as a blog that explained how to convert one collection type to another using Eclipse Collections converter methods. Converter methods are methods that begin with the prefix “to” and copy a collection’s contents from one type to another. For example, to convert a List to a Set in Eclipse Collections, you can call toSet on the List. The kinds of converters a developer can learn in this kata are as follows:

  • Convert object collections to MutableCollection types
  • Convert object collections to ImmutableCollection types
  • Convert Java Streams to MutableCollection types using Collectors2
  • Convert Java Streams to ImmutableCollection types using Collectors2
  • Convert primitive collections to other primitive MutableCollection types

I blogged about the converter methods available in Eclipse Collections almost a year ago.

Converter methods in Eclipse Collections
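To make the idea concrete without pulling in the library, here is what a converter method does in plain-JDK terms. In Eclipse Collections, list.toSet() collapses the copy into a single call; this sketch (class and method names are mine, purely illustrative) shows the equivalent copy by hand:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ConverterSketch
{
    // Plain-JDK equivalent of what an Eclipse Collections converter
    // method like list.toSet() does: copy the contents of one
    // collection type into another.
    static Set<String> toSet(List<String> list)
    {
        return new HashSet<>(list);
    }

    public static void main(String[] args)
    {
        List<String> list = List.of("a", "b", "a");
        Set<String> set = toSet(list);
        // Duplicates collapse when copying into a Set.
        System.out.println(set.size()); // 2
    }
}
```

The kata teaches the library's one-call versions of such conversions, including the immutable and primitive variants listed above.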

Top Methods Kata

The Top Methods Kata started out as a blog where I wanted to see how many of the Eclipse Collections methods I could include in a single method example. I decided to stop at 25 methods, even though there are a lot more in the RichIterable interface. We all have limited time, so I wanted to create a kata that a developer could complete that would expose them to the most commonly used methods in the Eclipse Collections API as quickly as possible.

The Top Methods Kata has a single test class that you need to complete. There is a test method for each method in the Eclipse Collections API I wanted to include as an example for developers to learn.

Linked here is the original blog which I later turned into the Top Methods Kata.

My 25 favorite methods from the Eclipse Collections API

Lost and Found Kata

Eclipse Collections is a huge library. The code base has over 1 million lines of code. This makes it a challenge to teach developers all of the amazing features available in the Eclipse Collections API. A million lines of code is simply too much to sit down and read. This rather large code base represents 18 years of software engineering investment, including contributions from over 100 developers. There are many data structures and algorithms you will not find direct equivalents for in the JDK today.

Nikhil Nanivadekar and I gave a talk on Eclipse Collections to a few hundred software engineers at Amazon at the end of July 2021. In this talk we included examples of several data structures and algorithms that we felt were not generally well known by developers, and had no direct equivalent in the JDK. After delivering this talk, I decided I would write a few blogs going into much more detail about the “missing Java data structures no one ever told you about”. The following links to the blog series I wrote in August 2021.

Blog Series: The missing Java data structures no one ever told you about

The Lost and Found Kata is a translation of this blog series into an advanced code kata that can help interested contributors learn Eclipse Collections in depth.

The kata is broken into three sections and covers many different data types. I’m including this snapshot of the README.md to show what a developer can potentially learn by completing this kata.

README — What is in the Lost and Found Kata?

After 18 years of working on Eclipse Collections, I started to feel like I was at risk of forgetting more than I could remember. I wanted to write down as much of what I know or have learned, in blogs and katas, before the knowledge became only mysterious artifacts buried in a million lines of code. This is why I created this kata. In this kata I am sharing lessons I learned directly with developers in hands-on exercises, hopefully ensuring this found knowledge is never at risk of being lost again.

Do. Or do not. There is no Try.

I hope this blog motivates some developers to complete the Eclipse Collections katas. We are looking for more committers for the Eclipse Collections library. If you are interested in investing and committing time to your own learning and the development of the library, then I highly recommend completing the Lost and Found Kata. For more information on becoming a committer on a project managed at the Eclipse Foundation, check out this post on the Eclipse Foundation Wiki.

Development Resources/Becoming a Committer

For folks looking for some quick examples to read, or to validate their own solutions to the katas, there are solution modules checked in for each of the katas.

I hope you enjoy these katas! Feedback, improvements, and contributions to the katas are always welcome, and are amazing contributions to the library!

Thank you!

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.



Announcing Eclipse Ditto Release 2.2.0

November 22, 2021 12:00 AM

The Eclipse Ditto team announces the availability of Eclipse Ditto 2.2.0.

It brings several nice new features and, for example, allows using the dash - in the namespace part of thing IDs.

Adoption

Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto

From our various feedback channels, however, we know of more adoption.
If you are making use of Eclipse Ditto, it would be great to show this by adding your company name to that list of known adopters.
In the end, that’s one of the main ways of measuring the success of the project.

Changelog

The main improvements and additions of Ditto 2.2.0 are:

  • Filter for twin life-cycle events such as “thing created” or “feature deleted” via RQL expressions
  • Possibility to forward connection logs via fluentd or Fluent Bit to an arbitrary logging system
  • Add OAuth2 client credentials flow as an authentication mechanism for Ditto managed HTTP connections
  • Enable loading additional extra JavaScript libraries for Rhino based JS mapping engine
  • Allow using the dash - as part of the “namespace” part in Ditto thing and policy IDs

The following notable fixes are included:

  • Policy enforcement for event publishing was fixed
  • Search updater cache inconsistencies were fixed
  • Fixed diff computation in search index on nested arrays

The following non-functional work is also included:

  • Collect Apache Kafka consumer metrics and expose them to Prometheus endpoint

Please have a look at the 2.2.0 release notes for more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven Central.

The Ditto JavaScript client release was published on npmjs.com:

The Docker images have been pushed to Docker Hub:





The Eclipse Ditto team



How to introspect and find conceptual symmetry between classes in Java

by Donald Raab at November 19, 2021 08:54 PM

Demonstrating ClassComparer, a utility class built with Eclipse Collections.

Photo by Jack French on Unsplash

A challenge with rich APIs

Rich APIs can be great. They can significantly reduce the duplication of code by providing many useful common behaviors. Sometimes there are different implementations of classes that have a similar rich set of method signatures. Some classes in Java are conceptually equivalent, even if they don’t share a common parent abstraction that defines that behavior.

The problem is that we as humans sometimes need help determining and understanding the subtle differences between classes that are conceptually the same. Parsing and comparing the text in Java source files or Javadoc that may be hundreds of lines long is laborious and error prone. Sometimes we miss things. Computers can help us here so we can identify and understand patterns.

Eclipse Collections is a library that has a primary design goal of providing good symmetry in its API. This goal is extremely challenging to achieve, given the size of the Eclipse Collections API. The Eclipse Collections API has grown so much in the past 18 years that I’ve needed some help to fully understand it.

A solution for comparing classes

A few years ago, I wrote some code in a utility class for comparing the method signatures in two classes. The utility leverages the methods available in Java to introspect classes along with data structures and algorithms available in Eclipse Collections to compare the methods.

There is nothing spectacularly amazing about the utility I wrote that compares classes. I have shared earlier versions of the source code in a previous blog (linked towards the end). Recently, I had to copy the code into my current Eclipse Collections project and modify it in order to compare IntIterable and RichIterable. That is when I decided it would be useful to include it in the Eclipse Collections eclipse-collections-testutils module, so I can easily use it in any project where I may need to compare rich APIs.

The utility does the following:

  • Reads the methods of each Class using getMethods
  • Optionally reads the parameters for each Method using getParameters
  • Optionally will include the return type of each method
  • Adds a String for each method to one of two MutableSortedSet instances
  • Calculates the intersection of the two sets of methods
  • Calculates the differences of the two sets of methods in both directions
  • Outputs the methods sorted alphabetically and groups them by their first letter

There are a few other useful methods on the utility class named ClassComparer, but this is its primary purpose.
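The steps above can be sketched in a few lines of plain JDK reflection. This is not the actual ClassComparer implementation, just a minimal illustration of the idea (the class and method names here are my own):

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;

public class MethodSetSketch
{
    // The "conceptual" view: method names only, with no parameter
    // or return types included in the comparison.
    static Set<String> methodNames(Class<?> clazz)
    {
        return Arrays.stream(clazz.getMethods())
                .map(Method::getName)
                .collect(Collectors.toCollection(TreeSet::new));
    }

    public static void main(String[] args)
    {
        Set<String> buffer = methodNames(StringBuffer.class);
        Set<String> builder = methodNames(StringBuilder.class);

        // Intersection of the two sorted sets of method names.
        Set<String> intersection = new TreeSet<>(buffer);
        intersection.retainAll(builder);

        // Difference in one direction: names only StringBuffer has.
        Set<String> onlyInBuffer = new TreeSet<>(buffer);
        onlyInBuffer.removeAll(builder);

        // Conceptually the two classes are identical, so the
        // name-only difference is empty.
        System.out.println(onlyInBuffer.isEmpty());
    }
}
```

Swapping in parameter and return types simply means building richer signature strings before comparing the sets, which is exactly the option ClassComparer exposes through its constructor flags.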

Keep it simple

The utility creates the equivalent of a Venn Diagram between the method signatures of two classes. The clever part, if there is any, is that the utility class will allow you to optionally include parameter types and return types in the method signature comparison. By excluding parameter types and return types, we are able to determine if two classes have good conceptual symmetry by simply comparing the method names.

A simple comparison

As a first example of a comparison, I will compare StringBuffer and StringBuilder. Note: I was using this code in my local Eclipse Collections project, which is currently using Java 8. StringBuilder was introduced as a drop-in replacement for StringBuffer in Java 5, so we would expect them to have identical method signatures. Conceptually, they are identical. When we dig deeper into the details, we will see that they have subtle differences.

Here’s the code required to compare the two classes, excluding parameter types or return types.

Compare StringBuffer.class and StringBuilder.class

This particular method will do the comparison and output a text version of the intersection and bi-directional difference of StringBuffer and StringBuilder classes. Because I have used the default constructor, method parameter types and return types are excluded from the comparison.

Output of intersection and differences between StringBuffer and StringBuilder

Now, let’s compare the output when we include the parameters in the methods.

Setting the first flag to true enables the inclusion of method parameter types

Here’s the output with the method parameters included.

Output comparing StringBuffer and StringBuilder with parameter types

The classes still look identical. So let’s include return types, by setting the second flag in the ClassComparer constructor to true.

Setting the second flag to true enables the inclusion of method return types

Now the output looks much different.

Output comparing StringBuffer and StringBuilder with parameter and return types

Many of the method signatures return either StringBuffer or StringBuilder. Both share a parent abstraction named AbstractStringBuilder, which defines many of the methods like append that are called from the two subclasses and are overridden to return the specific subclass type. This is why the number of methods grew significantly in the bi-directional difference.
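This covariant overriding is easy to observe directly with reflection. A small sketch (the class and helper method names are mine) showing that the same append overload reports a different return type in each class:

```java
public class ReturnTypeSketch
{
    // Simple name of the return type of the single-char append overload.
    static String appendReturnType(Class<?> clazz) throws Exception
    {
        return clazz.getMethod("append", char.class)
                .getReturnType()
                .getSimpleName();
    }

    public static void main(String[] args) throws Exception
    {
        // Same method name and parameters, but each subclass overrides
        // the inherited method covariantly to return its own type.
        System.out.println(appendReturnType(StringBuffer.class));   // StringBuffer
        System.out.println(appendReturnType(StringBuilder.class));  // StringBuilder
    }
}
```

This is why including return types in the comparison pushes so many otherwise identical methods into the bi-directional difference.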

An experimental Swing UI

I decided to build a quick prototype Swing UI to display the intersection and differences for ClassComparer. The Swing UI is nicer for quickly comparing the differences, as it uses a three pane list view which more or less emulates a Venn Diagram, just without the overlapping circles.

Here’s what the Swing UI looks like comparing the two classes above.

Excluding parameter and return types

Swing UI comparing StringBuffer and StringBuilder excluding parameter and return types

Including parameter types

Swing UI comparing StringBuffer and StringBuilder including parameter types only

Including parameter and return types

Swing UI comparing StringBuffer and StringBuilder including parameter and return types

Finding asymmetry in the Eclipse Collections API

To illustrate how a developer working on Eclipse Collections can use this utility to find potential work to complete in the space of improving symmetry, I will compare a primitive collection interface with its equivalent on the object side.

IntIterable vs. RichIterable

First, let’s compare the method signatures for IntIterable and RichIterable from a purely conceptual view. We can do this just by using the constructor with no parameter on ClassComparer.

Compare IntIterable and RichIterable interfaces

The output shows some interesting differences between the primitive collection parent type and the object collection parent type even at the conceptual level. Conceptual symmetry is the most important concern we have in Eclipse Collections. We clearly have plenty of work to do in this space.

The output comparing IntIterable and RichIterable interfaces

Looking at the same output in the experimental Swing UI shows one of the strengths of the text output that groups methods by their first letter: there is no need to scroll in the text output, whereas with the three-pane Swing UI not all methods fit on the screen, so scrolling is required.

The Swing UI comparing IntIterable and RichIterable

Look what happens to the text output when we add parameter types and return types to the methods in the comparison.

Part 1 of the output comparing IntIterable and RichIterable including parameter and return types
Part 2 of the output comparing IntIterable and RichIterable including parameter and return types

As we can see, there are quite a few methods overloaded in the Eclipse Collections RichIterable and IntIterable interfaces. Using the conceptual view without method parameter and return types hides this detail. This detail is important, so both views have value.

This is what the Swing UI looks like for this larger comparison.

The Swing UI comparing IntIterable and RichIterable including parameter and return types

The Swing UI is potentially more useful for this particular comparison, since scrolling is required in both cases.

Stay tuned

The code for ClassComparer will be available in the Eclipse Collections 11.0 release which is due out soon. It is located in the eclipse-collections-testutils module. If you want to see the code, it is now available in the Eclipse Collections repo linked below.

eclipse-collections/ClassComparer.java at master · eclipse/eclipse-collections

An older version of the code exists in the following blog, which is where I first started exploring how I could more easily understand the differences between rich APIs.

Graduating from Minimal to Rich Java APIs

I’ve used the ClassComparer utility to produce all of the examples in this blog. The Swing UI code, which is part of a separate class, is not yet included in Eclipse Collections. I need to discuss the potential inclusion of the Swing code with the other Eclipse Collections committers. It would be unusual for the Eclipse Collections eclipse-collections-testutils module to include any UI code, so I can’t make any promises on it arriving there. For now, here is a gist with the code for the experimental Swing UI.


I hope you found this useful. Thanks for reading!

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.

Other Java Resource articles you may like to explore


How to introspect and find conceptual symmetry between classes in Java was originally published in Javarevisited on Medium, where people are continuing the conversation by highlighting and responding to this story.



Welcome AsiaInfo to the Jakarta EE Working Group!

by Tanja Obradovic at November 18, 2021 05:06 PM

It is great to see that our Chinese membership is growing! HuNan AsiaInfo AnHui is a Chinese software company mainly engaged in basic software research and development, serving communication operators as well as artificial intelligence, 5G, Internet of Things, cloud computing, big data, and other industry applications. Its current products include the database product AntDB, the application middleware product FlyingServer, AI-RPA, and more. Its goal is to provide good basic software, connect relevant global norms and technologies, and serve the huge software demand market in China.

A warm welcome to HuNan AsiaInfo AnHui as a new member of the Jakarta EE Working Group!



Welcome SOUJava to the Jakarta EE Working Group!

by Tanja Obradovic at November 17, 2021 04:50 PM

We have another Java User Group as a Guest member! I am extremely happy to let you know that SOUJava has joined the Jakarta EE Working Group!

SOUJava does not need an introduction in the Java community! Their members are already heavily involved in Jakarta EE-related projects, and having them as a Guest Member of the Jakarta EE Working Group further shows their commitment to advancing enterprise Java development.

Please join me in welcoming SOUJava to the Jakarta EE Working Group! It is great to see the Jakarta EE Working Group and our community continuously growing.


by Tanja Obradovic at November 17, 2021 04:50 PM

Jakarta EE Community Update Fall 2021

by Tanja Obradovic at November 16, 2021 10:23 PM

The summer is usually a time when activities slow down a little, and Jakarta EE is no exception. Throughout the summer, the Jakarta EE Platform project put all its attention on Jakarta EE 10, while the TCK project started the massive task of refactoring the TCK. Other projects moved forward during the summer as well.

The fall, however, has been and still is very busy for Jakarta EE! The teams are continuously working on new releases of the individual specifications, while the Jakarta EE Platform team is planning the Platform and Profiles (yes, it will be plural for Profiles going forward!) releases for version 10. We are also working hard to bring you all the latest Jakarta EE news at our JakartaOne Livestream event happening on December 7th this year. Our Compatible Products list and membership base are growing, and the Jakarta EE boat is sailing strong and in the right direction!

Here are some of the highlights:

JakartaOne Livestream 2021: The Biggest Jakarta EE Event of the Year!

The biggest celebration of Jakarta EE is always our annual JakartaOne Livestream event! This year is no exception, so save the date, December 7th, 2021, and please register to let us know you’ll be there!

Our popular JakartaOne Livestream virtual conference series, as you know, runs in different languages as well! 

We had the following language-specific events:

 

The Jakarta EE Community and Working Group is Growing

It is great to see new members continuously joining the working group. Recently, we welcomed:

SOUJava needs no introduction in the Java community! Their members are already heavily involved in Jakarta EE-related projects, and having them as a Guest Member of the Jakarta EE Working Group further shows their commitment to advancing enterprise Java development.

It is great to see that our Chinese membership is growing! HuNan AsiaInfo AnHui, founded on September 25, 2019, is a Chinese software company mainly engaged in basic software research and development, serving communication operators as well as artificial intelligence, 5G, Internet of Things, cloud computing, big data, and other industry applications. Current products include the database AntDB, the application middleware FlyingServer, AI-RPA, and others. Its goal is to provide solid basic software, connect with relevant global standards and technologies, and serve the huge software demand market in China.

If you or your company are interested in the work of the Jakarta EE Working Group, please visit our membership pages and get in touch! 


Jakarta Tech Talks

We had quite a few new Jakarta Tech Talks! Please visit the Jakarta Tech Talk playlist on the Jakarta EE YouTube channel to view the recordings; here are a few:

If you have a topic or talk related to enterprise Java development you’d like to share with the community, please let us know. Fill out this form and we’ll help you set it up and promote it! 

Jakarta EE 10 is Taking Shape!

I am beyond excited to see all the progress related to Jakarta EE 10 in GitHub (label EE10). The creation/plan review for Jakarta EE Core Profile 10 was approved by the Jakarta EE Specification Committee. Jakarta EE Web Profile 10 and Jakarta EE Platform 10 issues are in discussion, and plan reviews are expected soon. Please join the discussion and the Jakarta EE Platform call to provide your input; refer to the Jakarta EE Specifications Calendar (public URL, iCal) for details on all technical calls.


Jakarta EE Individual Specifications and Project Teams 

All meetings of the various specification projects are published in the public Jakarta EE Specifications Calendar (public URL, iCal). Everyone interested is welcome to join the calls. Do note that the Jakarta EE Platform team is extremely busy and productive; its call is public and open to everyone who would like to contribute to technical discussions.

Select the one that you are interested in and help out. Each specification team is eager to welcome you! 

Want To Learn How To Use Jakarta EE?  

The Eclipse Cargo Tracker is a fantastic example of an end-to-end Jakarta EE application that showcases core Jakarta EE technologies. Thanks to Scaleforce and Jelastic for providing resources to deploy the demo application to the cloud.

Give the Cargo Tracker a try and consider contributing to the project at Cargo Tracker GitHub repository.


EclipseCon 2021 recordings! 

We had a great conference and presence at EclipseCon 2021! The Jakarta EE Community Day was fabulously organized by the community; many thanks to Reza Rahman, Werner Keil, and Petr Aubrecht. The presentations are available in the folder.  

The conference talk recordings are now available on YouTube in the EclipseCon 2021 Playlist, and in Swapcard. In Swapcard, go to Agenda to locate a talk (Tuesday, Wednesday, and Thursday only), and click on the title to watch the video. We are hoping to see you next year in person!

Adopt A Spec!

Attention all JUGs: get recognized for your involvement in the specification development!

We need help from the community! All JUGs out there, please choose the specification of your interest and adopt it. Here is the information about the Adopt-A-Spec program. 

 

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:

·  Social media: Twitter, Facebook, LinkedIn Group, LinkedIn Page

·  Mailing lists: jakarta.ee-community@eclipse.org, jakarta.ee-wg@eclipse.org, project mailing lists, Slack workspace

·  Calendars: Jakarta EE Community Calendar, Jakarta EE Specification Meetings Calendar 

·  Newsletters, blogs, and emails: Eclipse Community Newsletter, Jakarta EE blogs, Hashtag Jakarta EE

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, and Eclipse Foundation events and conferences

You can find the complete list of channels here.

To help shape the future of open source, cloud native Java, get involved in the Jakarta EE Working Group.

 

 


by Tanja Obradovic at November 16, 2021 10:23 PM

You are invited!!! JakartaOne Livestream 2021 the Jakarta EE event of the year!

by Tanja Obradovic at November 16, 2021 07:12 PM

The biggest celebration of Jakarta EE is always our annual JakartaOne Livestream event! This year is no exception, so save the date, December 7th, 2021, and please register to let us know you’ll be there!

This year again, it will be a twelve-hour virtual event offering a variety of talks and discussions related to Jakarta EE. Join us to find out about the future plans for the individual specifications as well as for the platform and the profiles. We have a great lineup of speakers for the main talks; to avoid missing anyone, here is the picture from the website. 

NOTE: please refer to the website for updates and possible session time changes 

We’ll have quite a few surprises for the Studio Jakarta EE sessions as well! So stay tuned for these. 

As we are having a long day, we’ll be having pizza! So make your own, make it as Jakarta EE as possible, and share the picture on social media. Do not forget to tag us (@JakartaOneCong and @JakartaEE) or show it on the day of the event! We want to celebrate all of your work related to Jakarta EE, pizza included!

 

 

 


by Tanja Obradovic at November 16, 2021 07:12 PM

OSPO Alliance Announces the OSS Good Governance Handbook

by Jacob Harris at November 09, 2021 01:00 PM

The OSPO Alliance today announced the publication of the first version of the Open Source Good Governance Handbook. Collaboratively developed within the framework of the OW2 Open Source Good Governance initiative, the handbook introduces a methodology for implementing professional management of open source software in an organisation. It addresses the need to use open source software properly and fairly, safeguarding organisations from technical, legal, and IP threats, and maximising the advantages of open source for them.

This methodology goes beyond compliance and liability. It is about building awareness in communities of end-users (open software developers themselves) and systems integrators, and developing mutually beneficial relationships within the FOSS ecosystem.

The Good Governance Handbook provides a comprehensive methodology based on a few key concepts. From a generic perspective, the approach is structured along five Goals (Usage, Trust, Culture, Engagement and Strategy), each supported by five Canonical Activities described in the handbook. At execution stage, the methodology requires adapting the Canonical Activities to the specific condition of each organisation by transferring them into Customized Activity Scorecards which, put together, constitute a roadmap for setting up and running an Open Source Program Office (OSPO) and a management system to monitor progress.

The OSS Good Governance Handbook is the first major publication of the OSPO Alliance to help companies and public institutions implement an Open Source Program Office (OSPO). The OSPO Alliance provides an organisation-neutral venue for contributors to share resources, experience, and skills. Supporters can contribute in several ways: through the new OSPO.Zone website, by contributing directly to the next steps of the Good Governance Initiative methodology, and by sharing their experiences (both successes and failures!) as case studies or in more informal short interviews. 

“This publication is the result of nearly two years of collaborative work with OW2 members and OSS practitioners. We are now moving to the implementation and customization phase with some key adopters of the methodology. We are excited to publish this effort with the OSPO Alliance,” said Cédric Thomas, CEO of OW2.

The OSPO Alliance is also pleased to welcome eight new supporters, including the Free Software Foundation Europe, SAP, and WiPro.

Quotations From Supporters About Their Participation In the OSPO Alliance: 

APELL
“APELL (Association Professionnelle Européenne du Logiciel Libre), Europe’s Open Source Business Association, and its national members, are eager to support the mission of the OSPO Alliance. Europe is an open source champion, and there is still massive potential to tap everywhere from large and small software vendors to big open source users, both in the private and public sectors. We see that this partnership will be very important to strengthening not only Europe's but the world's capacity to use and contribute to open source at an even greater scale. Increased understanding of open source governance and the establishment of Open Source Programme Offices hold massive potential to enable open innovation at large in society,” said Timo Väliharju, Founder and board member of APELL.

CNLL 
“The CNLL, the association representing the free software sector in France, has long supported the creation of OSPOs, or "free software missions", in France, at the level of the Government but also at the level of major local authorities, as well as at the European level. The mission of these OSPOs in France should be to make concrete the notion of "encouraging the use of free software" within the administration, as provided for in Article 16 of the Digital Republic Act (or Lemaire Act) of 2016. These OSPOs should network with each other and with organizations representing the industry in order to fully realize the economic and strategic benefits of OSS in Europe,” said Stéfane Fermigier, co-president of CNLL.

OSBA
“The OSB Alliance – Bundesverband für digitale Souveränität e.V. represents about 170 companies from the Open Source economy. We are committed to sustainably anchoring the central importance of open source software and open standards for a digitally sovereign society in the public consciousness.

“For the OSB Alliance establishing an OSPO is a great strategic approach for public or private organisations to professionalize their engagement with the Open Source Ecosystem. So we see the OSPO Alliance as an exciting new European initiative to support organisations in the public, private or academia sector with information and best practices on OSPOs. Such support can especially accelerate the process of setting up important OSPOs as planned e.g. at EU level or the planned Zendis in Germany,” declared Peter Ganten, Chairman of the OSBA-Board.

FSFE     
“The Free Software Foundation Europe (FSFE) is a charity that empowers users to control technology. Since 2001 the FSFE has been enhancing users' rights by abolishing barriers for software freedom. For 20 years FSFE has been helping individuals and organisations to understand how Free Software contributes to freedom, transparency, and self-determination.
 
Joining OSPO.Zone is another brick in the wall to ensure that more Free Software is developed and used - be it in the private or governmental sector. Free Software gives everybody the right to use, study, share and improve software. To make sure that these rights are understood and used an OSPO can make a difference,” said Matthias Kirschner, President Free Software Foundation Europe

About the OSPO Alliance: 
The OSPO Alliance was established by leading European open source non-profit organisations, including OW2, Eclipse Foundation, OpenForum Europe, and Foundation for Public Code, and experienced practitioners with the aim to grow awareness for open source in Europe and to globally promote structured, responsible and professional management of open source by companies and administrations. OSPO.Zone is the new website for delivering the resources and collaboration envisaged by the OSPO Alliance. Learn more at https://ospo.zone.

Media contacts:
Schwartz Public Relations
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
+49 (89) 211 871 – 43 / -35 / -70
 


by Jacob Harris at November 09, 2021 01:00 PM

Register for TheiaCon 2021 now!

by Jonas Helming, Maximilian Koegel and Philip Langer at November 09, 2021 08:42 AM

EclipseCon 2021 has just closed its doors and we are happy to already announce the next highlight of the year, TheiaCon...

The post Register for TheiaCon 2021 now! appeared first on EclipseSource.


by Jonas Helming, Maximilian Koegel and Philip Langer at November 09, 2021 08:42 AM

How I generate the JavaFlight Recorder Docu

by Tom Schindl at November 03, 2021 11:05 PM

From a tweet by Gunnar Morling

I learned today that my work on JFR-Doc is used for JfrUnit. In a follow-up, Gunnar asked me how those JSON files are generated, and I promised to write up a blog post on what I hacked together in an evening to get this going.

Get the input

The first and most important thing is to get all JFR events (the following commands are executed, e.g., in your Java 17 install dir):

./java -XX:StartFlightRecording:filename=/tmp/out-bin.jfr \
-version # Dump JFR Data
./jfr metadata /tmp/out-bin.jfr > /tmp/openjdk-17.jfr

The final file holds content like this

class boolean {
}
class byte {
}
class char {
}
class double {
}
class float {
}
class int {
}
class long {
}
class short {
}

@Name("java.lang.Class")
@Label("Java Class")
class Class {
  @Label("Class Loader")
  ClassLoader classLoader;

  @Label("Name")
  String name;

  @Label("Package")
  Package package;

  @Label("Access Modifiers")
  int modifiers;

  @Label("Hidden")
  boolean hidden;
}
...

These look like Java classes, so one strategy could be to just compile them and use reflection to extract the meta information, but I went another route.
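As an aside, for the JDK you are currently running on, much of the same event metadata is also exposed at runtime through the standard jdk.jfr API, with no compiling or parsing needed. A minimal sketch (JDK 11+) of what that runtime route could look like:

```java
import jdk.jfr.EventType;
import jdk.jfr.FlightRecorder;
import jdk.jfr.ValueDescriptor;

public class ListJfrMetadata {
    public static void main(String[] args) {
        // Enumerate all JFR event types registered in the running JDK
        for (EventType type : FlightRecorder.getFlightRecorder().getEventTypes()) {
            System.out.println(type.getName() + " (" + type.getLabel() + ")");
            // Each event type lists its attributes as ValueDescriptors
            for (ValueDescriptor field : type.getFields()) {
                System.out.println("  " + field.getTypeName() + " " + field.getName());
            }
        }
    }
}
```

This only covers the one JDK you are running, though; dumping and parsing the metadata files lets a single generator process several JDK versions side by side.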

Parsing the .jfr-File

Handcrafting a parser is certainly not the way to go. I needed something that could provide me with a fairly simple logical AST. There are BNF definitions for Java, but I wanted something much simpler, so I fired up my Eclipse IDE, created an Xtext project using the wizards, and replaced the content of the .xtext file with

grammar at.bestsolution.jfr.JFRMeta with org.eclipse.xtext.common.Terminals

generate jFRMeta "http://www.bestsolution.at/jfr/JFRMeta"

Model:
  classes+=Clazz*;

Clazz:
  annotations+=Annotation*
  'class' name=ID ( 'extends' super=QualifiedName )? '{'
    attributes += Attribute*
  '}';

Attribute:
  annotations+=Annotation*
  type=[Clazz|ID] array?='[]'? name=ID ';'
;

Annotation:
  '@' type=[Clazz|ID] ('(' (values+=AnnotationValue |
   ('{' values+=AnnotationValue
   (',' values += AnnotationValue)* '}')) ')')?
;

AnnotationValue:
  valueString=STRING | valueBoolean=Boolean | valueNum=INT
;

enum Boolean:
  TRUE="true" | FALSE="false"
;

QualifiedName:
  ID ('.' ID)*;

That’s all that is required, because the .jfr file is extremely simple, so we don’t need a more complex definition.

How to convert

Well, although Xtext is primarily used to develop DSL editors for the Eclipse IDE, one can run the generated parser in plain old Java. So all that is needed now is a generator that parses the .jfr file(s) and generates different output from it (HTML, JSON, …), and although Java now has multiline strings, Xtend is the much better choice for writing a “code” generator.

package at.bestsolution.jfr

import org.eclipse.xtext.resource.XtextResourceSet
import org.eclipse.xtext.resource.XtextResource
import java.util.ArrayList
import org.eclipse.emf.common.util.URI
import java.nio.file.Files
import java.nio.file.Paths
import at.bestsolution.jfr.jFRMeta.Model
import java.nio.file.StandardOpenOption
import at.bestsolution.jfr.jFRMeta.Clazz

import static extension at.bestsolution.jfr.GenUtil.*
import at.bestsolution.jfr.jFRMeta.Attribute

class JSONGen {
     def static void main(String[] args) {
         val versions = createVersionList(Integer.parseInt(args.get(0)))
         val injector = new JFRMetaStandaloneSetup().createInjectorAndDoEMFRegistration();
         val resourceSet = injector.getInstance(XtextResourceSet);
         resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);

         val models = new ArrayList
         for( v : versions ) {
             val resource = resourceSet.getResource(
                 URI.createURI("file:/Users/tomschindl/git/jfr-doc/openjdk-"+v+".jfr"), true);
             val model = resource.getContents().head as Model;
             models.add(model)
         }

         for( pair : models.indexed ) {
             val model = pair.value
             var version = versions.get(pair.key)
             val preModel = pair.key == 0 ? null : models.get(pair.key - 1)
             Files.writeString(Paths.get("/Users/tomschindl/git/jfr-doc/openjdk-"+version+".json"),model.generate(preModel,version), StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.CREATE)
         }
     }

     def static generate(Model model, Model prevModel, String ver) '''
         {
             "version": "«ver»",
             "distribution": "openjdk",
             "events": [
                 «val evts = model.classes.filter[c|c.super == "jdk.jfr.Event"]»
                 «FOR e : evts»
                     «e.generateEvent»«IF e !== evts.last»,«ENDIF»
                 «ENDFOR»
             ],
             "types": [
                 «val types = model.classes.filter[c|c.super === null]»
                 «FOR t : types»
                     «t.generateType»«IF t !== types.last»,«ENDIF»
                 «ENDFOR»
             ]
         }
     '''

     def static generateEvent(Clazz clazz) '''
         {
             "name": "«clazz.name»",
             "description": "«clazz.description»",
             "label": "«clazz.label»",
             "categories": [
                 «val cats = clazz.categories»
                 «FOR cat : cats»
                     "«cat»"«IF cat !== cats.last»,«ENDIF»
                 «ENDFOR»
             ],
             "attributes": [
                 «FOR a : clazz.attributes»
                     «a.generateAttribute»«IF a !== clazz.attributes.last»,«ENDIF»
                 «ENDFOR»
             ]
         }
     '''

     def static generateType(Clazz clazz) '''
         {
             "name": "«clazz.name»",
             "attributes": [
                 «FOR a : clazz.attributes»
                     «a.generateAttribute»«IF a !== clazz.attributes.last»,«ENDIF»
                 «ENDFOR»
             ]
         }
     '''

     def static generateAttribute(Attribute a) '''
         {
             "name": "«a.name»",
             "type": "«a.type.name»",
             "contentType": "«a.contentType»",
             "description": "«a.description»"
         }
     '''
}

All sources are available at https://github.com/BestSolution-at/jfr-doc. If you look at this code, keep in mind that it was hacked together in an evening.


by Tom Schindl at November 03, 2021 11:05 PM

Support for OAuth2 client credentials flow for HTTP connections

November 03, 2021 12:00 AM

The upcoming release of Eclipse Ditto version 2.2.0 supports HTTP connections that authenticate their requests via OAuth2 client credentials flow as described in section 4.4 of RFC-6749.

Detailed information can be found at Connectivity API > HTTP 1.1 protocol binding.

This blog post shows an example of publishing a twin event to an HTTP endpoint via the OAuth2 client credentials flow. For simplicity, we will use webhook.site for both the token endpoint and the event publishing destination. Feel free to replace them with real OAuth and HTTP servers.

Prerequisites

This example requires 2 webhooks. We will use

  • https://webhook.site/785e80cd-e6e6-452a-be97-a59c53edb4d9 for access token requests, and
  • https://webhook.site/6148b899-736f-47e6-9382-90b1d721630e for event publishing.

Replace the webhook URIs with your own.

Configure the token endpoint

Configure the token webhook to return a valid access token response. Here is an example for a token expiring at 00:00 on 1 January 3000. The field expires_in is an arbitrarily big number that does not reflect the actual expiration time of the access token.

  • Status code: 200
  • Content type: application/json
  • Response body:
    {
      "access_token": "ewogICJhbGciOiAiUlMyNTYiLAogICJ0eXAiOiAiSldUIgp9.ewogICJhdWQiOiBbXSwKICAiY2xpZW50X2lkIjogIm15LWNsaWVudC1pZCIsCiAgImV4cCI6IDMyNTAzNjgwMDAwLAogICJleHQiOiB7fSwKICAiaWF0IjogMCwKICAiaXNzIjogImh0dHBzOi8vbG9jYWxob3N0LyIsCiAgImp0aSI6ICI3ODVlODBjZC1lNmU2LTQ1MmEtYmU5Ny1hNTljNTNlZGI0ZDkiLAogICJuYmYiOiAwLAogICJzY3AiOiBbCiAgICAibXktc2NvcGUiCiAgXSwKICAic3ViIjogIm15LXN1YmplY3QiCn0.QUJD",
      "expires_in": 1048576,
      "scope": "my-scope",
      "token_type": "bearer"
    }
    

The access token has the form <headers>.<body>.<signature>, where <headers> and <body> are the base64 encodings of the headers and the body in JSON format, and <signature> is the base64-encoded signature computed with the issuer’s key pair. Since the token webhook is not a real OAuth2 server, the signature in the example is a placeholder. The unencoded headers and body are as follows.

Headers

{
  "alg": "RS256",
  "typ": "JWT"
}

Body

{
  "aud": [],
  "client_id": "my-client-id",
  "exp": 32503680000,
  "ext": {},
  "iat": 0,
  "iss": "https://localhost/",
  "jti": "785e80cd-e6e6-452a-be97-a59c53edb4d9",
  "nbf": 0,
  "scp": [
    "my-scope"
  ],
  "sub": "my-subject"
}
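Because the parts are just base64-encoded JSON, the claims can be inspected with a few lines of plain Java. The snippet below builds a stand-in token with the same <headers>.<body>.<signature> shape (the payload is abbreviated, not the full token from above) and decodes it; any JWT-shaped string works the same way:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeJwt {
    public static void main(String[] args) {
        // Stand-in token: base64url(headers) + "." + base64url(body) + "." + placeholder signature
        String token = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"alg\":\"RS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8))
            + "."
            + Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"sub\":\"my-subject\",\"scp\":[\"my-scope\"]}".getBytes(StandardCharsets.UTF_8))
            + ".QUJD";

        String[] parts = token.split("\\.");
        // JWTs use the URL-safe base64 alphabet, hence the URL decoder
        String headers = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        String body = new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        System.out.println(headers);
        System.out.println(body);
    }
}
```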

Create the connection

Create a connection publishing twin events to the event publishing webhook using OAuth2 credentials. The tokenEndpoint field is set to the access token webhook.

{
  "id": "http_oauth2",
  "name": "http_oauth2",
  "connectionType": "http-push",
  "connectionStatus": "open",
  "uri": "https://webhook.site:443",
  "targets": [
    {
      "address": "POST:/6148b899-736f-47e6-9382-90b1d721630e",
      "topics": ["_/_/things/twin/events"],
      "authorizationContext": ["integration:ditto"]
    }
  ],
  "credentials": {
    "type": "oauth-client-credentials",
    "tokenEndpoint": "https://webhook.site/785e80cd-e6e6-452a-be97-a59c53edb4d9",
    "clientId": "my-client-id",
    "clientSecret": "my-client-secret",
    "requestedScopes": "my-scope"
  }
}

Generate a thing-created event

Create a thing granting read access to the connection’s subject. The thing-created event will be distributed to the connection for publishing.

{
  "_policy": {
    "entries": {
      "DEFAULT": {
        "subjects": {
          "{{ request:subjectId }}": {
            "type": "the creator"
          },
          "integration:ditto": {
            "type": "the connection"
          }
        },
        "resources": {
          "policy:/": {
            "grant": ["READ", "WRITE"],
            "revoke": []
          },
          "thing:/": {
            "grant": ["READ", "WRITE"],
            "revoke": []
          }
        }
      }
    }
  }
}

HTTP requests made by the HTTP connection

Before the HTTP connection publishes the thing-created event, it makes an access token request against the token endpoint to obtain a bearer token.

POST /785e80cd-e6e6-452a-be97-a59c53edb4d9 HTTP/1.1
Host: webhook.site
Accept: application/json
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=my-client-id
&client_secret=my-client-secret
&scope=my-scope
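For experimenting outside of Ditto, a request of this shape can be built with the JDK’s own HTTP client (java.net.http, JDK 11+). This is only an illustrative sketch; the URI is the placeholder webhook from above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenRequest {
    public static void main(String[] args) {
        // Form-encoded body exactly as in the request above
        String form = "grant_type=client_credentials"
            + "&client_id=my-client-id"
            + "&client_secret=my-client-secret"
            + "&scope=my-scope";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://webhook.site/785e80cd-e6e6-452a-be97-a59c53edb4d9"))
            .header("Accept", "application/json")
            .header("Content-Type", "application/x-www-form-urlencoded")
            .POST(HttpRequest.BodyPublishers.ofString(form))
            .build();

        System.out.println(request.method() + " " + request.uri());
        // Actually sending it requires network access:
        // HttpResponse<String> resp = HttpClient.newHttpClient()
        //     .send(request, HttpResponse.BodyHandlers.ofString());
    }
}
```

A real OAuth2 server answers such a request with a JSON access token response.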

The request should appear at the access token webhook. The webhook should return the configured access token response.

{
  "access_token": "ewogICJhbGciOiAiUlMyNTYiLAogICJ0eXAiOiAiSldUIgp9.ewogICJhdWQiOiBbXSwKICAiY2xpZW50X2lkIjogIm15LWNsaWVudC1pZCIsCiAgImV4cCI6IDMyNTAzNjgwMDAwLAogICJleHQiOiB7fSwKICAiaWF0IjogMCwKICAiaXNzIjogImh0dHBzOi8vbG9jYWxob3N0LyIsCiAgImp0aSI6ICI3ODVlODBjZC1lNmU2LTQ1MmEtYmU5Ny1hNTljNTNlZGI0ZDkiLAogICJuYmYiOiAwLAogICJzY3AiOiBbCiAgICAibXktc2NvcGUiCiAgXSwKICAic3ViIjogIm15LXN1YmplY3QiCn0.QUJD",
  "expires_in": 1048576,
  "scope": "my-scope",
  "token_type": "bearer"
}

The HTTP connection will cache the access token and use it to authenticate itself at the event publishing webhook for each thing event, including the first thing-created event.

POST /6148b899-736f-47e6-9382-90b1d721630e HTTP/1.1
Host: webhook.site
Content-Type: application/vnd.eclipse.ditto+json
Authorization: Bearer ewogICJhbGciOiAiUlMyNTYiLAogICJ0eXAiOiAiSldUIgp9.ewogICJhdWQiOiBbXSwKICAiY2xpZW50X2lkIjogIm15LWNsaWVudC1pZCIsCiAgImV4cCI6IDMyNTAzNjgwMDAwLAogICJleHQiOiB7fSwKICAiaWF0IjogMCwKICAiaXNzIjogImh0dHBzOi8vbG9jYWxob3N0LyIsCiAgImp0aSI6ICI3ODVlODBjZC1lNmU2LTQ1MmEtYmU5Ny1hNTljNTNlZGI0ZDkiLAogICJuYmYiOiAwLAogICJzY3AiOiBbCiAgICAibXktc2NvcGUiCiAgXSwKICAic3ViIjogIm15LXN1YmplY3QiCn0.QUJD

{
  "topic": "<namespace>/<name>/things/twin/events/created",
  "headers": {},
  "path": "/",
  "value": {
    "policyId": "<policy-id>"
  },
  "revision": 1
}

The HTTP connection will obtain a new token from the access token webhook when the previous token is about to expire.

Please get in touch if you have feedback or questions regarding this new functionality.

Ditto


The Eclipse Ditto team


November 03, 2021 12:00 AM

The Eclipse IDE Turns 20

by Shanda Giacomoni at November 02, 2021 11:00 AM

One of the world’s most popular open source desktop development environments continues to evolve to support a new generation of developers 

BRUSSELS – November 2, 2021 – The Eclipse Foundation, one of the world’s largest open source software foundations, together with the Eclipse IDE Working Group, today announced the 20th anniversary of the Eclipse Platform suite of products, related technologies, and ecosystem. Throughout that time, the Eclipse Platform has been instrumental in driving the adoption of open source, as well as serving as the core technology for building some of the most advanced applications in the world. From the Eclipse Java development tools, to NASA’s Mars Rover mission planning software, to massive semiconductors, and myriad other technologies that power our lives, the Eclipse Platform continues to support developers building new applications. 

“As the project that gave our organization its name, it is with great pride that I’ve watched this platform evolve to meet the challenges of today,” said Mike Milinkovich, executive director of the Eclipse Foundation. “The wonderful community that has driven this evolution, as well as our new working group, continue to ensure the Eclipse IDE platform will meet the needs of developers for another 20 years.”

20 years after the Eclipse family of projects was first launched, the Eclipse Platform and ecosystem continue to be relied upon worldwide by developers and businesses to create commercially viable products in a variety of industry sectors. With millions of users, tens of millions of downloads annually, and billions of dollars in shared investment, the Eclipse IDE is one of the world’s most popular desktop development environments.

The Eclipse Foundation recently formed the Eclipse IDE Working Group to support this global community. This new working group provides governance, guidance, and funding for the communities that support the delivery and maintenance of the Eclipse IDE products. Founded by multiple participants, including Bosch, EclipseSource, IBM, Kichwa Coders, Renesas, SAP, VMware, and Yatta Solutions, this governance structure will enable broad collaboration to ensure the Eclipse IDE meets the latest market requirements. All consumers and adopters of Eclipse Platform technologies are highly encouraged to join and participate in this new working group.

The goal of the Eclipse IDE Working Group is to ensure the continued success, vibrancy, quality, and sustainability of the Eclipse Platform, desktop IDE, and underlying technologies. This effort includes support for planning and delivery processes, as well as the related delivery technology. If the Eclipse IDE is important to your organization’s development efforts, the new working group represents an opportunity to help shape the future of this critical platform. For more information, visit ide-wg.eclipse.org.

To learn more about how to get involved with the Eclipse IDE Working Group, visit the Eclipse Foundation membership page or see the working group’s Charter and Participation Agreement. Working group members benefit from a broad range of services, including exclusive access to detailed industry research findings, marketing assistance, and expert open source governance. Sponsorship is also welcomed as a way for companies that want to support the Eclipse IDE without joining the working group. 

Quotes from Strategic Members

IBM

"20 years ago, when IBM contributed our Java development tools to a consortium of organizations which later on became the Eclipse Foundation, we hoped to create an open process that would excite developers to participate in the shaping and development of the tools they would use every day to create applications,” said Todd Moore, IBM’s VP of Open Technology. “At the time we hoped this would ignite the industry around Java, and the Eclipse Foundation did just that. Here's to 20 more successful years as Java moves into the cloud native world.”

Renesas Electronics

“Renesas’ embedded tools environment is built through the utilization of the groundbreaking Eclipse platform and the fully functional C/C++ Developers Toolkit (CDT),” said Akyiya Fukui, vice president, IoT and Infrastructure Business Unit, Software Development Division. “The open tools environment, the diverse partners and third party tools integration have quickly become the de facto IDE platform for embedded software for customers around the world. Renesas will continue to support the platform and work closely with the Foundation to ensure the projects build on this success.” 

Robert Bosch GmbH

"We have been using the Eclipse platform for designing and developing solutions for Automotive and IoT use cases. One of the key success factors in the software industry is a widely adopted open source platform, which can be used to create not only developer ecosystems but also commercial solutions. Thanks to contributions from many developers in the Eclipse community, tools and standards are created on an open platform that many companies benefit from. As an Eclipse IDE Strategic group member, we are excited to contribute to the platform and help to keep it powerful for many more years to come," said Vadiraj Krishnamurthy, Head Technology & Innovation, Robert Bosch Engineering and Business Solutions.

SAP

“SAP has delivered development tools for SAP NetWeaver based on the Eclipse platform from its very beginning 20 years ago,” said Karl Kessler, vice president of Product Management ABAP Platform, SAP. “Developers have used the SAP tools to build successful applications for the SAP customer base and ecosystem that run more than 70% of the worldwide business transactions around the globe. SAP continues to invest into its tools portfolio, in particular the ABAP development tools (ADT) for Eclipse. ADT helps to build Cloud applications powered by SAP HANA that run on SAP Business Technology Platform and SAP S/4HANA. SAP Partners have used the tool plugins from SAP to deliver tailored tool extensions. SAP will continue to contribute to the Eclipse Foundation to enrich the value of the Eclipse Platform and community.”

Vector Informatik GmbH

“With PREEvision, we provide an Eclipse-based tool for the automotive industry to develop the electronic systems (E/E) of modern vehicles. The Eclipse IDE provides a solid and versatile basis for us to meet the demands of our customers for a model-based development platform and central E/E data backbone,” said Georg Zimmermann, director for PREEvision. “We started our journey with Eclipse over 15 years ago, and the fact that Vector still launches new Eclipse-based products like the DaVinci Developer Adaptive speaks for itself. As an Eclipse IDE Working Group member, we are excited to contribute to the platform and help to keep it powerful for the next 20 years.”

Yatta Solutions GmbH 

“To improve development, we need to enable developers. This includes providing them with the best possible tooling,” said Dr. Leif Geiger, product manager, Software Engineer and Co-Founder, Yatta Solutions. “Since the beginning, Eclipse has played a key role in my career. An IBM Eclipse innovation grant funded my research. And Yatta’s first product, UML Lab, is based on Eclipse. Over time, we have increased our commitment towards Eclipse: I became project lead of the Eclipse Marketplace Client open-source project, and I was recently elected Package Maintainer of the Eclipse IDE for Java Developers.”

Dr. Geiger continued, “I think that now more than ever, we need a vendor-neutral, open-source IDE for businesses and developers alike. By keeping this precious open ecosystem alive and kicking, we can remain independent. But to ensure the best possible future for Eclipse, the IDE needs to become more attractive for businesses and developers alike, and win developers as users and committers again. In the end, the Eclipse IDE is about collaboration among developers and across corporate borders. Let’s keep it that way. Here’s to another 20 years of the Eclipse IDE!”

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.

Third-party trademarks mentioned are the property of their respective owners.

###

Media contacts 

Schwartz Public Relations for the Eclipse Foundation, AISBL

Julia Rauch / Sophie Dechansreiter / Tobias Weiß

Sendlinger Straße 42A

80331 Munich

EclipseFoundation@schwartzpr.de

+49 (89) 211 871 – 43 / -35 / -70

 

Nichols Communications for the Eclipse Foundation, AISBL

Jay Nichols

jay@nicholscomm.com

+1 408-772-1551


by Shanda Giacomoni at November 02, 2021 11:00 AM

Moving Embedded Software Development Tools into the Cloud

by Christopher Guindon (webdev@eclipse-foundation.org) at November 02, 2021 12:00 AM

In this article, we introduce the embedded special interest group (embedded SIG), hosted by the Eclipse Foundation. The embedded SIG is an open collaboration of embedded vendors and service providers, with the goal of strengthening the open source ecosystem for building web- and cloud-based tools used for embedded development. The SIG provides the structure for participants to coordinate efforts, share insights and collaborate on technical initiatives and standards.

by Christopher Guindon (webdev@eclipse-foundation.org) at November 02, 2021 12:00 AM

The unparalleled design of Eclipse Collections

by Donald Raab at November 01, 2021 04:49 AM

Exploring the evolutionary design of parallelism in Eclipse Collections

Photo by Jason Yuen on Unsplash

Plenty of data, memory, and cores

In 2006, I was working on a proprietary distributed Java caching architecture in a large financial services company. We cached and indexed all of the firm’s trades and positions in a complex object graph in memory so we could slice and dice through it at high speed. By this time, I had already built and was extensively leveraging the serial eager API of an internal library named Caramel. This library would eventually become what we now know as Eclipse Collections.

In the area I was working in 2006, we had many caches that were tens of gigabytes in size, networked together. There were many large collections (millions of objects) that we would iterate through in memory to perform various pipelined calculations which most often involved grouping, netting and aggregating various balances.

Around this time, I would discover the Fork/Join framework from Doug Lea in the original EDU.oswego concurrent utility library. The Fork/Join framework was initially left out of java.util.concurrent when Java 5 was released. It would later be included in Java 7. I learned about the Fork/Join framework reading about it in “Concurrent Programming in Java” (CPJ) and this paper on “A Java Fork/Join Framework” by Doug Lea.

I went looking for my copy of CPJ on my bookshelf as I was writing this blog and when I couldn’t find it, I bought myself a brand new copy. This book is a must read for all aspiring senior Java developers.

Eager Serial, Parallel and Fork/Join

The initial methods in the Caramel library were all serial and eager. I occasionally would add fused methods to combine a couple operations together to increase performance. For example, the method collectIf is a combination of select (inclusive filter) and collect (map or transform). It would be a few years before we would add lazy methods to the API in Eclipse Collections.
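For example, given a small list, collectIf fuses the two passes into one. A minimal sketch using the Eclipse Collections API:

```java
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

public class CollectIfExample
{
    public static void main(String[] args)
    {
        MutableList<Integer> numbers = Lists.mutable.of(1, 2, 3, 4, 5);

        // Fused: filter and transform in a single pass over the list
        MutableList<String> fused =
                numbers.collectIf(n -> n % 2 == 0, String::valueOf);

        // Equivalent two-step version: select, then collect (two passes)
        MutableList<String> twoStep =
                numbers.select(n -> n % 2 == 0).collect(String::valueOf);

        System.out.println(fused.equals(twoStep));
    }
}
```

Both produce the same result; the fused form simply avoids allocating the intermediate list.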

For more information and examples of fused methods, please check out the following blog.

From Eager to Fused to Lazy

The initial implementation of the parallel API in Caramel was eager. There is a utility class that is still available in Eclipse Collections today that provides this eager parallel functionality. The class is named ParallelIterate. The class initially used the Fork/Join framework from the EDU.oswego concurrent library. It would later be converted to use Executor after the java.util.concurrent package was introduced in Java 5 without the Fork/Join framework. When the Fork/Join framework was added in Java 7, a new utility class named FJIterate was added to GS Collections. FJIterate is included in its own module in Eclipse Collections and is distributed in a separate jar file. FJIterate has existed since mid-2013, which was two years after Java 7 was released (July 2011). It will require an extra Maven dependency if you want to use it.

<dependency>
<groupId>org.eclipse.collections</groupId>
<artifactId>eclipse-collections-forkjoin</artifactId>
<version>${eclipse-collections.version}</version>
</dependency>

The methods available on ParallelIterate and FJIterate are almost the same. The implementations are fairly similar, with the primary difference being that ParallelIterate uses Executor and FJIterate uses the Fork/Join framework. Using Executor makes ParallelIterate more suitable for some tasks. For raw, in-memory compute performance, FJIterate is sometimes the better option.

Both ParallelIterate and FJIterate were built for the same reason. We wanted raw parallel performance for large in memory data sets that were backed by arrays. Both classes will parallelize execution for a fixed set of algorithms for any Iterable type. The primary workhorse for both ParallelIterate and FJIterate is a parallel forEach. All of the other parallel algorithms are implemented using parallel forEach. There are twelve overloaded forms of forEach on ParallelIterate and FJIterate. The overloads take different parameters to give as much control as possible to developers. The design rationale for this is simple. We believed that if someone was able to prove that they would benefit from parallelism, then they would be in the best position to decide how to tune various parameters to squeeze as much performance as possible out of the hardware for parallel use cases they had.
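A minimal sketch of the basic parallel forEach (the simplest overload lets the library choose the batch size and thread pool; the richer overloads described above expose those parameters to the developer):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.list.Interval;
import org.eclipse.collections.impl.parallel.ParallelIterate;

public class ParallelForEachExample
{
    public static void main(String[] args)
    {
        MutableList<Integer> numbers = Interval.oneTo(1_000_000).toList();
        ConcurrentLinkedQueue<Integer> evens = new ConcurrentLinkedQueue<>();

        // The procedure must be thread-safe, since batches run concurrently.
        // This overload lets the library pick the batch size and thread pool.
        ParallelIterate.forEach(numbers, each ->
        {
            if (each % 2 == 0)
            {
                evens.add(each);
            }
        });

        System.out.println(evens.size());
    }
}
```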

ParallelIterate

Here are the methods available on ParallelIterate.

ParallelIterate (Eclipse Collections - 10.4.0)

FJIterate

Here are the methods available on FJIterate

FJIterate (Eclipse Collections - 10.4.0)

ParallelIterate vs. FJIterate

The symmetric difference and intersection of the methods on the two utility classes are as follows.

Symmetric Difference and Intersection of ParallelIterate and FJIterate methods

The biggest difference is that several sumBy methods were added to ParallelIterate but not ported over to FJIterate.

The Futility of Utility

The downside of utility classes like ParallelIterate and FJIterate is that methods can only return a single type. You only get one choice, so if you want to return different implementations from a method based on the input parameter type, you have to choose a common parent type. Methods on ParallelIterate and FJIterate take any java.lang.Iterable as an input parameter, and methods that return collections (e.g. select, reject, collect) unfortunately have to return java.util.Collection. Developers can control the return type by using overloaded methods with the same name which take a target collection as a parameter. If the Iterable used as the source parameter implements the BatchIterable interface, it will be optimized for parallelism by both ParallelIterate and FJIterate. If the source implements neither BatchIterable nor java.util.List, both utility classes will default to copying the elements to an array before parallelizing.

Here are some examples of ParallelIterate using the basic form of select (inclusive filter) and the overloaded form of select that takes a target collection.

https://medium.com/media/33bcec800b6e1e1c018b3c66ebddb4cd/href
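For readers who can't follow the embedded link, here is a hypothetical sketch of the two forms (as I recall the signature, the target-collection overload also takes an allowReorderedResult flag):

```java
import java.util.Collection;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.api.set.MutableSet;
import org.eclipse.collections.impl.factory.Sets;
import org.eclipse.collections.impl.list.Interval;
import org.eclipse.collections.impl.parallel.ParallelIterate;

public class ParallelSelectExample
{
    public static void main(String[] args)
    {
        MutableList<Integer> source = Interval.oneTo(100).toList();

        // Basic form: the return type is stuck at java.util.Collection
        Collection<Integer> evens =
                ParallelIterate.select(source, i -> i % 2 == 0);

        // Target-collection overload: the caller controls the concrete type
        MutableSet<Integer> evenSet = ParallelIterate.select(
                source, i -> i % 2 == 0, Sets.mutable.<Integer>empty(), true);

        System.out.println(evens.size() == 50 && evenSet.size() == 50);
    }
}
```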

The ParallelIterate select method returns a type similar to the Iterable type that is passed in. For a List, a List implementation will be returned. For a Set, a Set implementation is returned. Unfortunately, since there can only be one signature for this method, the return type has to be the most abstract type, which is Collection. Collection as an interface is not terribly useful. If I ever get around to refactoring the utility, I will return MutableCollection or RichIterable instead of Collection. This will make the utility methods a lot more useful, and maybe just slightly less futile.

Lazy asParallel

We took a different approach when it came to designing and implementing the lazy parallel API in Eclipse Collections. We decided we would require developers to provide an Executor and a batch size instead of offering up multiple combinations of knobs and switches to configure via overloads as we did with our parallel utility classes. Based on our experience with supporting eager parallelism for seven years, these two parameters seemed to be the most important configuration options that developers wanted control over. This makes the lazy parallel API in Eclipse Collections slightly harder to use than parallelStream in the JDK. This is by design. It should be harder for a developer to write parallel code, because they first need to determine if using a parallel algorithm will speed up or slow down an operation. If a developer understands exactly what they are doing, because they ran benchmarks and have proven parallelism will indeed help the performance of a specific use case, then they will be in the best position to determine how to configure the parallelism for optimal performance.

The other more important difference between the eager and lazy parallel API, is that the algorithms are available through data structures themselves for lazy, instead of being located in a utility class like ParallelIterate.

Here’s an example that takes a million integers and filters all of the prime values into a Bag. I show a fluent approach first, and then break the fluent calls into their intermediate results to show the intermediate return types.

https://medium.com/media/5d77ae7f378a01314f099feaf722bb7b/href
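A sketch of that example, fluent first and then broken into intermediate steps (the isPrime helper here is hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.eclipse.collections.api.bag.MutableBag;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.api.list.ParallelListIterable;
import org.eclipse.collections.impl.list.Interval;

public class AsParallelExample
{
    // Hypothetical helper; any Predicate would do here
    static boolean isPrime(int n)
    {
        if (n < 2)
        {
            return false;
        }
        for (int i = 2; (long) i * i <= n; i++)
        {
            if (n % i == 0)
            {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args)
    {
        ExecutorService executor =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        int batchSize = 10_000;
        MutableList<Integer> integers = Interval.oneTo(1_000_000).toList();

        // Fluent: nothing runs until the terminal toBag() call
        MutableBag<Integer> primes = integers.asParallel(executor, batchSize)
                .select(AsParallelExample::isPrime)
                .toBag();

        // The same pipeline, broken out to show the intermediate lazy type
        ParallelListIterable<Integer> parallel = integers.asParallel(executor, batchSize);
        ParallelListIterable<Integer> primesLazy = parallel.select(AsParallelExample::isPrime);
        MutableBag<Integer> primesAgain = primesLazy.toBag();

        executor.shutdown();
        System.out.println(primes.equals(primesAgain));
    }
}
```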

Notice there is a very specific type named ParallelListIterable that is returned from asParallel. This type is lazy, so no real computation occurs until a terminal operation is called. The same type is returned after calling select. The method toBag is a terminal operation and results in a MutableBag being created. Now let’s look at what happens to our types if the initial collection is a MutableSet instead of a MutableList.

https://medium.com/media/06b093536ea85fac922cefdd35faf407/href

Notice the return type for asParallel for a MutableSet is ParallelUnsortedSetIterable.

ParallelListIterable vs. ParallelIterate

If we compare the methods available on ParallelListIterable with the methods available on ParallelIterate, it will become evident that a lot more investment has been made in growing the parallel lazy API in Eclipse Collections. The following shows the symmetric difference and intersection of methods available between both.

Symmetric Difference and Intersection of ParallelListIterable and ParallelIterate methods

JDK stream vs. parallelStream

Have you ever noticed the return type for stream and parallelStream in the JDK is the same type? They both return Stream. You might think that perhaps the implementations returned by the two methods are different classes implementing the same interface, but they are not. Both stream and parallelStream return a new instance of ReferencePipeline.Head. The difference between them is a boolean parameter named parallel. What this means is that the serial and parallel code paths are mixed together, usually splitting on a boolean expression that calls a method named isParallel wherever the parallel case might take a different path. I searched for usages of isParallel in AbstractPipeline and found 48 usages across the parent class and four subclasses (ReferencePipeline, IntPipeline, LongPipeline, DoublePipeline).
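This flag is easy to observe directly with plain JDK code:

```java
import java.util.List;
import java.util.stream.Stream;

public class StreamFlagExample
{
    public static void main(String[] args)
    {
        List<Integer> numbers = List.of(1, 2, 3);

        // Both factory methods build the same pipeline class;
        // only the internal parallel flag differs
        Stream<Integer> serial = numbers.stream();
        Stream<Integer> parallel = numbers.parallelStream();

        System.out.println(serial.isParallel());   // false
        System.out.println(parallel.isParallel()); // true

        // The flag can even be flipped on an existing pipeline
        System.out.println(numbers.stream().parallel().isParallel()); // true
    }
}
```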

The upside here is that the serial lazy and parallel lazy API in the JDK with streams is identical. Having a single implementation class for both serial and parallel guarantees this as they share the exact same code paths. The downside is that the code paths are hard to understand just by reading the code and very difficult to trace, even with the help of a debugger.

Eclipse Collections asLazy vs. asParallel

We’ve already seen that the return types for asParallel are covariant for the types the method is defined on. The return type will always be a subtype of ParallelIterable. ParallelIterable has no direct relation to RichIterable. The method asLazy, which is defined on RichIterable returns a LazyIterable. LazyIterable extends RichIterable.
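A small sketch of the serial lazy path described above:

```java
import org.eclipse.collections.api.LazyIterable;
import org.eclipse.collections.api.list.MutableList;
import org.eclipse.collections.impl.factory.Lists;

public class AsLazyExample
{
    public static void main(String[] args)
    {
        MutableList<Integer> list = Lists.mutable.of(1, 2, 3, 4);

        // asLazy returns a LazyIterable, which extends RichIterable,
        // so the full serial API remains available
        LazyIterable<Integer> lazyEvens = list.asLazy().select(i -> i % 2 == 0);

        // No filtering happens until a terminal call like toList()
        MutableList<Integer> evens = lazyEvens.toList();
        System.out.println(evens);
    }
}
```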

The following class diagram shows the inheritance relationships between RichIterable, LazyIterable and ParallelIterable.

RichIterable, LazyIterable and ParallelIterable Interfaces

RichIterable is the parent type for LazyIterable and all of the container types in Eclipse Collections (e.g. MutableList/ImmutableList, MutableSet/ImmutableSet, etc.). LazyIterable provides serial lazy algorithm implementations.

ParallelIterable is the parent type for all corresponding parallel lazy adapters. There is a distinct split between serial and parallel API in Eclipse Collections. This means there is asymmetry between LazyIterable and ParallelIterable. This allows us to limit the parallel API to only those algorithms where there would be a reasonable performance benefit if parallelized. This also allows the serial implementations to be as simple as possible, and the parallel implementations can be optimized for specific types (e.g. Bag, List, Set).

LazyIterable vs. ParallelIterable

There are a lot more methods available on LazyIterable than on ParallelIterable. This can always change over time, if we determine that there is a need and a benefit to implementing a parallel version of a lazy serial method.

Symmetric Difference and Intersection of LazyIterable and ParallelIterable methods

Performance Benchmarks

I wrote some benchmarks a few years ago comparing filter, map, reduce and filter+map+reduce for a combination of serial, parallel, eager, lazy, object, and primitive types. The code and the results for the benchmarks were captured in the following blog. As you’ll see in the blog, I ran the benchmarks on JDK 8.

The 4 am Jamestown-Scotland ferry and other optimization strategies

I decided when I started writing this blog that I wanted to write new benchmarks. I wanted to run the benchmarks on JDK 17 so I could see how the older eager parallel and fork/join utility classes held up with all of the optimizations that have arrived in the last nine versions of the JDK. I also wanted to make the benchmark code immediately available in open source for developers to experiment with on their own, and arrive at their own conclusions on their own hardware. The benchmarks are part of the JMH Kata module in the BNYM Code Katas repo. This time I focused on a use case for filter+map. There is a fused method for filter+map named collectIf on the ParallelIterate and FJIterate utility classes. This method is also available on the serial API.

CodeKatas/FilterMapJMHBenchmark.java at master · BNYMellon/CodeKatas

The JMH Kata is what I refer to as a “sandbox kata”. You can use it as a sandbox to run your own experiments and try out your own benchmarks. It’s set up to run a few starter JMH benchmarks, and saves you the time of setting up a project to do the same.

Hardware

I used my MacPro “trash can” with the following hardware specs to measure the benchmarks:

Processor Name: 12-Core Intel Xeon E5
Processor Speed: 2.7 GHz
Number of Processors: 1
Total Number of Cores: 12
L2 Cache (per Core): 256 KB
L3 Cache: 30 MB
Memory: 64 GB

Software and Benchmark Configuration

I used OpenJDK 17 with Java Streams and Eclipse Collections 11.0.0.M2. I used JMH version 1.33 as the benchmark harness for the tests. I ran with 10 warmup iterations, and 10 measurement iterations with 2 forks. The warmup time and measurement time are both 5 seconds. I am using Mode.Throughput with the tests so they are easy to read. The numbers are in Operations per Second. The bigger the number, the better the result.

Data

I ran the benchmarks using a simple class named Person. Person has a name (String), age (int), heightInInches (double), and weightInPounds (double). I ran the benchmarks for the following data sizes.

  • 10,000 (filters and maps 4,995 values)
  • 100,000 (filters and maps 49,942 values)
  • 1,000,000 (filters and maps 499,615 values)
  • 8,675,309 (filters and maps 4,337,179 values)
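A hypothetical reconstruction of the data class and the benchmarked filter+map shape (the actual Person class lives in the CodeKatas repo; the names and sample data here are illustrative only):

```java
import java.util.List;

public class PersonExample
{
    // Hypothetical reconstruction of the benchmark's data class
    record Person(String name, int age, double heightInInches, double weightInPounds) {}

    public static void main(String[] args)
    {
        List<Person> people = List.of(
                new Person("Alice", 17, 64.0, 120.0),
                new Person("Bob", 42, 70.0, 180.0));

        // The benchmarked filter+map shape: keep adults, project their names
        List<String> adults = people.stream()
                .filter(person -> person.age() >= 18)
                .map(Person::name)
                .toList();

        System.out.println(adults); // [Bob]
    }
}
```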

The Charts

I sorted the columns in the charts from least to greatest, so it would be easy to find the slowest (far left) and fastest (far right) results. Be aware that the column order may differ between data sizes.

Results — 10K People

Fastest Parallel: Eclipse Collections Eager Parallel (ParallelIterate)
Slowest Parallel: JDK Parallel Stream.toList()

Fastest Serial: Eclipse Collections Eager Serial
Slowest Serial: JDK Serial Stream Collectors.toList()

Filter / Map — Ops per second — 10,000 People

Results — 100K People

Fastest Parallel: Eclipse Collections Eager Parallel (ParallelIterate)
Slowest Parallel: JDK Parallel Stream Collectors.toList()

Fastest Serial: Eclipse Collections Eager Serial
Slowest Serial: JDK Serial Stream Collectors.toList()

Filter / Map — Ops per second — 100,000 People

Results — 1 Million People

Fastest Parallel: Eclipse Collections Eager Fork/Join (FJIterate)
Slowest Parallel: Eclipse Collections Lazy asParallel()

Fastest Serial: Eclipse Collections Eager Serial
Slowest Serial: JDK Serial Stream Collectors.toList()

Filter / Map — Ops per second — 1,000,000 People

Results — 8,675,309 People

Fastest Parallel: Eclipse Collections Eager Fork/Join (FJIterate)
Slowest Parallel: JDK Parallel Stream Collectors.toList()

Fastest Serial: Eclipse Collections Eager Serial
Slowest Serial: JDK Serial Stream Collectors.toList()

Filter / Map — Ops per second — 8,675,309 People

Results — JMH Output

Below is the raw consolidated JMH output used in the graphs above. There are also three mega sizes I tested with (25M, 50M, 100M) that I have not included graphs for. I had to switch from operations per second to milliseconds per operation for them, so I left those graphs out to avoid confusion. For the mega sizes, smaller numbers are better. The results with the mega sizes were consistent, with Eclipse Collections Eager Fork/Join (FJIterate) being the fastest for parallel. Eclipse Collections Eager Serial was the fastest for serial in all but the largest test, where JDK Serial Stream.toList() came out on top.

https://medium.com/media/527fbef4a785d8cfc316ef07b028b83a/href

Some Lessons Learned

After more than 15 years of building parallel eager utility classes in Eclipse Collections, I’ve learned a few things. I had forgotten some of the lessons I learned along the way, but writing this blog has helped me re-discover some of them while poring over the code. Writing efficient parallel algorithms is extremely hard work, and you will spend a lot of time running and re-running benchmarks. It is a rabbit hole, and you will lose days or weeks of your life if you fall into it.

You can sometimes tune performance for specific eager algorithms so that maybe you will get a 5%, 10% or maybe even 20% speedup over more general lazy algorithms. If performance is really important to you, then you may find implementing specific use cases with lower level frameworks like Fork/Join or Executors will be beneficial. Sometimes even hand coding an algorithm using a higher level construct like a parallel forEach with an efficient concurrent data structure will give good returns.

In 2013, buying a personal desktop machine with a decent number of cores and RAM that I could run parallel benchmarks against seemed like a good long-term investment for Eclipse Collections. In retrospect, I think it was a good investment, as I have used the machine to prepare benchmarks for various talks and blogs over the years. My plan has been to not even look at upgrading my personal desktop until 10 years have passed. Surprisingly, even with all the promise of multiple cores showing up in laptops and desktop machines, it hasn’t been until relatively recently that we’ve seen a decent uptick in the number of cores and amount of RAM available for less than the totally outrageous price I paid for my Mac Pro “trash can” in 2013.

Even though I have run a lot of benchmarks on the Mac Pro over the years, I haven’t actually done much tuning of any of the parallel algorithms in Eclipse Collections. I had previously tested Eclipse Collections on an extremely large machine at my previous employer (24 cores, 256GB RAM). We were already seeing good speedups for many of the parallel eager and lazy algorithms we implemented. As I mentioned above, our parallel lazy algorithms were implemented more recently than the parallel eager ones, but also haven’t really been tuned much since late 2014. Craig Motlin gave a great talk in June 2014 on the Eclipse Collections parallel lazy implementation approach. It has some great explanations and lessons on how three different implementations (Java 8, Scala, and Eclipse Collections, previously GS Collections) were tuned for specific parallel algorithms. I will link to it here for anyone who is looking to learn some great lessons about optimization strategies for parallel algorithms.

Parallel-lazy Performance: Java 8 vs Scala vs GS Collections

The Future

Now that JDK 17 is released, and there are new, cheaper machines with more cores available on the market, it might be worthwhile testing and tuning the parallel algorithms in Eclipse Collections again. It might also be useful to expand on the current parallel lazy implementation. Java Streams seem to be improving for some parallel use cases, and can probably still benefit from approaches that Eclipse Collections uses to optimize specific algorithms like filter and map. Craig describes the approach we use in his talk above, so it is definitely worthwhile watching. Often the future can benefit from lessons learned in the past.

I would like to refactor and clean up the parallel eager implementations in Eclipse Collections and improve the symmetry between ParallelIterate and FJIterate. The biggest change I would like to make is to change the return type from Collection to either RichIterable or MutableCollection in ParallelIterate and FJIterate.

I would also like to see some folks opportunistically pick up and continue the work on the parallel lazy implementation of Eclipse Collections. There are a lot of methods that have not been added yet, as I illustrated above in the difference between LazyIterable and ParallelIterable. There is a cost and a benefit to improving symmetry. So far, the cost of adding more parallel methods has outweighed the benefits, which is why we haven’t done much more work in this space. But for the right person, who might be looking to cut their teeth on parallel programming, and maybe test out all the cores in their newly purchased MacBook Pro with an M1 Max, the benefits of learning how to build optimized parallel algorithms might outweigh the costs.

I believe that parallel programming will become increasingly important for a larger population of Java developers. Learning how to program and tune parallel algorithms effectively will be something many developers will need to learn. The knowledge and experience from books like CPJ from Doug Lea and “Java Concurrency in Practice” (JCIP) from Brian Goetz will become important and popular again. Now that I have my second brand new copy of CPJ, and my previously signed copy of JCIP, I am ready to re-learn the lessons of concurrency and parallelism all over again.

Final Thoughts

My goal for this blog was to share some lessons I learned from the past 15 years that otherwise might have gone undiscovered or completely forgotten. I doubt most developers who use Eclipse Collections will have dug into the parallel algorithms available in the library before reading this blog. I hope some Java developers read this blog and find it useful for helping them learn more about parallel programming approaches they may not have been previously aware of. If you read it and liked it, you can let me know by clapping and/or commenting. I generally dislike including micro-benchmarks in blogs, but I think folks find them interesting enough to start investigating and learning more. Take my benchmarks with a huge grain of salt and two pounds of skepticism. I wouldn’t recommend basing any decisions on them. I highly recommend writing your own application benchmarks for your own specific use cases and determining for yourself whether a particular approach will help you achieve better or worse performance. As I’ve recommended in my previous blog linked above:

Prove it before going Parallel.

Thanks for reading!

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


The unparalleled design of Eclipse Collections was originally published in Javarevisited on Medium, where people are continuing the conversation by highlighting and responding to this story.


by Donald Raab at November 01, 2021 04:49 AM

Eclipse JKube 1.5.1 is now available!

October 28, 2021 03:00 PM

On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.5.1 has been released and is now available from Maven Central.

Thanks to all of you who have contributed with issue reports, pull requests, feedback, spreading the word with blogs, videos, comments, etc. We really appreciate your help, keep it up!

What's new?

Without further ado, let's have a look at the most significant updates:

Kubernetes and OpenShift Gradle Plugins (Preview)

Eclipse JKube 1.5.1 finally brings the new Kubernetes Gradle Plugin and OpenShift Gradle Plugin. We're releasing these plugins as a preview that includes most of the standard functionality you can find in the equivalent Maven plugins. However, there's still work to do, so please share your feedback.

Getting started

The first step is to add the plugin to your project:

plugins {
  id 'org.eclipse.jkube.kubernetes' version '1.5.1'
  /* ... */
}

Or in case of OpenShift:

plugins {
  id 'org.eclipse.jkube.openshift' version '1.5.1'
  /* ... */
}

If your application is based on Spring Boot, then this is all the configuration you'll need.

Available tasks

This is still a preview, more tasks are yet to come. Following is the list of the currently supported Gradle tasks:

k8sBuild: Build container images
k8sPush: Push the built images to the container image registry
k8sResource: Generate resource manifests for your application
k8sApply: Apply the generated resources to the connected cluster
k8sHelm: Generate Helm charts for your application
k8sDebug: Debug your Java app running on the cluster
k8sLog: Show the logs of your Java app running on the cluster
k8sUndeploy: Delete the Kubernetes resources that you deployed via the k8sApply task

The tasks can be executed as in the following example:

gradle build k8sBuild k8sPush k8sResource k8sApply

You can also check the recording of my EclipseCon 2021 session to learn more about the Gradle Plugins.

Hacktoberfest

This year we've been active members of Hacktoberfest. This is aligned with our year-round first-timers-only issue strategy.

The 1.5.1 release includes around 30 contributions from members of the community who took part in Hacktoberfest 2021. On behalf of the core maintainer team, I want to thank you all for your participation and for making JKube even better with your contributions.

Using this release

If your project is based on Maven, you just need to add the kubernetes maven plugin or the openshift maven plugin to your plugin dependencies:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.5.1</version>
</plugin>

If your project is based on Gradle, you just need to add the kubernetes gradle plugin or the openshift gradle plugin to your plugin dependencies:

plugins {
  id 'org.eclipse.jkube.kubernetes' version '1.5.1'
}

How can you help?

If you're interested in helping out and are a first time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.

If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag.

We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.

Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

Eclipse JKube Logo

October 28, 2021 03:00 PM

The Eclipse Foundation Joins Bosch, Microsoft, and Other Industry Leaders to Create an Open Source Working Group for the Software-Defined Vehicle

by Shanda Giacomoni at October 28, 2021 09:00 AM

Open source leader actively recruiting interested enterprises to develop a new vendor-neutral working group focused on building next-generation vehicles based on the open source model

BRUSSELS – October 28, 2021 – The Eclipse Foundation, one of the world’s largest open source foundations, along with multiple industry leaders, including Bosch, Microsoft and others, today announced an open invitation for technology leaders to help define a new working group focused specifically on the Software-Defined Vehicle. The ultimate goal will be the creation of a vendor-agnostic, open source ecosystem with a vibrant, contributing community focused on building the foundation for a new era in automotive software development. This announcement serves as a “call to action” for all interested parties to join this initiative and help shape the future of mobility.

Today, next-generation vehicle developers are turning to software-based solutions for their new designs. The Eclipse Foundation believes this will lead to an open source revolution that results in software-defined vehicles. Software-defined vehicles will enable vehicle manufacturers as well as automotive suppliers to put software at the very center of vehicle development, with hardware considerations to follow. 

“We’re very excited to develop this new effort here at the Eclipse Foundation. Although we have extensive roots with the automotive community, a project of this scope and scale has never been attempted before,” said Mike Milinkovich, executive director of the Eclipse Foundation. “This initiative enables participants to get in at the ‘ground level’ and ensure they each have an equal voice in this project.”

To achieve this significant change in the design process, this new working group will build the foundation of an open ecosystem for deploying, configuring, and monitoring vehicle software in a secure and safe way. Vehicle manufacturers around the world may use this foundation to focus on differentiating customer features, like mobility services and end-user experience improvements, while saving time and cost on the non-differentiating elements, like operating systems, middleware, or communication protocols.
 
To support the transformation to software-defined vehicles, major players from the technology industry as well as the automotive industry are encouraged to collaboratively develop an open source in-vehicle application runtime stack, cloud-based vehicle operations, as well as highly integrated development toolchains. The ultimate goal of the open source software-defined vehicle initiative is to scale in-vehicle software across vehicle models, product lines, brands, organizations, and time.

The Eclipse Foundation and its decades of experience managing the governance of complex technology initiatives and multi-vendor organizations make it the ideal organization to help manage such an endeavor. Its commitment to transparency, vendor-neutrality, and a shared voice will ensure that all participants have the opportunity to shape the future of the working group.  

To learn more about getting involved with the Eclipse Foundation’s Software-Defined Vehicle initiative, please visit us at sdv.eclipse.org, or email us at membership@eclipse.org. 


Quotes from Members 

BlackBerry
“This Eclipse Foundation Software-Defined Vehicle collaboration will be an important factor in influencing next-generation Software-Defined Vehicle architectures,” said Grant Courville, vice-president of product management and strategy at BlackBerry QNX. “BlackBerry QNX has a long history of embracing industry standards and we continue to work very closely with our customers and partners to help define and enable future automotive architectures. As a founding member, for 20 years BlackBerry has had a front row seat to The Eclipse Foundation’s relentless pursuit to help spur developer innovation and we’re thrilled to be part of this new initiative with a view to accelerating the software-defined future of automotive.”

Bosch
“Technological, organizational, and cultural innovations pave the way for the software-defined vehicle. The use of open-source software and technology neutrality are the pillars for a strong community to actively shape the transformation in automotive software engineering together with our customers and partners,” said Sven Kappel, vice president - Head of Project Software Defined Vehicle at Bosch. “For Bosch, collaboration across industries is key to realize the software-defined vehicle. Together with the Eclipse Foundation and other participants we are driving this change and looking forward to welcoming additional contributors to the initiative.”

EPAM Systems
“The automotive industry is undergoing a period of rapid transformation, with the next generation of vehicles transitioning to software-defined,” said Alex Agizim, CTO, Automotive & Embedded Systems, EPAM. “EPAM is proud to bring its embedded engineering and digital orchestration expertise to this industry-first initiative for open-source software-defined vehicles. In partnership with Bosch, Microsoft, The Eclipse Foundation and more, this collaboration will help usher in the new era in automotive development.”

ETAS
“The software-defined vehicle will play a key role in the future of mobility,” said Christoph Hartung, president and chairman of the ETAS Board of Management. “The explosive increase in complexity can only be mastered by working closely together as we do in this initiative.” 

Microsoft
“With digital technologies unlocking the future of accessible, sustainable and safe transportation experiences, mobility services providers are increasingly looking to differentiate through software innovation,” said Ulrich Homann, corporate vice president and Distinguished Architect, Microsoft. “By standardizing the development, deployment and management of software-defined vehicles through collaboration in the open-source space, businesses can bring tailored mobility solutions to their customers faster and can focus on innovations.”

Red Hat
“Since our founding, Red Hat has clearly seen and advocated for open source collaboration as a force multiplier for software quality and value,” said Francis Chow, vice president, In-Vehicle Operating System, Red Hat. “We are pleased to collaborate on software-defined vehicles built with an open source backbone alongside the other member organizations of the Eclipse Software-Defined Vehicle initiative.”

SUSE
“Defining and developing the software-defined vehicle will transform the automotive industry, enabling manufacturers to truly address the rapidly changing concerns and pain points the market is experiencing today,” said Thomas Di Giacomo, SUSE chief technology and product officer. “For nearly 30 years, SUSE has been a trusted partner supporting systems and essential workloads in some of the most challenging and critical industries, including the automotive industry. We are eager to contribute our experience and ready-to-use open source technologies to help advance the automotive software industry.”

About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.


Third-party trademarks mentioned are the property of their respective owners.

###

Media contacts: 

Schwartz Public Relations for the Eclipse Foundation, AISBL
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551
 


by Shanda Giacomoni at October 28, 2021 09:00 AM

Introducing Oniro: A Vendor Neutral, Open Source OS for Next-Gen Devices

by Mike Milinkovich at October 26, 2021 12:01 PM

It’s a rare event when a new operating system comes along. And it’s even rarer to have the opportunity to influence the direction of that OS at its earliest stages. So I’m delighted to tell you that today we are announcing a new working group and top-level project that gives you that opportunity. The Oniro community will nurture and evolve the Oniro operating system, a transparent, vendor-neutral, and independent OS for the next generation of distributed systems.

The Oniro OS will provide a true, community-driven open source solution that runs on a wider spectrum of devices than today’s operating systems. And it will make it far easier to integrate different types of next-gen hardware and software.

Architected to Go Beyond Today’s Operating Systems

The Oniro OS can run on more devices than current operating systems because it features a multi-kernel architecture:

  • A Linux Yocto kernel allows the OS to run on larger embedded devices, such as Raspberry Pi-class devices 
  • A Zephyr kernel allows the OS to run on highly resource-constrained devices, such as a coffee maker or a thermostat

With the ability to run the same OS on different classes of devices, Oniro will provide an ideal solution to support the future of IoT, machine economy, edge, mobile, and other next-gen devices:

  • Consumers and adopters of the Oniro OS will have a more seamless experience than they have with the current generation of operating systems.
  • Devices will be able to directly connect to one another and share data, enabling a much higher degree of interoperability than is possible today.
  • Data exchanged between devices can flow directly to one another rather than always being shared via the cloud, enabling low latency architectures which are also inherently more secure and private. 

We expect the initial use cases for Oniro will be in the IoT and industrial IoT domains with applications for mobile devices coming later as the community evolves, grows, and establishes its roadmap.

Enabling the Global Ecosystem for OpenHarmony

Oniro is an independent open source implementation of OpenAtom’s OpenHarmony. To deliver on the promise of Oniro, the community will provide an independent but compatible implementation of the OpenHarmony specifications, tailored for the global market. OpenHarmony is based on HarmonyOS, a multi-kernel OS that was developed by Huawei and contributed to the OpenAtom Foundation last year. In the future, Oniro will also deliver additional specifications to help drive global adoption.

By creating a compatible implementation of OpenHarmony, the Oniro community can ensure that applications built for Oniro will run on OpenHarmony and vice versa. This interoperability will allow the Oniro community to create a global ecosystem and marketplace for applications and services that can be used across both operating systems, anywhere in the world. 

Join an Innovative Open Source Community

I truly believe that Oniro is open source done right. It’s a huge opportunity to build an operating system that rethinks how devices across many different device classes can interoperate in a secure and privacy-preserving way. 

Because Oniro’s evolution is being guided by an open and vendor-neutral community using the Eclipse Development Process, openness and transparency are a given. This will go a long way towards building the engagement and stakeholder trust necessary to create the global ecosystem.

The founding members of the Oniro Working Group include telecom giant, Huawei, Arm software experts Linaro, and industrial IoT specialists Seco. As more organizations become aware of Oniro, we expect the community to encompass organizations of all sizes and from all industries. 

I strongly encourage everyone with an interest in next-gen devices — corporations, academics, individuals — to take the opportunity to get involved in Oniro in its earliest stages. To get started, join the Oniro conversation by subscribing to the Oniro working group list.


by Mike Milinkovich at October 26, 2021 12:01 PM

Open Source Leader the Eclipse Foundation Launches Vendor-Neutral Operating System for Next-Generation Device Interoperability

by Jacob Harris at October 26, 2021 07:00 AM

FOR IMMEDIATE RELEASE

Oniro will provide a true open source solution to make multi-device hardware and software integration easier

Brussels, October 26, 2021 – The Eclipse Foundation, a European open source foundation, furthering the recently announced cooperation with the OpenAtom Foundation, announced today the launch of the Oniro project and working group.

Oniro aspires to become a transparent, vendor-neutral, and independent alternative to established IoT and edge operating systems. To achieve this goal and ensure Oniro has a global reach, the Eclipse Foundation and its members will deliver a compatible independent implementation of OpenHarmony, an open source operating system specified and hosted by the OpenAtom Foundation.

“Oniro is open source done right,” said Mike Milinkovich, executive director of the Eclipse Foundation. “It represents a unique opportunity to develop and host a next-generation operating system to support the future of mobile, IoT, machine economy, edge and many other markets.”

With the creation of the Oniro top-level project, the Eclipse Foundation aims to strengthen the global technology ecosystem, while bringing a vendor-neutral, open source OS to the global market.

To facilitate the governance for the Oniro device ecosystem, the Eclipse Foundation is also launching a new dedicated working group. The Eclipse Foundation’s working group structure provides the vendor neutrality and legal framework that enables transparent and equal collaboration between companies.

“We’re very proud to be hosting a major European open source project with worldwide contribution aiming to develop an independent OS,” says Gaël Blondelle, vice president of European ecosystem development. “To achieve this, we want to welcome developers and companies from Europe and the rest of the world to join our working group at the Eclipse Foundation and bring this groundbreaking project to life together.”

Quotes from Supporters

Huawei
“We have been working hard with Linaro, Seco, Array, NOITechPark, Synesthesia to prepare Oniro’s initial code contribution and public cloud CI/CD infrastructure, and it is so exciting to see everything moving under the expert governance of the Eclipse Foundation,” said Davide Ricci, Director of Huawei’s Consumer Business Group European Open Source Technology Center. “Under the Eclipse Foundation the project will have its greatest chance at onboarding new contributing members and bringing real products on the shelves of consumer electronics stores around the world. We reckon Oniro is not a sprint, rather a marathon, and we are thrilled and committed to this world changing journey.”

Linaro 
“Over the past year, Linaro has worked closely with Huawei and other Oniro members on preparing the OS foundations of Oniro, leveraging the work Linaro is already doing on open source projects such as MCUboot, the Yocto project, Trusted Substrate and multiple RTOSs,” said Andrea Gallo, VP of Business Development. “Formalizing the governance of this project through the Eclipse Foundation is the natural next step in delivering a truly vendor-neutral and independent operating system.” 

SECO
“Oniro will be the future of the open source OS, it will mark a new trend for its deeply innovative nature and defining it only as an operating system would be extremely reductive. In fact, it focuses on the end-user with an incredible user-experience, but it is also oriented to the content creators and OEMs at the same time, bringing to all of them certainty, choice and convenience,” said Gianluca Venere, Chief Innovation Officer, SECO. “It is born for device collaboration at the edge, to be hardware architecture independent, to create a swarm intelligence, and to enable ambient computing. For more than 40 years SECO has been designing and manufacturing innovative products and services for OEMs and we strongly believe that Oniro is a game changer in supporting our customers to the digital transformation.”

About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.

Media Contacts:

Schwartz Public Relations for Eclipse Foundation
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for Eclipse Foundation in North America
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551

PR Paradigm for Eclipse Foundation in France
Oscar Barthe
oscar@prparadigm.com
(+33) 06 73 51 78 91 
 

MSL Group for Eclipse Foundation in Italy
Rosa Parente
rosa.parente@mslgroup.com
+39 340 8893581


by Jacob Harris at October 26, 2021 07:00 AM

The Eclipse IoT Working Group Celebrates its 10th Anniversary

by Jacob Harris at October 25, 2021 11:00 AM

The world’s largest open source community for edge and IoT continues to drive innovation that benefits a broad range of industries and applications 

 

BRUSSELS – October 25, 2021 – The Eclipse Foundation, one of the world’s largest open source software foundations, today celebrated the 10th Anniversary of the Eclipse IoT Working Group. Eclipse IoT is the largest open source IoT community in the world with 47 working group members, 47 projects, 360 contributors, and more than 32 million lines of code.

“It would be challenging to measure the industry impact of the Eclipse IoT Working Group over the past 10 years,” said Mike Milinkovich, executive director of the Eclipse Foundation. “From day one, this working group had a vision focused on developing actionable code as opposed to blueprints or standards, which has enabled it to stand apart from other organizations. This focus, along with the broad and diverse mix of Eclipse IoT ecosystem participants, has led to an extremely vibrant community that has helped drive commercial innovation and adoption at scale.”
 
In addition to original founding members, IBM and Eurotech, the current Eclipse IoT ecosystem now includes globally recognized players such as Bosch.IO, Red Hat, Huawei, Intel, SAP, and Siemens. The community is further enriched with Industrial IoT (IIoT) specialists like Aloxy, Cedalo, itemis, and Kynetics; along with edge IoT innovators that include ADLINK Technology and Edgeworx.
    
Eclipse IoT is home to open source innovation that has delivered some of the industry’s most popular IoT protocols. CoAP (Eclipse Californium), DDS (Eclipse Cyclone DDS), LwM2M (Eclipse Leshan), MQTT (Eclipse Paho, Eclipse Mosquitto, and Eclipse Amlen), and OPC UA (Eclipse Milo) are all built around Eclipse IoT projects. Other production-ready Eclipse IoT platforms cover use cases such as digital twins (Eclipse Ditto), energy management (Eclipse VOLTTRON), contactless payments (Eclipse Keyple), and smart cities (Eclipse Kura), in addition to Eclipse Kapua, a modular IoT cloud platform that manages data, devices, and much more.
 
To learn more about how to get involved with Eclipse IoT, Edge Native, Sparkplug or other working groups at the Eclipse Foundation, visit the Foundation’s membership page. Working group members benefit from a broad range of services, including exclusive access to detailed industry research findings, marketing assistance, and expert open source governance.

For further IoT & edge related information, please reach us at:
IoT@eclipse.org
Edgenative@eclipse.org

Quotes from Eclipse IoT Working Group pioneers

Andy Stanford-Clark, IBM UK CTO & Co-Inventor of MQTT
“Our original vision for the IoT Working Group was to create and curate a software stack which would enable developers to write ‘applications for platforms’, rather than ‘custom code for specific devices.’ Over the 10 years, I think we’ve made that vision a reality. I’m immensely proud of what we’ve achieved together.”

Andy Piper, Developer Advocate & Founding Project Lead, Eclipse Paho 
“It is inspiring to see the range and scope of projects that make up the Eclipse IoT Working Group, 10 years on - we knew that the keys to success would be open source, interoperability, and open standards. I’m hugely proud of the success of MQTT and Mosquitto, and the wider ecosystem in this space.” 

Marco Carrer, CTO, Eurotech
“Eclipse IoT WG has shattered the silos of monolithic M2M applications and proprietary connectivity by promoting open standards and open architectures while creating a vibrant community of interoperable projects. Eurotech is proud of having been part of this journey and we wish Eclipse IoT WG 10 more successful years.”

Deb Bryant, Senior Director, Open Source Program Office, Red Hat
“The 10th anniversary of the Eclipse Foundation IoT Working Group is a significant milestone not only for its members and partners, but for the technology and open source communities. Many solutions to challenges within global IoT ecosystems are the result of the Eclipse Foundation IoT Working Group’s dedication over the last decade to creating a vendor-neutral community of open source projects. Red Hat is proud to be a member of the Eclipse Foundation and looks forward to continuing our support for the IoT Working Group and helping to foster open source IoT achievements.”

Benjamin Cabé, Principal Program Manager, Microsoft:
“It is both exciting and humbling to see how our initial vision of enabling an Internet of Things based on open source and open standards has effectively turned into a reality, ten years down the road. The Eclipse IoT Working Group and its community of passionate individuals have been a catalyst for IoT innovation, and I am looking forward to ten more years of success!”

About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.
Third-party trademarks mentioned are the property of their respective owners.

###

Media contacts 

Schwartz Public Relations for the Eclipse Foundation, AISBL
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551
 


by Jacob Harris at October 25, 2021 11:00 AM

What Cloud Developers Want

by Mike Milinkovich at October 22, 2021 12:30 PM

The results of our first-ever Cloud Developer Survey are in, providing important insight into the development tools being used today, the role of open source, and the capabilities developers are looking for in next generation cloud-based tools and IDEs.  

The Cloud Developer Survey was conducted April 22 to May 1, 2021, and interviewed more than 300 software developers, DevOps specialists, architects, and IT leaders in the US, UK, France, and Germany. It’s important to point out that this survey was fielded by an independent team of analysts with the express purpose of minimizing bias and providing a clear market perspective to our member community.

In commissioning this research project, our primary objective was to gain a better understanding of cloud-based developer trends by identifying the requirements, priorities, and challenges faced by organizations that deploy and use cloud-based development solutions, including those based on open source technologies. Our expectation is that through these findings, we can better ensure developers have the tools and technologies they need for cloud native application development.

An interesting finding is that more than 40 percent of survey respondents indicated that their company’s most important applications are now cloud native. And only three percent said their company has no cloud migration plans for important on-premise applications. This bodes well for the growth in cloud-based tools to help accelerate this trend and migration.

Developers Expect Open Source Tools and Technologies

One of the most significant trends revealed by the survey is the extremely high value developers place on open source. In a result rarely seen in surveys, 100 percent of participating organizations said they allow their developers to use open source technologies for software development, though 62 percent do place at least some restrictions on usage.

Looking ahead, developers expect open source to continue to grow in popularity, with more than 80 percent saying they consider open source to be important both now and in the future. With the focus on cloud native applications and growing reliance on open source, it’s safe to say that open source and cloud development go hand-in-hand, and are here to stay.

Flexibility, Better Integrations, and Innovation are Attractive 

The Cloud Developer Survey also revealed that while developers use a variety of tools, they prefer using those with which they’re already familiar. This is reflected by the fact that 57 percent of survey respondents are still using desktop IDEs, including the Eclipse IDE. What this means is that there remains a huge developer community that has yet to benefit from open source cloud IDE technologies like Eclipse Theia, Eclipse Che, and Open VSX Registry, along with the ecosystem and products built around them.

Developers that do use cloud-based tools aren’t necessarily tied to using what their cloud provider recommends. Instead, they prefer open source options that offer opportunities for customization and innovation. No matter which technologies developers opt to use, increasing productivity is crucial. Developers are looking for better integrations of APIs and other features and tools that help save them time and effort.

Developers also want the flexibility to choose best-of-breed products and tools as needed to work more efficiently and to support the next wave of innovation in artificial intelligence, machine learning, and edge technologies. Open source drives innovation in these technologies, and flexible, open source tools will be key to attracting top talent to these cutting-edge development opportunities.

Read the Full Report and Recommendations

To review the complete Cloud Developer Survey results and the associated recommendations, download the survey report.

For more information about the Eclipse Cloud DevTools ecosystem and its benefits for members, visit the website.


by Mike Milinkovich at October 22, 2021 12:30 PM

OSGi Services with gRPC - Let's be reactive

by Scott Lewis (noreply@blogger.com) at October 20, 2021 02:54 AM

ECF has just introduced an upgrade to the gRPC distribution provider. Previously, this distribution provider supported only ReactiveX for Java (RxJava) version 2. With this release, RxJava version 3 is also supported.

As many know, gRPC allows services (both traditional call/response [aka unary] and streaming services) to be defined in a 'proto3' file. For example, here is a simple service with four methods: one unary (Check) and three streaming (server streaming, client streaming, and bidirectional streaming):
syntax = "proto3";

package grpc.health.v1;

option java_multiple_files = true;
option java_outer_classname = "HealthProto";
option java_package = "io.grpc.health.v1.rx3";

message HealthCheckRequest {
  string message = 1;
}

message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
    SERVICE_UNKNOWN = 3; // Used only by the Watch method.
  }
  ServingStatus status = 1;
}

service HealthCheck {
  // Unary method
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  // Server streaming method
  rpc WatchServer(HealthCheckRequest) returns (stream HealthCheckResponse);
  // Client streaming method
  rpc WatchClient(stream HealthCheckRequest) returns (HealthCheckResponse);
  // Bidirectional streaming method
  rpc WatchBidi(stream HealthCheckRequest) returns (stream HealthCheckResponse);
}
The gRPC project provides a protoc plugin that generates Java code (or code in other languages), which can then be used on the server and/or clients.

With some additional plugins, the classes generated by protoc can expose the ReactiveX API. For example, here is the Java code generated by running the protoc, grpc, reactive-grpc, and osgi-generator plugins on the HealthCheck service definition above.

Note in particular the HealthCheckService interface generated by the osgi-generator protoc plugin:
package io.grpc.health.v1.rx3;

import io.reactivex.rxjava3.core.Single;
import io.reactivex.rxjava3.core.Flowable;

@javax.annotation.Generated(
    value = "by grpc-osgi-generator (REACTIVEX) - A protoc plugin for ECF's grpc remote services distribution provider at https://github.com/ECF/grpc-RemoteServiceSProvider ",
    comments = "Source: health.proto. ")
public interface HealthCheckService {
    /**
     * <pre>
     * Unary method
     * </pre>
     */
    default Single<io.grpc.health.v1.rx3.HealthCheckResponse> check(Single<io.grpc.health.v1.rx3.HealthCheckRequest> requests) {
        return null;
    }
    /**
     * <pre>
     * Server streaming method
     * </pre>
     */
    default Flowable<io.grpc.health.v1.rx3.HealthCheckResponse> watchServer(Single<io.grpc.health.v1.rx3.HealthCheckRequest> requests) {
        return null;
    }
    /**
     * <pre>
     * Client streaming method
     * </pre>
     */
    default Single<io.grpc.health.v1.rx3.HealthCheckResponse> watchClient(Flowable<io.grpc.health.v1.rx3.HealthCheckRequest> requests) {
        return null;
    }
    /**
     * <pre>
     * bidi streaming method
     * </pre>
     */
    default Flowable<io.grpc.health.v1.rx3.HealthCheckResponse> watchBidi(Flowable<io.grpc.health.v1.rx3.HealthCheckRequest> requests) {
        return null;
    }
}

Note that it uses two ReactiveX 3 classes: io.reactivex.rxjava3.core.Single and io.reactivex.rxjava3.core.Flowable. These classes provide an API for event-driven/reactive sending and receiving of unary (Single) and streaming (Flowable) arguments and return values.

The ReactiveX API, particularly Flowable, makes it very easy to implement both consumers and implementers of the streaming API, while maintaining ordered delivery and non-blocking communication.

For example, this is a simple implementation of the HealthCheckService. Note how the Single and Flowable methods are able to express the implementation logic through operators such as Flowable.map.
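The original post embeds the full source; as a hedged sketch only, the hypothetical class below uses plain Strings in place of the generated HealthCheckRequest/HealthCheckResponse messages, to illustrate how Single and Flowable operators (map, flatMap, toList) express each of the four call styles:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.core.Single;

// Hypothetical sketch: plain Strings stand in for the generated
// HealthCheckRequest/HealthCheckResponse message classes.
public class HealthCheckServiceImpl {

    // Unary: map the single request to a single response.
    public Single<String> check(Single<String> requests) {
        return requests.map(req -> "status of " + req + ": SERVING");
    }

    // Server streaming: expand one request into a stream of responses.
    public Flowable<String> watchServer(Single<String> requests) {
        return requests.toFlowable()
                .flatMap(req -> Flowable.just("SERVING", "SERVING", "NOT_SERVING")
                        .map(status -> req + ": " + status));
    }

    // Client streaming: reduce the request stream to one response.
    public Single<String> watchClient(Flowable<String> requests) {
        return requests.toList().map(reqs -> reqs.size() + " services checked");
    }

    // Bidi streaming: map each incoming request to an outgoing response.
    public Flowable<String> watchBidi(Flowable<String> requests) {
        return requests.map(req -> req + ": SERVING");
    }

    public static void main(String[] args) {
        HealthCheckServiceImpl svc = new HealthCheckServiceImpl();
        System.out.println(svc.check(Single.just("frontend")).blockingGet());
    }
}
```

Each method body is a single operator chain; no threads, callbacks, or stream observers need to be managed by hand.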
Here is a simple implementation of a consumer of the HealthCheckService.
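Again as a hedged sketch (with Strings standing in for the generated response type), a consumer simply composes and subscribes to the returned Flowable:

```java
import io.reactivex.rxjava3.core.Flowable;
import java.util.List;

// Hypothetical consumer sketch: the Flowable below stands in for the
// Flowable<HealthCheckResponse> a watch method would return.
public class HealthCheckConsumerSketch {

    // Collect only the updates reporting a SERVING status.
    static List<String> servingOnly(Flowable<String> responses) {
        return responses
                .filter(r -> r.contains(": SERVING"))
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        Flowable<String> responses = Flowable.just("db: SERVING", "web: NOT_SERVING");
        servingOnly(responses).forEach(r -> System.out.println("health update: " + r));
    }
}
```

A real consumer would subscribe asynchronously rather than block, but the composition of filter/map operators is the same.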

The use of the ReactiveX API simplifies both implementing and consuming unary and streaming services. As an added bonus, the reactive-grpc library used in the ECF distribution provider provides *flow control* using backpressure.
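The flow-control idea can be illustrated in plain RxJava 3 with a subscriber that explicitly request()s one item at a time; this generic sketch is not reactive-grpc's internal code, just the backpressure mechanism it builds on:

```java
import io.reactivex.rxjava3.core.Flowable;
import io.reactivex.rxjava3.subscribers.DefaultSubscriber;
import java.util.ArrayList;
import java.util.List;

public class BackpressureSketch {

    // Consume a stream one element at a time: the upstream only emits
    // the next item after the subscriber asks for it via request(1).
    static List<Integer> processOneAtATime(Flowable<Integer> upstream) {
        List<Integer> processed = new ArrayList<>();
        upstream.subscribe(new DefaultSubscriber<Integer>() {
            @Override protected void onStart() { request(1); } // pull the first item
            @Override public void onNext(Integer n) {
                processed.add(n);
                request(1); // ask for the next item only once this one is handled
            }
            @Override public void onError(Throwable t) { t.printStackTrace(); }
            @Override public void onComplete() { }
        });
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(processOneAtATime(Flowable.range(1, 5)));
    }
}
```

With gRPC streaming, this demand signal is what keeps a fast producer from overwhelming a slow consumer.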

In the next article I'll describe how OSGi Remote Services can be easily used to export, publish, discover, and import remote services with full support for service versioning, security, and dynamics. I'll also describe how one can use tools like Maven or Bndtools+Eclipse to generate source code (as above) from a proto3 file and easily run a generated service as an OSGi Remote Service.


by Scott Lewis (noreply@blogger.com) at October 20, 2021 02:54 AM

Eclipse Foundation Projects are OpenChain Conformant

by Mike Milinkovich at October 19, 2021 01:46 PM

Today we announced that the Eclipse Foundation is the first open source foundation to confirm its open source development process conforms with the OpenChain ISO 5230 international standard for open source license compliance. This means that every Eclipse Foundation project is being developed under a process which conforms to the ISO 5230 standard. The announcement is great news for our open source software contributors, users, adopters, and stakeholders globally.

The OpenChain ISO 5230 standard is officially known as the OpenChain 2.1 ISO/IEC 5230:2020 standard, and is maintained by the OpenChain Project. Its goal is to provide a clear and effective process management standard, so that organizations of all sizes, in all industries, and in all markets can benefit from a more efficient and effective open source supply chain.  

The time and effort we put into documenting that our existing development processes comply with the OpenChain ISO 5230 standard will help strengthen global supply chain integrity, and showcases our commitment to supporting our members and all of our projects’ downstream adopters.

Supported by Leading Organizations Globally

Before it became an official ISO/IEC standard in December 2020, the OpenChain initiative was the de facto standard for several years. The standard was developed based on the contributions of more than 100 project participants, and supported by organizations including Arm, BMW Car IT, Bosch, Cisco, Comcast, Ericsson, Facebook, Fujitsu, Google, Hitachi, Huawei, Microsoft, MOXA, OPPO, Panasonic, Qualcomm, Siemens, Sony, Toshiba, Toyota, and Uber. 

The breadth, depth, and diversity of organizations involved in developing the OpenChain ISO 5230 standard clearly demonstrate the importance with which the initiative is viewed across industries. The availability of the official, published standard is expected to increase conformance from hundreds of organizations to thousands. But to my knowledge, the Eclipse Foundation is the first open source foundation that has done the work necessary to document that all of our projects are developed under an OpenChain conformant process. This is an important milestone for both the Eclipse Foundation and for the OpenChain standard and its community.

Learn More and Get Involved

Because the OpenChain ISO 5230 standard is open, everyone with an interest in the initiative can engage with the community, share their knowledge, and contribute to the future of the standard. 

Follow the links below to learn more:


by Mike Milinkovich at October 19, 2021 01:46 PM

Open Source Software Leader the Eclipse Foundation Announces It Has Achieved OpenChain ISO 5230 Conformance

by Jacob Harris at October 19, 2021 11:00 AM

BRUSSELS – October 19, 2021 – The Eclipse Foundation AISBL, a global community fostering a mature, scalable, and business-friendly environment for software collaboration and innovation, has announced that it is the first open source software foundation to confirm that its open source development and license management processes are OpenChain ISO 5230 conformant. This means that all Eclipse Foundation open source projects are developed under an  ISO 5230 conformant program which fulfills the license compliance requirements of the standard. 

“Certifying that our development process is OpenChain ISO 5230 conformant is another step in showcasing our foundation’s role in the global open source ecosystem which is critical to today’s innovation-driven economy,” said Mike Milinkovich, Executive Director of the Eclipse Foundation. “We are thrilled to provide our worldwide contributors, users, adopters, and stakeholders the opportunity to benefit from a more efficient and effective open source supply chain."

OpenChain ISO 5230 is a simple, clear and effective process management standard for open source license compliance. The OpenChain Project maintains the International Standard for open source license compliance. This allows companies of all sizes and in all sectors to adopt the key requirements of a quality open source compliance program. 

Interested parties can find out more about this open standard here - https://www.openchainproject.org/about 

Supporting Quotes

Bosch
"Open Source is at the center of many products within Bosch," says Marcel Kurzmann, representative of Bosch in the OpenChain Governing Board, Robert Bosch GmbH. "Having OpenChain compliant supply chains is a key building block  for an efficient handling of the Open Source parts. Thus, we welcome the initiative of the Eclipse Foundation to develop Open Source projects in an OpenChain compliant way."

SAP
“Open source is at the heart of many SAP solutions and our innovation strategies in segments such as Industry 4.0,” said Peter Giese, Director of SAP Open Source Program Office, SAP. “Being able to both contribute to and consume Eclipse Foundation projects developed under OpenChain conformant processes simplifies and enhances the open source supply chain for us, our partners and customers.”

Daimler 
“FOSS is everywhere! It is in the vehicles we sell, in the mobile apps we provide, in our backend systems and websites, and even used on the shop floor every day. To foster OpenChain conformance in our open source supply chain we recommend our software suppliers to get certified and commit to this ISO standard”, says Christian Wege, member of FOSS CoC at Daimler.

OpenChain Project
“The heart of open source is collaboration and Eclipse Foundation is an exemplary example of where such collaboration takes place,” says Shane Coughlan, OpenChain General Manager. “I look forward to our ongoing engagement as we help foster a new phase in open source supply chains. We are reaching an era where OpenChain ISO 5230 and related standards are the key to rapid, clear management of code."

About the Eclipse Foundation
The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.

Third-party trademarks mentioned are the property of their respective owners.


###


Media contacts 

Schwartz Public Relations for the Eclipse Foundation, AISBL
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551


by Jacob Harris at October 19, 2021 11:00 AM

The Eclipse Foundation Releases Results from the First Annual Cloud Developer Survey Report

by Jacob Harris at October 14, 2021 11:00 AM

FOR IMMEDIATE RELEASE

BRUSSELS – October 14, 2021 – The Eclipse Foundation, one of the world’s largest open source foundations, along with the Eclipse Cloud DevTools Working Group, today announced the availability of the first annual Cloud Developer Survey Report. The report was commissioned by the Eclipse Cloud DevTools Working Group and is the result of more than 300 interviews conducted by an independent analyst organization. Participants consisted of software developers, as well as DevOps, IT and development leadership. Primary survey objectives were to gain a better understanding of cloud-based developer trends by identifying the requirements, priorities, and challenges faced by organizations that deploy and use cloud-based development solutions, including those based on open source technologies. 

“Cloud-based software developer tools are still very much in their infancy, but are gaining significant interest and momentum. Naturally, the majority of these new platforms are built on top of open source software,” said Mike Milinkovich, executive director of the Eclipse Foundation. “This research demonstrates that there is solid traction for cloud development tools, that developers do have a strong appetite for making the transition, and that the Eclipse Foundation and the Cloud DevTools Working Group are at the forefront of this transition. Eclipse Foundation projects like Eclipse Open VSX, Eclipse Theia, and Eclipse Che are leading the way by providing the community-led open source technologies that developers want.”

Survey participants represent a broad set of industries, organizations, and job functions. Five of the top conclusions drawn from the survey data include:

  • Developers are open to new tools and strategies, but often prefer the tools they know and understand - Developers will not shy away from trying new solutions. 
  • Cloud developers love open source solutions - Developers prefer open source because it allows them 1) to customize their tools; 2) to plug into their existing environments; 3) to experiment with something unfamiliar.
  • AI/ML are attracting significant interest from developers - There is increased use of AI/ML, and much of this is happening at the edge by front-end developers. These highly skilled developers are interested in technologies that tend to be more advanced and cutting edge. 
  • Open source software is driving innovation - As more is asked of developers and technology continues to be the source of growth for companies, developers are being pushed to learn and do more. A lot of this learning and innovation is happening in the OSS community.
  • Developers need to do more "non-developer" things...but can't afford the time - Developer productivity demands greater and additional API integration (especially into IDEs) of the ever-growing tools developers are required to use.

In addition to these findings, the survey report provides detailed analyst recommendations for cloud developers, employers, and other ecosystem participants. The 2021 Cloud Developer Survey Report is now available to all interested parties and can be downloaded for free here.

To learn more about getting involved with the Eclipse Cloud DevTools Working Group, please visit us at https://ecdtools.eclipse.org/, or email us at membership@eclipse.org. Developers and other interested parties can also join the Cloud DevTools Working Group mailing list to stay informed about working group projects and progress. 

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org.

Third-party trademarks mentioned are the property of their respective owners.

###

Media contacts: 

Schwartz Public Relations for the Eclipse Foundation, AISBL
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551


by Jacob Harris at October 14, 2021 11:00 AM

Alice’s adventures in Sirius Web Land

October 11, 2021 10:00 AM

Since my early childhood I have loved stories, listening to books read by my mum, then reading by myself comics or classical literature for school and now, as I dedicate not so much time to reading, mostly blog posts and news on the internet. One of my favorite novels remains “Alice’s Adventures in Wonderland”.

A young girl named Alice falls through a rabbit hole into a fantastic world of weird creatures. She meets people, she experiences, she tastes, she has to make decisions, sometimes she’s scared, and the minute after she’s happy. This book is like a roller coaster full of events and emotions.

When I think about our job at Obeo, when we have to create a tool dedicated to a specific domain for one of our customers, I feel like we are all small Alices experiencing the Sirius Land. We start by meeting people, trying to understand their jobs, their needs, we make decisions about what concepts we will specify, how they will be represented… Sometimes it works “Yes! I did it!” and sometimes the user’s feedback is not so good and we rework the tool “Oh, no, try again ;(“…

One year ago, we at Obeo released a first version of the Sirius Web project and I had this little Alice in mind…

Alice was beginning to get very tired of creating DSL graphical editors and of having too many things to do: start Eclipse, describe her domain with Ecore, generate the EMF code, launch another Eclipse runtime, specify her graphical mappings with Sirius Desktop, test with another Eclipse runtime, package everything to an update site, send it to Bob so that he can install it, help Bob who can’t find how to install the modeler, reiterate from the beginning to update the tool according to Bob feedbacks and needs…

“Oh dear! Would you tell me, please, which way I ought to go from here?” she asked,

“That depends a good deal on where you want to get to,” said the Cat.

Alice prayed for a framework to easily create and deploy her studios to the web!

Curiouser and curiouser! This exists!

I’m giving a talk at EclipseCon 2021 to tell you about Alice in Sirius Web Land! In this session, I will introduce and demonstrate:

  • how to describe your domain
  • how to specify your graphical editor
  • how to deploy your studio to your end-users

… everything from your browser, thanks to Sirius Web!

This is not a dream, this is really happening!

Sirius Web Domain and View definitions

I will demonstrate all examples using 100% open-source software. Come and join me! You have no excuse, register at EclipseCon. As it is a virtual event you can attend from anywhere, even from Wonderland!


October 11, 2021 10:00 AM

Cloud DevTools Community Update - October 2021

by Christopher Guindon (webdev@eclipse-foundation.org) at October 08, 2021 12:00 AM

In this post we talk about the latest happenings in the Cloud DevTools community. Included is information about our new website design, upcoming events, project updates, the Embedded SIG, our developer survey, and more.

by Christopher Guindon (webdev@eclipse-foundation.org) at October 08, 2021 12:00 AM

A hands-on tutorial for Eclipse GLSP

by Jonas Helming, Maximilian Koegel and Philip Langer at October 05, 2021 06:54 AM

Do you want to learn how to implement diagram editors using Eclipse GLSP? Then please read on. We have just published...

The post A hands-on tutorial for Eclipse GLSP appeared first on EclipseSource.


by Jonas Helming, Maximilian Koegel and Philip Langer at October 05, 2021 06:54 AM

RHAMT Eclipse Plugin 4.0.0.Final has been released!

by josteele at October 05, 2021 05:54 AM

We are happy to announce the latest release of the Red Hat Application Migration Toolkit (RHAMT) Eclipse Plugin.

Getting Started

It is now available through JBoss Central, and from the update site here.

What is RHAMT?

RHAMT is an automated application migration and assessment tool.

Example ways to RHAMT up your code:

  • Moving your application from WebLogic to EAP, or WebSphere to EAP

  • Version upgrade from Hibernate 3 to Hibernate 4, or EAP 6 to EAP 7

  • Change UI technologies from Seam 2 to pure JSF 2.

An example of how to run the RHAMT CLI:

$ ./rhamt-cli --input /path/to/jee-example-app-1.0.0.ear --output /path/to/output --source weblogic --target eap:7

The output is a report used to assess and prioritize migration and modernization efforts.

The RHAMT Eclipse Plugin - What does it do?

Consider an application migration comprising thousands of files, with a myriad of small changes, not to mention the tediousness of switching between the report and your IDE. Who wants to be the engineer assigned to that task? :) Instead, this tooling marks the source files containing issues, making it easy to organize, search, and in many cases automatically fix issues using quick fixes.

Let me give you a quick walkthrough.

Ruleset Wizard

We now have quickstart template code generators.

Ruleset Wizard

Rule Creation From Code

We have also added rule generators for selected snippets of code.

Rule Generation From Source

Ruleset Graphical Editor

Ruleset navigation and editing is faster and more intuitive thanks to the new graphical editor.

Graphical Editor

Ruleset View

We have created a view dedicated to the management of rulesets. Default rulesets shipped with RHAMT can now be opened, edited, and referenced while authoring your own custom rulesets.

Ruleset View

Run Configuration

The Eclipse plugin interacts with the RHAMT CLI process, thereby making it possible to specify command line options and custom rulesets.

Run Configuration

Ruleset Submission

Lastly, contribute your custom rulesets back to the community from within the IDE.

Ruleset Submission


You can find more detailed information here.

Our goal is to make the RHAMT tooling easy to use. We look forward to your feedback and comments!

Have fun!
John Steele
github/johnsteele


by josteele at October 05, 2021 05:54 AM

We are hiring

by jeffmaury at October 05, 2021 05:54 AM

The Developer Experience and Tooling group, of which the JBoss Tools team is part, is looking for an awesome developer. We are looking to continue improving the usability for developers around various IDEs including Eclipse, VSCode and IntelliJ, and around the Red Hat product line, including JBoss Middleware.

Topics range from Java to JavaScript, application servers to containers, source code tinkering to full blown CI/CD setups.

If you are into making developers’ lives easier and like to get involved in many different technologies and make them work great together, then do apply.

You can also ping me (jeffmaury@redhat.com) for questions.

The current list of openings is:

Note: the job postings do list a specific location, but for the right candidate we are happy to consider many locations worldwide (anywhere there is a Red Hat office), as well as working from home.

Have fun!
Jeff Maury
@jeffmaury @jbosstools


by jeffmaury at October 05, 2021 05:54 AM

RHAMT Eclipse Plugin 4.1.0.Final has been released!

by josteele at October 05, 2021 05:54 AM

Happy to announce version 4.1.0.Final of the Red Hat Application Migration Toolkit (RHAMT) is now available.

Getting Started

Downloads available through JBoss Central and from the update site.

RHAMT in a Nutshell

RHAMT is an application migration and assessment tool. The migrations supported include application platform upgrades, migrations to a cloud-native deployment environment, and also migrations from several commercial products to the Red Hat JBoss Enterprise Application Platform.

What is New?

Eclipse Photon

The tooling now targets Eclipse Photon.

Photon

Ignoring Patterns

Specify locations of files to exclude from analysis (using regular expressions).

Ignore Patterns

External Report

The generated report has been moved out of Eclipse and into the browser.

Report View

Improved Ruleset Schema

The XML ruleset schema has been relaxed providing flexible rule structures.

Schema

Custom Severities

Custom severities are now included in the Issue Explorer.

Custom Category

Stability

A good amount of time has been spent on ensuring the tooling functions consistently across Windows, OSX, and Linux.

You can find more detailed information here.

Our goal is to make the RHAMT tooling easy to use. We look forward to your feedback and comments!

Have fun!
John Steele
github/johnsteele


by josteele at October 05, 2021 05:54 AM

Quarkus

by jeffmaury at October 05, 2021 05:54 AM

You’ve probably heard about Quarkus, the Supersonic Subatomic Java framework tailored for Kubernetes and containers.

We wrote an article on how to create your first Quarkus project in an Eclipse based IDE (like Red Hat CodeReady Studio).


by jeffmaury at October 05, 2021 05:54 AM

Integration Tooling for Eclipse Photon

by pleacu at October 05, 2021 05:54 AM

Try our leaner, complete Eclipse Photon and Red Hat Developer Studio 12 compatible integration tooling.

devstudio12

JBoss Tools Integration Stack 4.6.0.Final / Red Hat Developer Studio Integration Stack 12.0.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.6 and Red Hat Developer Studio 12.

What’s new for this release?

This is the initial release in support of Eclipse Photon. It syncs up with Developer Studio 12.0.0, JBoss Tools 4.6.0 and Eclipse 4.8.0 (Photon). It is also a maintenance release for Teiid Designer and BRMS tooling.

Released Tooling Highlights

Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.5.0.Final Known Issues Section of the Integration Stack 12.0.0.GA release notes.

Drools/jBPM6 Known Issues

See the Drools 7.8.0.Final Known Issues Section of the Integration Stack 12.0.0.GA release notes.

Data Virtualization Highlights

Teiid Designer

See the Teiid Designer 11.2.0.Final Resolved Issues Section of the Integration Stack 12.0.0.GA release notes.

What’s an Integration Stack?

Red Hat Developer Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by Developer Studio, with plug-ins specifically for use when developing for other Red Hat products. It’s where DataVirt Tooling and BRMS tooling are aggregated. The following frameworks are supported:

Red Hat Business Process and Rules Development

Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat BRMS and Red Hat BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing including KIE.

  • jBPM - A flexible Business Process Management (BPM) suite.

Red Hat Data Virtualization Development

Red Hat Data Virtualization Development plug-ins provide a graphical interface to manage various aspects of Red Hat Data Virtualization instances, including the ability to design virtual databases and interact with associated governance repositories.

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services without programming using the Teiid runtime framework.

The JBoss Tools website features tab

Don’t miss the Features tab for up to date information on your favorite Integration Stack components.

Installation

The easiest way to install the Integration Stack components is through the stand-alone installer or through our JBoss Tools Download Site.

For a complete set of Integration Stack installation instructions, see Integration Stack Installation Guide

Let us know how it goes!

Paul Leacu.


by pleacu at October 05, 2021 05:54 AM

Integration Tooling for Eclipse Oxygen

by pleacu at October 05, 2021 05:54 AM

Try our complete Eclipse Oxygen and Red Hat JBoss Developer Studio 11 compatible integration tooling.

jbosstools jbdevstudio blog header

JBoss Tools Integration Stack 4.5.2.Final / Developer Studio Integration Stack 11.2.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.5 and Developer Studio 11.

What’s new for this release?

This release provides full Teiid Designer tooling support for JBoss Data Virtualization 6.4 runtime. It provides an updated BPMN2 Modeler and jBPM/Drools for our Business Process Modeling friends. It also provides full synchronization with Devstudio 11.2.0.GA, JBoss Tools 4.5.2.Final and Eclipse Oxygen.2. Please note that SwitchYard is deprecated in this release.

Released Tooling Highlights

JBoss Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.4.2.Final Known Issues Section of the Integration Stack 11.2.0.GA release notes.

Drools/jBPM6 Known Issues

See the Drools 7.5.0.Final Known Issues Section of the Integration Stack 11.2.0.GA release notes.

SwitchYard Highlights

See the SwitchYard 2.4.1.Final Resolved Issues Section of the Integration Stack 11.2.0.GA release notes.

Data Virtualization Highlights

Teiid Designer

See the Teiid Designer 11.1.1.Final Resolved Issues Section of the Integration Stack 11.2.0.GA release notes.

What’s an Integration Stack?

Red Hat JBoss Developer Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by JBoss Developer Studio, with plug-ins specifically for use when developing for other Red Hat JBoss products. It’s where DataVirt Tooling, SOA tooling and BRMS tooling are aggregated. The following frameworks are supported:

JBoss Business Process and Rules Development

JBoss Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat JBoss BRMS and Red Hat JBoss BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing including KIE.

  • jBPM - A flexible Business Process Management (BPM) suite.

JBoss Data Virtualization Development

JBoss Data Virtualization Development plug-ins provide a graphical interface to manage various aspects of Red Hat JBoss Data Virtualization instances, including the ability to design virtual databases and interact with associated governance repositories.

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services without programming using the Teiid runtime framework.

JBoss Integration and SOA Development

JBoss Integration and SOA Development plug-ins provide tooling for developing, configuring and deploying BRMS and SwitchYard to Red Hat JBoss Fuse and Fuse Fabric containers.

  • All of the Business Process and Rules Development plugins plus SwitchYard. SwitchYard is deprecated as of this release.

  • Fuse Tooling has moved out of the Integration Stack to be a core part of JBoss Tools and Developer Studio.

The JBoss Tools website features tab

Don’t miss the Features tab for up to date information on your favorite Integration Stack components.

Installation

The easiest way to install the Integration Stack components is through the stand-alone installer or through our JBoss Tools Download Site.

For a complete set of Integration Stack installation instructions, see Integration Stack Installation Guide

Let us know how it goes!

Paul Leacu.


by pleacu at October 05, 2021 05:54 AM

Integration Tooling for Eclipse Oxygen

by pleacu at October 05, 2021 05:54 AM

Try our complete Eclipse Oxygen and Red Hat JBoss Developer Studio 11 compatible integration tooling.

jbosstools jbdevstudio blog header

JBoss Tools Integration Stack 4.5.0.Final / Developer Studio Integration Stack 11.0.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.5 and Developer Studio 11.

What’s new for this release?

This is the initial release in support of Eclipse Oxygen. It syncs up with Developer Studio 11.0.0, JBoss Tools 4.5.0 and Eclipse 4.7.0 (Oxygen.0). It is also a maintenance release for SwitchYard and BRMS tooling.

Data Virtualization tooling support is not yet available (scheduled for the autumn).

SwitchYard is deprecated in this release.

Fuse Tooling has moved out of the Integration Stack to be a core part of JBoss Tools and Developer Studio.

Released Tooling Highlights

JBoss Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.4.0.Final Known Issues Section of the Integration Stack 11.0.0.GA release notes.

Drools/jBPM6 Known Issues

See the Drools 7.0.1.Final Known Issues Section of the Integration Stack 11.0.0.GA release notes.

SwitchYard Highlights

See the SwitchYard 2.4.0.Final Resolved Issues Section of the Integration Stack 11.0.0.GA release notes.

Data Virtualization Highlights

Teiid Designer

Not yet available for Oxygen.

What’s an Integration Stack?

Red Hat JBoss Developer Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by JBoss Developer Studio, with plug-ins specifically for use when developing for other Red Hat JBoss products. It’s where DataVirt Tooling, SOA tooling and BRMS tooling are aggregated. The following frameworks are supported:

JBoss Business Process and Rules Development

JBoss Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat JBoss BRMS and Red Hat JBoss BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing including KIE.

  • jBPM6 - A flexible Business Process Management (BPM) suite.

JBoss Data Virtualization Development

JBoss Data Virtualization Development plug-ins provide a graphical interface to manage various aspects of Red Hat JBoss Data Virtualization instances, including the ability to design virtual databases and interact with associated governance repositories. Data Virtualization tooling support is not yet available (scheduled for the autumn).

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services without programming using the Teiid runtime framework.

JBoss Integration and SOA Development

JBoss Integration and SOA Development plug-ins provide tooling for developing, configuring and deploying BRMS and SwitchYard to Red Hat JBoss Fuse and Fuse Fabric containers.

  • All of the Business Process and Rules Development plug-ins plus SwitchYard. SwitchYard is deprecated as of this release.

  • Fuse Tooling has moved out of the Integration Stack to be a core part of JBoss Tools and Developer Studio.

The JBoss Tools website features tab

Don’t miss the Features tab for up to date information on your favorite Integration Stack components.

Installation

The easiest way to install the Integration Stack components is through the stand-alone installer.

For a complete set of Integration Stack installation instructions, see Integration Stack Installation Instructions

Be the first to try it on Oxygen!

Paul Leacu.


by pleacu at October 05, 2021 05:54 AM

Integration Tooling for Eclipse 2019-03

by pleacu at October 05, 2021 05:54 AM

Check out our new branding for Eclipse 2019-03. We’re now Red Hat CodeReady Studio 12 Integration Stack.

crstudio12

JBoss Tools Integration Stack 4.11.0.Final / Red Hat CodeReady Studio Integration Stack 12.11.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.11 and Red Hat CodeReady Studio 12.

What’s new for this release?

DataVirtualization support from Teiid Designer is no longer available through the Integration Stack. It can be installed directly from Teiid Designer.

This release has an updated BPMN2 Modeler and jBPM/Drools/KIE.

Released Tooling Highlights

Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.5.1.Final Known Issues Section of the Integration Stack 12.11.0.GA release notes.

Drools/jBPM6 Known Issues

See the Drools 7.21.0.Final Known Issues Section of the Integration Stack 12.11.0.GA release notes.

What’s an Integration Stack?

Red Hat CodeReady Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by Developer Studio, with plug-ins specifically for use when developing for other Red Hat products. It’s where BRMS tooling is aggregated. The following frameworks are supported:

Red Hat Business Process and Rules Development

Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat BRMS and Red Hat BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing including KIE.

  • jBPM - A flexible Business Process Management (BPM) suite.

The JBoss Tools website features tab

Don’t miss the Features tab for up to date information on your favorite Integration Stack components.

Installation

The easiest way to install the Integration Stack components is through the stand-alone installer or through our JBoss Tools Download Site.

For a complete set of Integration Stack installation instructions, see Integration Stack Installation Guide

Let us know how it goes!

Paul Leacu.



JBoss Tools 4.9.0.AM3 for Eclipse 2018-09 M2

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.9.0.AM3 (Developer Milestone 3) build for Eclipse 2018-09 M2.

Downloads available at JBoss Tools 4.9.0 AM3.

What is New?

Full info is at this page. Some highlights are below.

General

Server Tools

Wildfly 14 Server Adapter

A server adapter has been added to work with Wildfly 14. It adds support for Java EE 8.

Forge Tools

Forge Runtime updated to 3.9.1.Final

The included Forge runtime is now 3.9.1.Final. Read the official announcement here.

Fuse Tooling

Known issues

A regression has been introduced that affects all functionality using JAXB. This includes:

  • Variable content display in debug

  • Data Transformation wizard

  • Tracing Camel route via Jolokia Connection

Other functionality may also be affected. In that case, you will see this kind of error in the log:

java.lang.NullPointerException
          at javax.xml.bind.ContextFinder.handleClassCastException(ContextFinder.java:95)

Please note that this has already been fixed in the nightly build.

Enjoy!

Jeff Maury



JBoss Tools 4.9.0.AM2 for Eclipse 2018-09 M2

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.9.0.AM2 (Developer Milestone 2) build for Eclipse 2018-09 M2.

Downloads available at JBoss Tools 4.9.0 AM2.

What is New?

Full info is at this page. Some highlights are below.

General

Eclipse 2018-09

JBoss Tools is now targeting Eclipse 2018-09 M2.

Fuse Tooling

WSDL to Camel REST DSL improvements

The version of the library used to generate Camel REST DSL from WSDL files has been updated. It now covers more types of WSDL files. See https://github.com/jboss-fuse/wsdl2rest/milestone/3?closed=1 for the list of improvements.

REST Editor tab improvements

In the last milestone we began adding editing capabilities to the read-only REST tab that was added to the route editor in the previous release. Those efforts have continued, and we now have a fully editable REST tab.

Fully Editable REST Editor

You can now:

  • Create and delete REST Configurations

  • Create and delete new REST Elements

  • Create and delete new REST Operations

  • Edit properties for a selected REST Element in the Properties view

  • Edit properties for a selected REST Operation in the Properties view

In addition, we’ve improved the look and feel by fixing the scrolling capabilities of the REST Element and REST Operations lists.

Enjoy!

Jeff Maury



JBoss Tools 4.6.0.AM3 for Eclipse Photon.0.RC3

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.6.0.AM3 (Developer Milestone 3) build for Eclipse Photon.0.RC3.

Downloads available at JBoss Tools 4.6.0 AM3.

What is New?

Full info is at this page. Some highlights are below.

General

Eclipse Photon

JBoss Tools is now targeting Eclipse Photon RC3.

Fuse Tooling

Camel URI completion with XML DSL

As announced here, it was already possible to have Camel URI completion with XML DSL in the source tab of the Camel Route editor by installing the Language Support for Apache Camel in your IDE.

This feature is now installed by default with Fuse Tooling!

Camel URI completion in source tab of Camel Editor

Now you have the choice to use the properties view with UI help to configure Camel components or to use the source editor and benefit from completion features. It all depends on your development preferences!
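Completion is particularly helpful for endpoints with many component options. As a purely illustrative sketch (the route itself is hypothetical, though delay, recursive and moveFailed are real options of the Camel file component), completion can propose and validate entries such as:

<from uri="file:inbox?delay=5000&amp;recursive=true&amp;moveFailed=.error"/>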

Webservices Tooling

JAX-RS 2.1 Support

JAX-RS 2.1 is part of JavaEE8 and JBoss Tools now provides you with support for this update of the specification.

Server side events

JAX-RS 2.1 brought support for server side events. The Sse and SseEventSink resources can now be injected into method arguments thanks to the @Context annotation.

Enjoy!

Jeff Maury



JBoss Tools 4.6.0.AM2 for Eclipse Photon.0.M7

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.6.0.AM2 (Developer Milestone 2) build for Eclipse Photon.0.M7.

Downloads available at JBoss Tools 4.6.0 AM2.

What is New?

Full info is at this page. Some highlights are below.

General

Eclipse Photon

JBoss Tools is now targeting Eclipse Photon M7.

OpenShift

Enhanced Spring Boot support for server adapter

Spring Boot runtime was already supported by the OpenShift server adapter. However, it had one major limitation: files and resources were synchronized between the local workstation and the remote pod(s) only for the main project. If your Spring Boot application had dependencies present in the local workspace, any change to a file or resource of one of those dependencies was not handled. This is no longer the case.

Fuse Tooling

Camel Rest DSL from WSDL wizard

There is a new "Camel Rest DSL from WSDL" wizard. This wizard wraps the wsdl2rest tool now included with the Fuse 7 distribution, which takes a WSDL file for a SOAP-based (JAX-WS) web service and generates a combination of CXF-generated code and a Camel REST DSL route to make it accessible using REST operations.

To start, you need an existing Fuse Integration project in your workspace and access to the WSDL for the SOAP service. Then use File→New→Other…​ and select Red Hat Fuse→Camel Rest DSL from WSDL wizard.

On the first page of the wizard, select your WSDL and the Fuse Integration project in which to generate the Java code and Camel configuration.

SOAP to REST Wizard page 1

On the second page, you can customize the Java folder path for your generated classes, the folder for the generated Camel file, plus any customization for the SOAP service address and destination REST service address.

SOAP to REST Wizard page 2

Click Finish and the new Camel configuration and associated Java code are generated in your project. The wizard determines whether your project is Blueprint, Spring, or Spring Boot based, and it creates the corresponding artifacts without requiring any additional input. When the wizard is finished, you can open your new Camel file in the Fuse Tooling Route Editor to view what it created.

Fuse Tooling editor Rest Tab

That brings us to another new functionality, the REST tab in the Fuse Tooling Route Editor.

Camel Editor REST tab

The Fuse Tooling Route Editor provides a new REST tab. For this release, the contents of this tab are read-only and include the following information:

  • Details for the REST Configuration element including the component (jetty, netty, servlet, etc.), the context path, the port, binding mode (JSON, XML, etc.), and host. There is only one REST Configuration element.

  • A list of REST elements that collect REST operations. A configuration can have more than one REST element. Each REST element has an associated property page that displays additional details such as the path and the data it consumes or produces.

Fuse Tooling Rest Elements Properties View
  • A list of REST operations for the selected REST element. Each of the operations has an associated property page that provides details such as the URI and output type.

Fuse Tooling Rest Operations Properties View

For this release, the REST tab is read-only. If you want to edit the REST DSL, use the Route Editor Source tab. When you make changes and save them in the Source tab, the REST tab refreshes to show your updates.
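As an illustration of what the REST tab summarizes, a small Rest DSL configuration of this shape (the paths, types, and endpoint names below are hypothetical) defines one REST Configuration element, one REST element, and its operations:

<restConfiguration component="servlet" contextPath="/api" port="8080"
                   bindingMode="json" host="localhost"/>
<rest path="/orders">
  <get uri="/{id}" outType="com.example.Order">
    <to uri="direct:getOrder"/>
  </get>
  <post uri="/" consumes="application/json">
    <to uri="direct:createOrder"/>
  </post>
</rest>

The tab surfaces exactly these pieces of information: the component, context path, port, and binding mode from restConfiguration, the list of rest elements, and the operations nested under each.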

Enjoy!

Jeff Maury



JBoss Tools 4.6.0.AM1 for Eclipse Photon.0.M6

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.6.0.AM1 (Developer Milestone 1) build for Eclipse Photon.0.M6.

Downloads available at JBoss Tools 4.6.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

General

Eclipse Photon

JBoss Tools is now targeting Eclipse Photon M6.

Forge Tools

Forge Runtime updated to 3.9.0.Final

The included Forge runtime is now 3.9.0.Final. Read the official announcement here.

Enjoy!

Jeff Maury



JBoss Tools 4.5.3.AM3 for Eclipse Oxygen.3

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.3.AM3 (Developer Milestone 3) build for Eclipse Oxygen.3.

Downloads available at JBoss Tools 4.5.3 AM3.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

CDK and Minishift Server Adapter runtime download

When working with both CDK and upstream Minishift, you needed to have previously downloaded the CDK or Minishift binary. It is now possible to download the runtime to a specific folder when you create the server adapter.

Let’s see an example with the CDK server adapter.

From the Servers view, select the new Server menu item and enter cdk in the filter:

cdk server adapter wizard

Select Red Hat Container Development Kit 3.2+

cdk server adapter wizard1

Click the Next button:

cdk server adapter wizard3

In order to download the runtime, click the Download and install runtime…​ link:

cdk server adapter wizard4

Select the version of the runtime you want to download

cdk server adapter wizard5

Click the Next button:

cdk server adapter wizard6

You need an account to download the CDK. If you have already configured credentials, select the ones you want to use. If you haven’t, click the Add button to add your credentials.

cdk server adapter wizard7

Click the Next button. Your credentials will be validated, and upon success, you must accept the license agreement:

cdk server adapter wizard8

Accept the license agreement and click the Next button:

cdk server adapter wizard9

You can choose the folder where you want the runtime to be installed. Once you’ve set it, click the Finish button:

The download of the runtime will start, and you should see the progress in the server adapter wizard:

cdk server adapter wizard10

Once the download is completed, you will notice that the Minishift Binary and Username fields have been filled:

cdk server adapter wizard11

Click the Finish button to create the server adapter.

Please note that if this is the first time you are installing the CDK, you must perform an initialization. In the Servers view, right-click the server and select the Setup CDK menu item:

cdk server adapter wizard12
cdk server adapter wizard13

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

New Hibernate 5.3 Runtime Provider

With beta releases available in the Hibernate 5.3 stream, the time was right to make available a corresponding Hibernate 5.3 runtime provider. This runtime provider incorporates Hibernate Core version 5.3.0.Beta2 and Hibernate Tools version 5.3.0.Beta1.

hibernate 5 3
Figure 1. Hibernate 5.3 is available
Other Runtime Provider Updates

The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.12.Final and Hibernate Tools version 5.0.6.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.12.Final and Hibernate Tools version 5.1.7.Final.

The Hibernate 5.2 runtime provider now incorporates Hibernate Core version 5.2.15.Final and Hibernate Tools version 5.2.10.Final.

Fuse Tooling

Fuse Ignite Technical Extension templates

The existing template for "Custom step using a Camel Route" has been updated to work with Fuse 7 Tech Preview 4.

Two new templates have been added:

  • Custom step using a Java Bean

  • Custom connector

New Fuse Ignite wizard with 3 options

Improvements of the wizard to create a Fuse Integration project

The creation wizard provides better guidance for the targeted deployment environment:

New Fuse Integration Project wizard page to select environment

More space is available to choose the templates, and they are now filtered based on the targeted environment:

New Fuse Integration Project wizard page to select templates

It also points advanced users to other places where they can find different examples (see the link at the bottom of the previous screenshot).

Camel Rest DSL editor (Technical preview)

Camel provides a Rest DSL to help with integration through REST endpoints. Fuse Tooling now provides a new tab, in read-only mode, to visualize the REST endpoints that are defined.

Rest DSL editor tab in read-only mode

It is currently in Tech Preview and needs to be activated in Window → Preferences → Fuse Tooling → Editor → Enable Read Only Tech preview REST DSL tab.

Work is still ongoing and feedback is very welcome on this new feature; you can comment on this JIRA epic.

Dozer upgrade and migration

When upgrading from Camel < 2.20 to Camel > 2.20, the Dozer dependency is upgraded to a version that is not backward compatible. If you open a Data Transformation based on Dozer in Fuse Tooling, it will propose to migrate the file used for the transformation (technically, by changing the namespace). This allows you to continue using the Data Transformation editor and, in most cases, keeps the Data Transformation working at runtime with Camel > 2.20.
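Concretely, the migration boils down to a namespace change at the top of the Dozer mapping file. Assuming the usual pre-6.x and 6.x Dozer schema locations (verify against your Dozer version), the change looks roughly like this:

<!-- before migration (legacy Dozer namespace) -->
<mappings xmlns="http://dozer.sourceforge.net">
  ...
</mappings>

<!-- after migration (Dozer 6.x namespace) -->
<mappings xmlns="http://dozermapper.github.io/schema/bean-mapping">
  ...
</mappings>

The mapping entries themselves are left untouched; only the namespace is rewritten.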

Enjoy!

Jeff Maury



JBoss Tools 4.5.3.AM2 for Eclipse Oxygen.3

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.3.AM2 (Developer Milestone 2) build for Eclipse Oxygen.3.

Downloads available at JBoss Tools 4.5.3 AM2.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

CDK and Minishift Server Adapter better developer experience

When working with both CDK and upstream Minishift, it is recommended to distinguish the environments through the MINISHIFT_HOME variable. It was possible to use this parameter before, but it required a two-step process:

  • first create the server adapter (through the wizard)

  • then change the MINISHIFT_HOME in the server adapter editor

It is now possible to set this parameter from the server adapter wizard, so everything is correctly set up when you create the server adapter.

Let’s see an example with the CDK server adapter.

From the Servers view, select the new Server menu item and enter cdk in the filter:

cdk server adapter wizard

Select Red Hat Container Development Kit 3.2+

cdk server adapter wizard1

Click the Next button:

cdk server adapter wizard2

The MINISHIFT_HOME parameter can be set here and has a default value.

Fuse Tooling

Display Fuse version corresponding to Camel version proposed

When you create a new project, you select the Camel version from a list. Now, the list of Camel versions includes the Fuse version to help you choose the version that corresponds to your production version.

Fuse Version also displayed in drop-down list close to Camel version

Update validation for similar IDs between a component and its definition

Starting with Camel 2.20, you can use similar IDs for the component name and its definition unless the specific property "registerEndpointIdsFromRoute" is provided. The validation process checks the Camel version and the value of the "registerEndpointIdsFromRoute" property.

For example:

<from id="timer" uri="timer:timerName"/>
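To sketch the situation being validated (the ids and URIs are illustrative, and the attribute placement should be checked against your Camel version), an endpoint definition and a route element may now share an id as long as registerEndpointIdsFromRoute is not enabled:

<camelContext xmlns="http://camel.apache.org/schema/spring"
              registerEndpointIdsFromRoute="false">
  <endpoint id="timer" uri="timer:timerName"/>
  <route>
    <from id="timer" uri="timer:timerName"/>
    <to uri="log:out"/>
  </route>
</camelContext>

With registerEndpointIdsFromRoute="true", or on Camel versions before 2.20, the duplicate id would be flagged by the validation.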

Improved guidance in method selection for factory methods on Global Bean

When selecting a factory method for a global bean, many possibilities used to be proposed in the user interface. The list of factory methods for a global bean is now limited to only those methods that match the constraints of the bean’s global definition type (bean or bean factory).

Customize EIP labels in the diagram

The Fuse Tooling preferences page for the Editor view includes a new "Preferred Labels" option.

Fuse Tooling editor preference page

Use this option to define the label of EIP components (except endpoints) shown in the Editor’s Design view.

Dialog for defining the display text for an EIP

General

Credentials Framework

Sunsetting jboss.org credentials

Download Runtimes and CDK Server Adapter used the credentials framework to manage credentials. However, the JBoss.org credentials cannot be used any more as the underlying service used by these components does not support these credentials.

The credentials framework still supports the JBoss.org credentials in case other services / components require or use this credentials domain.

Aerogear

Aerogear component deprecation

The Aerogear component has been marked deprecated as there is no more maintenance on the source code. It is still available in Red Hat Central and may be removed in the future.

Arquillian

Arquillian component removal

The Arquillian component has been removed from Red Hat Central as it was deprecated a while ago.

BrowserSim

BrowserSim component deprecation

The BrowserSim component has been marked deprecated as there is no more maintenance on the source code. It is still available in Red Hat Central and may be removed in the future.

Freemarker

Freemarker component removal

The Freemarker component has been removed from Red Hat Central as it was deprecated a while ago.

LiveReload

LiveReload component deprecation

The LiveReload component has been marked deprecated as there is no more maintenance on the source code. It is still available in Red Hat Central and may be removed in the future.

Enjoy!

Jeff Maury



JBoss Tools 4.5.3.AM1 for Eclipse Oxygen.2

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.3.AM1 (Developer Milestone 1) build for Eclipse Oxygen.2.

Downloads available at JBoss Tools 4.5.3 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Minishift Server Adapter

A new server adapter has been added to support upstream Minishift. While the server adapter itself has limited functionality, it is able to start and stop the Minishift virtual machine via its minishift binary. From the Servers view, click New and then type minishift; that will bring up a command to set up and/or launch the Minishift server adapter.

minishift server adapter

All you have to do is set the location of the minishift binary file, the type of virtualization hypervisor and an optional Minishift profile name.

minishift server adapter1

Once you’re finished, a new Minishift Server adapter will then be created and visible in the Servers view.

minishift server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new Openshift application and begin developing their AwesomeApp in a highly-replicatable environment.

minishift server adapter3
minishift server adapter4

Fuse Tooling

New shortcuts in Fuse Integration perspective

Shortcuts for the Java, Launch, and Debug perspectives and basic navigation operations are now provided within the Fuse Integration perspective.

The result is a set of buttons in the Toolbar:

New Toolbar action

All of the associated keyboard shortcuts are also available, such as Ctrl+Shift+T to open a Java Type.

Performance improvement: Loading Advanced tab for Camel Endpoints

The loading time of the "Advanced" tab in the Properties view for Camel Endpoints is greatly improved.

Advanced Tab in Properties view

Previously, in the case of Camel Components that have a lot of parameters, it took several seconds to load the Advanced tab. For example, for the File component, it would take ~3.5s. It now takes ~350ms. The load time has been reduced by a factor of 10. (See this interesting article on response time)

If you notice other places showing slow performance, you can file a report by using the Fuse Tooling issue tracker. The Fuse Tooling team really appreciates your help. Your feedback contributes to our development priorities and improves the Fuse Tooling user experience.

Enjoy!

Jeff Maury



JBoss Tools 4.5.2.AM2 for Eclipse Oxygen.2

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.2.AM2 (Developer Milestone 2) build for Eclipse Oxygen.2 (built with RC2).

Downloads available at JBoss Tools 4.5.2 AM2.

What is New?

Full info is at this page. Some highlights are below.

Fuse Tooling

Fuse 7 Karaf-based runtime Server adapter

Fuse 7 is cooking, and preliminary versions are already available in the early-access repository. Fuse Tooling is ready to leverage them so that you can try the upcoming major Fuse version.

Fuse 7 Server Adapter

The classic server adapter functionality is available: automatic redeploy, Java debug, and graphical Camel debug through the created JMX connection. Please note:

  • You can’t yet retrieve the Fuse 7 runtime directly from Fuse Tooling; you need to download it to your machine and point to it when creating the server adapter.

  • The provided templates require some modifications to work with Fuse 7, mainly adapting the BOM. Please see the related work in this JIRA task and its children.

Display routes defined inside "routeContext" in Camel Graphical Editor (Design tab)

The "routeContext" tag is a special Camel tag that makes it possible to reuse routes and to split them across different files, which is very useful on large projects. See the Camel documentation for more information. Starting with this version, routes defined in "routeContext" tags are displayed in the Design tab.
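A minimal sketch of the pattern (the ids are illustrative): a route defined once in a routeContext, typically in its own file, and referenced from a camelContext elsewhere:

<routeContext id="sharedRoutes" xmlns="http://camel.apache.org/schema/spring">
  <route id="audit">
    <from uri="direct:audit"/>
    <to uri="log:audit"/>
  </route>
</routeContext>

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <routeContextRef ref="sharedRoutes"/>
</camelContext>

The Design tab now renders the audit route even though it is defined outside the camelContext element.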

Usability improvement: Progress bar when "Changing the Camel version"

Since Fuse Tooling 10.1.0, it is possible to change the Camel version. If the Camel version has not been cached locally yet, or on slow internet connections, this operation can take a while. There is now a progress bar to show the progress.

Switch Camel Version with Progress Bar

Enjoy!

Jeff Maury



JBoss Tools 4.5.2.AM1 for Eclipse Oxygen.1a

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.2.AM1 (Developer Milestone 1) build for Eclipse Oxygen.1a.

Downloads available at JBoss Tools 4.5.2 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Support for route timeouts and liveness probe for OpenShift Server Adapter debugging configurations

While debugging your OpenShift deployment, you may face two different issues:

  • if you launch your test through a Web browser, it’s likely that you will access your OpenShift deployment through an OpenShift route. The problem is that, by default, OpenShift routes have a 30-second timeout for each request. So if you’re stepping through one of your breakpoints, you will get a timeout error message in the browser window even though you can still debug your OpenShift deployment, and you’re now stuck in the navigation of your OpenShift application.

  • if your OpenShift deployment has a liveness probe configured, then depending on your virtual machine capabilities or how your debugger is configured, the liveness probe may fail while you’re stepping through one of your breakpoints. OpenShift will then restart your container, and your debugging session will be destroyed.

So, from now on, when the OpenShift server adapter is started in debug mode, the following actions are performed:

  • if an OpenShift route is found that is linked to the OpenShift deployment you want to debug, the route timeout will be set or increased to 1 hour. The original or default value will be restored when the OpenShift server adapter is restarted in run mode.

  • if your OpenShift deployment has a liveness probe configured, the initialDelay field will be increased to 1 hour if the defined value for this field is lower than 1 hour. If the value of this field is greater than 1 hour, it is left intact. The original value will be restored when the OpenShift server adapter is restarted in run mode.

Fuse Tooling

Camel context parameters configurable in properties view for Camel version < 2.18

Before Camel 2.18, the Camel catalog was missing information about the Camel context. Fuse Tooling now provides this missing information, and thus allows editing Camel context parameters in the Properties view as for any other component. This is activated when no element is selected on the diagram.

Parameters in Properties view for Camel context

Usability improvement: Progress bar when "Changing the Camel version"

Since Fuse Tooling 10.1.0, it is possible to change the Camel version. If the Camel version has not been cached locally yet, or on slow internet connections, this operation can take a while. There is now a progress bar to show the progress.

Switch Camel Version with Progress Bar

Enjoy!

Jeff Maury



JBoss Tools 4.5.1.AM3 for Eclipse Oxygen.1

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.1.AM3 (Developer Milestone 3) build for Eclipse Oxygen.1.

Downloads available at JBoss Tools 4.5.1 AM3.

What is New?

Full info is at this page. Some highlights are below.

OpenShift.io

OpenShift.io login

It is possible to log in to OpenShift.io from JBoss Tools. A single account is maintained per workspace. Once you have initially logged in to OpenShift.io, all needed account information (tokens, …) will be stored securely.

There are two ways to log in to OpenShift.io:

  • through the UI

  • via a third party service that will invoke the proper extension point

UI based login to OpenShift.io

In the toolbar, you should see a new icon Toolbar. Click on it to launch the login.

If this is the first time you are logging in to OpenShift.io, or if your OpenShift.io account tokens are no longer valid, a browser will be launched with the following content:

osio browser

Enter your RHDP login and the browser will then auto-close and an extract (for security reasons) of the OpenShift.io token will be displayed:

osio token dialog

This dialog will be also shown if an OpenShift.io account was configured in the workspace and the account information is valid.

Via extension point

The OpenShift.io integration can be invoked by a third party service through the org.jboss.tools.openshift.io.code.tokenProvider extension point. This extension point will perform the same actions as the UI but basically will return an access token for OpenShift.io to the third party service. A detailed explanation of how to use this extension point is described here: Wiki page

You can display the account information using the Eclipse JBoss Tools → OpenShift.io preference node. If your workspace does not contain an OpenShift.io account yet, you should see the following:

osio preferences

If you have a configured OpenShift.io account, you should see this:

osio preferences1

CDK 3.2 Beta Server Adapter

A new server adapter has been added to support the next generation of CDK 3.2. This is Tech Preview in this release as CDK 3.2 is Beta. While the server adapter itself has limited functionality, it is able to start and stop the CDK virtual machine via its minishift binary. Simply hit Ctrl+3 (Cmd+3 on OSX) and type CDK, that will bring up a command to setup and/or launch the CDK server adapter. You should see the old CDK 2 server adapter along with the new CDK 3 one (labeled Red Hat Container Development Kit 3.2+ ).

cdk3.2 server adapter

All you have to do is set the credentials for your Red Hat account, the location of the CDK’s minishift binary file, the type of virtualization hypervisor and an optional CDK profile name.

cdk3.2 server adapter1

Once you’re finished, a new CDK Server adapter will then be created and visible in the Servers view.

cdk3.2 server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new Openshift application and begin developing their AwesomeApp in a highly-replicatable environment.

cdk3.2 server adapter3
cdk3.2 server adapter4
This is Tech Preview. The implementation is subject to change, may not work with next releases of CDK 3.2 and testing has been limited.

Fuse Tooling

Global Beans: improve support for Bean references

It is now possible to set Bean references from User Interface when creating a new Bean:

Create Factory Bean Reference

Editing Bean references is also now available on the properties view when editing an existing Bean:

Edit Factory Bean Reference

Additional validation has been added to help users avoid mixing Beans defined with class names and Beans defined referencing other beans.

Enjoy!

Jeff Maury



JBoss Tools 4.5.1.AM2 for Eclipse Oxygen.1

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.1.AM2 (Developer Milestone 2) build for Eclipse Oxygen.1.

Downloads available at JBoss Tools 4.5.1 AM2.

What is New?

Full info is at this page. Some highlights are below.

OpenShift 3

New command to tune resource limits

A new command has been added to tune resource limits (CPU, memory) on an OpenShift deployment. It’s available for a Service, a DeploymentConfig, a ReplicationController or a Pod.

To activate it, go to the OpenShift Explorer, select the OpenShift resource, right-click, and select Edit resource limits. The following dialog will show up:

edit resource limits

After you change the resource limits for this deployment, it will be updated and new pods will be spawned (this does not apply to a ReplicationController).

edit resource limits1

Discover Docker registry URL for OpenShift connections

When an OpenShift connection is created, the Docker registry URL is empty. When the CDK is started through the CDK server adapter, an OpenShift connection is created, or updated if a matching one is found. But if you have several OpenShift connections, the remaining ones will be left with an empty URL.

You can find the matching Docker registry URL when editing the OpenShift connection through the Discover button:

edit connection discover

Click on the Discover button and the Docker registry URL will be filled if a matching started CDK server adapter is found:

edit connection discover1

CDI Tools

CDI 2.0

CDI Tools now support CDI 2.0 projects. If your CDI project (with enabled CDI support) has CDI 2.0 jars in its classpath, CDI Tools will recognize it as a CDI 2.0 project automatically. There is no need for any special settings to distinguish CDI 1.0 or CDI 1.1 from CDI 2.0 in CDI Tools.

Observer methods using the new javax.enterprise.event.ObservesAsync annotation are now validated according to the CDI specification.

Fuse Tooling

Apache Karaf 4.x Server Adapter

We are happy to announce the addition of new Apache Karaf server adapters. You can now download and install Apache Karaf 4.0 and 4.1 from within your development environment.

Apache Karaf 4x Server Adapters

Switch Apache Camel Version

You can now change the Apache Camel version used in your project. To do so, invoke the context menu of the project in the Project Explorer and navigate to the Configure menu. There you will find the menu entry Change Camel Version, which will guide you through the process.

Switch Camel Version

Improved Validation

The validation in the editor has been improved to find containers which lack mandatory child elements (for instance, a Choice without any child element).

Improved validation

Enjoy!

Jeff Maury


by jeffmaury at October 05, 2021 05:54 AM

JBoss Tools 4.5.0.AM2 for Eclipse Oxygen.0

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.5.0.AM2 (Developer Milestone 2) build for Eclipse Oxygen.0.

Downloads available at JBoss Tools 4.5.0 AM2.

What is New?

Full info is at this page. Some highlights are below.

OpenShift 3

OpenShift server and Kubernetes server versions displayed

The OpenShift server and Kubernetes server versions are now displayed in the OpenShift connection properties. This information is retrieved using an unauthenticated request, so login to the OpenShift cluster is not required. This allows users to verify which OpenShift and Kubernetes versions they are interacting with.

Here is an example based on an OpenShift connection against CDK3:

openshift k8s versions

If the cluster is not started or accessible, then no values are displayed:

openshift k8s versions1

Docker

New Security Options

Support has been added for specifying a security option profile when launching commands in a container. This can be done in lieu of specifying privileged mode. For example, to run gdbserver, one can specify "seccomp:unconfined" to allow ptrace commands to be run by the gdb server.

The Run Image Wizard has been modified to allow specifying an unconfined seccomp profile to replace the default seccomp profile.

LinuxToolsUnconfinedOption

Security options are also now shown in the Properties View.

LinuxToolsUnconfinedProperty

Fuse Tooling

Bean Support

We are happy to finally announce support for Beans (Spring / Blueprint).

Using the Route Editor you can now access Spring / Blueprint Beans in your Camel Context through the Configurations tab.

Configurations tab in Editor

In the Configurations tab you can see all global configuration elements of your Camel Context. You can Add, Edit and Delete elements using the buttons on the right side.

Configurations tab content

By clicking the Add or Edit button, a wizard will be opened to guide you through the creation of the Bean.

New Bean wizard

In the wizard you can select an existing bean class from your project or create a new bean class. You can also specify constructor arguments and bean properties. Once created you can then modify the properties of that Bean inside the Properties view.

alt

Freemarker

Freemarker component deprecation

The Freemarker component has been marked as deprecated, as its source code is no longer maintained. It is still available in Red Hat Central but may be removed in the future.

Enjoy!

Jeff Maury


by jeffmaury at October 05, 2021 05:54 AM

JBoss Tools 4.21.0.AM1 for Eclipse 2021-09

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.21.0.AM1 (Developer Milestone 1) build for Eclipse 2021-09.

Downloads available at JBoss Tools 4.21.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Operator based services

When developing cloud native applications on OpenShift, developers may need to launch services (databases, messaging systems,…​) that the application under development connects to. The OpenShift tooling previously allowed launching such services, but it was based on the service catalog, which is no longer available on OpenShift 4.

The new feature is based on operators, which are the DevOps way of installing and managing software on Kubernetes clusters. So when you want to launch a service for your application, you choose from the list of operators installed on your cluster and then select the type of deployment you want.

In the following example, there are two operators installed on our cluster: the Strimzi operator for setting up Kafka clusters on Kubernetes and a PostgreSQL operator.

For each operator, we can select the type of deployment we want to set up.

operator based services1

After you’ve entered the name of your service, it will appear in the application explorer view:

operator based services2

Hibernate Tools

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.5 runtime provider now incorporates Hibernate Core version 5.5.7.Final and Hibernate Tools version 5.5.7.Final.

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.22.Final and Hibernate Tools version 5.3.22.Final.

Enjoy!

Jeff Maury


by jeffmaury at October 05, 2021 05:54 AM

JBoss Tools 4.19.1.AM1 for Eclipse 2021-03

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.19.1.AM1 (Developer Milestone 1) build for Eclipse 2021-03.

Downloads available at JBoss Tools 4.19.1 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Improved OpenShift Application explorer

When the OpenShift cluster has no applications or projects, the user is required to create them. However, it may not be obvious that the corresponding action is available in a submenu of the New context menu.

So now, a link with an explanatory message is provided within the tree.

If no projects are available, the user will be guided to create one:

application explorer enhanced navigation1

If no applications are available in a project, the user will be guided to create a new component:

application explorer enhanced navigation2

Hibernate Tools

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.32.Final and Hibernate Tools version 5.4.32.Final.

Enjoy!

Jeff Maury


by jeffmaury at October 05, 2021 05:54 AM

JBoss Tools 4.19.0.AM1 for Eclipse 2021-03

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.19.0.AM1 (Developer Milestone 1) build for Eclipse 2021-03.

Downloads available at JBoss Tools 4.19.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Browser based login to an OpenShift cluster

When it comes to logging in to a cluster, OpenShift Tools supported two different authentication mechanisms:

  • user/password

  • token

The drawback is that these do not cover clusters where a more advanced and modern authentication infrastructure is in place. So it is now possible to log in to the cluster through an embedded web browser.

In order to use it, go to the Login context menu from the Application Explorer view:

weblogin1

Click on the Retrieve token button and an embedded web browser will be displayed:

weblogin2

Complete the workflow until you see a page that contains Display Token:

weblogin3

Click on Display Token:

The web browser is automatically closed and you’ll notice that the retrieved token has been set in the original dialog:

weblogin4

Devfile registries management

Since JBoss Tools 4.18.0.Final, the preferred way of developing components is based on a devfile, a YAML file that describes how to build the component and, if required, launch other containers alongside it. When you create a component, you need to specify a devfile that describes it: either your component source contains its own devfile, or you need to pick a devfile related to your component. In the latter case, OpenShift Tools supports devfile registries, which contain sets of different devfiles. There is a default registry (https://github.com/odo-devfiles/registry), but you may want to have your own registries. It is now possible to add and remove registries as you wish.

The registries are displayed in the OpenShift Application Explorer under the Devfile registries node:

registries1

Please note that expanding the registry node will list all devfiles from that registry with a description:

registries2

A context menu on the Devfile registries node allows you to add new registries; a context menu on a registry node allows you to delete it.

Devfile enhanced editing experience

Although devfile registries can provide ready-to-use devfiles, there may be advanced cases where users need to write their own devfile. As the syntax is quite complex, the YAML editor has been enhanced to provide:

  • syntax validation

  • content assist

Support for Python based components

Python-based components were supported but debugging was not possible. This release brings integration between the Eclipse debugger and the Python runtime.

Hibernate Tools

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.29.Final and Hibernate Tools version 5.4.29a.Final.

Server Tools

Wildfly 23 Server Adapter

A server adapter has been added to work with Wildfly 23.

EAP 7.4 Beta Server Adapter

The server adapter has been adapted to work with EAP 7.4 Beta.

Enjoy!

Jeff Maury


by jeffmaury at October 05, 2021 05:54 AM

JBoss Tools 4.18.0.AM1 for Eclipse 2020-09

by jeffmaury at October 05, 2021 05:54 AM

Happy to announce 4.18.0.AM1 (Developer Milestone 1) build for Eclipse 2020-09.

Downloads available at JBoss Tools 4.18.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

Quarkus

Support for codestarts in New Quarkus project wizard

code.quarkus.io has added a new codestart option that allows extensions supporting this feature to contribute sample code to the generated project. It is enabled by default and is accessible from the second step of the wizard:

quarkus30

OpenShift

Devfile based deployments

The Application Explorer view is now based on odo 2.x, which allows deployments to be based on a devfile (a developer-oriented manifest file). The components from the default odo registry are listed along with legacy S2I components:

devfile

It is also now possible to bootstrap from an empty project, as components from the registry may expose starter projects (sample code that initializes your empty project).

devfile1

Hibernate Tools

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.25.Final and Hibernate Tools version 5.4.25.Final.

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.20.Final and Hibernate Tools version 5.3.20.Final.

Server Tools

Wildfly 22 Server Adapter

A server adapter has been added to work with Wildfly 22.

CDI Tools

Eclipse MicroProfile support

CDI Tools now support Eclipse MicroProfile. MicroProfile-related assets are checked against @Inject injection points and are validated according to rules specified in the various Eclipse MicroProfile specifications.

Forge Tools

Forge Runtime updated to 3.9.8.Final

The included Forge runtime is now 3.9.8.Final.

Enjoy!

Jeff Maury


by jeffmaury at October 05, 2021 05:54 AM

The Thrill of Conquest

by Donald Raab at September 30, 2021 03:50 PM

A poem

Cadillac Mountain, Acadia National Park, Maine — Photo by Donald Raab
Cadillac Mountain, Acadia National Park, Maine — Photo by Donald Raab

Background

I wrote this poem in 1988 and it was published in my high school literary magazine.

The Thrill of Conquest

Snowflakes drop upon our brows,
slush beneath our feet.
The air around us freezes the tips of our gloves;
Still, we are determined to conquer this last great mountain.
Just one foot in front of the other,
thinking thoughts of hot cocoa brewing on a sizzling stove.
Keep moving, because surely if we stop,
the end will consume us.
Beneath our necks, winter’s chill makes its home.
We cannot go on much longer.
At long last! The apogee is in sight!
Thank the Lord for small miracles.
We reach the top, and slump to the ground in exhaustion.
Our flag is set in the ground, claiming this mountain ours.
All of a sudden,
the wind carries the sound of a ghostly voice,
“Children, dinner is ready.”
Oh well, so much for another adventure.
We jump on our sleds,
and slide down our hill into the backyard.

— Donald Raab

Thank you for reading! I took the pictures this past weekend on a trip to Maine. I hope you enjoy them.

Sunset, Cadillac Mountain, Acadia National Park, Maine — Photo by Donald Raab
Sunset, Cadillac Mountain, Acadia National Park, Maine — Photo by Donald Raab

by Donald Raab at September 30, 2021 03:50 PM

Completed Kafka Connectivity

September 29, 2021 12:00 AM

Consuming messages from Apache Kafka in Eclipse Ditto

Eclipse Ditto has supported publishing events and messages to Apache Kafka for quite some time.
The time has come to support consuming as well.

A Kafka connection behaves slightly differently from other consuming connections in Ditto.
The following aspects are special:

Scalability

Kafka scales horizontally by using partitions: the higher the load, the more partitions should be configured.
On the consumer side, this means that a so-called consumer group can have as many consuming clients as there are partitions.
Each partition is then consumed by one client.

This matches Ditto’s connection scaling perfectly: each Ditto connection forms such a consumer group.
For a connection, there are two ways of scaling:

  1. clientCount on connection level
  2. consumerCount on source level

A connection client bundles all consumers for all sources and all publishers for all targets. It is guaranteed that for a single connection, only one client is instantiated per instance of the connectivity microservice.
This is how Ditto provides horizontal scaling.

Therefore, the clientCount should never be configured higher than the number of available connectivity instances.

If the connectivity instance is not fully used by a single connection client, the consumerCount can be used to scale a connection’s consumers vertically. The consumerCount of a source indicates how many consumers should be started for a single connection client for this source. Each consumer is a separate consuming client in the consumer group of the connection.

This means that the number of partitions should be greater than or equal to clientCount multiplied by the highest consumerCount of any source.
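As a quick sanity check of this sizing rule, here is a small illustrative calculation (the numbers are hypothetical, not defaults from Ditto):

```java
public class PartitionSizing {
    public static void main(String[] args) {
        int clientCount = 2;       // hypothetical: connection-level clientCount
        int maxConsumerCount = 3;  // hypothetical: highest consumerCount of any source

        // Each client starts maxConsumerCount consumers, and every consumer in the
        // consumer group needs at least one partition to receive work.
        int minPartitions = clientCount * maxConsumerCount;

        System.out.println("Configure at least " + minPartitions + " partitions"); // 6
    }
}
```

With fewer partitions than consumers, the surplus consumers would simply sit idle in the consumer group.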

Backpressure and Quality of Service

Usually there is an application connected to Ditto which consumes either messages or events from devices connected to Ditto.
These messages and events can now be issued by devices via Kafka.
What happens when the connected application temporarily can’t process the messages emitted by Ditto at the rate the devices publish them via Kafka into Ditto?
The answer is: “It depends.”

There are two steps for increasing the delivery guarantee of messages to the connected application.

  1. Make use of acknowledgements
  2. Configure the qos for the source to 1

The first introduces backpressure from the consuming application to the Kafka consumer in Ditto.
This means that the consumer automatically slows down consuming messages when the performance of the connected application degrades. This gives the application time to scale up while the messages are buffered in Kafka.

The second step can be used when it’s necessary to ensure that the application has not just received but also successfully processed the message. If the message could not be processed successfully, or if the acknowledgement didn’t arrive in time, the Kafka consumer will restart consuming messages from the last successfully committed offset.

Expiry

Now that we know about backpressure, we also know that messages can remain in Kafka for some time.
This time can be limited by Kafka’s retention time, but that would apply to all messages in the same way. What if some messages become invalid after some time, but others don’t?

Ditto provides message expiry on a per-message level. That way, Ditto filters out such expired messages but still processes all others.

We embrace your feedback

Did you recognize a possible match of Ditto for some of your use cases? Do you miss something in this new feature?
We would love to get your feedback.



Ditto


The Eclipse Ditto team


September 29, 2021 12:00 AM

Life in a Beautiful Day

by Donald Raab at September 28, 2021 04:54 PM

What my cousin Chris taught me about living

My cousin Chris on the London Eye, 2004

It’s a Beautiful Day

I hope this story reminds you of one positive thing, every single day.

My cousin Chris passed away on September 28, 2012. This is the first time I am writing about him. I don’t really know what to write to be honest. Chris was less a cousin, and more of a brother to me. A brother from another mother he would say.

Chris died a year before my wife was diagnosed with Leukemia. He was only 42 years old. My memories of him and the life he lived brought me comfort and strength in the hardest times, as my wife fought her war against AML.

Chris loved the U2 song “Beautiful Day.” Every time I saw him, he would happily and emphatically say these words to me.

It’s a Beautiful Day

I love the song, and think fondly of Chris every time I hear it. It will forever be his song. I enjoy listening and singing along to it and smiling as I think of him.

Surprisingly, I had never seen the video for the song until today. I just watched the official music video for the very first time. I did not know there might be more to the song than the amazing melody and motivational lyrics. I think there will be a permanent palm print on my forehead after today.

The reason I say this is because Chris was an airline attendant for most of his career.


After watching this video, the song has an even stronger bond to Chris for me. I know Chris is smiling down at me as I learned this today.

Live a Beautiful Life every single Beautiful Day

Chris would always make me smile. Even during the darkest times of his short life, through all the battles he fought, Chris lived filled with happiness, love and with his motto ready to be shared with all.

Chris saw more of the world than most of us probably ever will. He came to visit my family when we lived in London in 2004, which is when I took the two pictures I have included in this post. Chris knew how to make the most out of a one or two day layover in a city. He knew how to live a beautiful life in a single day.

The last time I saw Chris was in NYC, the summer that he passed away. I have the last conversation I had with Chris saved on my phone from nine years ago. I was hoping to arrange a visit with him in Houston where he lived. He passed away before I got the chance. Our last words are a constant reminder to me, to live my life each day as a Beautiful Day.

Chris: Thank you buddy… I promise I will let u know!!! Love you
Me: Love U2… The band is great as well. ;)

If there is a heaven… I’m certain Chris is there enjoying every beautiful day. Chris was an angel on earth, possibly just dropping by for a quick layover to make sure we all learn how to live life in the moments we have. His life was a gift, and I am lucky to have been a part of it.

Wherever you are Chris, I love you, and I miss you.

It’s a Beautiful Day
My cousin Chris, enjoying a Beautiful Day

by Donald Raab at September 28, 2021 04:54 PM

Eclipse Foundation and OpenAtom Foundation Forge a Strategic Initiative Focused on OpenHarmony OS

by Jacob Harris at September 28, 2021 08:00 AM

Brussels and Beijing, September 28, 2021 – The Eclipse Foundation, Europe’s biggest open source organization, and the OpenAtom Foundation, China’s first open source foundation, today announced their intent to form a collaborative partnership focused on OpenAtom’s OpenHarmony Operating System (OS). The shared goal of this partnership is to jointly build a worldwide, vendor-neutral, and independent open source community, allowing developers, vendors, and system integrators to increase their global reach in a single and unified ecosystem.

OpenHarmony is a next-generation, distributed multi-kernel operating system allowing machines and connected objects to work together and share software as well as hardware. The mission of this future collaboration between the Eclipse Foundation and the OpenAtom Foundation will be to further the development and adoption of OpenHarmony. In support of this, the Eclipse Foundation will be establishing additional open source projects and a working group of interested parties to build, deliver, and promote an OpenHarmony-compatible implementation for the global market.

“The open source model has clearly established itself as the best possible means for global communities to collaborate around a shared, open technology,” said Mike Milinkovich, executive director at the Eclipse Foundation. “Through our collaboration with OpenAtom, we’re hoping to leverage innovation in both Europe and China to create a global solution that everyone can leverage.”

A warm welcome to the collaboration was also expressed by Tao Yang, chairman of the board of directors at the OpenAtom Foundation, “A unified and diversified open source community is crucial to the sustainable development of OpenHarmony. We are looking forward to making history with the Eclipse Foundation and developers around the world.”

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org

About OpenAtom Foundation

The OpenAtom Foundation is dedicated to the promotion of public welfare for the global open source community. OpenAtom, officially founded in Beijing in June 2020, was jointly initiated by leading companies such as Alibaba, Baidu, Huawei, Inspur, Qihoo, Tencent, and China Merchants Bank. OpenAtom endeavors to build an open framework for industry and information technology, to develop international open source communities, to improve the efficiency of industrial collaboration, and to empower all industries by offering neutral management of intellectual property, consultation on open source strategy and compliance, operation and marketing services for open source projects, and education and training with regard to open source software, hardware, chips, and content.

Third-party trademarks mentioned are the property of their respective owners.

Media contacts: 

Schwartz Public Relations for the Eclipse Foundation, AISBL
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551
OpenAtom Foundation
Nathan Zhong, CMO of the OpenAtom Foundation
nathan@openatom.org 
 


by Jacob Harris at September 28, 2021 08:00 AM

Eclipse Theia Blueprint Beta 2 is released

by Jonas Helming, Maximilian Koegel and Philip Langer at September 27, 2021 10:59 AM

We are happy to announce the beta 2 release of Eclipse Theia Blueprint. Theia Blueprint is a template application allowing you...

The post Eclipse Theia Blueprint Beta 2 is released appeared first on EclipseSource.


by Jonas Helming, Maximilian Koegel and Philip Langer at September 27, 2021 10:59 AM

Announcing Eclipse Ditto Release 2.1.0

September 27, 2021 12:00 AM

The Eclipse Ditto teams announces availability of Eclipse Ditto 2.1.0.

As the first minor release of the 2.x series, it adds a lot of new features, the highlight surely being the full integration of Apache Kafka as a Ditto managed connection.

Adoption

Companies are willing to show their adoption of Eclipse Ditto publicly: https://iot.eclipse.org/adopters/?#iot.ditto

From our various feedback channels, however, we know of more adoption.
If you are making use of Eclipse Ditto, it would be great to show this by adding your company name to that list of known adopters.
In the end, that’s one of the main ways of measuring the success of the project.

Changelog

The main improvements and additions of Ditto 2.1.0 are:

  • Support consuming messages from Apache Kafka -> completing the Apache Kafka integration as fully supported Ditto managed connection type
  • Conditional requests (updates + retrievals)
  • Enrichment of extra fields for ThingDeleted events
  • Support for using (HTTP) URLs in Thing and Feature “definition” fields, e.g. linking to WoT (Web of Things) Thing Models
  • HMAC based authentication for Ditto managed connections
  • SASL authentication for Azure IoT Hub
  • Publishing of connection opened/closed announcements
  • Addition of a new “misconfigured” status category for managed connections, indicating that e.g. credentials are wrong or the connection to the endpoint could not be established due to configuration problems
  • Support “at least once” delivery for policy subject expiry announcements

The following notable fixes are included:

  • Fix “search-persisted” acknowledgement not working for thing deletion
  • Fix reconnect loop to MQTT brokers when using separate MQTT publisher client

The following non-functional work is also included:

  • Support for tracing, reporting traces to an “OpenTelemetry” endpoint
  • Improving cluster failover and coordinated shutdown + rolling updates
  • Logging improvements, e.g. configuring a logstash server to send logs to or more options to configure a logging file appender
  • Improving background deletion of dangling DB journal entries / snapshots based on the current MongoDB load
  • Improving search update by applying “delta updates” saving lots of bandwidth to MongoDB
  • Reducing cluster communication for search updates using a smart cache

Please have a look at the 2.1.0 release notes for a more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Ditto JavaScript client release was published on npmjs.com:

The Docker images have been pushed to Docker Hub:



Ditto


The Eclipse Ditto team


September 27, 2021 12:00 AM

Into the Unknown

by Donald Raab at September 25, 2021 09:19 PM

A blog about a lesser known Eclipse Collections method

Photo by Mario Dobelmann on Unsplash

What am I getting into now?

Eclipse Collections has a method named into that was added in the 8.0 release. The signature of into is defined as follows on the RichIterable interface.

The method into on RichIterable
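The signature itself appears as an image in the original post and is not reproduced here. Its shape can be sketched with a simplified, self-contained stand-in (this is illustrative code, not the actual Eclipse Collections source):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Simplified, hypothetical stand-in for RichIterable, showing only the into pattern.
interface RichIterableSketch<T> extends Iterable<T> {
    // Adds every element of this iterable to the target and returns the target,
    // so the call site keeps the target's concrete collection type.
    default <R extends Collection<T>> R into(R target) {
        for (T each : this) {
            target.add(each);
        }
        return target;
    }
}

// A minimal adapter backed by ArrayList to demonstrate the method.
class ListAdapter<T> extends ArrayList<T> implements RichIterableSketch<T> {
    ListAdapter(Collection<T> source) { super(source); }
}

public class IntoSketch {
    public static void main(String[] args) {
        var source = new ListAdapter<>(List.of(1, 2, 3));
        List<Integer> target = source.into(new ArrayList<>());
        System.out.println(target); // [1, 2, 3]
    }
}
```

The key design point is the generic return type: because into returns the target itself, callers get back a CopyOnWriteArrayList, Stack, or any other Collection subtype without casting.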

This method can be used to transfer the contents of any RichIterable “into” a target collection. The following examples use collection types that are “unknown” to the Eclipse Collections converter methods, but that work just fine with into.

@Test
public void intoTheUnknown()
{
    var integers = Interval.oneTo(10);

    var into1 = integers.into(new CopyOnWriteArrayList<>());
    Assertions.assertEquals(integers, into1);

    var into2 = integers.into(new CopyOnWriteArraySet<>());
    Assertions.assertEquals(integers.toSet(), into2);

    var into3 = integers.into(new ArrayDeque<>());
    Assertions.assertEquals(integers.toBag(), Bags.mutable.withAll(into3));

    var into4 = integers.into(new LinkedList<>());
    Assertions.assertEquals(integers, into4);

    var into5 = integers.into(new Stack<>());
    Assertions.assertEquals(integers, into5);

    var checkedList = Collections.checkedList(new ArrayList<>(), Integer.class);
    var into6 = integers.into(checkedList);
    Assertions.assertEquals(integers, into6);

    var checkedSet = Collections.checkedSet(new HashSet<>(), Integer.class);
    var into7 = integers.into(checkedSet);
    Assertions.assertEquals(integers.toSet(), into7);
}

The into method is also useful when you want to drain multiple source collections into the same target collection.

@Test
public void intoList()
{
    var target = Interval.oneTo(3).into(new ArrayList<>());
    Interval.fromTo(4, 6).into(target);
    Interval.fromTo(7, 10).into(target);
    Assertions.assertEquals(Interval.oneTo(10), target);
}

That’s it. That’s the blog. I hope you enjoyed this short technical Java w/ Eclipse Collections blog.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at September 25, 2021 09:19 PM

Let me repeat myself

by Donald Raab at September 25, 2021 05:13 AM

A very short Java blog.

I’ve been writing haiku for the past month, to take a break from technical blog writing. I thought I would get back into writing technical blogs with a really short Java one. The code example pictured above demonstrates the repeat method that was added to the String class in Java 11. The code above outputs the following.

Bart Simpson would have loved to have this API when he was forced to write a sentence on the chalk board one hundred times describing something he would not do.
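Since the original snippet was shared as an image, here is a minimal sketch of what such code might look like (the chalkboard sentence is invented; the original image is not reproduced here):

```java
public class ChalkBoard {
    public static void main(String[] args) {
        // String.repeat(int) was added to java.lang.String in Java 11.
        String line = "I will not waste chalk.\n";
        String board = line.repeat(100);

        // 100 copies of the sentence, one per line, just like Bart's chalkboard.
        System.out.print(board);
        System.out.println(board.lines().count()); // prints 100
    }
}
```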

That’s it. That’s the blog. I hope you enjoyed this short technical Java blog.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at September 25, 2021 05:13 AM

Eclipse IoT: 10 Years of Connecting the World One Device at a Time

by Mike Milinkovich at September 23, 2021 04:15 PM

It’s been 10 years since the Eclipse IoT Working Group was first established as the M2M Industry Working Group. I want to sincerely thank everyone who has helped make Eclipse IoT the leading community for open source IoT technology innovation and collaboration.

To celebrate this anniversary and a decade of achievements in open source IoT technologies, the Eclipse IoT community has a number of initiatives planned over the coming weeks. Keep an eye on the Eclipse IoT website, our blogs, newsletter, social posts, and your email for more information about planned activities, commemorative content, and tributes to key community achievements.

Powering the World’s Leading Commercial IoT Solutions

Today, the Eclipse IoT ecosystem is the largest open source IoT community in the world with 47 working group members, 47 projects, 360 contributors, and more than 32 million lines of code.

It’s impossible to overstate the impact this fast-growing community has had on commercial adoption of IoT solutions on a global scale. With dozens of IoT projects across device, gateway, cloud, security, edge, and other domains, the Eclipse IoT ecosystem provides easy access to all of the building blocks needed to develop end-to-end IoT solutions.

This has all been made possible by our community members. At this 10 year milestone, we want to recognize two founding members of the original working group—IBM and Eurotech—that continue to actively contribute to, and drive, Eclipse IoT technologies. Over the years, these innovators have been joined by dozens of additional member organizations, large and small, all of whom see the value that open innovation and collaboration bring to their organizations.

In addition to the original founding members, the current Eclipse IoT ecosystem now includes globally recognized players such as Bosch.IO, Red Hat, Huawei, Intel, Nokia, SAP, and Siemens, as well as smaller industrial IoT (IIoT) specialists such as Aloxy, Cedalo, itemis, and Kynetics; and edge IoT innovators such as ADLINK Technology and Edgeworx.

This broad and diverse mix of Eclipse IoT ecosystem participants has led to an extremely vibrant community that has helped drive commercial innovation and adoption at scale. As our IoT case studies highlight, Eclipse IoT members of all sizes and types are benefitting from new relationships, new business and market opportunities, and faster growth.

A Brief Word About Our IoT & Edge Research

Our 2021 IoT & Edge Computing Commercial Adoption Survey confirms that organizations clearly recognize the value of open source technologies for IoT solutions. Nearly 40 percent of survey respondents are using or evaluating the use of open source solutions exclusively, while another 35 percent are looking at a mix of open source and proprietary components. If you haven’t had a chance to read the full survey report, you can download it here.

We recently launched the annual IoT & Edge Developer Survey. Be sure to participate in what has become one of the leading research reports within the IoT and edge computing industries. Participate now.

Congratulations to 10 Great Years and Here’s to the Next Decade!

I truly believe these first 10 years are just the beginning of what the dedicated and growing Eclipse IoT community will achieve through open source innovation and collaboration. I’m very much looking forward to seeing what comes next.

To learn more about the benefits of membership in Eclipse IoT, visit the working group website.


by Mike Milinkovich at September 23, 2021 04:15 PM

Eleven

by Donald Raab at September 23, 2021 05:42 AM

A haiku

Photo by Mel Elías on Unsplash

This is how many
Haiku I write before I
Write a new tech blog.

© Donald Raab

I have a week left to write a technical blog for September. Writing haiku has been a nice break for me since I wrote five technical blogs in August. I am driving to Maine this weekend and will hopefully find some inspiration for my next technical blog. Thank you for reading my haiku the past few weeks!


by Donald Raab at September 23, 2021 05:42 AM

Support conditional requests for things resources

September 23, 2021 12:00 AM

With the upcoming release of Eclipse Ditto version 2.1.0 it will be possible to execute conditional requests on things and their sub-resources.

Conditional requests for things resources

Ditto now supports conditional requests on things and all of their sub-resources, based on a condition specified in the request. This functionality can be used via the HTTP API with an HTTP header or query parameter, as well as via the Ditto protocol and the Ditto Java Client. Examples of all three approaches are provided in this blog post.

With the new functionality, it is possible to retrieve, update, and delete things and all of their sub-resources based on a given condition. This is useful if, for example, you want to update a feature property of your thing, but only if the thing has a specific attribute set.

To be more concrete, let’s say we have a thing with an attribute location, and we only want to update the temperature status of the feature water-tank to 45.5 if the location is “Wonderland”. This can be achieved with the following HTTP request:

PUT /api/2/things/org.eclipse.ditto:coffeebrewer/features/water-tank/properties/status/temperature?condition=eq(attributes/location,"Wonderland")
45.5

Conditions can be specified using RQL syntax to check if a thing has a specific attribute or feature property value.
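For illustration, here are a few more condition expressions in RQL syntax against the example thing shown later in this post; exact operator availability should be checked against the Ditto RQL documentation:

```
eq(attributes/location,"Wonderland")
exists(features/water-tank)
and(gt(features/water-tank/properties/status/waterAmount,500),eq(attributes/location,"Wonderland"))
```

The first checks a single attribute for equality, the second only requires the feature to be present, and the third combines two comparisons that must both hold.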

If the condition does not match the actual state of the thing, the request will fail with HTTP status code 412 - Precondition Failed. In this case, no event will be emitted either.

If the given condition is invalid, the request will fail with HTTP status code 400 - Bad Request.

More documentation for this feature can be found here: Conditional Requests

Permissions for conditional requests

In order to execute a conditional request, the authorized subject needs to have WRITE permission at the resource that should be changed by the request.

Additionally, the authorized subject needs to have READ permission at the resource used in the specified condition. Given the condition from the introduction condition=eq(attributes/location,"Wonderland"), read access on the single attribute would be sufficient. However, the condition can also be more complex, or include other sub-structures of the thing. Then of course, the authorized subject needs READ permission on all parameters of the specified condition.

Examples

The following sub-sections will show how to use conditional requests via the HTTP API, the Ditto protocol, and the Ditto Java Client.

To demonstrate the new conditional request, we assume that the following thing already exists:

{
  "thingId": "org.eclipse.ditto:coffeebrewer",
  "policyId": "org.eclipse.ditto:coffeebrewer-policy",
  "definition": "org.eclipse.ditto:coffeebrewer:0.1.0",
  "attributes": {
    "manufacturer": "ACME demo corp.",
    "location": "Wonderland",
    "serialno": "42",
    "model": "Speaking coffee machine"
  },
  "features": {
    "coffee-brewer": {
      "definition": ["org.eclipse.ditto:coffeebrewer:0.1.0"],
      "properties": {
        "brewed-coffees": 0
      }
    },
    "water-tank": {
      "properties": {
        "configuration": {
          "smartMode": true,
          "brewingTemp": 87,
          "tempToHold": 44.5,
          "timeoutSeconds": 6000
        },
        "status": {
          "waterAmount": 731,
          "temperature": 44.2,
          "lastModified": "2021-09-23T07:01:56Z"
        }
      }
    }
  }
}

Condition based on last modification

In this example, the water-tank’s temperature should only be updated if it was lastModified after “2021-08-25T12:38:27”.

Permissions to execute the example

For this example, the authorized subject could have READ and WRITE permissions on the complete thing resource. However, they are only necessary on the path thing:/features/water-tank/properties/status, because both the temperature and the lastModified field referenced in the condition are located there.

Conditional requests via HTTP API

Using the HTTP API, the condition can be specified either via an HTTP header or via an HTTP query parameter. In this section, we will show how to use both options.

Conditional request with HTTP Header

curl -X PATCH -H 'Content-Type: application/merge-patch+json' -H 'condition: gt(features/water-tank/properties/status/lastModified,"2021-09-23T07:00:00Z")' /api/2/things/org.eclipse.ditto:coffeebrewer/features/water-tank/properties/status -d '{ "temperature": 45.26, "lastModified": "'"$(date --utc +%FT%TZ)"'" }'

Conditional request with HTTP query parameter

curl -X PATCH -H 'Content-Type: application/merge-patch+json' '/api/2/things/org.eclipse.ditto:coffeebrewer/features/water-tank/properties/status?condition=gt(features/water-tank/properties/status/lastModified,"2021-09-23T07:00:00Z")' -d '{ "temperature": 45.26, "lastModified": "'"$(date --utc +%FT%TZ)"'" }'

Result

After the request was successfully performed, the thing will look like this:

{
  "thingId": "org.eclipse.ditto:coffeebrewer",
  "policyId": "org.eclipse.ditto:coffeebrewer-policy",
  "definition": "org.eclipse.ditto:coffeebrewer:0.1.0",
  "attributes": {
    "manufacturer": "ACME demo corp.",
    "location": "Wonderland",
    "serialno": "42",
    "model": "Speaking coffee machine"
  },
  "features": {
    "coffee-brewer": {
      "definition": ["org.eclipse.ditto:coffeebrewer:0.1.0"],
      "properties": {
        "brewed-coffees": 0
      }
    },
    "water-tank": {
      "properties": {
        "configuration": {
          "smartMode": true,
          "brewingTemp": 87,
          "tempToHold": 44.5,
          "timeoutSeconds": 6000
        },
        "status": {
          "waterAmount": 731,
          "temperature": 45.26,
          "lastModified": "2021-09-23T07:05:36Z"
        }
      }
    }
  }
}

Conditional request via Ditto protocol

It is also possible to use conditional requests via the Ditto protocol. Applying the following Ditto command to the existing thing will lead to the same result as in the above HTTP example.

{
  "topic": "org.eclipse.ditto/coffeebrewer/things/twin/commands/modify",
  "headers": {
    "content-type": "application/json",
    "condition": "gt(features/water-tank/properties/status/lastModified,\"2021-09-23T07:00:00Z\")"
  },
  "path": "/features/water-tank/properties/status/temperature",
  "value": 45.26
}

Using conditional requests in the Ditto Java Client

Conditional requests are also supported via the upcoming Ditto Java Client version 2.1.0.

Example for a conditional update of a thing with the Ditto Java client:

String thingId = "org.eclipse.ditto:coffeebrewer";
String featureId = "water-tank";
Feature feature = ThingsModelFactory.newFeatureBuilder()
        .properties(ThingsModelFactory.newFeaturePropertiesBuilder()
            .set("status", JsonFactory.newObjectBuilder()
                .set("temperature", 45.26)
                .set("lastModified", Instant.now())
                .build())
            .build())
        .withId(featureId)
        .build();

Thing thing = ThingsModelFactory.newThingBuilder()
        .setId(thingId)
        .setFeature(feature)
        .build();

// initialize the ditto-client
DittoClient dittoClient = ... ;

dittoClient.twin().update(thing, Options.condition("gt(features/water-tank/properties/status/lastModified,'2021-09-23T07:00:00Z')"))
        .whenComplete((adaptable, throwable) -> {
            if (throwable != null) {
                LOGGER.error("Received error while sending conditional update: '{}' ", throwable.toString());
            } else {
                LOGGER.info("Received response for conditional update: '{}'", adaptable);
            }
        });

After running this code snippet, the existing thing should look like the above result for the HTTP example.

Feedback?

Please get in touch if you have feedback or questions about this new functionality.



Ditto


The Eclipse Ditto team


September 23, 2021 12:00 AM

With Deepest Regrets

by Donald Raab at September 22, 2021 04:19 AM

A haiku

Photo by Aaron Burden on Unsplash

With deepest regrets
That which you have yet to write
At death, won’t be wrote

© Donald Raab

Last year in the midst of the pandemic I watched Hamilton on Disney Plus for the first time. The lyrics “why do you write like you’re running out of time” hit home for me. So many loved ones have passed away during the pandemic, and they will never get the chance now to write down their cherished thoughts and memories. I write like I am running out of time, because none of us know how much time we will get.


by Donald Raab at September 22, 2021 04:19 AM

Diagram Editors for Web-based Tools with Eclipse GLSP

by Brian King at September 15, 2021 05:12 PM

In this article, we introduce the Eclipse Graphical Language Server Platform (GLSP), a technology to efficiently build diagram editors for web- and cloud-based tools. These diagram editors can run inside an IDE, such as Eclipse Theia or VS Code; or can be used stand-alone in any web application. Eclipse GLSP fills an important gap in the implementation of graphical editors for web-based domain-specific tools. It is an ideal next-generation solution for replacing traditional desktop technologies such as GEF and GMF. Eclipse GLSP is a very active open source project within the Eclipse Cloud Development Tools ecosystem.

Diagram Editors in the Web/Cloud

There is now a big push to migrate tools and IDEs to web technologies and run them in the cloud or via Electron on the desktop. Eclipse Theia and VS Code offer two powerful frames for supporting such an endeavour (see this comparison). In recent years, we have also seen significant innovation around enabling web-based tools, such as the language server protocol (LSP) and the debug adapter protocol (DAP). The focus of early adopters has been very clearly on enabling textual programming. However, domain-specific use cases and tools often use graphical representations and diagram editors for better comprehension and more efficient development. Eclipse GLSP fills this gap. You can consider it to be like LSP and DAP, but for diagram editors. It provides a framework and a standardized way to efficiently create diagram editors that can be flexibly embedded into tools and applications.

A Feature Rich Framework

Eclipse GLSP began in 2018 and has been very actively developed since then. Due to many industrial adopters, the framework is very feature rich. This includes standard diagram features such as nodes and edges, a palette, moving/resizing, zooming, inline editing, and compartments (see screenshot above). GLSP is targeted at diagram editors rather than “drawing boards”, so it also provides classic tool features such as undo/redo, validation, and navigation between textual artifacts and the diagram. Last but not least, GLSP allows the integration of powerful layout and routing algorithms (such as ELK) to enable auto-layouting or advanced routing (see example below). With so many features, Eclipse GLSP is more than ready to be adopted for industrial diagram editors. To learn more, please refer to this detailed feature overview of Eclipse GLSP.

©  logi.cals GmbH

One very important benefit of using a web technology stack for rendering is that there are almost no limitations on what you can actually draw. Eclipse GLSP supports adding custom shapes via SVG and CSS. As you can see in the screenshot below, you have complete freedom to design your diagram elements, including animations.

Now that we have talked about the feature set, let’s take a look under the hood and provide an overview of how GLSP actually works.

How Does it Work?

Implementing a diagram editor based on GLSP consists of two main parts. (1) The rendering component is responsible for drawing things on screen and enables user interaction. (2) The business logic component implements the actual behavior of a diagram, e.g. which nodes can be created, what connections are allowed or how to manipulate domain data on diagram changes. Eclipse GLSP cleanly encapsulates both parts using a defined protocol (the Graphical Language Server Protocol).

Source: GLSP Homepage

The server manages the diagram state and manipulations. It also connects the diagram to surrounding features. For example, the server could update a domain model or a database to represent the data of the diagram. As another example, the server can apply layout algorithms to efficiently auto-layout the diagram (see screenshot below). When implementing a custom diagram editor, you mostly need to implement a GLSP server. GLSP provides a helper framework to make this more efficient. However, due to the defined protocol, you can actually use any language for the server.

The default GLSP client is implemented using TypeScript, SVG, and CSS. It interprets the protocol messages from the server and draws the result. Performance-critical operations, such as drag and drop, are handled directly by the client. In most scenarios, the default client already covers most requirements, so when implementing a custom diagram editor, you usually only need to define how certain elements are rendered.

As you can see, the architecture of GLSP is similar to the language server protocol and the debug adapter protocol. These approaches are highly successful, as the defined split between server and client provides a lot of flexibility. It also requires much less effort to implement new diagrams, as the client is already provided by the framework. With very few lines of code, you get full-fledged diagrams integrated with your custom tool! Also see this detailed introduction to Eclipse GLSP and a minimal example diagram editor to learn more about how GLSP works.
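The client–server action pattern described above can be sketched in plain Java; note that the class and field names below are hypothetical illustrations of the pattern, not the actual GLSP API.

```java
// Hypothetical sketch of a GLSP-style action message. Actions are plain
// data objects carrying a "kind" discriminator; both sides serialize them
// to JSON and dispatch incoming actions to a matching handler.
public class CreateNodeAction {
    final String kind = "createNode";   // used by the receiver for dispatch
    final String elementTypeId;         // which palette element to create
    final double x;                     // drop position reported by the client
    final double y;

    CreateNodeAction(String elementTypeId, double x, double y) {
        this.elementTypeId = elementTypeId;
        this.x = x;
        this.y = y;
    }

    public static void main(String[] args) {
        CreateNodeAction action = new CreateNodeAction("node:task", 120.0, 80.0);
        // A real server would validate the action, update its model, and
        // answer with an updated graphical model for the client to render.
        System.out.println(action.kind + " -> " + action.elementTypeId);
    }
}
```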

Integration with Tools and IDEs

Eclipse GLSP is based on standard web technologies and can easily be integrated into any web application. A common scenario for adopters of GLSP is to integrate it into a tool or an IDE. For Eclipse Theia, VS Code, and the Eclipse desktop IDE, GLSP provides out-of-the-box integration (see screenshot below). Integration in this context means features such as an editor component that manages the dirty state, or the ability to double-click files to open diagrams. The integrations are generic and independent of the actual diagram editor implementation. As a consequence, you can provide the same diagram editor in different contexts, e.g. as part of a tool and as part of a regular web page. Please see this article about GLSP diagram editors in VS Code, in Theia and an overview of the available integrations.

Conclusion

Eclipse GLSP allows you to efficiently implement web-based diagram editors and either run them stand-alone or embed them into Eclipse Theia or VS Code. By adopting the same architectural pattern as LSP and DAP, it provides a clean separation between the visual concerns (rendering on the client) and the business logic (GLSP server). This reduces the amount of effort required, as the rendering client is already provided. It also provides the flexibility to use the server language of choice and to integrate the diagram with other components, such as a layout algorithm or any data source.

Eclipse GLSP is an active open source project within the Eclipse Cloud Development Tools ecosystem. It fills the important role of a next generation diagram editor framework for web-based tools. GLSP is built upon Eclipse Sprotty, it integrates well with EMF.cloud and obviously with Eclipse Theia and VS Code. There are several commercial adoptions of GLSP. If you are interested in trying an open example, check out the coffee editor provided by EMF.cloud.

If you want to learn more about Eclipse GLSP, check out this recent Eclipse Cloud Tool Time talk. The GLSP website provides more articles and videos and there are also professional services available for GLSP.

Finally, there will be a talk about GLSP at EclipseCon 2021, so be sure to register!


by Brian King at September 15, 2021 05:12 PM

WTP 3.23 Released!

September 15, 2021 03:01 PM

The Eclipse Web Tools Platform 3.23 has been released! Installation and updates can be performed using the Eclipse IDE 2021-09 Update Site or through the Eclipse Marketplace. Release 3.23 is included in the 2021-09 Eclipse IDE for Enterprise Java and Web Developers, with selected portions also included in several other packages. Adopters can download the R3.23 p2 repository directly and combine it with the necessary dependencies.

More news


September 15, 2021 03:01 PM

Eclipse IDE 2021-09 Is Now Available

by Shanda Giacomoni at September 15, 2021 01:27 PM

The latest Eclipse IDE release includes Java 17 support and theme and style improvements. Download the leading open platform for professional developers.

by Shanda Giacomoni at September 15, 2021 01:27 PM


The Eclipse Foundation Announces the Results of the 2021 Jakarta EE Developer Survey

by Jacob Harris at September 14, 2021 12:00 PM

BRUSSELS – September 14, 2021 – The Eclipse Foundation, one of the world’s largest open source software foundations, today announced the results of the industry’s most prominent survey for technical insights into enterprise Java, the 2021 Jakarta EE Developer Survey. The results definitively showcase significantly increased growth in the use of Jakarta EE 9 and interest in cloud native Java overall. The 2021 Jakarta EE Developer Survey Report is available to download on the  Jakarta EE website.

“Since the ‘big bang’ move to the jakarta namespace with Jakarta EE 9, enterprise Java has been experiencing something of a renaissance,” said Mike Milinkovich, executive director of the Eclipse Foundation. “With the plan for Jakarta EE 10 already formalized and the continued growth in the use of Jakarta EE 9, the cloud native future of open source enterprise Java has never looked brighter.”

The objective of this survey, now in its fourth year, is to help Java ecosystem stakeholders better understand the requirements, priorities, and perceptions of enterprise developer communities. The survey also sought to help the Java ecosystem gain a better understanding of how the cloud native world for enterprise Java is unfolding and what that means for their respective strategies and businesses. The survey was conducted from April 6 to May 31, 2021, and 940 individuals participated. Java EE 8, Jakarta EE 8, and Jakarta EE 9 have now seen 75% adoption among respondents.

Additional key findings from this year’s survey include:

  • Spring/Spring Boot continues to be the leading framework for building cloud native applications (60%), with its share increasing by 16 points (60% in 2021 up from 44% in 2020).
  • Jakarta EE is emerging as the second-place cloud native framework with 47% usage in this year’s survey.
  • MicroProfile adoption has increased to 34% (vs 29% in 2020).
  • The popularity of microservices holds steady with a nominal increase, with the usage of the microservices architecture for implementing Java systems in the cloud increasing since last year (43% in 2021 vs 39% in 2020).
  • Over 48% of respondents have either already migrated to Jakarta EE or plan to within the next 6-24 months.

As part of the report on the survey’s findings, the Eclipse Foundation has also incorporated specific recommendations for both enterprises and technology vendors and their respective enterprise Java strategies.  These include recommendations related to technology migration, application portability, balancing traditional enterprise applications with cloud native applications and more. 

The Jakarta EE community welcomes contributions and participation by all interested parties. As the Jakarta EE Working Group continues to build towards the release of Jakarta EE 10, including new cloud native functionality, this is the ideal time to join the community and have your voice heard. To learn more and to participate, all are welcome to connect with the global community at the following page: https://jakarta.ee/connect/  

Companies that value enterprise Java and would like to help shape its future can join the Jakarta EE Working Group. Membership in the working group allows enterprises and other organizations to support the sustainability of the community, participate in marketing programs, and engage directly with the community. Learn more about the benefits and advantages of membership here: https://jakarta.ee/membership/

For those organizations interested in learning more about how Jakarta EE can benefit them, the Jakarta EE Working Group has published a white paper on the features and capabilities of Jakarta EE that can help companies accelerate time-to-market for commercial offerings, increase the efficiency of software development, and transition to a cloud native future. Download it here: https://outreach.jakartaee.org/2021-developer-survey-report

Quotes from Jakarta EE Working Group Member Organizations 

IBM

“The 2021 Jakarta EE Developer Survey shows a healthy improvement in developer awareness and use of Jakarta EE across the board and is another indicator that it is the right platform for cloud-native Java innovation,” said Melissa Modjeski, IBM VP of App Platform and Integration. “As a runtime certified as both Jakarta EE and MicroProfile compatible, Open Liberty delivers on the promise of Cloud Native Java and is an essential element of accelerating open hybrid cloud adoption.”

Jelastic 

“The 2021 Jakarta EE Developer Survey results prove that cloud adoption keeps growing, as well as more and more enterprise projects decide to develop and run cloud-native applications,” said Tetiana Fydorenchyk, Jelastic VP of Marketing. “Also, considering that Jelastic was the first cloud platform with Jakarta EE 9 support, it’s satisfying to see that we helped to spread this version and make it available for those interested in the latest and upcoming improvements. Based on the research, faster pace of innovation is among the top priorities for Jakarta EE community, that’s why Jelastic PaaS will continue providing cutting-edge DevOps tools and most recent releases to simplify and accelerate adoption.”

Oracle

“We are very happy to see that the Jakarta EE developer survey findings demonstrate continued success and growth of the Jakarta EE platform,” said Tom Snyder, VP of Engineering, Enterprise Cloud Native Products at Oracle. “We are quite pleased to see the most requested feature is support for Kubernetes. We hope to bring the experience and expertise we've gained delivering and supporting Oracle Verrazzano Enterprise Container Platform to benefit technology and feature development of Jakarta EE Platform and MicroProfile. With Verrazzano, users can easily model, manage, secure, and maintain their enterprise applications developed in Jakarta EE using a wide range of platforms and frameworks running on premise, or in the cloud.”

Payara

“The 2021 Jakarta EE Developer Survey has produced a useful and encouraging set of results. They affirm both the Eclipse Foundation and Payara’s direction of innovation and provide valuable information on the priorities of our community,” said  Steve Millidge, CEO & Founder, Payara Services. “I was excited to see proof of wider mainstream adoption of Jakarta EE 8 and Jakarta EE 9, confirming community satisfaction ahead of the further innovations of Jakarta EE 10. It was also fantastic to see that Jakarta EE emerged as the second-place cloud native framework, as our own business priorities are closely aligned with Jakarta EE’s aim of providing strong business application development for the cloud. For example, we have helped customer Rakuten Card move to 100% cloud native with Jakarta EE and our new Payara Cloud project is designed specifically to provide an automated alternative to other cloud native application servers. We’re looking forward to sharing the results of the survey with our community and wider team and using it to shape Payara’s future plans.”

Tomitribe

“The 2021 Jakarta EE Developer survey highlights the continuing adoption of Jakarta EE by enterprises for cloud-native Java applications. The latest Jakarta EE 9.1 release is the most diverse release to date, with more vendors providing compatible implementations and support than ever before,” said Jonathan Gallimore, Tomitribe Director of Support. “Apache TomEE implements Jakarta EE 9.1 WebProfile to deliver a lightweight, easy-to-use, cloud-native Java solution. We are excited to help drive Apache TomEE forward as Jakarta EE continues to evolve to meet the needs of the community and its users.”

About the Eclipse Foundation

The Eclipse Foundation provides our global community of individuals and organizations with a mature, scalable, and business-friendly environment for open source software collaboration and innovation. The Foundation is home to the Eclipse IDE, Jakarta EE, and over 400 open source projects, including runtimes, tools, and frameworks for cloud and edge applications, IoT, AI, automotive, systems engineering, distributed ledger technologies, open processor designs, and many others. The Eclipse Foundation is an international non-profit association supported by over 330 members, including industry leaders who value open source as a key enabler for their business strategies. To learn more, follow us on Twitter @EclipseFdn, LinkedIn or visit eclipse.org

Third-party trademarks mentioned are the property of their respective owners.


###


Media contacts: 

Schwartz Public Relations for the Eclipse Foundation, AISBL
Julia Rauch / Sophie Dechansreiter / Tobias Weiß
Sendlinger Straße 42A
80331 Munich
EclipseFoundation@schwartzpr.de
+49 (89) 211 871 – 43 / -35 / -70

Nichols Communications for the Eclipse Foundation, AISBL
Jay Nichols
jay@nicholscomm.com
+1 408-772-1551


by Jacob Harris at September 14, 2021 12:00 PM

Top Trends in the Jakarta EE Developer Survey Results

by Mike Milinkovich at September 14, 2021 11:00 AM

Our annual Jakarta EE Developer Survey results gives everyone in the Java ecosystem insight into how the cloud native world for enterprise Java is unfolding and what the latest developments mean for their strategies and businesses. Here’s a brief look at the top technology trends revealed in this year’s survey.

For context, this year’s survey was completed by almost 950 software developers, architects, and decision-makers around the world. I’d like to sincerely thank everyone who took the time to complete the survey, particularly our survey partners, Jakarta EE Working Group members Fujitsu, IBM, Jelastic, Oracle, Payara, Red Hat, and Tomitribe, who shared the survey with their communities. Your support is crucial to help ensure the survey results reflect the viewpoints of the broadest possible Java developer audience.

Jakarta EE Continues to Deliver on Its Promise

Multiple data points from this year’s survey confirm that Jakarta EE is fulfilling its promise to accelerate business application development for the cloud.

As in the 2020 survey results, Jakarta EE emerged as the second-place cloud native framework with 47 percent of respondents saying they use the technologies. That’s an increase of 12 percent over the 2020 survey results, reflecting the industry’s increasing recognition that Jakarta EE delivers important strategic and technical benefits.

Almost half of the survey respondents have either already migrated to Jakarta EE or plan to within the next six to 24 months. Together, Java EE 8, Jakarta EE 8, and Jakarta EE 9 are now used by 75 percent of survey respondents. And Jakarta EE 9 usage reached nine percent despite the fact that the software was only released in December 2020.

With the rise of Jakarta EE, it’s not surprising that developers are also looking for faster support from Java EE/Jakarta EE and cloud vendors.

Microservices Usage Continues to Increase

Interestingly, the survey revealed that monolithic approaches are declining in favor of hybrid architectures. Only 18 percent of respondents said they’re maintaining a monolithic approach, compared to 29 percent who have adopted a hybrid approach and 43 percent who are using microservices.

A little over a year ago, monolithic implementations were outpacing hybrid approaches, showing just how quickly the cloud native Java world is evolving. In alignment with these architectural trends, MicroProfile adoption is up five percent over last year to 34 percent.

Download the Complete Survey Results

For additional insight and access to all of the data collected in our 2021 Jakarta EE Developer survey, we invite everyone to download the survey results.



Bye Bye 'build' : the end of an era

by Denis Roy at September 10, 2021 01:34 PM

build:~ # halt

That's the last command anyone will ever type on the venerable "build.eclipse.org" server. Born in 2005, it was used as a general-purpose machine for running builds and jobs for our committers. Some folks ran Ant from cron jobs, some ran CruiseControl, and in 2007 we installed Hudson, a single-instance CI for any project that wanted to create a job and use it.

From there, we added worker nodes, but as usage increased, stability decreased.

Afterwards, we invented HIPP (a Hudson Instance Per Project) which, over the years, evolved into the current Jenkins+k8s-based Jiro (Jenkins Instance Running on OpenShift) offering we have at https://ci.eclipse.org.

The build server went through numerous OS refreshes and a couple of hardware refreshes over the years, and just wasn't being used anymore. The current unit is an Intel SR1600 series from 2009 (you have to give credit to Intel, they know how to build them!), so after 12 years, it's time to turn it off -- or perhaps give it new life?

With some added RAM and a shiny new SSD, it will likely be repurposed towards the k8s build cluster, where it will relive its glory days and produce, once again, the binary output from the projects we all love.

Thanks, build, see you in your next life.



Eclipse p2 site references

by Lorenzo Bettini at September 02, 2021 11:29 AM

Say you publish a p2 repository for your Eclipse bundles and features. Typically, your bundles and features will depend on something external (other Eclipse bundles and features). The users of your p2 repository will also have to use the p2 repositories of your software’s dependencies; otherwise, they won’t be able to install your software. If your software only relies on standard Eclipse bundles and features, that is, something that can be found in the standard Eclipse central update site, you should have no problem: your users will typically have the Eclipse central update site already configured in their Eclipse installations. So, unless your software requires a specific version of an Eclipse dependency, you should be fine.

What happens instead if your software relies on external dependencies that are available only in other p2 sites? Or, to put it another way, you rely on an Eclipse project that is not part of the simultaneous release, or you need a version different from the one provided by a specific Eclipse release.

You should tell your users to use those specific p2 sites as well. This, however, will degrade the user experience, at least from the installation point of view: one would like to point to a p2 site and install from it without further configuration.

To overcome this issue, you should make your p2 repository somehow self-contained. I can think of three alternative ways to do that:

  • If you build with Tycho (which is probably the case if you don’t do releng stuff manually), you could use <includeAllDependencies> of the tycho-p2-repository plugin “to aggregate all transitive dependencies, making the resulting p2 repository self-contained.” Please keep in mind that your p2 repository itself will become pretty huge (likely a few hundred MB), so this might not be feasible in every situation.
  • You can put the required p2 repositories as children of your composite update site. This might require some more work and will force you to introduce composite update sites just for this. I’ve written about p2 composite update sites many times in this blog in the past, so I will not consider this solution further.
  • You can use p2 site references, which are meant exactly for this task and have been part of the category.xml specification for some time now. The idea is that you put references to the p2 sites of your software’s dependencies, and the content metadata of the generated p2 repository will then contain links to those sites. p2 will automatically contact those sites when installing software (at least from Eclipse; from the command line we’ll have to use specific arguments, as we’ll see later). Please keep in mind that this mechanism works only if you use recent versions of Eclipse (if I remember correctly, this has been added a couple of years ago).

In this blog post, I’ll describe such a mechanism, in particular, how this can be employed during the Tycho build.

The simple project used in this blog post can be found here: https://github.com/LorenzoBettini/tycho-site-references-example. You should be able to easily reuse most of the POM stuff in your own projects.

IMPORTANT: To benefit from this, you’ll have to use at least Tycho 2.4.0. In fact, Tycho started to support site references only a few versions ago, but only in version 2.4.0 has this been implemented correctly. (I personally fixed this: https://github.com/eclipse/tycho/issues/141.) If you use a slightly older version, e.g., 2.3.0, there’s a branch in the above GitHub repository, tycho-2.3.0, where some additional hacks have to be performed to make it work (rewrite metadata contents and re-compress the XML files, just to mention a few), but I’d suggest you use Tycho 2.4.0.

There’s also another important aspect to consider: if your software switches to a different version of a dependency that is available on a different p2 repository, you have to update such information consistently. In this blog post, we’ll deal with this issue as well, keeping it as automatic (i.e., less error-prone) as possible.

The example project

The example project is very simple:

  • parent project with the parent POM;
  • a plugin project created with the Eclipse wizard with a simple handler (so it depends on org.eclipse.ui and org.eclipse.core.runtime);
  • a feature project including the plugin project. To make the example more interesting, this feature also requires (i.e., does NOT include) the external feature org.eclipse.xtext.xbase. We don’t actually use such an Xtext feature, but it’s useful to recreate an example where we need a specific p2 site containing that feature;
  • a site project with category.xml that is used to generate during the Tycho build our p2 repository.

To make the example interesting, the dependency on the Xbase feature is declared as follows:

<requires>
   <import feature="org.eclipse.xtext.xbase" version="2.25.0" match="compatible"/>
</requires>

So we require version 2.25.0 or any compatible version (i.e., up to, but excluding, 3.0.0).

The target platform is defined directly in the parent POM as follows (again, to keep things simple):

<repositories>
  <repository>
    <id>2020-12</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/releases/2020-12</url>
  </repository>
  <repository>
    <id>2.25.0</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0</url>
  </repository>
</repositories>

Note that I explicitly added the Xtext 2.25.0 site repository because in the 2020-12 Eclipse site Xtext is available only in the lower version 2.24.0.

This defines the target platform against which we built (and, in a real example, hopefully tested) our bundle and feature.

The category.xml is initially defined as follows:

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="org.example.feature" version="0.0.0">
      <category name="org.example.category"/>
   </feature>
   <category-def name="org.example.category" label="P2 Example Composite Repository">
      <description>
         P2 Example Repository
      </description>
   </category-def>
</site>

The problem

If you generate the p2 repository with the Maven/Tycho build, you will not be able to install the example feature unless Xtext 2.25.0 and its dependencies can be found (actually, also the standard Eclipse dependencies have to be found, but as said above, the Eclipse update site is already part of the Eclipse distributions). You then need to tell your users to first add the Xtext 2.25.0 update site. In the following, we’ll handle this.

A manual, and thus cumbersome, way to verify that is to try to install the example feature in an Eclipse installation pointing to the p2 repository generated during the build. Of course, we’ll keep also this verification mechanism automatic and easy. So, before going on, following a Test-Driven approach (which I always love), let’s first reproduce the problem in the Tycho build, by adding this configuration to the site project (plug-in versions are configured in the pluginManagement section of the parent POM):

<properties>
  <build.destination>${project.build.directory}/installed-plugins</build.destination>
  <features>org.example.feature.feature.group</features>
  <sites>file:/${project.build.directory}/repository</sites>
</properties>
...
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-eclipserun-plugin</artifactId>
  <executions>
    <execution>
      <id>verify-feature-installation</id>
      <configuration>
        <jvmArgs>-Declipse.p2.mirrors=true</jvmArgs>
        <applicationsArgs>
          <args>-consoleLog</args>
          <args>-application</args>
          <args>org.eclipse.equinox.p2.director</args>
          <args>-nosplash</args>
          <args>-followReferences</args>
          <args>-destination</args>
          <args>${build.destination}</args>
          <args>-repository</args>
          <args>${sites}</args>
          <args>-installIUs</args>
          <args>${features}</args>
        </applicationsArgs>
      </configuration>
      <goals>
        <goal>eclipse-run</goal>
      </goals>
      <phase>verify</phase>
    </execution>
  </executions>
  <configuration>
    <repositories>
      <repository>
        <id>2020-12</id>
        <layout>p2</layout>
        <url>https://download.eclipse.org/releases/2020-12</url>
      </repository>
    </repositories>
    <dependencies>
      <dependency>
        <artifactId>org.eclipse.ant.core</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.apache.ant</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.repository.tools</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.core.feature</artifactId>
        <type>eclipse-feature</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.p2.extras.feature</artifactId>
        <type>eclipse-feature</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.osgi.compatibility.state</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.equinox.ds</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
      <dependency>
        <artifactId>org.eclipse.core.net</artifactId>
        <type>eclipse-plugin</type>
      </dependency>
    </dependencies>
  </configuration>
</plugin>

The idea is to run the standard Eclipse p2 director application through the tycho-eclipserun-plugin. The dependency configuration is standard for running such an Eclipse application. We try to install our example feature from our p2 repository into a temporary output directory (these values are defined as properties so that you can copy this plugin configuration into your projects and simply adjust the values of the properties). Also, the arguments passed to the p2 director are standard and should be easy to understand. The only non-standard argument is -followReferences, which will be crucial later (for this first run it would not be needed).

Running mvn clean verify should now highlight the problem:

!ENTRY org.eclipse.equinox.p2.director ...
!MESSAGE Cannot complete the install because one or more required items could not be found.
!SUBENTRY 1 org.eclipse.equinox.p2.director...
!MESSAGE Software being installed: Feature 2.0.0.v20210827-1002 (org.example.feature.feature.group 2.0.0.v20210827-1002)
!SUBENTRY 1 org.eclipse.equinox.p2.director ...
!MESSAGE Missing requirement: Feature 2.0.0.v20210827-1002
   (org.example.feature.feature.group 2.0.0.v20210827-1002)
   requires
     'org.eclipse.equinox.p2.iu;
      org.eclipse.xtext.xbase.feature.group [2.25.0,3.0.0)'
   but it could not be found

This would mimic the situation your users might experience.

The solution

Let’s fix this: we add to the category.xml the references to the same p2 repositories we used in our target platform. We can do that manually (or by using the Eclipse Category editor, in the tab Repository Properties):

After this change, the category.xml is defined as follows:

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="org.example.feature" version="0.0.0">
      <category name="org.example.category"/>
   </feature>
   <category-def name="org.example.category" label="P2 Example Composite Repository">
      <description>
         P2 Example Repository
      </description>
   </category-def>
   <repository-reference location="http://download.eclipse.org/releases/2020-12" enabled="true" />
   <repository-reference location="http://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0" enabled="true" />
</site>

Now, when we create the p2 repository during the Tycho build, the content.xml metadata file will contain the references to the p2 repositories (with a slightly different syntax, but that’s not important; it will contain a reference to the metadata repository and to the artifact repository, which usually are the same). Now our users can simply use our p2 repository without worrying about dependencies! Our p2 repository will be self-contained.

Let’s verify that by running mvn clean verify; now everything is fine:

!ENTRY org.eclipse.equinox.p2.director ...
!MESSAGE Overall install request is satisfiable
!SUBENTRY 1 org.eclipse.equinox.p2.director ...
!MESSAGE Add request for Feature 2.0.0.v20210827-1009
  (org.example.feature.feature.group 2.0.0.v20210827-1009) is satisfiable

Note that this run requires much more time: now the p2 director has to contact all the p2 sites defined as references and also has to download the requirements during the installation. We’ll see how to optimize this part as well.

In the corresponding output directory, you can find the installed plugins; you can’t do much with such installed bundles, but that’s not important. We just want to verify that our users can install our feature simply by using our p2 repository, that’s all!

You might not want to run this verification on every build, but, for instance, only during the build where you deploy the p2 repository to some remote directory (of course, before the actual deployment step). You can easily do that by appropriately configuring your POM(s).

Some optimizations

As we saw above, each time we run the clean build, the verification step has to access remote sites and download all the dependencies. Even though this is a very simple example, the dependencies downloaded during the installation amount to almost 100 MB, every time you run the verification. (It might be the right moment to stress that the p2 director knows nothing about the Maven/Tycho cache.)

We can employ some caching mechanisms by using the standard mechanism of p2: bundle pool! This way, dependencies will have to be downloaded only the very first time, and then the cached versions will be used.

We simply introduce another property for the bundle pool directory (I’m using by default a hidden directory in the home folder) and the corresponding argument for the p2 director application:

...
<bundlepool>${user.home}/.bundlepool</bundlepool>
...
<args>-bundlepool</args>
<args>${bundlepool}</args>
...

Note that during the verification step the plug-ins will now NOT be installed in the specified output directory (which will store only some p2 properties and caches): they will be installed in the bundle pool directory. Again, as said above, you don’t need to interact with such installed plug-ins; you only need to make sure that they can be installed.

In a CI server, you should cache the bundle pool directory as well if you want to benefit from the speed-up. E.g., this example comes with a GitHub Actions workflow that also stores the bundle pool in the cache, besides the .m2 directory.

This will also allow you to easily experiment with different configurations of the site references in your p2 repository. For example, up to now we have used the same sites as in the target platform. Referring to the whole Eclipse releases p2 site might be too much, since it contains all the features and bundles of all the projects participating in Eclipse Simrel. In the target platform this might be OK, since we might want to use some dependencies only for testing. For our p2 repository, we could tweak the references so that they refer only to the minimal sites containing all our features’ requirements.

For this example, we can replace the two sites with four smaller sites covering all the requirements (actually, the Xtext 2.25.0 site is just the same as before):

<repository-reference location="http://download.eclipse.org/eclipse/updates/4.18" enabled="true" />
<repository-reference location="http://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.25.0" enabled="true" />
<repository-reference location="http://download.eclipse.org/tools/orbit/downloads/2020-12" enabled="true" />
<repository-reference location="http://download.eclipse.org/modeling/emf/emf/builds/release/latest" enabled="true" />

You can verify that removing any of them will lead to installation failures.

The first time this tweaking might require some time, but you now have an easy way to test this!

Keeping things consistent

When you update your target platform, i.e., your dependencies’ versions, you must make sure to update the site references in the category.xml accordingly. It would instead be nice to modify this information in a single place so that everything else is kept consistent!

We can use again properties in the parent POM:

<properties>
...
  <eclipse-version>2020-12</eclipse-version>
  <eclipse-version-number>4.18</eclipse-version-number>
  <xtext-version>2.25.0</xtext-version>
</properties>
...
<repositories>
  <repository>
    <id>${eclipse-version}</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/releases/${eclipse-version}</url>
  </repository>
  <repository>
    <id>${xtext-version}</id>
    <layout>p2</layout>
    <url>https://download.eclipse.org/modeling/tmf/xtext/updates/releases/${xtext-version}</url>
  </repository>
</repositories>

We want to rely on such properties also in the category.xml, using the standard Maven mechanism of copying resources with filtering.

We create another category.xml in the subdirectory templates of the site project, using the above properties in the site references (at least in the ones where we want to have control over a specific version):

<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="org.example.feature" version="0.0.0">
      <category name="org.example.category"/>
   </feature>
   <category-def name="org.example.category" label="P2 Example Composite Repository">
      <description>
         P2 Example Repository
      </description>
   </category-def>
   <repository-reference location="http://download.eclipse.org/eclipse/updates/${eclipse-version-number}" enabled="true" />
   <repository-reference location="http://download.eclipse.org/modeling/tmf/xtext/updates/releases/${xtext-version}" enabled="true" />
   <repository-reference location="http://download.eclipse.org/tools/orbit/downloads/${eclipse-version}" enabled="true" />
   <repository-reference location="http://download.eclipse.org/modeling/emf/emf/builds/release/latest" enabled="true" />
</site>

and in the site project we configure the Maven resources plugin appropriately:

<plugin>
  <artifactId>maven-resources-plugin</artifactId>
  <executions>
    <execution>
      <id>replace-references-in-category</id>
      <phase>generate-resources</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <outputDirectory>${basedir}</outputDirectory>
        <resources>
          <resource>
            <directory>${basedir}/templates/</directory>
            <includes>
              <include>category.xml</include>
            </includes>
            <filtering>true</filtering>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>

Of course, we execute that in a phase that comes BEFORE the phase when the p2 repository is generated. This will overwrite the standard category.xml file (in the root of the site project) by replacing properties with the corresponding values!

By the way, you could use the property eclipse-version also in the configuration of the Tycho Eclipserun plugin seen above, instead of hardcoding 2020-12.

Happy releasing! 🙂



IoT and Edge Developers: Let Your Voices Be Heard

by Mike Milinkovich at August 26, 2021 12:05 PM

Today, the Eclipse IoT and Edge Native Working Groups have launched the 2021 IoT and Edge Developer Survey. This is the seventh year for our annual survey, which has become one of the most widely referenced technical surveys within the IoT & Edge computing industry.

This year’s survey expands on previous editions to be more inclusive of trends in edge computing technologies. Our goal is to present a better understanding of the challenges developers face within both sectors, and to provide insights into the technical issues faced by their respective developer communities around the world. 

We welcome your participation. Your input will provide IoT and edge ecosystem stakeholders with the data they need to align their strategies with the latest trends and apply investments where they are needed most. Start the survey now.

You Can Influence Industry Direction

Developers, service providers, technology manufacturers, and adopters within the IoT & edge ecosystem can all influence industry direction through survey participation. Last year’s survey received more than 1,600 responses, with the results being shared by more than 20 media outlets.

The 2020 IoT Developer Survey results revealed that IoT and edge application development is increasing at a rapid pace, fueled by growth in investments into predominantly industrial markets. The survey also indicated that smart agriculture, industrial automation, and automotive are key target industries for application development.

Our expectation for the 2021 survey is that it will offer even more visibility around IoT & edge development trends, and what those trends mean to stakeholders. The survey results will also be used to help the Eclipse IoT and Edge Native Working Groups with their open source roadmaps as they work to address the evolving needs for IoT and edge development tools, architectures, deployment technologies, security, connectivity, and other requirements along the edge-to-cloud continuum.

The Developer Survey Complements the Commercial Adoption Survey

The results of the IoT and Edge Developer Survey will help complete the picture painted by our recent 2021 IoT and Edge Commercial Adoption Survey. That survey found that IoT and edge computing technologies are being adopted at an accelerated rate by a growing number of organizations. The results also revealed that 74 percent of organizations factor open source into their deployment plans, a 14 percent increase over the 2019 IoT Commercial Adoption Survey results.

With a deeper understanding of the unique challenges faced by IoT and edge developers and the latest commercial adoption trends, the entire ecosystem is better informed and better able to meet the growing demand for IoT and edge solutions.

Complete the IoT and Edge Developer Survey by October 5

The 2021 IoT and Edge Developer Survey is open through October 5. Please take a few minutes to complete the survey now, while it’s top of mind.

As usual, the survey report will be published under the Creative Commons Attribution 4.0 International License (CC BY 4.0), which means that the entire IoT and edge ecosystem can benefit from the insights it provides. Stay tuned for additional blog posts and promotional activities once the report is available.



Dependency Cycles During Load Time

by n4js dev (noreply@blogger.com) at August 20, 2021 06:10 AM

When programming in large code bases it can happen inadvertently that cycles of imports are created. In JavaScript, import statements trigger loading and initialization of the specified file directly. In case there is a dependency cycle, files might only be initialized partially and hence errors might occur later during runtime. In this post we present how N4JS detects and avoids these cases by showing validation errors in the source code.


Introduction

Let's start with the most simple example in JavaScript to illustrate the essential problem.


console.log(s); // prints 'test'?
export const s = "test";

Executing the two-liner above results in: 

ReferenceError: Cannot access 's' before initialization.

This is quite obvious and wouldn't surprise anyone. It is obvious because the read access is stated right before the definition of the constant s in the same file. However, it wouldn't be very obvious anymore when both the read access and the definition of s happen in separate files. Let's split up the example into the files F1.mjs and F2.mjs.

F1.mjs


import * as F2 from "./F2.mjs";
export const s = "test";
console.log(F2.s); // prints undefined?

F2.mjs


import * as F1 from "./F1.mjs";
export const s = F1.s;

Executing this example results in a similar error:

ReferenceError: s is not defined.

And again the cause for the error is an access to a not yet initialized variable. As a side note: Modifying the variable to be a 'var' instead of a 'const' would fix the error and return the print-out "undefined". This is due to hoisting of var symbols, but is still not the intended result which would be the print-out "test".

So far, both of the examples either give an unintended result or fail with a runtime error. Errors like these can only be identified after they actually happen, during tests or in production. While the two-liner example seems way too obvious to actually occur often in practice, the second case can easily hide in projects with many files and imports. A further difference is that even if the first example results in a runtime error, it can usually be identified and fixed easily. The second example, however, can span many files as the cycle of import statements grows, and is therefore hard to find and fix.

Two important properties of the execution semantics of JavaScript in Node.js can be witnessed here:

(1) In case a file m is started or imported and imports another file m', a subsequent import from m' back to file m will be skipped. As a result, m is only partially initialized at that point, and m' might access not yet initialized elements of m.

(2) There is an exception to (1) regarding functions. Since functions are hoisted, they do not have to be reached by the control flow to get initialized. Hoisting will initialize them immediately so that they can be called from any location.

Let's look at a third example which reveals a similar case of reference errors. This time the error occurs depending on the entry point of the program. Consider two files, G1.mjs and G2.mjs, which result in the print-out "test" or "undefined" depending on which file is the entry point for Node.js: starting with file G1.mjs yields "test", whereas starting with file G2.mjs yields "undefined".

These kinds of errors might not be of interest when implementing a stand-alone application, since such programs usually have a single, well-known entry point. Yet cycles can also occur in parts of the program, and then the entry point is determined by the order of import statements. Moreover, when writing libraries and exposing an API that spans several files, the entry point can differ a lot and is defined by the library's user. Hence, given an unfortunate setup of files and import statements, a library might suffer from unexpected behavior depending on which part of its API was called first.

Also note that all the examples stated their imports at the top and all other statements below. When mixing import statements or dynamic imports with other code, it is even easier to create reference errors.


Validations in N4JS

One of the goals of N4JS is to provide many handy and powerful language constructs along with type safety and strong validations. The reason behind the latter is to prevent especially those runtime errors that are hard to find and hard to reproduce. Migrated to N4JS, the second example would show validation errors at the references to F1.s and F2.s due to the dependency cycle. The approach to detect these cases is explained in the following paragraphs by first laying out the terminology, reasoning about the general problem afterwards, and then defining the error cases in N4JS.


Terminology

Top level elements are those AST elements of a JavaScript file that are direct children of the root element, such as import statements, const or class declarations, and others. Some top level elements can contain expressions or statements, such as initializers of consts or extends clauses of classes. These initializers are executed when loading a file. A reference located in such an initializer to a top level element (imported or not) is called a load time reference.

In addition to compile time and runtime, the term load time is used to refer to the first phase of runtime during which all import statements and top level elements of the started JavaScript file are executed. In this regard we assume that initialization is performed during load time. In a separate step later, some specific calls to the API of imported files would perform the actual requested functionality.

A dependency between two files consists of an import statement and may include imported elements that can be used in the importing file. Dependencies whose imported elements are never used are called unused imports, and those without imported elements are called bare imports. The target of a dependency is the imported file and, except for bare imports, also the imported element. There exists at least one dependency for each import statement and for each code reference to an imported (top level) element. Dependencies are differentiated into three kinds:

Compile time dependencies arise from all non-unused import statements. Runtime dependencies are the subset of compile time dependencies that are necessary at runtime, i.e. they do not include unused imports or imports used only for type information. (We assume that unused imports do not have intended side effects the way bare imports have.) Load time dependencies are the subset of runtime dependencies with load time references.

A dependency cycle exists when traversing the import statements from one file to its imported files eventually leads back to an already visited file. Note that the term dependency cycle refers to files, not necessarily to imported elements. Dependency cycles are differentiated as follows: compile time dependency cycles are those relying on compile time dependencies. Runtime dependency cycles rely on runtime dependencies and are of special interest later. Load time dependency cycles rely on load time dependencies and are reported as errors in N4JS.


Reasoning

To get a clearer understanding, it is important to know the impact of dependency cycles in a program. An inherent property of dependency cycles is that at least one import statement gets skipped at load time, since it would load a file that is already being processed. In a cycle-free program, all import statements of all files can be understood as a directed graph of files connected by import statements, which defines a partial load order. It is usually harmless that the total order of loading files depends on the entry point, i.e. on which file is imported first or starts the program, since that total order complies with the partial order. In case of dependency cycles, however, the graph contains a cycle which is broken up at load time to re-establish a directed graph and a partial order. That means the loading of at least one file of each cycle will be skipped because it is already being loaded. Other files that depend on that skipped import might be only partially initialized. Consequently, the entry point, e.g. the order of import statements, determines whether a file is partially or completely initialized after its import statement was executed. Sorting import statements is a very common IDE feature and usually deemed innocent of causing runtime errors. Yet this assumption does not necessarily hold if the program contains dependency cycles.

We learned that partial initialization occurs if a load time initializer accesses a reference to a not yet initialized element of a skipped file. That element will probably be initialized later during load time, but the harm is already done, since the current file has read the wrong value. Where exactly did the problem occur? References to not yet initialized elements can be located not only directly in load time initializers but also at locations reachable transitively from them, e.g. through calls to other functions. Determining all references reachable from load time initializers that potentially access not yet initialized values requires an expensive analysis that is usually imprecise due to over-approximation; in many cases it is even impossible due to reflective calls, dynamic loading, etc. However, a simpler way to rule out accesses to partially initialized elements is to make a clear cut and forbid any expressions or statements in load time initializers that cannot be evaluated at compile time, e.g. function calls. On the downside, this strictness also reduces programming freedom and even rules out legal load time references that would not cause runtime errors.
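
The core problem of reading a not yet initialized element can be sketched even within a single file, in plain JavaScript (names invented for illustration): a hoisted function is reachable before the const it reads has been initialized, so calling it too early fails at runtime.

```javascript
// Hypothetical sketch: a reachable function reads a const before its
// initializer has run, triggering a runtime ReferenceError.
function useAnswer() { return answer * 2; } // hoisted, reachable early

let tooEarly;
try {
  tooEarly = useAnswer();        // runs before `answer` is initialized
} catch (e) {
  tooEarly = e.constructor.name; // 'ReferenceError': not yet initialized
}

const answer = 21;               // initialization happens only here
const onTime = useAnswer();      // now the call is safe

console.log(tooEarly, onTime); // ReferenceError 42
```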

To summarize the approach: either runtime dependency cycles need to be removed or, if that is not possible, load time initializers need to be restricted so that they do not reference potentially skipped files.

A particularly interesting situation arises when a runtime dependency cycle C contains a file m whose load time initializer has a dependency d on file m'. The cycle then has a correct and an incorrect way of loading its files: due to the load time dependency d, file m' must be loaded without being skipped. Still, at least one other import must be skipped to break the cycle. To make sure that the loading of m' is not skipped, m' must not be the entry point of the cycle C. Choosing another entry point, e.g. file m, will result in partially loading m first and then loading the rest of the cycle C, including m', completely until another import of m is skipped. In other words: a load time dependency on a file m' within a cycle C constrains m' to never be the entry point into C. This situation is illustrated in the figure below.

The figure above shows the third example with additional information about its dependencies and cycles. As you can see, there exists a runtime dependency cycle (indicated in blue), since the two files reference each other in runtime import statements. There also exists a load time dependency (indicated in orange), because the reference to G2.s is located in an expression of a top level element that is evaluated during load time. Hence, this dependency imposes the constraint that the entry point to the third example must be G1.
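
The entry point constraint can be illustrated with a toy module loader in plain JavaScript (a hypothetical simulation for this blog post, not how any real loader is implemented): an import of a file that is already being loaded is skipped, which is exactly how module systems break cycles.

```javascript
// Toy simulation of load order in the G1/G2 cycle (hypothetical sketch).
const modules = {
  G1: { imports: ['G2'] },
  G2: { imports: ['G1'] },
};

function load(name, state = { loading: new Set(), order: [] }) {
  if (state.loading.has(name)) return state;   // cycle detected: skip import
  state.loading.add(name);
  for (const dep of modules[name].imports) load(dep, state);
  state.order.push(name);                      // initialize after dependencies
  return state;
}

// Entry G1: G2 is fully initialized before G1's top level code runs,
// so a load time reference from G1 to G2.s would be safe.
console.log(load('G1').order); // [ 'G2', 'G1' ]

// Entry G2: G1 initializes while G2 is still partial, so G1's load time
// reference to G2.s would read a not yet initialized value.
console.log(load('G2').order); // [ 'G1', 'G2' ]
```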

In contrast, note that the second example has a load time dependency cycle due to the two load time dependencies created by the accesses to F1.s and F2.s.

Error cases

Four types of errors are indicated in different situations regarding load time dependencies. Based on a source code analysis, runtime dependency cycles and references located in top level elements are detected first.

(1) Given this information, load time dependency cycles can be identified and be evaluated to errors. These errors are attached to the references of the load time dependencies.

Three other types of errors occur if and only if there exists a runtime dependency cycle C of modules m and m' (and maybe including others).

(2) Any load time reference in C that references a top level element in C (imported ones or in the same file) is marked with an error. This includes all load time dependencies. The reason to forbid any references from C, e.g. to local or imported functions, is that these may reach and access partially initialized variables. In N4JS there is one exception to this rule: the extends clauses of classes. Load time references are still allowed there and do not cause problems, because extends clauses in N4JS are already restricted to references to other classes (and not arbitrary expressions as in JavaScript). Note that ordinary dependencies (i.e. those without references in load time code) are still allowed, e.g. within the body of methods.

(3) Any dependency d to a module m' is marked with an error if and only if there already exists a load time dependency to m'. In other words: there may be no other dependency in C to m' if d is a load time dependency. In case an importing module m* is not in C, a dependency to m' is allowed.

(4) However, when importing m' from m*, it is mandatory to also import another module m of the cycle prior to importing m'. Otherwise, an error is shown. The import of m prior to the import of m' ensures that the loading of m' is not skipped.

When programming in N4JS and errors like these occur, there are two ways to solve them. The first and best solution is to remove the dependency cycle, which in many cases is a code smell anyway. This can be done by breaking the cycle or by merging two or more files or file parts that mutually depend on each other. In case that is not possible, some load time dependencies have to be removed. However, keep in mind that any load time dependency in a dependency cycle imposes a runtime execution order: the importing file must always be loaded prior to the imported file.

H1.n4js


import * as H2 from "H2";
class C extends H2.C {} // no error (3) here

H2.n4js


import "H1";
export public class C {}

The last example shows a case similar to the third example: there are two files that have a runtime dependency cycle. Additionally, there is a load time dependency created by the extends clause that references H2.C. Note that the third example produces the validation error (3) at the load time reference G2.s, because we disallow all non-compile time expressions and statements in load time initializers. This shows where simplifications of our approach might be improved in the future. Since we make an exception to error (3) in case the load time dependency is an extends clause, the last example shows no errors in N4JS.


Conclusion

The core problem is read accesses to variables that are not yet initialized. While these kinds of problems are relatively obvious and easy to find when they happen within a single file, it is much harder to detect them when they are caused by dependency cycles of two or more files. For the single file case, several IDEs and languages already provide validations and put error markers on read accesses of undefined symbols, such as VS Code for TypeScript. By introducing the validations described in this blog post, N4JS can also rule out initialization errors caused by dependency cycles. Unfortunately, in some cases this approach is too strict, but we hope to relax some of the restrictions to improve the trade-off between program safety and programming freedom.


by Marcus Mews


by n4js dev (noreply@blogger.com) at August 20, 2021 06:10 AM

gRPC Remote Services Development with Bndtools - video tutorials

by Scott Lewis (noreply@blogger.com) at August 16, 2021 09:11 PM

Here are four new videos that show how to define, implement and run/debug gRPC-based remote services using bndtools, eclipse, and ECF remote services.

Part 1 - API Generation - The generation of an OSGi remote service API using bndtools code generation and the protoc/gRPC compiler. The example service API has both unary and streaming gRPC method types, supported by the reactivex API.

Part 2 - Implementation and Part 3 - Consumer - bndtools-project-template-based creation of remote service impl and consumer projects

Part 4 - Debugging - Eclipse/bndtools-based running/debugging of the remote service created in parts 1-3.


by Scott Lewis (noreply@blogger.com) at August 16, 2021 09:11 PM

Eclipse JKube 1.4.0 is now available!

July 27, 2021 05:00 PM

A newer version of Eclipse JKube is available, jump to Eclipse JKube 1.5.1 announcement.

On behalf of the Eclipse JKube team and everyone who has contributed, I'm happy to announce that Eclipse JKube 1.4.0 has been released and is now available from Maven Central.

Thanks to all of you who have contributed with issue reports, pull requests, feedback, spreading the word with blogs, videos, comments, etc. We really appreciate your help, keep it up!

What's new?

Without further ado, let's have a look at the most significant updates:

Multi-layer support for Container Images

Until now, JKube pre-assembled everything needed to generate the container image in a temporary directory that was then added to the image with a single COPY statement. This meant that any single change to the application code would change this one layer, which is especially inefficient for the Jib build strategy.

Since this release, we can define our image build model with several layer assemblies and address this inefficiency by packaging different layers (dependencies, application slim jars, etc.). We've also updated the Quarkus Generator to take advantage of this new feature. Check the following demo for more details:

Support DockerImage as output for OpenShift builds

OpenShift Container Platform comes with an integrated container image registry. By default, when you build your image using OpenShift Maven Plugin and S2I strategy, the build configuration is set up to push into this internal registry.

JKube now provides the possibility to push the image to an external registry by leveraging OpenShift's Build output configuration.

The following property will enable this configuration. Check the embedded video for more details.

<jkube.build.buildOutput.kind>DockerImage</jkube.build.buildOutput.kind>

Using this release

If your project is based on Maven, you just need to add the kubernetes maven plugin or the openshift maven plugin to your plugin dependencies:

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>1.4.0</version>
</plugin>

How can you help?

If you're interested in helping out and are a first time contributor, check out the "first-timers-only" tag in the issue repository. We've tagged extremely easy issues so that you can get started contributing to Open Source and the Eclipse organization.

If you are a more experienced developer or have already contributed to JKube, check the "help wanted" tag.

We're also excited to read articles and posts mentioning our project and sharing the user experience. Feedback is the only way to improve.

Project Page | GitHub | Issues | Gitter | Mailing list | Stack Overflow

Eclipse JKube Logo

July 27, 2021 05:00 PM

5 Reasons to Adopt Eclipse Theia

by Brian King at July 13, 2021 12:02 PM

Recently I wrote about the momentum happening in the Eclipse Theia project. In this post, I want to highlight some good reasons to adopt Theia as your IDE solution. The core use case for Theia is as a base upon which to build a custom IDE or tool. However, if you are a developer looking for a great tool to use, you will find some motivation here as well. The inspiration for this post comes from Theia project lead Marc Dumais’ ‘Why Use Theia?’ talk that he gave at the recent Cloud DevTools Community Call. So, you could say this is Marc’s post!

1. Modern Technology Stack

Theia is Web-first. It’s built on modern Web technologies, and if we compare it with traditional IDEs such as the Eclipse Desktop IDE or IntelliJ it is a big departure in terms of technologies used.

These best-of-breed web technologies include Node.js, HTML5 and CSS, TypeScript, and npm. Theia runs in all modern browsers as well as in Electron. So from a UI perspective, you can finally say goodbye to SWT or Swing and benefit from the modern rendering capabilities of HTML5. This will dramatically improve the look and feel of any tool built on Theia compared to previous platforms. Even better, you can use modern UI frameworks, such as React, Vue.js or Angular, within Theia!

The use of npm connects Theia to a huge ecosystem of available frameworks for almost any purpose. However, it is worth mentioning that it is also very easy to integrate other technologies, e.g. Java, Python or C++ on the backend due to the very flexible architecture.

It’s important to note that these technologies are not only state-of-the-art for modern tools, they also heavily overlap with how business applications are being built today, allowing Theia to benefit from the ongoing evolution of a large ecosystem. This also makes recruiting easier. As an example, compare how many developers know how to develop in React vs SWT these days.

In a nutshell, the technology stack of Theia is powerful, modern and, last but not least, very common.

2. Cloud and Desktop

Eclipse Theia is designed to be used on the web as well as on the desktop. While other tools and platforms are typically created for either desktop or web use, supporting both use cases is in the core DNA of Eclipse Theia. And, we have  adopters in both camps, as well as those that take full advantage of the power of Theia to provide both options at the same time and based on the same code. Having both options enables adopters to implement a long-term evolution strategy. Many companies start with a desktop tool and move to a full cloud-based solution later. Having this flexibility with minimal overhead is a unique and powerful benefit of Theia!

3. Extensible Framework

Eclipse Theia is much more configurable and extensible than other tools like VS Code. While a VS Code extension can add behavior to the IDE at runtime, there are limitations. For example, an extension can register support for searching for symbol references in a new language. The VS Code API covers many of the “standard” use cases when adding support for new programming languages. However, you cannot change the behavior of the IDE in many respects or leave out parts of the IDE that are not needed.

Eclipse Theia, on the other hand, is designed in a way that almost any part of the IDE can be omitted, replaced or customized without changing the Theia source code. You can create your own Theia build and add your own modules to override or extend most parts of the IDE through dependency injection.  
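
The idea of customization through dependency injection can be illustrated with a minimal, hypothetical sketch in plain JavaScript (this is not the actual Theia or InversifyJS API; the container, symbol, and service names are invented for illustration): because the framework resolves services through a container, an adopter's module can rebind a default implementation without changing framework code.

```javascript
// Hypothetical minimal DI container sketch, not the real Theia API.
class Container {
  constructor() { this.bindings = new Map(); }
  bind(key, factory) { this.bindings.set(key, factory); } // later bind wins
  get(key) { return this.bindings.get(key)(); }
}

const MenuContribution = Symbol('MenuContribution');

const container = new Container();
// Default binding shipped with the platform...
container.bind(MenuContribution, () => ({ label: 'Default Menu' }));
// ...overridden by the adopter's own module, framework code untouched.
container.bind(MenuContribution, () => ({ label: 'My Branded Menu' }));

console.log(container.get(MenuContribution).label); // My Branded Menu
```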

Eclipse Theia supports the same extension API as VS Code. This means extensions created for VS Code are also usable in Theia. Most popular extensions can be obtained from the public Open VSX Registry.

It’s also easy to make your Theia-based application your own. Name/brand it, make it look different, customize views and user interface elements. You can adapt and customize almost anything, and therefore, build tools that fulfil your domain-specific and custom requirements.

To learn more, please see this article about VS Code extensions vs. Theia extensions and this comparison between Eclipse Theia and VS Code.

 

Source: VS Code extensions vs. Theia extensions

4. Multi-Language Support Through LSP and DAP

Traditionally, language support was implemented independently in each editor, meaning there was little or no consistency in features between them. To solve this, Microsoft specified the Language Server Protocol (LSP), a way to standardize the communication between language tooling and a code editor. This architecture separates the development of the actual code editor (e.g. Monaco) from the language support (the language server). This invention has boosted the development of support for all kinds of languages.
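
As a concrete illustration, the LSP base protocol frames JSON-RPC 2.0 messages with an HTTP-style Content-Length header. The sketch below builds such a request in plain JavaScript (the file URI and position are made up for illustration; the framing follows the LSP specification):

```javascript
// Building an LSP "go to definition" request in the LSP wire format:
// a Content-Length header, a blank line, then the JSON-RPC body.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'textDocument/definition',
  params: {
    textDocument: { uri: 'file:///app/src/main.ts' }, // illustrative URI
    position: { line: 10, character: 4 },             // illustrative position
  },
};

const body = JSON.stringify(request);
const message = `Content-Length: ${Buffer.byteLength(body, 'utf8')}\r\n\r\n${body}`;
console.log(message.split('\r\n')[0]); // the Content-Length header line
```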

Similarly for debugging, another crucial function of an IDE, the Debug Adapter Protocol (DAP) defines a standard way for IDEs to work with debuggers.

These technologies originated in VS Code and are appearing in more and more places. Eclipse Theia has provided full support for LSP and DAP since its inception. You can therefore benefit from the ever-growing ecosystem of available language servers. Even more, you can benefit from the ecosystem around Theia, for example the Graphical Language Server Platform (GLSP), which works similarly to LSP but for diagram editors.

If you want to provide support for your own custom language, you can simply develop a language server for it. This makes your language available in Theia and also in any other tool that supports LSP, DAP or GLSP.

5. Truly Open Source and Vendor Neutral

Many tool technologies are open source. However, there are some details and attributes of an open source project that make a huge difference for adopters of a technology. This is especially true for tools, as the maintenance cycle is typically rather long, sometimes decades. Adopters of a platform or framework should therefore focus on the strategic consequences. Let us look at these criteria in more detail.

Fully Open Source

Eclipse Theia and all its components are fully open source. There are no proprietary parts (as there are in VS Code, for example; see this comparison).

License

Eclipse Theia is licensed under the Eclipse Public License (EPL). The EPL allows for commercial use, meaning you can build commercial products based on Theia without license issues. The EPL has a great track record of being commercially adopted, so many details, such as “derivative work” are well defined.

Intellectual Property Management

Defining a license for a project is only a first step: if developers use copied code or dependencies that are incompatible with the EPL, an adopter of the project might become guilty of a copyright violation. Theia is an Eclipse project, and therefore its code and dependencies are vetted by the Eclipse Foundation. There are defined agreements for contributors and regular reviews (including of dependencies) to ensure the IP cleanliness of the code base. This significantly lowers the risk for adopters of running into license issues.

Governance

Many open source projects are almost exclusively driven and controlled by a single vendor. Eclipse Theia follows the Eclipse Foundation development process. It governs the collaboration and decision making in the project and ensures a level playing field for all members of the community. For adopters, the two most obvious benefits are: (1) No single party can drive the decisions, no single party can change the rules, meaning that it is a safe long-term option. (2) The rules ensure you can gain influence and be part of the decision making by participating in the community. This way you can make sure that the project evolves in a direction that suits your requirements.

Vendor Neutral

Not only does the governance model of Eclipse Theia ensure vendor neutrality, the project is also very diverse in terms of contributors. If you look at the contributing companies below (a select list only), you can clearly see that Theia enjoys the broad support that is so  important for innovation, maintenance and the long-term availability of a project.

In a nutshell, Eclipse Theia benefits from a diverse base of contributors and follows a proven license, IP and governance model that has enabled and preserved strategic investments for more than two decades.

Bonus: A Vibrant Ecosystem

Last but not least, Theia is built around a vibrant ecosystem. There are several commercial adopters that have built their solutions with Theia, including Arm Mbed Studio, Arduino Pro IDE, Red Hat CodeReady Workspaces, and Google Cloud Shell.

Many adopters, service providers and contributors are organized and participate in the Eclipse Cloud DevTools Working Group. Current members include Arm, Broadcom, EclipseSource, Ericsson, IBM, Intel, RedHat, SAP, STMicroelectronics, and TypeFox. The working group structure allows these companies to coordinate their efforts, use cases and strategies. It brings together parties with a common goal, e.g. there is a special interest group for building tools for embedded programming. This set-up allows for great initiatives that serve a common goal and are developed in collaboration. As an example, the ecosystem provides Open VSX, a free and open alternative to the VS Code marketplace. As another example, Eclipse Theia blueprint provides a template for building Theia applications.

In addition to Theia as a core platform, there is a robust ecosystem of supporting projects and technologies. Eclipse has always been a great place for frameworks around building tools to solve all kinds of requirements. For example, there is a framework for building web-based editors called Eclipse GLSP. As another example, EMF.cloud transfers a lot of concepts from the EMF ecosystem to the cloud, e.g. model management, model comparison or model validation. Finally, quite a few existing technologies have targeted Theia to make the transition to the web, including Xtext, TraceCompass and many more. So when building on Theia, you do not just get a framework for building tools and IDEs, you can also benefit from the larger ecosystem being built around it!

Conclusion

As you can see, there are many reasons to adopt Eclipse Theia. We listed several important ones in this blog, but there are many more to discover. As you research options, you might find other solutions that are on par with Theia in specific categories. However, the combination of advantages Theia offers is unique. That is not by accident: Theia was explicitly created as an open, flexible and extensible platform to “develop and deliver multi-language Cloud & Desktop IDEs and tools with modern, state-of-the-art web technologies.”   

To see what is coming next, check out the roadmap which is updated quarterly. The roadmap is a moving snapshot that shows priorities of contributing organizations. Common goals are discussed weekly at the Theia Dev Meeting and additional capabilities and features identified there will make it onto the roadmap. Take a look at the project to evaluate how to get involved. The best places to look first are the GitHub project and the community forum.


by Brian King at July 13, 2021 12:02 PM

Back to the top