
MapIterable.getOrDefault() : New but not so new API

by Nikhil Nanivadekar at August 09, 2020 02:04 PM


Sunset at Port Hardy (June 2019)

Eclipse Collections comes with its own List, Set, and Map implementations. These implementations extend the JDK List, Set, and Map interfaces for easy interoperability. In Eclipse Collections 10.3.0, I introduced a new API, MapIterable.getOrDefault(). Map.getOrDefault() was introduced in Java 8, so what makes it a new API for Eclipse Collections 10.3.0? Technically, it is a new but not so new API! Consider the code snippets below, prior to Eclipse Collections 10.3.0:

MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() does not compile
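
To make those captions concrete, here is a minimal sketch of the pre-10.3.0 situation (the factory calls come from org.eclipse.collections.impl.factory.Maps; the map contents are made up):

MutableMap<String, Integer> mutable = Maps.mutable.with("one", 1);
mutable.getOrDefault("two", 2);        // compiles: MutableMap extends java.util.Map

ImmutableMap<String, Integer> immutable = Maps.immutable.with("one", 1);
// immutable.getOrDefault("two", 2);   // did not compile: no such method on ImmutableMap
immutable.getIfAbsentValue("two", 2);  // the existing MapIterable equivalent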

As you can see in the code, MutableMap has getOrDefault() available, however ImmutableMap does not have it. But there is no reason why ImmutableMap should not have this read-only API. I found that MapIterable already had getIfAbsentValue() which has the same behavior. Then why did I still add getOrDefault() to MapIterable?

I added MapIterable.getOrDefault() mainly for easy interoperability. Firstly, most Java developers will be aware of the getOrDefault() method, whereas only Eclipse Collections users would be aware of getIfAbsentValue(). Providing the same API as the JDK reduces the need to learn a new one. Secondly, even though getOrDefault() is available on MutableMap, it was not available on the highest Map interface of Eclipse Collections. Thirdly, I got to learn about a Java compiler check which I had not experienced before. I will elaborate on this check in a bit more detail because I find it interesting.

After I added getOrDefault() to MapIterable, various Map interfaces in Eclipse Collections started giving compiler errors with messages like: org.eclipse.collections.api.map.MutableMapIterable inherits unrelated defaults for getOrDefault(Object, V) from types org.eclipse.collections.api.map.MapIterable and java.util.Map. This I thought was cool: at compile time, the Java compiler ensures that if an API has a default implementation in more than one interface in a multi-interface scenario, Java will not decide which implementation to pick but will instead raise compiler errors. Hence, Java ensures at compile time that there is no ambiguity regarding which implementation will be used at runtime. How awesome is that?!? In order to fix the compile-time errors, I had to add a default implementation on each interface which gave the errors. I always believe compiler errors are better than runtime exceptions.
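
To make the conflict concrete, here is a much simplified sketch of the situation and of the fix; it is not the actual Eclipse Collections source:

interface MapIterable<K, V>
{
    V getIfAbsentValue(K key, V value);

    // New in 10.3.0: a read-only default implementation on the top interface.
    default V getOrDefault(Object key, V defaultValue)
    {
        return this.getIfAbsentValue((K) key, defaultValue);
    }
}

// MutableMapIterable extends both MapIterable and java.util.Map, which now both
// declare a default getOrDefault(Object, V), so the compiler reports
// "inherits unrelated defaults" until the sub-interface explicitly picks one.
interface MutableMapIterable<K, V> extends MapIterable<K, V>, java.util.Map<K, V>
{
    @Override
    default V getOrDefault(Object key, V defaultValue)
    {
        return MapIterable.super.getOrDefault(key, defaultValue);
    }
}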

Post Eclipse Collections 10.3.0 the below code samples will work:

MapIterable.getOrDefault() compiles and works fine
MutableMap.getOrDefault() compiles and works fine
ImmutableMap.getOrDefault() compiles and works fine
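
With the API now on the top-level read-only interface, a sketch like the following compiles across the whole hierarchy:

MapIterable<String, Integer> mapIterable = Maps.immutable.with("one", 1);
MutableMap<String, Integer> mutableMap = Maps.mutable.with("one", 1);
ImmutableMap<String, Integer> immutableMap = Maps.immutable.with("one", 1);

mapIterable.getOrDefault("two", 2);   // 2
mutableMap.getOrDefault("two", 2);    // 2
immutableMap.getOrDefault("two", 2);  // 2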

Eclipse Collections 10.3.0 was released on 08/08/2020 and is one of our most feature-packed releases. The release includes numerous contributions from the Java community.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions.

Show your support by starring us on GitHub.

Eclipse Collections Resources:
Eclipse Collections comes with its own implementations of List, Set, and Map. It also has additional data structures like Multimap, Bag, and an entire Primitive Collections hierarchy. Each of our collections has a rich API for commonly required iteration patterns.

  1. Website
  2. Source code on GitHub
  3. Contribution Guide
  4. Reference Guide

by Nikhil Nanivadekar at August 09, 2020 02:04 PM

My third blogiversary

by Donald Raab at August 08, 2020 04:45 AM

Three years of public blogging and still going strong.

Keeping calm and focused in 2020 is a challenge. Writing is an outlet.

Three years and counting

Three years ago, I wrote my first public blog on Medium. It was about Symmetry in API design. Two years later I published a blog celebrating two years of blogging.

Two Years and Fifty Blogs

On to 2020 — Finding More Bloggers

I set my goal for 2020 to find more bloggers and help them find their voices.

I have a new sense of community and purpose since I was selected as a Java Champion. I want to help more bloggers find their voices. It is hard to write, and very hard to write regularly, but it is so critically important to leave a bit of what we know to the current and future generations of developers to learn from.

I am happy to report that I have found some more bloggers and they have begun sharing their stories. Here are three folks who started blogging on Medium in 2020: Alex Goldberg, Sirisha Pratha, Vladimir Zakharov. Congrats and good journey to all of them and all of the other bloggers out there who have been finding their voices in 2020. Keep writing and telling your stories! I want to keep reading!

2020 — Surviving

My Top blog in 2020 so far is “Java Streams are great but it’s time for better Java Collections.”

Java Streams are great but it’s time for better Java Collections

My personal favorite blog so far in 2020 is “What I learned about COVID-19 from Acute Myeloid Leukemia.”

What I learned about COVID-19 from Acute Myeloid Leukemia

My blogging style changed in 2020 with this blog. This blog is raw and unfiltered and in it I share personal stories that terrify me. I hope it helps some folks keep their focus on what is important in life as we all strive to survive, and to appreciate all of the folks who put themselves at risk every day to help others.

On to 2021 — Surviving 2020

That’s it, that’s my whole plan. Stay safe, stay healthy, stay sane. Social distance, wear masks, stay home as much as possible. Spend time with family, write when I can, and take care of my family and my mental and physical health.

Thank you for taking the time to read my blogs. I hope you enjoy them and learn something useful from them now and again.

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at August 08, 2020 04:45 AM

Eager is Easy, Lazy is Labyrinthine

by Donald Raab at August 07, 2020 11:41 PM

From initialization to iteration, learning eager is easier than learning lazy.

java.util.* interfaces and classes (yellow) and custom collection interfaces (cyan) with eager iteration methods

The difference between eager and lazy

An eager algorithm executes immediately and returns a result. A lazy algorithm defers computation until it is necessary to execute and then produces a result.

Eager and lazy algorithms both have pros and cons. Eager algorithms are easier to understand and debug. They can also be highly optimized for a single use case (e.g. filter). Lazy algorithms sometimes result in less computation, and if there are multiple steps in the computation (e.g. filter, map, reduce), there will be less temporary garbage created.

I usually prefer using eager algorithms by default, and lazy algorithms when I see an opportunity for an optimization. Both eager and lazy algorithms are useful so I always want both available in my toolkit.

Here’s a simple example showing eager initialization and lazy initialization.

Eager Initialization

class SomeClass
{
    private final List<String> strings = new ArrayList<>();

    public List<String> getStrings()
    {
        return this.strings;
    }
}

In the eager case, the List named strings is initialized immediately when an instance of SomeClass is created. This allows the variable strings to be declared final, as it will only ever be initialized when the instance of the class is created.

Lazy Initialization

class SomeClass
{
    private List<String> strings;

    public List<String> getStrings()
    {
        if (this.strings == null)
        {
            this.strings = new ArrayList<>();
        }
        return this.strings;
    }
}

In the lazy case, the List named strings is initialized only if the method getStrings is called. The computation required is deferred until the method is called. If the method is never called, then the extra computation is never required.

It’s Hard Work Being Lazy

The eager implementation of initialization is slightly less complicated than the lazy implementation shown above. In the case of eager and lazy iteration, the difference in complexity is much more pronounced.

The following code example shows how a filter can be applied to a collection using an eager and a lazy implementation. The lazy implementation uses Java Streams. The eager implementation uses a proof-of-concept collections framework with a type called MutableList that was shown in the diagram above. The filter method on MutableList applies a Predicate to each element of the list and returns a MutableList. The code for the POC collections framework is linked at the bottom of this blog.

@Test
public void filter()
{
    MutableList<Integer> list = MutableList.of(1, 2, 3, 4, 5);

    // eager filter method on MutableList
    MutableList<Integer> eagerFilter =
        list.filter(each -> each % 2 == 0);

    // lazy filter method on java.util.stream.Stream
    List<Integer> lazyFilter = list.stream()
        .filter(each -> each % 2 == 0)
        .collect(Collectors.toList());

    var expected = List.of(2, 4);
    Assert.assertEquals(expected, eagerFilter);
    Assert.assertEquals(expected, lazyFilter);
}

Both of these examples should be easy enough to read, and they have the same result, as the test code illustrates. Both examples take a list of integers from 1 to 5 and keep only the even numbers, resulting in a list with 2 and 4. There is only one method call (filter) required for the eager implementation, compared to the four method calls (stream, filter, collect, toList) required for the lazy implementation.

The real complexity lies beneath the implementation code here and can be seen during debugging if we put a break point in the Predicate.

I will start by debugging the eager filter method. I will put a breakpoint on the lambda that tests whether an integer is even, which should pause the execution for each integer in the list.

Debugging eager filter

Breakpoint on the Predicate for eager filter

When I debug the code this is the stack trace I see.

The Lambda code is at the top of the stack trace

If I step into filter method on MutableList, this is the code I see.

Debugging the implementation of the filter method on MutableList

This is easy to reason about and I have only one method to look into to understand how the filter method on MutableList works. I can see that the lambda is turned into a Predicate and the test method of the Predicate interface is called in an if statement inside of the for loop that is iterating over the list.

Debugging lazy filter

Now let’s debug the lazy filter method on Stream.

Breakpoint on the Predicate for lazy filter

When I debug the code this is the stack trace I see.

The Lambda code is at the top of the stack trace

Similar to when I debugged the eager code, the lambda code is at the top of the stack trace. However, I do not see the filter method on Stream in the stack trace. This is because filter returns a new Stream but does not execute the code in the lambda. The execution happens in the collect method because it is a terminal operation. If I step into the collect method this is what I see.

Debugging the implementation of the collect method on Stream

If I want to find the loop that is iterating through the elements of the list, I will need to step into the forEachRemaining method in the stack trace which is on ArrayListSpliterator inside of the ArrayList class.

Debugging forEachRemaining on ArrayListSpliterator in ArrayList

Lazy iteration is harder to understand than eager iteration. You will need to navigate a labyrinth of methods to follow the path of execution with Java Streams. Developers looking to understand how an algorithm works will most likely find it easier to understand an eager implementation first.

Understanding the Order of Things

If we stack several operations together, and use the peek method to output the current value of something, we can trace the order in which things are completed with both eager and lazy execution.

Tracing the order of execution of filter, map, reduce using eager and lazy algorithms
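
The traced code itself is shown as a screenshot in the original post. The following is a reconstruction under assumptions: an identity map step, a summing reduce, and the POC MutableList method signatures inferred from the earlier example.

// Eager: each step runs over the whole list before the next one starts.
int eagerSum = MutableList.of(1, 2, 3, 4, 5)
    .filter(each -> { System.out.println("filter: " + each); return each % 2 == 0; })
    .map(each -> { System.out.println("map: " + each); return each; })
    .reduce(0, (result, each) -> { System.out.println("reduce: " + each); return result + each; });

// Lazy: nothing executes until the terminal reduce pulls elements through the
// Stream, so filter, map and reduce run element by element.
int lazySum = List.of(1, 2, 3, 4, 5).stream()
    .filter(each -> { System.out.println("stream filter: " + each); return each % 2 == 0; })
    .map(each -> { System.out.println("stream map: " + each); return each; })
    .reduce(0, (result, each) -> { System.out.println("stream reduce: " + each); return result + each; });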

Output for Eager

filter: 1
filter: 2
filter: 3
filter: 4
filter: 5
map: 2
map: 4
reduce: 2
reduce: 4

Output for Lazy

stream filter: 1
stream filter: 2
stream map: 2
stream reduce: 2
stream filter: 3
stream filter: 4
stream map: 4
stream reduce: 4
stream filter: 5

Notice how the order of eager matches the order of the method calls. It goes filter, followed by map, and then by reduce. In the case of lazy, the order of methods is determined by the data. For each element that matches in filter, map and reduce are then executed for that element.

The total number of executions for both eager and lazy is the same here: 9. The eager approach, although easier to understand and reason about, generates two temporary MutableList instances (one for filter and one for map), whereas the lazy approach does not generate any temporary collections. This is one place where there might be a performance gain using the lazy approach, especially if the source MutableList is large.

The area where lazy really shines is when short-circuiting can happen, reducing the total amount of work necessary.

Understanding the Effects of Short-circuiting

If we use a short-circuiting method like anyMatch, we can see some real potential performance benefits of using lazy iteration.
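
Again, the original code is a screenshot; here is a sketch of the lazy side only, with an assumed anyMatch predicate that the first mapped value satisfies (the exact predicates in the original are not reproduced). Because anyMatch is a short-circuiting terminal operation, the Stream stops pulling elements as soon as a match is found, whereas the eager pipeline, as the traces below show, still runs filter and map over the entire list first.

boolean lazyAny = List.of(1, 2, 3, 4, 5).stream()
    .filter(each -> { System.out.println("stream filter: " + each); return each % 2 == 0; })
    .map(each -> { System.out.println("stream map: " + each); return each; })
    .anyMatch(each -> { System.out.println("stream anyMatch: " + each); return each > 1; });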

Output for Eager

filter: 1
filter: 2
filter: 3
filter: 4
filter: 5
map: 2
map: 4
anyMatch: 2
anyMatch: 4

Output for Lazy

stream filter: 1
stream filter: 2
stream map: 2
stream anyMatch: 2

This illustrates nicely where lazy iteration shines. The lazy iteration does not have to visit the entire collection. After all the hard work of implementation and understanding, we can benefit from reducing the total amount of work necessary by using lazy iteration.

Eager or Lazy? Why not both?

It makes sense to have eager implementations of algorithms directly on collections interfaces and to also have symmetric lazy implementations available on Streams. Having the eager iteration methods directly on the collections will make the code easier to learn, to teach and to debug. This lowers the cost for developers to build an understanding of iteration pattern implementations. The lazy implementations on Streams are a great performance optimization and are easy to move to with symmetric APIs on the collections interfaces like filter, map, reduce, etc. As with all performance optimizations, the code for lazy can be harder to understand and debug.

Further Information

The following blog explains eager, lazy, serial and parallel in more depth and compares the performance of different algorithms against large data sets.

The 4 am Jamestown-Scotland ferry and other optimization strategies

If you would like to see the code I used for the custom collections, check out the Deck of Cards Kata in the GitHub repo here.

If you would like to learn more about the potential benefits of having eager methods on collections in Java today, then check out the following blog.

Java Streams are great but it’s time for better Java Collections

Have fun programming!

I am a Project Lead and Committer for the Eclipse Collections OSS project at the Eclipse Foundation. Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


Eager is Easy, Lazy is Labyrinthine was originally published in Javarevisited on Medium, where people are continuing the conversation by highlighting and responding to this story.


by Donald Raab at August 07, 2020 11:41 PM

Eclipse Vert.x 4 beta 1 released!

by vietj at July 28, 2020 12:00 AM

We are extremely pleased to announce the first 4.0 beta release of Eclipse Vert.x.

Vert.x 4 is the evolution of the Vert.x 3.x series that will bring key features to Vert.x.

SQL client metrics

Vert.x 4 supports metrics for clients which are critical for monitoring application performance.

While the capabilities are generic and can apply to any client, each client needs a specific integration. Obviously, the SQL client was the perfect candidate for this new feature.

Micrometer metrics will report these metrics as

  • vertx_sql_queue_pending: number of requests scheduled but not yet executed
  • vertx_sql_queue_time: time spent in queue before processing
  • vertx_sql_processing_pending: number of requests being processed
  • vertx_sql_processing_time: request latencies

A better API for JDBC Client

Our JDBC client will not go away in Vert.x 4; we do recognize that JDBC is important because it supports the largest number of databases in the ecosystem.

When we designed the SQL client API, we strove to come up with the simplest and most powerful API for an asynchronous SQL client.

This release brings an implementation of the SQL client API for JDBC.
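
The announcement does not include a snippet, but a rough sketch of what the JDBC flavour of the SQL client API might look like follows; the exact class and option names may differ in the beta:

JDBCConnectOptions connectOptions = new JDBCConnectOptions()
  .setJdbcUrl("jdbc:h2:mem:test")
  .setUser("sa")
  .setPassword("");

// Create the pooled client backed by a plain JDBC driver
JDBCPool pool = JDBCPool.pool(vertx, connectOptions, new PoolOptions().setMaxSize(5));

// A simple query, using the same API as the other reactive SQL clients
pool
  .query("SELECT * FROM users")
  .execute(ar -> {
    if (ar.succeeded()) {
      System.out.println("Got " + ar.result().size() + " rows");
    } else {
      System.out.println("Failure: " + ar.cause().getMessage());
    }
  });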

The 3.x API series will continue to be supported for the lifetime of Vert.x 4.

Event loop affinity

Using Vert.x from a non Vert.x thread is a very common use case we have been supporting since Vert.x 3.

When you use a Vert.x resource (like a client) from a non Vert.x thread, Vert.x 3 obtains a new event loop every time it happens.

In Vert.x 4 we decided to pin the first event loop to the non Vert.x thread. The goal is to prevent some data races and also to make reasoning about this easier.

Vertx vertx = Vertx.vertx();

for (int i = 0;i < 4;i++) {
  String msg = "Message " + i;
  vertx.runOnContext(v -> {
    System.out.println(msg);
  });
}

Running this with Vert.x 3 will print the 4 lines, but they are likely not to be in order, and this code could also be running in parallel (that is, two different threads running at the same time on different CPU cores).

Running this with Vert.x 4 will print the 4 lines in the correct order and always from the same thread. This eliminates some potential data races and also allows us to reason about what will happen at runtime.

Vert.x Json Schema supports Draft2019-09

The new vertx-json-schema module now supports the latest Json Schema Draft2019-09 spec. You can finally play with the new $recursiveRef to build extensible recursive schemas and with unevaluatedProperties/unevaluatedItems to define strict schemas. Look at the module documentation to start using it.

Clustering configuration simplified

In Vert.x 3, cluster host was set to localhost by default in EventBusOptions. Consequently, a lot of new users were confused about why event bus consumers and producers were not able to communicate even if the underlying cluster manager was configured correctly.

Also, when using the CLI tool or the Launcher class, Vert.x tried to find a host among available network interfaces if none was provided with the -cluster-host argument. Sometimes, the host chosen by the cluster manager and Vert.x were not the same.

Starting with Vert.x 4 beta 1, the cluster host default has been removed and, if users don’t provide any, Vert.x will ask the cluster manager which one it picked before trying to find one itself. This applies whether Vert.x is embedded in any Java program or started with the CLI tool or with the Launcher class.

So far, only vertx-hazelcast and vertx-infinispan cluster managers can provide Vert.x with a cluster host. When other cluster managers are used, Vert.x will choose one itself.

Cluster manager upgrades

vertx-hazelcast has been upgraded to Hazelcast 4.0.2 and vertx-infinispan to Infinispan 11.0.1.Final.

Finally

This is the Beta1 release of Vert.x 4; you can of course expect more betas as we get feedback from the community and fix issues that we failed to catch before.

You can also read the milestone announcements to learn more about the overall changes:

The deprecations and breaking changes can be found on the wiki.

For this release there are no Docker images.

The release artifacts have been deployed to Maven Central and you can get the distribution on Maven Central.

You can bootstrap a Vert.x 4.0.0.Beta1 project using https://start.vertx.io.

The documentation has been deployed on this preview web-site https://vertx-web-site.github.io/docs/

That’s it! Happy coding and see you soon on our user or dev channels.


by vietj at July 28, 2020 12:00 AM

Jakarta EE Community Update July 2020

by Tanja Obradovic at July 25, 2020 12:44 PM

With the Jakarta EE 9 milestone release out, upcoming JakartaOne Livestream events, and a new Jakarta EE community for Chinese-speaking developers, there are more ways than ever to get involved in cloud native technologies for Java.

Heads Up: JakartaOne Livestream Is Fall 2020 

The JakartaOne Livestream virtual conference showcases the technical benefits and architectural advances that become possible with cloud native Java, Eclipse MicroProfile, Jakarta EE, and Java EE technologies.

This one-day event (date to be announced shortly) is a great way for the Java community, developers, and architects to share best practices, technical insight, experiences, use cases, and innovations and to discuss the future of Jakarta EE.

 This year’s JakartaOne Livestream event builds on the success of last year’s event — our first-ever — which attracted more than 1,400 participants and was very well received by attendees.

 Stay tuned for registration details. In the meantime:

·      Visit the JakartaOne Livestream website

·      Submit a paper: The call for papers closes early August

·      Follow @JakartaOneConf on Twitter for live event updates, speaker announcements, news, and more

JakartaOne Livestream Events in Portuguese and Spanish

We’re very pleased to tell you about two additional JakartaOne Livestream events that will help more community members benefit from the expertise of our global Jakarta EE community.

These events follow a similar format and have the same benefits as the JakartaOne Livestream event in English on September 16, but sessions are presented in the local language.

 JakartaOne Livestream – Brazil, August 29

·      Registration is open

·      Sessions are in Portuguese

JakartaOne Livestream – Español, October 12

·      Call for papers: Closes August 10

·      Sessions are in Spanish

·      Stay tuned for registration details!

The Jakarta EE 9 Milestone Release Is Out and Needs Your Feedback

Now that the Jakarta EE 9 milestone release is available, we’re asking the entire Jakarta EE community to use the software and report back on any issues. Your efforts will go a long way toward helping our community ensure the quality and timeliness of the General Availability release.

Here are the basic steps:

·      Download the milestone release and run/test your application to plan your namespace changes, if needed.

·      Use the Eclipse Transformer project to make the namespace changes.

·      Report any issues you come across.

Also, please review the Jakarta EE 9 specification documents to make sure all “Jakartafication” is done properly.

For inspiration, visit Markus Karg’s website. Markus is actively using the milestone release and, as you’ll see in the video on his site, got into the spirit of our virtual release party with his own, very impressive-looking cupcake.

Speaking of the release party, it was a great success, with more than 200 people attending live or watching the replay as of June 30. If you haven’t seen the replay, you can register for it here.

One final note on Jakarta EE 9: We encourage everyone in the Jakarta EE community to reach out to the companies that create their favorite developer tools and ask them to support Jakarta EE 9! You can always point developer tools vendors to the Jakarta EE 9 Tools Vendors Datasheet to help them out.

2020 Jakarta EE Developer Survey Results Are Available

With more than 2,100 survey responses, this year’s survey results provide considerable insight into how the cloud native world for enterprise Java is unfolding and what that means for the Java ecosystem.

 To access the complete survey results, register here.

 For more insight into the significance of the survey results and the Jakarta EE 9 milestone release:

·      Read Mike Milinkovich’s blog on the tremendous growth we’re seeing in Jakarta EE.

·      Read the press release.

Join Community Update Calls

Jakarta EE community calls are open to everyone! For upcoming dates and connection details, see the Jakarta EE Community Calendar.

 The next call will be August 12 at 11:00 a.m. EDT. Topics will include:

●  Jakarta EE 9 update and Java SE 8 and Java SE 11 direction: Kevin Sutter

●  Developer tools support for Jakarta EE 9: David Blevins

●  Update from the Eclipse Foundation on news, events, programs, and marketing: Ivar Grimstad, Shabnam Mayel, and Tanja Obradovic

●  Topics and questions from the community

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations:

·      July 15 call and presentations

·      The complete playlist

NEW: Chinese-Speaking Jakarta EE Community

We’re super-excited to share the news that the Jakarta EE Community China is being formed by individuals and organizations interested in Jakarta EE.

All communications and work done in this community will be in Chinese. The goals are to:

·      Engage more Chinese-speaking developers in the Jakarta EE community

·      Ensure vendor neutrality

 To get involved in the community, please complete this form in Chinese.

 If you’re an English-speaking member of the Jakarta EE community and are just curious about the form, here’s a translated version for your information.

The Next Friends of Jakarta EE Call Is August 26

The Friends of Jakarta EE monthly calls are held on the fourth Wednesday of every month. This call is by the community, for the community — simply an opportunity for everyone to get together virtually and talk once a month.

Here are the details for our next call:

·      Date: Wednesday, August 26 at 11:00 a.m. EDT

·      Agenda: https://bit.ly/2zkgQWc

·      Zoom: https://eclipse.zoom.us/j/92996495448

·      Calendar: https://bit.ly/2XKpcQa

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:

·      Social media: Twitter, Facebook, LinkedIn Group

·      Mailing lists: jakarta.ee-community@eclipse.org, jakarta.ee-wg@eclipse.org, project mailing lists

·      Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs

·      Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

You can find the complete list of channels here.

To help shape the future of open source, cloud native Java, get involved in the Jakarta EE Working Group.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.


by Tanja Obradovic at July 25, 2020 12:44 PM

A web-based modeling tool based on Eclipse Theia

by Jonas Helming and Maximilian Koegel at July 24, 2020 08:20 AM

Are you interested in implementing a new domain-specific tool in the cloud and based on Eclipse Theia? Or do you want...

The post A web-based modeling tool based on Eclipse Theia appeared first on EclipseSource.


by Jonas Helming and Maximilian Koegel at July 24, 2020 08:20 AM

Eclipse RCP and REST – JAX-RS Extensions

by Patrick at July 23, 2020 04:40 PM

This is a continuation of a series of blog posts demonstrating the use of the ECF Remote Services JAX-RS Jersey Client within an Eclipse RCP application. The previous posts are:

In this post I’ll demonstrate how to use standard JAX-RS extensions in this environment.

Introduction to JAX-RS extensions

JAX-RS extensions can be used to customize many different aspects of REST service requests and responses. The current extensions supported by the ECF Remote Services JAX-RS Jersey Client are:

  • ClientRequestFilter and ClientResponseFilter
  • ContextResolver
  • ExceptionMapper
  • Feature
  • MessageBodyReader and MessageBodyWriter
  • ReaderInterceptor and WriterInterceptor

The specific uses of these extensions are beyond the scope of this article, but I will show a few examples below that should provide a starting point.

Extending JAX-RS using Declarative Services

As the ECF client (and Eclipse RCP for that matter) is built on OSGi, the best way to register JAX-RS extensions is by contributing them as OSGi services. It is fairly trivial to create and register an extension using Declarative Services annotations. For example, here is the code for a ClientRequestFilter.

@Component(service = ClientRequestFilter.class)
public class LaunchServiceClientRequestFilter implements ClientRequestFilter {

	@Override
	public void filter(ClientRequestContext context) throws IOException {
		MultivaluedMap<String, Object> headers = context.getHeaders();
		Map<String, Cookie> cookies = context.getCookies();
		
		/* Manipulate headers or cookies before request is made */
	}
}

A common use case for this type of filter is to intercept and modify the request headers and cookies being sent to the REST service. For example, you may want to manage cookies that relate to back-end security services.

Contributing a custom ObjectMapper

One of the most common JAX-RS customizations to make is to contribute a service-specific ObjectMapper. A REST service client will often need a custom ObjectMapper to control the way JSON is serialized and/or deserialized into Java POJOs.

In the SpaceX Launch Service that we’ve been using as an example, the Launch POJO is currently configured to map from the SpaceX JSON format to Java camel casing using an annotation on the POJO.

@JsonNaming(PropertyNamingStrategy.SnakeCaseStrategy.class) // r/SpaceX uses underscored field names
public class Launch {

	private String flightNumber;
	private String missionName;
	
	public String getFlightNumber() {
		return flightNumber;
	}
	
	public String getMissionName() {
		return missionName;
	}
}

But as the number of POJOs used to define the service increases, it makes more sense to centralize this customization in a contributed ObjectMapper. This contribution can be made with a ContextResolver defined as an OSGi service using DS annotations.

@Component(service = ContextResolver.class)
public class LaunchServiceObjectMapperResolver implements ContextResolver<ObjectMapper> {

	private ObjectMapper objectMapper;
	
	public LaunchServiceObjectMapperResolver() {
	    objectMapper = new ObjectMapper();
	    objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
	    objectMapper.setPropertyNamingStrategy(PropertyNamingStrategy.SNAKE_CASE);
	    
	    /* Add custom serializers or deserializers, etc. if needed */
	}

	@Override
	public ObjectMapper getContext(Class<?> type) {
		return objectMapper;
	}
}

At this point, the Jackson annotation can be removed from the Launch POJO.

Wrapping up

JAX-RS extensions give you the power to customize many aspects of REST calls within an Eclipse RCP application, and it’s very simple to contribute these extensions using OSGi Declarative Services annotations. The example code on GitHub has been updated to demonstrate how this works.

Note that if you’ve been using earlier versions of the ECF Remote Services JAX-RS Jersey Client, you’ll need to reload the target to bring in the latest version (1.13.8 or later).

https://github.com/modular-mind/spacex-client


by Patrick at July 23, 2020 04:40 PM

Dogfooding the Eclipse Dash License Tool

by waynebeaton at July 22, 2020 03:43 PM

There’s background information about this post in my previous post. I’ve been using the Eclipse Dash License Tool on itself.

$ mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 7 items.
Found 6 items.
Querying ClearlyDefined for license data for 1 items.
Found 1 items.
Vetted license information was found for all content. No further investigation is required.
$ _

Note that in this example, I’ve removed the paths to try and reduce at least some of the clutter. I also tend to add a filter to sort the dependencies and remove duplicates (| sort | uniq), but that’s not required here so I’ve left it out.

The message that “[v]etted license information was found for all content”, means that the tool figures that all of my project’s dependencies have been fully vetted and that I’m good to go. I could, for example, create a release with this content and be fully aligned with the Eclipse Foundation’s Intellectual Property Policy.

The tool is, however, only as good as the information that it’s provided with. Checking only the Maven build completely misses the third party content that was introduced by Jonah’s helpful contribution that helps us obtain dependency information from a yarn.lock file.

$ cd yarn
$ node index.js | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 1 items.
Found 0 items.
Querying ClearlyDefined for license data for 1 items.
Rejected: https://clearlydefined.io/definitions/npm/npmjs/@yarnpkg/lockfile/1.1.0
Found 0 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.

$ _

So… oops. Missed one.

Note that the updates to the IP Policy include a change that allows project teams to leverage third-party content (that they believe to be license compatible) in their project code during development. All content must be vetted by the IP due diligence process before it may be leveraged by any release. So the project in its current state is completely onside, but the license of that identified bit of content needs to be resolved before it can be declared as proper release as defined by the Eclipse Foundation Development Process.

This actually demonstrates why I opted to create the tool as CLI that takes a flat list of dependencies as input: we use all sorts of different technologies, and I wanted to focus the tool on providing license information for arbitrary lists of dependencies.

I’m sure that Denis will be able to rewrite my bash one-liner in seven keystrokes, but here’s how I’ve combined the two so that I can get complete picture with a “single” command:

$ { mvn dependency:list | grep -Poh "\S+:(system|provided|compile)$" ; cd yarn && node index.js; } | java -jar licenses.jar -
Querying Eclipse Foundation for license data for 8 items.
Found 6 items.
Querying ClearlyDefined for license data for 2 items.
Rejected: https://clearlydefined.io/definitions/npm/npmjs/@yarnpkg/lockfile/1.1.0
Found 1 items.
License information could not automatically verified for the following content:

npm/npmjs/@yarnpkg/lockfile/1.1.0 (null)

Please create contribution questionnaires for this content.
$ _

I have some work to do before I can release. I’ll need to engage with the Eclipse Foundation’s IP Team to have that one bit of content vetted.

As a side effect, the tool generates a DEPENDENCIES file. The DEPENDENCIES file lists all of the dependencies provided in the input in ClearlyDefined coordinates along with license information, whether the content is approved for use or is restricted (meaning that further investigation is required), and the authority that determined the status.

maven/mavencentral/org.glassfish/jakarta.json/1.1.6, EPL-2.0 OR GPL-2.0 WITH Classpath-exception-2.0, approved, emo_ip_team
maven/mavencentral/commons-codec/commons-codec/1.11, Apache-2.0, approved, CQ15971
maven/mavencentral/org.apache.httpcomponents/httpcore/4.4.13, Apache-2.0, approved, CQ18704
maven/mavencentral/commons-cli/commons-cli/1.4, Apache-2.0, approved, CQ13132
maven/mavencentral/org.apache.httpcomponents/httpclient/4.5.12, Apache-2.0, approved, CQ18703
maven/mavencentral/commons-logging/commons-logging/1.2, Apache-2.0, approved, CQ10162
maven/mavencentral/org.apache.commons/commons-csv/1.8, Apache-2.0, approved, clearlydefined
npm/npmjs/@yarnpkg/lockfile/1.1.0, unknown, restricted, none

Most of the content was vetted by the Eclipse Foundation’s IP Team (the entries marked “CQ*” have corresponding entries in IPZilla), one was found in ClearlyDefined, and one requires further investigation.

The tool produces good results. But, as I stated earlier, it’s only as good as the input that it’s provided with and it only does what it is designed to do (it doesn’t, for example, distinguish between prerequisite dependencies and dependencies of “works with” dependencies; more on this later). The output of the tool is obviously a little rough and could benefit from the use of a proper configurable logging framework. There’s a handful of other open issues for your consideration.


by waynebeaton at July 22, 2020 03:43 PM

JBoss Tools and Red Hat CodeReady Studio for Eclipse 2020-06

by jeffmaury at July 21, 2020 07:00 AM

JBoss Tools 4.16.0 and Red Hat CodeReady Studio 12.16 for Eclipse 2020-06 are here waiting for you. Check it out!

crstudio12

Installation

Red Hat CodeReady Studio comes with everything pre-bundled in its installer. Simply download it from our Red Hat CodeReady product page and run it like this:

java -jar codereadystudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) CodeReady Studio require a bit more:

This release requires at least Eclipse 4.16 (2020-06), but we recommend using the latest Eclipse 4.16 2020-06 JEE Bundle since it comes with most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat CodeReady Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/photon/stable/updates/

What is new?

Our main focus for this release was new tooling for the Quarkus framework, improvements for container-based development, and bug fixing. Eclipse 2020-06 itself has a lot of cool new stuff, but let me highlight just a few updates in both Eclipse 2020-06 and the JBoss Tools plugins that I think are worth mentioning.

OpenShift

Secure URL support

It is now possible to create secured URLs in the Application Explorer View. If you select this option, the created URL will be accessible through https.

secure url

When such an URL is displayed in the tree, the icon now has a secure lock indicator.

secure url1

OpenShift Container Platform 4.5 support

With the new OpenShift Container Platform (OCP) 4.5 now available, JBoss Tools is compatible with this major release in a transparent way. Just define your connection to your OCP 4.5 based cluster as you did before for an OCP 3 cluster, and use the tooling!

Quarkus

Server Tools

Wildfly 20 Server Adapter

A server adapter has been added to work with Wildfly 20. It adds support for Java EE 8, Jakarta EE 8 and Microprofile 3.3.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.17.Final and Hibernate Tools version 5.4.17.Final.

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.17.Final and Hibernate Tools version 5.3.17.Final.

Platform

Views, Dialogs and Toolbar

Create missing folders from the New File wizard

You can now create missing folders directly via the New File wizard, without explicitly creating folders beforehand.

file and folder

Text Editors

Support for ligatures on Windows

Eclipse now supports font ligatures on Windows. It was already supported on Linux and macOS. You can specify the font with ligatures to be used by the Text editors using the preference:

General > Appearance > Colors and Font > Basic > Text Font

Screenshot of ligatures rendered in the Java Editor on Windows 10:

eclipse ligatures support win

Themes and Styling

Native dark scrollbars in Windows dark theme

The Eclipse dark theme now uses the native dark scrollbars and retired the software solution for the editor area.

dark theme scrollbars
Eclipse toolbar’s styling on Windows aligned with Win 10

The default Eclipse light theme has been updated to align better with the Windows 10 default theme.

Old:

old light theme

New:

new light theme
Square tabs for views

Square tabs are now used by default for the views in the Eclipse IDE.

dark theme square tabs

In order to switch back to using round tabs, a preference has been added.

round tabs preference option
Consistent toolbar colors in dark theme

The toolbar styling in the dark theme is now consistent.

dark theme toolbar

Preferences

Verify installation operations against current JRE

A new option (on by default) is available in the Install/Update preference page: Verify provisioning operation is compatible with current running JRE. This enables some extra checks when installing, updating or uninstalling content using the standard dialogs, so the operation will fail with a useful message if the units you’re installing require a newer or incompatible Java runtime than the one that’s currently in use to run the IDE.

incompatibleJREPref

Here is what the error message looks like, for example when you’re trying to install a unit that requires Java 14 and you’re running the Eclipse IDE with an older Java version:

incompatibleJREMessage
Preference to inline rename resource

The preference to rename resource inline or using dialog was added in 4.15 as a radio button and has now been changed to a check box.

inlineRenameResource

Debug

'Select All' and 'Deselect All' for Import breakpoints wizard

You can now use Select All or Deselect All buttons to select or deselect all the breakpoint markers during import of breakpoints.

import selectall

General Updates

Show key bindings when command is invoked

For presentations, screen casts and learning purposes, it is very helpful to show the corresponding key binding when a command is invoked. This was added some releases ago.

show keybindings

It is now possible to enable this feature separately for keyboard interaction and mouse clicks. So you can enable it for mouse clicks only, for keyboard interaction only or for both. Enabling this only for mouse clicks is very helpful for users who want to learn existing key bindings.

You can enable this on the Preferences dialog via the Show key binding when command is invoked group on the General > Keys preference page. To change this setting quickly the command 'Toggle Show Key Bindings' can be used (e.g. via the find actions dialog).

show keybindings pref
Ant 1.10.8

Eclipse has adopted Ant version 1.10.8.

Java Development Tools (JDT)

Java 14 Support

Java 14

Java™ 14 is available and Eclipse JDT supports Java 14 for the Eclipse 4.16 release.

The release notably includes the following Java 14 features:

  • JEP 361: Switch Expressions (Standard).

  • JEP 359: Records (Preview).

  • JEP 368: Text Blocks (Second Preview).

  • JEP 305: Pattern Matching for Instanceof (Preview).

Please note that the preview option should be on for preview language features. For an informal introduction to the support, please refer to the Java 14 Examples wiki.

Set JDK Compliance to 14

You can set the JDK compliance to 14 and enable the preview features in Preferences > Java > Compiler:

jdk compliance 14
Template to create new record

You can use the new_record template to create a record in an empty .java file:

newrecord
Record Creation Wizard

You can create a new record using the Record creation wizard that can be opened by:

  • Right Click on the Project > New > Record

  • Right Click on the Project > New > Other and search for Record

  • Right Click on the Project > New > Other > Java > Record

The Record creation wizard comes up as shown below.

fileAddJ14RecordCreation

Note: In older workspaces the "Record" entry may not appear directly under the "New" menu in the Java perspective. To resolve this, either use a new workspace or launch eclipse with the option -clearPersistedState for your existing workspace.

Enable preview features

You can now quickly enable the preview features on an applicable Java project by right-clicking on it and selecting Configure > Enable preview features:

enable preview

You can also change the default severity (warning) of the preview features compile problem in the opened Project properties dialog:

preview severity

Java Editor

Non-blocking Java code completion

By default, code completions in the Java editor are now configured to be computed (when possible) in a separate non-UI thread in order to prevent UI freezes in case of long computations.

Users can restore the legacy behavior in Preferences > Java > Editor > Content Assist > Advanced by unchecking the enable non-blocking completion checkbox; integrators can change the value of the org.eclipse.jdt.ui.content_assist_noUIThread_computation to false.

jdtNonBlockingCompletionPref
Merge control workflows

A new clean up has been added that merges conditions of if/else if/else that have the same blocks when it is possible.

The code in the blocks should be the same. An else block may be different and won’t be merged. One condition may be made opposite to allow the merge. The conditions are merged with || to keep the control workflow the same. Parentheses are added to avoid precedence issues. Most of the brackets, formatting and comments are kept.

To select the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog select Merge conditions of if/else if/else that have the same blocks on the Unnecessary Code tab.

merge control workflows preferences

For the given code:

merge control workflows before

You get this after the clean up:

merge control workflows after
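
The before/after code is shown as screenshots in the original post; a small hypothetical example of what this clean up does:

// Before: the first two branches have identical blocks.
if (isActive) {
    process(order);
} else if (isPending) {
    process(order);
} else {
    reject(order);
}

// After the clean up: the conditions are merged with ||.
if (isActive || isPending) {
    process(order);
} else {
    reject(order);
}
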
Local variable type inference

A new clean up has been added that makes use of the var keyword for the local variable when it is possible and is enabled only for Java 10 and higher.

The clean up replaces the explicit variable type by var when the type can be inferred from the variable initialization. It also replaces the diamond operator in instance creations by a parameterized type. Finally, it adds a suffix to the initializing number literal to match the variable type. In any case, the variable type remains exactly the same.

To select the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog select Use the local variable type inference on the Code Style tab.

var preferences

For the given code:

var before

You get this after the clean up:

var after
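
The screenshots are not reproduced here; a small hypothetical example of the transformation:

// Before
Map<String, List<Integer>> scores = new HashMap<>();
long total = 0;

// After the clean up: the explicit type becomes var, the diamond operator is
// replaced by a parameterized type, and a suffix is added to the number literal.
var scores = new HashMap<String, List<Integer>>();
var total = 0L;
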
Prefer lazy logical operators

A new clean up has been added that replaces eager logical operators by lazy operators when it is possible.

The clean up respectively replaces | and & by || and && when the operands that follow cannot have side effects. Any assignments, increments, decrements, object creations or method calls may cause side effects, so in such cases it will keep the eager operator. It also leaves binary (bitwise) operations as they are.

To select the clean up, invoke Source > Clean Up…​, use a custom profile, and on the Configure…​ dialog select Use the lazy logical operator on the Code Style tab.

lazy logical preferences

For the given code:

lazy logical before

You get this after the clean up:

lazy logical after
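
The screenshots are not reproduced here; a small hypothetical example, using operands that cannot cause side effects:

// Before: eager operators
boolean isValid = isActive & count > 0;
boolean needsReview = isDraft | isFlagged;

// After the clean up: lazy operators
boolean isValid = isActive && count > 0;
boolean needsReview = isDraft || isFlagged;
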
Quick fix to change return statement to yield statement in Switch Expression

A quick fix has been added to convert a return statement in a Switch Expression to yield statement.

quickfix switch expression return to yield
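
The screenshot is not reproduced here; a small hypothetical example (day is a java.time.DayOfWeek value):

// Before: a return statement is not allowed inside a switch expression.
int numLetters = switch (day) {
    case MONDAY, FRIDAY, SUNDAY -> 6;
    default -> {
        return 0;   // compile error
    }
};

// After applying the quick fix: the return becomes a yield statement.
int numLetters = switch (day) {
    case MONDAY, FRIDAY, SUNDAY -> 6;
    default -> {
        yield 0;
    }
};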

Java Formatter

Record declarations support

A lot of new settings have appeared in the formatter profile to control the formatting of record declarations. They are very similar to existing settings related to other type declarations. To see them all, you can use the filter field and type in the keyword record.

formatter records

Debug

Synthetic variables inspection

The JDT debugger is now capable of inspecting synthetic variables which are generated by the Java compilers. One such example is debugging the following method, java.util.stream.ReferencePipeline.filter(Predicate<? super P_OUT>), and inspecting the predicate variable.

Before:

synthetic var without fix

Now:

synthetic var with fix

Preferences

Substring Matching

The content assist preference option Show Substring Matches has been removed and the feature is now always enabled.

Any application or user can still disable it using the VM property: -Djdt.codeCompleteSubstringMatch=false

And more…​

You can find more noteworthy updates on this page.

What is next?

With JBoss Tools 4.16.0 and Red Hat CodeReady Studio 12.16 out, we are already working on the next release for Eclipse 2020-09.

Enjoy!

Jeff Maury


by jeffmaury at July 21, 2020 07:00 AM

Eclipse Vert.x 3.9.2 released!

by vietj at July 21, 2020 12:00 AM

We are extremely pleased to announce that the Eclipse Vert.x version 3.9.2 has been released.

Among all the bug fixes you can find in 3.9.2, there is this enhancement:

Meet the Reactive DB2 Client

The Reactive SQL Client family gets a new child with an implementation contributed by our fellow maintainer Andy Guibert.

Using the DB2 client is as straightforward as its elder siblings:

DB2ConnectOptions connectOptions = new DB2ConnectOptions()
  .setPort(50000)
  .setHost("the-host")
  .setDatabase("the-db")
  .setUser("user")
  .setPassword("secret");

// Create the client pool
DB2Pool client = DB2Pool.pool(connectOptions, poolOptions);

// A simple query
client
  .query("SELECT * FROM users WHERE id='julien'")
  .execute(ar -> {
  if (ar.succeeded()) {
    RowSet result = ar.result();
    System.out.println("Got " + result.size() + " rows ");
  } else {
    System.out.println("Failure: " + ar.cause().getMessage());
  }

  // Now close the pool
  client.close();
});

Reactive MySQL Client domain socket support

The MySQL reactive Client can now connect using domain sockets.

// Connect Options
// Socket file name /var/run/mysqld/mysqld.sock
MySQLConnectOptions connectOptions = new MySQLConnectOptions()
    .setHost("/var/run/mysqld/mysqld.sock")
    .setDatabase("the-db");

// Create the pooled client
MySQLPool client = MySQLPool.pool(connectOptions, new PoolOptions().setMaxSize(5));

Finally

The 3.9.2 release notes can be found on the wiki, as well as the list of deprecations and breaking changes

Docker images are available on Docker Hub.

The Vert.x distribution can be downloaded on the website but is also available from SDKMan and HomeBrew.

The event bus client using the SockJS bridge is available from:

The release artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

That’s it! Happy coding and see you soon on our user or dev channels.


by vietj at July 21, 2020 12:00 AM

ECF 3.14.12 released - Now with gRPC for OSGi Remote Services

by Scott Lewis (noreply@blogger.com) at July 15, 2020 12:16 AM

ECF 3.14.12 was just released

Highlights of this Release

New OSGi Remote Services Distribution provider based upon gRPC/Protocol Buffers. Along with the grpc-osgi-generator project, which allows the generation of a service API from a proto3 service declaration, this provider allows gRPC-based services to be exported and imported as OSGi Remote Services. This now includes support for unary, server-streaming, and client-streaming gRPC calls.

Enhanced Support for Bndtools-based development of OSGi Remote Services.   The ECF Bndtools Workspace now includes the latest version of ECF Remote Services, along with the gRPC distribution provider, a Hazelcast-based discovery and distribution provider, and project and bndrun templates for creating, running, testing, and debugging OSGi Remote Services in Eclipse+Bndtools 5.



by Scott Lewis (noreply@blogger.com) at July 15, 2020 12:16 AM

Why ServiceCaller is better (than ServiceTracker)

July 07, 2020 08:00 PM

My previous post spurred a reasonable amount of discussion, and I promised to also talk about the new ServiceCaller, which simplifies a number of these issues. I also thought it was worth looking at what the criticisms were because they made valid points.

The first observation is that it’s possible to use both DS and ServiceTracker to track ServiceReferences instead. In this mode, the services aren’t triggered by default; instead, they only get accessed upon resolving the ServiceTracker using the getService() call. This isn’t the default out of the box, because you have to write a ServiceTrackerCustomizer adapter that intercepts the addingService() call to wrap the ServiceReference for future use. In other words, if you change:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);
serviceTracker.open();

to the slightly more verbose:

serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class,
    new ServiceTrackerCustomizer<Runnable, Wrapped<Runnable>>() {
        public Wrapped<Runnable> addingService(ServiceReference<Runnable> ref) {
            return new Wrapped<>(ref, bundleContext);
        }
        public void modifiedService(ServiceReference<Runnable> ref, Wrapped<Runnable> service) {
            // no-op; required by the ServiceTrackerCustomizer interface
        }
        public void removedService(ServiceReference<Runnable> ref, Wrapped<Runnable> service) {
            // no-op; required by the ServiceTrackerCustomizer interface
        }
    });

static class Wrapped<T> {
    private ServiceReference<T> ref;
    private BundleContext context;
    public Wrapped(ServiceReference<T> ref, BundleContext context) {
        this.ref = ref;
        this.context = context;
    }
    public T getService() {
        try {
            return context.getService(ref);
        } finally {
            context.ungetService(ref);
        }
    }
}

Obviously, no practical code uses this approach because it’s too verbose, and if you’re in an environment where DS services aren’t widely used, the benefits of the deferred approach are outweighed by the quantity of additional code that needs to be written in order to implement this pattern.

(The code above is also slightly buggy; we’re getting the service, returning it, then ungetting it afterwards. We should really just be using it during that call instead of returning it in that case.)

Introducing ServiceCaller

This is where ServiceCaller comes in.

The approach of the ServiceCaller is to optimise out the over-eager dereferencing of the ServiceTracker approach, and apply a functional approach to calling the service when required. It also has a mechanism to do single-shot lookups and calling of services; helpful, for example, when logging an obscure error condition or other rarely used code path.

This allows us to elegantly call functional interfaces in a single line of code:

Class<?> callerClass = getClass();
ServiceCaller.callOnce(callerClass, Runnable.class, Runnable::run);

This call looks for Runnable service types, as visible from the caller class, and then invokes the given function, here the Runnable::run method reference, as a lambda. We can use a method reference (as in the above case) or supply a Consumer<T> which will be passed the service that is resolved from the lookup.

Importantly, this call doesn’t acquire the service until the callOnce call is made. So, if you have an expensive logging factory, you don’t have to initialise it until the first time it’s needed – and even better, if the error condition never occurs, you never need to look it up. This is in direct contrast to the ServiceTracker approach (which actually needs more characters to type) that accesses the services eagerly, and is an order of magnitude better than having to write a ServiceTrackerCustomizer for the purposes of working around a broken API.

However, note that such one-shot calls are not the most efficient way of doing this, especially if it is to be called frequently. So the ServiceCaller has another mode of operation; you can create a ServiceCaller instance, and hang onto it for further use. Like its single-shot counterpart, this will defer the resolution of the service until needed. Furthermore, once resolved, it will cache that instance so you can repeatedly re-use it, in the same way that you could do with the service returned from the ServiceTracker.

private ServiceCaller<Runnable> service;

public void start(BundleContext context) {
    this.service = new ServiceCaller<>(getClass(), Runnable.class);
}

public void stop(BundleContext context) {
    this.service.unget();
}

public void doSomething() {
    service.call(Runnable::run);
}

This doesn’t involve significantly more effort than using the ServiceTracker that’s widely in use in Eclipse Activators at the moment, yet will defer the lookup of the service until it’s actually needed. It’s obviously better than writing many lines of ServiceTrackerCustomiser and performs better as a result, and is in most cases a type of drop-in replacement. However, unlike ServiceTracker (which returns you a service that you can then do something with afterwards), this call provides a functional consumer interface that allows you to pass in the action to take.

Wrapping up

We’ve looked at why ServiceTracker has problems with eager instantiation of services, and the complexity of code required to do it the right way. A scan of the Eclipse codebase suggests that outside of Equinox, there are very few uses of ServiceTrackerCustomiser and there are several hundred calls to ServiceTracker(xxx,yyy,null) – so there’s a lot of improvements that can be made fairly easily.

This pattern can also be used to push down the acquisition of the service from a generic Plugin/Activator level call to where it needs to be used. Instead of standing this up in the BundleActivator, the ServiceCaller can be used anywhere in the bundle’s code. This is where the real benefit comes in; by packaging it up into a simple, functional consumer, we can use it to incrementally rid ourselves of the various BundleActivators that take up the majority of Eclipse’s start-up.

A final note on the ServiceCaller – it’s possible that when you run the callOnce method (or the call method if you’re holding on to it) a service instance won’t be available. If that’s the case, the call method returns false; if a service is found and processed, it returns true. For some operations, a no-op is fine behaviour if the service isn’t present – for example, if there’s no LogService then you’re probably going to drop the log event anyway – but the return value allows you to take corrective action when you need to.

It does mean that if you want to capture return state from the method call then you’ll need an alternative approach. The easiest way is to have a final Object result[] = new Object[1]; before the call, and then the lambda can assign the return value to the array. That’s because local state captured by lambdas needs to be a final reference, but a final reference to a mutable single-element array allows us to poke a single value back. You could of course use a different class for the array, depending on your requirements.
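For illustration, a minimal sketch of that pattern might look like the following; StringProvider here is a hypothetical service interface with a get() method, used purely for the example:

final Object[] result = new Object[1];
ServiceCaller.callOnce(getClass(), StringProvider.class, provider -> {
    // the lambda can write into the effectively-final single-element array
    result[0] = provider.get();
});
String value = (String) result[0]; // stays null if no service was available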

So, we have seen that ServiceCaller is better than ServiceTracker, but can we do even better than that? We certainly can, and that’s the purpose of the next post.


July 07, 2020 08:00 PM

Why ServiceTracker is Bad (for DS)

July 02, 2020 08:00 PM

In a presentation I gave at EclipseCon Europe in 2016, I noted that there were problems when using ServiceTracker, and on slide 37 of my presentation noted that:

  • ServiceTracker.open() is a blocking call
  • ServiceTracker.open() results in DS activating services

Unfortunately, not everyone agrees because it seems insane that ServiceTracker should do this.

Unfortunately, ServiceTracker is insane.

The advantage of Declarative Services (aka SCR, although no-one calls it that) is that you can register services declaratively, but more importantly, the DS runtime will present the existence of the service but defer instantiation of the component until it’s first requested.

The great thing about this is that you can have a service which does many class loads or timely actions and defer its use until the service is actually needed. If your service isn’t required, then you don’t pay the cost for instantiating that service. I don’t think there’s any debate that this is a Good Thing and everyone, so far, is happy.

Problem

The problem, specifically when using ServiceTracker, is that you have to do a two-step process to use it:

  1. You create a ServiceTracker for your particular service class
  2. You call open() on it to start looking for services
  3. Time passes
  4. You acquire the service from the ServiceTracker to do something with it

There is a generally held mistaken belief that the DS component is not instantiated until you hit step 4 in the above. After all, if you’re calling the service from another component – or even looking up the ServiceReference yourself – that’s what would happen.

What actually happens is that the DS component is instantiated in step 2 above. That’s because the open() call – which is nicely thread-safe, by the way, in the way that getService() isn’t – starts looking for services and then gets and caches the initially tracked services, which causes DS to instantiate the component for you. Since most DS components have a default, no-arg constructor, this generally escapes most people’s attention.

If your component’s constructor – or, more importantly, the initialisation of the fields therein – causes many classes to be loaded or performs substantial work or calculation, the fact that you’re hitting a synchronized ServiceTracker.open() call can take a non-trivial amount of time. And since this is typically in an Activator.start() method, it means that your nicely delay-until-it’s-needed component is now on the critical path of this bundle’s start-up, despite the service not actually being needed right now.

This is one of the main problems in Eclipse’s start-up; many, many thousands of classes are loaded too eagerly. I’ve been working over the years to try and reduce the problem but it’s an uphill struggle and bad patterns (particularly the use of Activator) are endemic in a non-trivial subset of the Eclipse ecosystem. Of course, there are many fine and historical reasons why this is the case, not the least of which is that we didn’t start shipping DS in the Eclipse runtime until fairly recently.

Repo repro

Of course, when you point this out, not everyone is aware of this subtle behaviour. And while opinions may differ, code does not. I have put together a sample project which has two bundles:

  • Client, which has an Activator (yeah I know, I’m using it to make a point) that uses a ServiceTracker to look for Runnable instances
  • Runner, which has a DS component that provides a Runnable interface

When launched together, as soon as the ServiceTracker.open() method is called, you can see the console printing the "Component has been instantiated" message. This is despite the Client bundle never actually using the service that the ServiceTracker causes to be obtained.
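For reference, the Runner bundle’s component boils down to something like the following sketch (the class name and structure are assumptions for illustration, not copied from the sample repository):

import org.osgi.service.component.annotations.Component;

@Component(service = Runnable.class)
public class RunnerComponent implements Runnable {

    public RunnerComponent() {
        // printed as soon as DS instantiates the component, i.e. when the
        // Client bundle calls ServiceTracker.open(), not when run() is invoked
        System.out.println("Component has been instantiated");
    }

    @Override
    public void run() {
        // never actually called by the Client bundle in this demo
    }
}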

If you run it with the system property -DdisableOpen=true, the ServiceTracker.open() statement is not called, and the component is not instantiated.

This is a non-trivial reason as to why Eclipse startup can be slow. There are many, many uses of ServiceTracker to reach out to other parts of the system, and regardless of whether these are lazy DS components or have been actively instantiated, the use of ServiceTracker.open() causes them to all be eagerly activated, even before they’re needed. We can migrate Eclipse’s services to DS (and in fact, I’m working on doing just that) but until we eliminate the ServiceTracker from various Activators, we won’t see the benefit.

The code in the github repository essentially boils down to:

public void start(BundleContext bundleContext) throws Exception {
    serviceTracker = new ServiceTracker<>(bundleContext, Runnable.class, null);
    if (!Boolean.getBoolean("disableOpen")) {
        serviceTracker.open(); // This will cause a DS component to be instantiated even though we don't use it
    }
}

Unfortunately, there’s no way to use ServiceTracker to listen to lazily activated services, and as an OSGi standard, the behaviour is baked in to it.

Fortunately, there’s a lighter-weight tracker you can use called ServiceCaller – but that’s a topic for another blog post.

Summary

Using ServiceTracker.open() will cause lazily instantiated DS components to be activated eagerly, before the service is used. Instead of using ServiceTracker, try moving your service out to a DS component, and then DS will do the right thing.


July 02, 2020 08:00 PM

How to install RDi in the latest version of Eclipse

by Wim at June 30, 2020 03:57 PM

Monday, June 29, 2020
In this blog, I am going to show you how to install IBM RDi into the latest and the greatest version of Eclipse. If you prefer to watch a video then scroll down to the end.

Read more


by Wim at June 30, 2020 03:57 PM

Quarkus – Supersonic Subatomic IoT

by Jens Reimann at June 30, 2020 03:22 PM

Quarkus is advertised as a “Kubernetes Native Java stack, …”, so we put it to the test and checked what benefits we can get by replacing an existing service from the IoT components of EnMasse, the cloud-native, self-service messaging system.

The context

For quite a while, I wanted to try out Quarkus. I wanted to see what benefits it brings us in the context of EnMasse. The IoT functionality of EnMasse is provided by Eclipse Hono™, which is a micro-service based IoT connectivity platform. Hono is written in Java, makes heavy use of Vert.x, and the application startup and configuration is being orchestrated by Spring Boot.

EnMasse provides the scalable messaging back-end, based on AMQP 1.0. It also takes care of the Eclipse Hono deployment alongside EnMasse, wiring up the different services based on an infrastructure custom resource. In a nutshell, you create a snippet of YAML, and EnMasse takes care of deploying a messaging system for you, with first-class support for IoT.

Architecture diagram, explaining the tenant service.
Architectural overview – showing the Tenant Service

This system requires a service called the “tenant service”. That service is responsible for looking up an IoT tenant, whenever the system needs to validate that a tenant exists or when its configuration is required. Like all the other services in Hono, this service is implemented using the default stack, based on Java, Vert.x, and Spring Boot. Most of the implementation is based on Vert.x alone, using its reactive and asynchronous programming model. Spring Boot is only used for wiring up the application, using dependency injection and configuration management. So this isn’t a typical Spring Boot application: it uses neither Spring Web nor any of the Spring Messaging components. And the reason for choosing Vert.x over Spring in the past was performance. Vert.x provides excellent performance, which we tested a while back in our IoT scale test with Hono.

The goal

The goal was simple: make it use fewer resources while keeping the same functionality. We didn’t want to re-implement the whole service from scratch. And while the tenant service is specific to EnMasse, it still uses quite a lot of the base functionality coming from Hono. And we wanted to re-use all of that, as we did with Spring Boot. So this wasn’t one of those nice “greenfield” projects, where you can start from scratch with a nice and clean “Hello World”. This code is embedded in two bigger projects, passes system tests, and has a history of its own.

So, change as little as possible and get out as much as we can. What else could it be?! And just to understand from where we started, here is a screenshot of the metrics of the tenant service instance on my test cluster:

Screenshot of original resource consumption.
Metrics for the original Spring Boot application

Around 200MiB of RAM, a little bit of CPU, and not much to do. As mentioned before, the tenant service only gets queries to verify the existence of a tenant, and the system will cache this information for a bit.

Step #1 – Migrate to Quarkus

To use Quarkus, we started to tweak our existing project, to adopt the different APIs that Quarkus uses for dependency injection and configuration. And to be fair, that mostly meant saying good-bye to Spring Boot specific APIs, going for something more open. Dependency Injection in Quarkus comes in the form of CDI. And Quarkus’ configuration is based on Eclipse MicroProfile Config. In a way, we didn’t migrate to Quarkus, but away from Spring Boot specific APIs.
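As a rough illustration of the kind of change involved (the class name and configuration property below are invented for this sketch, not taken from the Hono or EnMasse code base):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped  // CDI scope annotation, replacing a Spring @Component/@Service
public class TenantServiceConfig {

    // MicroProfile Config injection, replacing Spring's @Value("${tenant.cache.ttl:60}")
    @Inject
    @ConfigProperty(name = "tenant.cache.ttl", defaultValue = "60")
    long cacheTtlSeconds;

    public long getCacheTtlSeconds() {
        return cacheTtlSeconds;
    }
}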

First steps

Starting with adding the Quarkus Maven plugin and some basic dependencies to our Maven build, and off we go.

And while replacing dependency injection was a rather smooth process, the configuration part was a bit more tricky. Both Hono and MicroProfile Config have a rather opinionated view of configuration, which made it problematic to enhance the Hono configuration in a way that kept MicroProfile happy. So for the first iteration, we ended up wrapping the Hono configuration classes to make them play nice with MicroProfile. However, this is something that we intend to improve in Hono in the future.

Packaging the JAR into a container was no different than with the existing version. We only had to adapt the EnMasse operator to provide application arguments in the form Quarkus expected them.

First results

From a user perspective, nothing has changed. The tenant service still works the way it is expected to work and provides all the APIs as it did before. Just running with the Quarkus runtime, and the same JVM as before:

Screenshot of resource consumption with Quarkus in JVM mode.
Metrics after the conversion to Quarkus, in JVM mode

We can directly see a drop of 50MiB from 200MiB to 150MiB of RAM, that isn’t bad. CPU isn’t really different, though. There also is a slight improvement of the startup time, from ~2.5 seconds down to ~2 seconds. But that isn’t a real game-changer, I would say. Considering that ~2.5 seconds startup time, for a Spring Boot application, is actually not too bad, other services take much longer.

Step #2 – The native image

Everyone wants to do Java “native compilation”. I guess the expectation is that native compilation makes everything go much faster. There are different tests by different people, comparing native compilation and JVM mode, and the outcomes vary a lot. I don’t think that “native images” are a silver bullet to performance issues, but still, we have been curious to give it a try and see what happens.

Native image with Quarkus

Enabling native image mode in Quarkus is trivial. You need to add a Maven profile, set a few properties and you have native image generation enabled. With setting a single property in the Maven POM file, you can also instruct the Quarkus plugin to perform the native compilation step in a container. With that, you don’t need to worry about the GraalVM installation on your local machine.

Native image generation can be tricky, we knew that. However, we didn’t expect it to be as complex as being “Step #2”. In a nutshell, creating a native image compiles your code to CPU instructions rather than JVM bytecode. In order to do that, it traces the call graph, and it fails to do so when it comes to reflection in Java. GraalVM supports reflection, but you need to provide the information about the types, classes, and methods that participate in the reflection system from the outside. Luckily Quarkus provides tooling to generate this information during the build. Quarkus knows about constructs like de-serialization in Jackson and can generate the required information for GraalVM to compile this correctly.
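For classes that Quarkus cannot detect on its own, you can register them explicitly. The DTO below is a hypothetical sketch (not code from the tenant service) showing how a class reached only via Jackson de-serialization can be kept in the native image:

import io.quarkus.runtime.annotations.RegisterForReflection;

// Tells Quarkus to emit the GraalVM reflection metadata for this class so the
// native image build does not strip it away.
@RegisterForReflection
public class TenantDto {

    private String tenantId;
    private boolean enabled;

    public String getTenantId() { return tenantId; }
    public void setTenantId(String tenantId) { this.tenantId = tenantId; }

    public boolean isEnabled() { return enabled; }
    public void setEnabled(boolean enabled) { this.enabled = enabled; }
}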

However, the magic only works in areas that Quarkus is aware of. So we did run into some weird issues, strange behavior that was hard to track down. Things that worked in JVM mode all of a sudden were broken in native image mode. Not all the hints are in the documentation. And we also didn’t read (or understand) all of the hints that are there. It takes a bit of time to learn, and with a lot of help from some colleagues (many thanks to Georgios, Martin, and of course Dejan for all the support), we got it running.

What is the benefit?

After all the struggle, what did it give us?

Screenshot of resource consumption with Quarkus in native image mode.
Metrics when running as native image Quarkus application

So, we are down another 50MiB of RAM. Starting from ~200MiB, down to ~100MiB. That is only half the RAM! Also, this time, we see a reduction in CPU load. While in JVM mode (both Quarkus and Spring Boot), the CPU load was around 2 millicores, now the CPU is always below that, even during application startup. Startup time is down from ~2.5 seconds with Spring Boot, to ~2 seconds with Quarkus in JVM mode, to ~0.4 seconds for Quarkus in native image mode. Definitely an improvement, but still, neither of those times is really bad.

Pros and cons of Quarkus

Switching to Quarkus was no problem at all. We found a few areas in the Hono configuration classes to improve. But in the end, we can keep the original Spring Boot setup and have Quarkus at the same time. Possibly other Microprofile compatible frameworks as well, though we didn’t test that. Everything worked as before, just using less memory. And except for the configuration classes, we could pretty much keep the whole application as it was.

Native image generation was more complex than expected. However, we also saw some real benefits. And while we didn’t do any performance tests on that, here is a thought: if the service has the same performance as before, the fact that it requires only half the memory and half the CPU cycles allows us to run twice the number of instances now, doubling throughput as we scale horizontally. I am really looking forward to another scale test, since we did all other kinds of optimizations as well.

You should also consider that the process of building a native image takes quite an amount of time. For this rather simple service, it takes around 3 minutes on an above-average machine, just to build the native image. I did notice some decent improvement when trying out GraalVM 20.0 over 19.3, so I would expect further improvements in the toolchain over time. Things like hot code replacement while debugging are not possible with the native image profile, though. It is a different workflow, and that may take a bit to adapt to. However, you don’t need to commit to either way. You can still have both at the same time. You can work with JVM mode and the Quarkus development mode, and then enable the native image profile whenever you are ready.

Taking a look at the size of the container images, I noticed that the native image isn’t smaller (~85 MiB), compared to the uber-JAR file (~45 MiB). Then again, our “java base” image alone is around ~435 MiB. And it only adds the JVM on top of the Fedora minimal image. As you don’t need the JVM when you have the native image, you can go directly with the Fedora minimal image, which is around ~165 MiB, and end up with a much smaller overall image.

Conclusion

Switching our existing Java project to Quarkus wasn’t a big deal. It required some changes, yes. But those changes also mean, using some more open APIs, governed by the Eclipse Foundation’s development process, compared to using Spring Boot specific APIs. And while you can still use Spring Boot, changing the configuration to Eclipse MicroProfile opens up other possibilities as well. Not only Quarkus.

Just by taking a quick look at the numbers, comparing the figures from Spring Boot to Quarkus with native image compilation: RAM consumption was down to 50% of the original, CPU usage also was down to at least 50% of original usage, and the container image shrank to ~50% of the original size. And as mentioned in the beginning, we have been using Vert.x for all the core processing. Users that make use of the other Spring components should see more considerable improvement.

Going forward, I hope we can bring the changes we made to the next versions of EnMasse and Eclipse Hono. There is a real benefit here, and it provides you with some awesome additional choices. And in case you don’t like to choose, the EnMasse operator has some reasonable defaults for you 😉


Also see

This work is based on the work of others. Many thanks to:

The post Quarkus – Supersonic Subatomic IoT appeared first on ctron's blog.


by Jens Reimann at June 30, 2020 03:22 PM

Eclipse Dirigible 5.0 - celebrating 5 years in open source with 5 killer features

by Nedelcho Delchev at June 29, 2020 12:00 AM

Eclipse Dirigible just turned five years in open source. Five years in Eclipse Foundation within the Eclipse Cloud Development group. Five years of innovations with many friends around the globe, lots of realised dreams, first-class happiness.

GraalVM’s GraalJS

Fast, easy-to-use, easy-to-upgrade from Nashorn or Rhino, ECMA 2020 compliant, deeply integrated with the host JVM, fast interoperability with JVM languages like Scala and Kotlin, embeddable, JVM agnostic, stable, robust… just perfect for our needs. We were quite happy so far using Mozilla Rhino as the default scripting engine, but its slow adoption of the most recent ECMA specs was definitely an issue. Hence, we were kind of forced to look for another option for the future development of the stack. The biggest surprise was the time and effort it took to adapt our API layer to use GraalJS instead of Rhino - literally zero. How many projects or products support straightforward and compatible migration from one major version to another? The fact that GraalJS is even a totally different project, driven by different people, and still provides a smooth migration path from Rhino deserves admiration.

Chrome DevTools

Another invaluable gift coming along with GraalJS is the Chrome DevTools debug protocol support. A few years ago we tried to adapt the Rhino and V8 debug APIs and expose them to the Chrome DevTools, but it was quite unstable due to the lack of a public specification of the debug protocol itself and, on the other hand, the not so trivial behavior of the tools themselves. So, you can imagine how speechless we were left once we tried to connect the dots and it just worked. The decision to replace our Debug Perspective’s own tools with the well-known yet very powerful Chrome DevTools came naturally.

Debug GraalJS in Chrome DevTools

Xterm.js

One more jewel we were happy to add in the most recent release is the Xterm.js terminal interface, written in JavaScript and running entirely in the browser. It is already adopted quite widely by many tools, the most prominent ones being VSCode itself and the Eclipse Theia/Che projects. It connects to the server-side endpoint via websocket, so there is no need to open port 22 on the server, which was a security-related requirement from our side. It integrates nicely with the ttyd terminal server, which we have embedded in the stack as well.

Xterm.js

Monaco

A major advantage of VSCode is its editor, Monaco. During the past few years it became the most advanced open source code editor that’s easy to embed and enhance. The investment and support by Microsoft in VSCode, and Monaco in particular, gives good perspective and confidence in the project’s future. We were quite happy using Orion, but recently we decided to bet on Monaco as the default code editor for version 5.0 and above of Dirigible. All the innovations and integrations related to writing source code are assumed to go to Monaco now. Another benefit of using Monaco is its diff editor, which became part of the last but not least major feature of the 5.0 release.

Git Support

Git support in Dirigible has been available since the very beginning. You could clone, push, pull, or share projects. So far, the supported operations were over-simplified due to the fact that the file system (workspaces, projects, files) was abstracted. It was possible to have workspaces stored in an RDBMS, for instance. This had its advantage when you had to run Dirigible on a platform with limited functionalities or for some other reasons. Of course, the drawback was the very limited support for Git integrations, which in fact is more important for developers than having an abstract file system. In 5.0 we decided to stick to the native file system only; this made it possible to implement a full-fledged Git perspective with listing and changing of branches, low-level operations on files for staging, a diff editor, etc.

New Git Perspective

Conclusion

With the latest release we set the future direction of the Dirigible project from a technology perspective. We fixed the problematic dependencies by betting on new and emerging projects and reverted some of the questionable architectural decisions from the past. The future of Dirigible looks quite bright.

What’s next? Now, we can safely focus on what Eclipse Dirigible always was supposed to give to developers - high-productivity application development platform. Many improvements to the MDA tools are planned already related to the built-in extensibility of the entities, distributed models, security validations and role-based access management in generation templates. The unified inbox for process events as well as better integration of user tasks are examples of what we see in the BPM related tools as next steps.

Do you want to join forces with us in this endeavor?

Graduation

One more good news came along with this release - completed graduation review!

We are mature - ayeeeee! 💃 🕺 OMG! 🤦


by Nedelcho Delchev at June 29, 2020 12:00 AM

JBoss Tools 4.16.0.AM1 for Eclipse 2020-06

by jeffmaury at June 26, 2020 10:30 AM

Happy to announce 4.16.0.AM1 (Developer Milestone 1) build for Eclipse 2020-06.

Downloads available at JBoss Tools 4.16.0 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift

Secure URL support

It is now possible to create secured URLs in the Application Explorer View. If you select this option, the created URL will be accessible through https.

secure url

When such a URL is displayed in the tree, the icon now has a secure lock indicator.

secure url1

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.17.Final and Hibernate Tools version 5.4.14.Final.

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.17.Final and Hibernate Tools version 5.3.16.Final.

Server Tools

Wildfly 20 Server Adapter

A server adapter has been added to work with Wildfly 20. It adds support for Java EE 8, Jakarta EE 8 and Microprofile 3.3.

Enjoy!

Jeff Maury


by jeffmaury at June 26, 2020 10:30 AM

Updates to the Eclipse IP Due Diligence Process

by waynebeaton at June 25, 2020 07:23 PM

In October 2019, The Eclipse Foundation’s Board of Directors approved an update to the IP Policy that introduces several significant changes in our IP due diligence process. I’ve just pushed out an update to the Intellectual Property section in the Eclipse Foundation Project Handbook.

I’ll apologize in advance that the updates are still a little rough and require some refinements. Like the rest of the handbook, we continually revise and rework the content based on your feedback.

Here’s a quick summary of the most significant changes.

License certification only for third-party content. This change removes the requirement to perform deep copyright, provenance and scanning of anomalies for third-party content unless it is being modified and/or if there are special considerations regarding the content. Instead, the focus for third-party content is on license compatibility only, which had previously been referred to as Type A due diligence.

Leverage other sources of license information for third-party content. With this change to license certification only for third-party content, we are able to leverage existing sources of license information. That is, the requirement that the Eclipse IP Team personally review every bit of third-party content has been removed and we can now leverage other trusted sources.

ClearlyDefined is a trusted source of license information. We currently have two trusted sources of license information: The Eclipse Foundation’s IPZilla and ClearlyDefined. The IPZilla database has been painstakingly built over most of the lifespan of the Eclipse Foundation; it contains a vast wealth of deeply vetted information about many versions of many third-party libraries. ClearlyDefined is an OSI project that combines automated harvesting of software repositories and curation by trusted members of the community to produce a massive database of license (and other) information about content.

Piggyback CQs are no longer required. CQs had previously been used for tracking both the vetting process and the use of third-party content. With the changes, we are no longer required to track the use of third-party content using CQs, so piggyback CQs are no longer necessary.

Parallel IP is used in all cases. Previously, our so-called Parallel IP process, the means by which project teams could leverage content during development while the IP Team completed their due diligence review, was available only to projects in the incubation phase and only for content with specific conditions. This is no longer the case: full vetting is now always applied in parallel in all cases.

CQs are not required for third-party content in all cases. In the case of third-party content due diligence, CQs are now only used to track the vetting process.

CQs are no longer required before third-party content is introduced. Previously, the IP Policy required that all third-party content must be vetted by the Eclipse IP Team before it can be used by an Eclipse Project. The IP Policy updates turn this around. Eclipse project teams may now introduce new third-party content during a development cycle without first checking with the IP Team. That is, a project team may commit build scripts, code references, etc. to third-party content to their source code repository without first creating a CQ to request IP Team review and approval of the third-party content. At least during the development period between releases, the onus is on the project team to ensure, with reasonable confidence, that any third-party content they introduce is license compatible with the project’s license. Before any content may be included in any formal release, the project team must engage in the due diligence process to validate that the third-party content licenses are compatible with the project license.

History may be retained when an existing project moves to the Eclipse Foundation. We had previously required that the commit history for a project moving to the Eclipse Foundation be squashed and that the initial contribution be the very first commit in the repository. This is no longer the case; existing projects are now encouraged (but not required) to retain their commit history. The initial contribution must still be provided to the IP Team via CQ as a snapshot of the HEAD state of the existing repository (if any).

The due diligence process for project content is unchanged.

If you notice anything that looks particularly wrong or troubling, please either open a bug report, or send a note to EMO.


by waynebeaton at June 25, 2020 07:23 PM

Eclipse JustJ

by Ed Merks (noreply@blogger.com) at June 25, 2020 08:18 AM

I've recently completed the initial support for provisioning the new Eclipse JustJ project, complete with a logo for it.


I've learned several new technologies and honed existing technology skills to make this happen. For example, I've previously used Inkscape to create nicer images for Oomph; a *.png with alpha is much better than a *.gif with a transparent pixel, particularly with the vogue, dark-theme fashion trend, which for old people like me feels more like the old days of CRT monitors than something modern, but hey, to each their own. In any case, a *.svg is cool, definitely looks great at every resolution, and can easily be rendered to a *.png.

By the way, did you know that artwork derivative of  Eclipse artwork requires special approval? Previously the Eclipse Board of Directors had to review and approve such logos, but now our beloved, supreme leader, Mike Milinkovich, is empowered to do that personally.

Getting to the point where we can redistribute JREs at Eclipse has been a long and winding road. This of course required Board approval, and your elected Committer Representatives helped push that to fruition last year. Speaking of which, there is now an exciting late-breaking development: the move of AdoptOpenJDK to Eclipse Adoptium. This will be an important source of JREs for JustJ!

One of the primary goals of JustJ is to provide JREs via p2 update sites such that a product build can easily incorporate a JRE into the product. With that in place, the product runs out-of-the-box regardless of the JRE installed on the end-user's computer, which is particularly useful for products that are not Java-centric where the end-user doesn't care about the fact that Eclipse is implemented using Java.  This will also enable the Eclipse Installer to run out-of-the-box and will enable the installer to create an installation that, at the user's discretion, uses a JRE provided by Eclipse. In all cases, this includes the ability to update the installation's embedded JRE as new ones are released.

The first stage is to build a JRE from a JDK using jlink.  This must run natively on the JDK's actual supported operating system and hardware architecture.  Of course we want to automate this step, and all the steps involved in producing a p2 repository populated with JREs.  This is where I had to learn about Jenkins pipeline scripts.  I'm particularly grateful to Mikaël Barbero for helping me get started with a simple example.  Now I am a pipeline junkie, and of course I had to learn Groovy as well.

In the initial stage, we generate the JREs themselves, and that involves using shell scripts effectively. I'm not a big fan of shell scripts, but they're a necessary evil. I authored a single script that produces JREs on all the supported operating systems; one that I can run locally on Windows and on my two virtual boxes as well. The pipeline itself needs to run certain stages on specific agents such that their steps are performed on the appropriate operating system and hardware. I'm grateful to Robert Hilbrich of DLR for supporting JustJ's builds with their organization's resource packs! He's also been kind enough to be one of our first test guinea pigs building a product with a JustJ JRE. The initial stage produces a set of JREs.


In the next stage, JREs need to be wrapped into plugins and features to produce a p2 repository via a Maven/Tycho build.  This is a huge amount of boiler plate scaffolding that is error-prone to author and challenging to maintain, especially when providing multiple JRE flavors.  So of course we want to automate the generation of this scaffolding as well.  Naturally if we're going to generate something, we need a model to capture the boiled-down essence of what needs to be generated.  So I whipped together an EMF model and used JET templates to sketch out the scaffolding. With the super cool JET Editor, these are really easy to author and maintain.  This stage is described in the documentation and produces a p2 update site.  The sites are automatically maintained and the index pages are automatically generated.

To author nice documentation I had to learn PHP much better.  It's really quite cool and very powerful, particularly for producing pages with dynamic content.  For example, I used it to implement more flexible browsing support of download.eclipse.org so that one can really see all the files present, even when there is an index.html or index.php in the folder.  In any case, there is now lots of documentation for JustJ to describe everything in detail, and it was authored with the help of PHP scaffolding.

Last but not least, there is an Oomph setup to automate the provisioning of a full development environment along with a tutorial to describe in detail everything in that workspace.  There's no excuse not to contribute.  While authoring this tutorial, I found that creating nice, appropriately-clipped screen captures is super annoying and very time consuming, so I dropped a little goodie into Oomph to make that easier.   You might want to try it. Just add "-Dorg.eclipse.oomph.ui.screenshot=<some-folder-location>" to your eclipse.ini to enable it.  Then, if you hit Ctrl twice quickly, screen captures will be produced immediately based on where your application currently has focus.  If you hit Shift twice quickly, screen captures will be produced after a short delay.  This allows you to bring up a menu from the menu bar, from a toolbar button, or a context menu, and capture that menu.  In all cases, the captures include the "simulated" mouse cursor and starts with the "focus", expanding outward to the full enclosing window.

The bottom line, JustJ generates everything given just a set of URLs to JDKs as input, and it maintains everything automatically.  It even provides an example of how to build a product with an embedded JRE to get you started quickly.  And thanks to some test guinea pigs, we know it really works as advertised.


On the personal front, during this time period, I finished my move to Switzerland.  Getting up early here is a feast for the eyes! The movers were scurrying around my apartment the same days as the 2020-06 release, which was also the same day as one of the Eclipse Board meetings.  That was a little too much to juggle at once!

At this point, I can make anything work and I can make anything that already works work even better. Need help with something?  I'm easy to find...

by Ed Merks (noreply@blogger.com) at June 25, 2020 08:18 AM

Release 5.0

June 25, 2020 12:00 AM

New version 5.0 has been released.

Release is of Type B

Features

  • Roles management for Documents Manager only distro
  • Managed Database for CMS only distro
  • Keep SQL Queries last state
  • File history support for Git Perspective
  • Execute a selected snippet in SQL View
  • Execute API v4
  • Lifecycle API v4
  • SOAP API v4
  • Websocket API v4
  • Websocket descriptor for server-side endpoints (*.websocket)
  • Close & Close All actions for editors
  • Unpublish project support

Fixes

  • Configuration management enhancements
  • Remove alert, when changing perspectives
  • Delete of git project from the workspace fix
  • Debug not working in Docker container fix
  • Public access for Cloud Foundry distro fixes

  • Minor fixes

Statistics

  • 52K+ Users
  • 74K+ Sessions
  • 183 Countries
  • 376 Repositories in DirigibleLabs

Operational

Enjoy!


June 25, 2020 12:00 AM

Jakarta EE Is Taking Off

by Mike Milinkovich at June 23, 2020 11:03 AM

With the results of the 2020 Jakarta EE survey and the initial milestone release of the Jakarta EE 9, it’s clear the community’s collective efforts are resonating with the global Java ecosystem.

Before I get to the survey results, I want to say a huge thank you to everyone who took the time to participate in the survey. We received nearly 2,200 responses from software developers, architects, and decision-makers around the world — an increase of almost 20 percent over last year’s survey. With your insight, we’ve gained a clear and comprehensive view of enterprise Java strategies and priorities globally, which in turn we are freely sharing with the ecosystem.

Jakarta EE Adoption and Compatible Implementations Are on the Rise

Less than a year after its initial release, Jakarta EE has emerged as the second-place cloud native framework with 35 percent of respondents saying they use it. While the Spring and Spring Boot frameworks are still the leading choices for building cloud native applications, their usage share dropped 13 percent to 44 percent in the 2020 survey results.

Combined, Java EE 8 and Jakarta EE 8 hit the mainstream with 55 percent adoption. Jakarta EE 8 was responsible for 17 percent of that usage, despite only shipping for the first time in September 2019. This is truly significant growth.

We’re also seeing a strong uptick in Jakarta EE 8 compatible products. Companies including IBM, Red Hat, Payara, Primeton, TmaxSoft, and Apusic now have Jakarta EE 8 Full Platform compatible products. Since January 2020, we’ve had four new Full Platform compatible implementations and one new Web Profile compatible implementation. In addition to Eclipse GlassFish 5.1, this brings Jakarta EE 8 adoption to 12 compatible products. This is an outstanding achievement for the Jakarta EE community to have more full platform compatible products in 8 months than Java EE 8 had in over 2 years. You can see the complete list here.

You can also expect to see additional compatible implementations in the coming months as more applications are passing Technology Compatibility Kit (TCK) tests and are well on their way to becoming certified as Jakarta EE 8-compatible products.

Architectural Approaches Are Evolving

This year’s Jakarta EE survey also showed a slight drop in the popularity of using a microservices architecture for implementing Java systems in the cloud compared to last year. At the same time, use of monolithic architectures for implementing Java systems in the cloud nearly doubled since last year’s survey and is now at 25 percent.

These results may indicate that companies are pragmatically choosing to simply “lift and shift” existing applications to the cloud instead of rearchitecting them as microservices.

Interestingly, the survey also indicated the Jakarta EE community would like to see better support for microservices in the platform. When you combine this fact with the rise of Jakarta EE, it’s reasonable to believe developers may be starting to favor vendor-neutral standards for building Java microservices over single-vendor microservices frameworks.

The Industry Is Moving to the New Jakarta EE Namespace

The support we’re seeing for the adoption of the new namespace in Jakarta EE 9 reinforces the value the industry sees in Jakarta EE. Technology leaders are already investing to ensure their software supports the Jakarta EE 9 namespace changes and others have indicated they will do the same. Some of these implementations include:

  • Eclipse GlassFish 6.0 milestone release is available to download
  • Jetty 11.0.0-alpha0 milestone release is available to download
  • Apache Tomcat 10.0 M6 milestone release is available to download
  • Payara Platform 6 milestone release coming in Q4 2020
  • OpenLiberty 20.0.0.7 Beta release is available with basic Web application support to download
  • Apache TomEE 9.0 milestone release using Eclipse Transformer project tools is available to download
  • WildFly 21 is planning a milestone release for fall 2020
  • Piranha Micro p20.6.1 milestone release is available to download.

While the Jakarta EE 9 tooling release doesn’t include new features, it’s a very important and necessary step on the road to Jakarta EE 10 and the next era of innovation using cloud native technologies for Java. With the full Jakarta EE 9 release in fall this year, Jakarta EE will be ideally positioned to drive true open source, cloud native innovation using Java.
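To illustrate what the namespace change means in practice, here is a minimal, hypothetical servlet; for most application code, moving to Jakarta EE 9 is exactly this kind of import rename from javax.* to jakarta.*, with no functional changes:

// Java EE 8 / Jakarta EE 8 used:
//   import javax.servlet.annotation.WebServlet;
//   import javax.servlet.http.HttpServlet;
// Jakarta EE 9 uses:
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;

@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    // servlet body unchanged; only the imports move to the jakarta.* namespace
}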

Diversity, Achieved

One of the items that I am particularly happy about is the achievement of establishing Jakarta EE as a vendor-neutral, community-led technology platform. When we started the process of moving Java EE from Oracle to the Eclipse Foundation there were some who doubted that it could be accomplished successfully. The numbers tell the story: Oracle’s contributions are still leading the pack at 27%, but the community-at-large is now over 40%.

JakartEEDev v2

Contributions from our other members are led by Payara, VMware (Pivotal), Red Hat, and IBM. Based on these results, it is clear that Jakarta EE has truly achieved its original objective of becoming a vendor-neutral, community-led industry initiative. A lot of people worked very hard to achieve this, and I’m thrilled by the results.

Discover Jakarta EE

Here are three ways to learn more about Jakarta EE and understand why it’s gaining mainstream adoption so quickly:

  • Join the community at the Jakarta EE 9 Milestone Release Virtual Party and networking opportunity on Tuesday, June 23 at 11:00 a.m. EDT. To register for the event, click here.
  • Find out more about the Jakarta EE 9 milestone release here.
  • Review the complete 2020 Jakarta EE Survey results here.

Edit: Reflect IBM’s contributions
Edit #2: Add link to Apache TomEE download


by Mike Milinkovich at June 23, 2020 11:03 AM

AdoptOpenJDK to Become Eclipse Adoptium

by Alex Blewitt at June 19, 2020 04:00 PM

The AdoptOpenJDK project is to move under the Eclipse umbrella as Eclipse Adoptium as part of a transition to an open-source foundation. Having a vendor-neutral open-source foundation to steward the AdoptOpenJDK project will give a strong basis for the future. Read on to find out what it means from a practical perspective and how the transition will play out.

By Alex Blewitt

by Alex Blewitt at June 19, 2020 04:00 PM

Eclipse RCP and REST – Making Asynchronous Calls

by Patrick at June 18, 2020 09:15 PM

In my last blog post I described how to access REST services from an Eclipse RCP application using the ECF Remote Services JAX-RS Jersey Client Provider. It turns out that with a few minor changes we can also access these REST services asynchronously.

I’ll demonstrate here how to modify the SpaceX Launch Service example to use asynchronous calls.

Adding the osgi.async intent

OSGi R7 introduced the Asynchronous Service Specification which applies to services generally. It also introduced a way for remote services to support asynchronous calls.

While there are parts of the specification that manage asynchronous behavior on both the client and server, my main concern is allowing a client to make an asynchronous REST call without knowing anything about the server implementation. Especially when using microservice architectures, the less we know about the server the better.

In the end, all we need to do is add one property to our endpoint description, which is managed by the EDEF XML file. We’ll be adding the osgi.async intent, like this:

<property name="service.intents" value-type="String">
    <array>
        <value>osgi.async</value>
    </array>
</property>

Note that this property can be added whether or not you want to immediately support asynchronous calls. Once the intent has been added, your JAX-RS interfaces can support a mixture of synchronous and asynchronous calls.

Adding an asynchronous method to the JAX-RS interface

On the JAX-RS interface, the only change we need to make is to add a method that returns one of four types:

  • Future
  • CompletableFuture
  • CompletionStage
  • Promise (OSGi specific)

When the ECF JAX-RS Jersey Client finds one of these return types, it will automatically wrap the REST call with the appropriate asynchronous type and return that to the caller. So for our SpaceX Launch Service, we could modify the interface to look like this.

@Path("/launches")
public interface LaunchService {

	@GET
	@Produces(MediaType.APPLICATION_JSON)
	@Path("/")
	public List<Launch> getLaunches();

        /* new asynchronous method */

	@GET
	@Produces(MediaType.APPLICATION_JSON)
	@Path("/")
	public CompletableFuture<List<Launch>> getLaunchesAsync();
}

Note that the OSGi Promises specification is especially useful when running on JVMs that do not support CompletableFuture. In my code, I prefer to use native Java types whenever possible.

Integrating with Eclipse RCP

In the SpaceX example, the Eclipse RCP client accesses the Launch Service using dependency injection. We can make the asynchronous request by calling the method returning a CompletableFuture and managing the callback.

public class LaunchPart {

     @Inject
     @Service
     private LaunchService launchService;

     @PostConstruct
     public void createComposite(Composite parent) {

          CompletableFuture<List<Launch>> launchesFuture = launchService.getLaunchesAsync();

          launchesFuture.thenAccept((launches) -> {
               /* process launch data here */
          });
     }
}

Wrapping up

With a few small changes, we can easily create JAX-RS interfaces that mix synchronous and asynchronous behavior. The example code on GitHub has been updated to demonstrate how this works.

https://github.com/modular-mind/spacex-client


by Patrick at June 18, 2020 09:15 PM

WTP 3.18 Released!

June 17, 2020 11:55 PM

The Eclipse Web Tools Platform 3.18 has been released! Installation and updates can be performed using the Eclipse IDE 2020-06 Update Site or through the Eclipse Marketplace. Release 3.18 is included in the 2020-06 Eclipse IDE for Enterprise Java Developers, with selected portions also included in several other packages. Adopters can download the R3.18.0 p2 repository directly and combine it with the necessary dependencies.

More news


June 17, 2020 11:55 PM

Jakarta EE Community Update June 2020

by Tanja Obradovic at June 17, 2020 08:23 PM

As always, there’s a lot going on in the Jakarta EE community, but the Jakarta EE 9 milestone release is definitely the highlight!

 

Get Involved in Jakarta EE 9 Milestone Release Activities

We invite all Jakarta EE developers and Java User Group (JUG) members to help us celebrate the Jakarta EE 9 milestone release and test the software.

Please register for the Jakarta EE Milestone Release party today!

We’ll start with the celebration. On June 23 at 11:00 EDT, we’re hosting a virtual release party for the entire Jakarta EE community. To mark the occasion, and as it is a milestone release, we’ll celebrate with a small cake - a cupcake! We’re encouraging everyone to do the same; once you start trying out the milestone release, please make your own cupcake (recipe suggestions: chocolate or vanilla), complete it with a celebratory Jakarta EE flag, and share a selfie of it on social media (tag it with #JakartaEE or use our handle @JakartaEE). If cupcakes are not your thing, you can take a selfie with the flag only as well! The example Twitter card below illustrates the idea.

Help ensure the Jakarta EE 9 software is ready for full release in fall 2020! You will have all the details after the Milestone Release party on June 23rd, so you will be able to:

·      Download the milestone release, run and test your application, and plan the namespace change work if needed

·      Use the Eclipse Transformer project for the namespace changes. 

·      Report issues you come across

·      Also, review the Jakarta EE 9 specification documents to make sure all “Jakartafication” is done properly

_________________________________

 

Subscribe to Jakarta EE Mailing Lists

Simply scan the QR code below to choose the mailing lists you want to subscribe to. You can also access the complete listing here

_________________________________

 

Get a First-Time Contributor’s Perspective

Ken Fogel’s Jakarta Tech Talk — My First Pull Request: The Jakarta EE Examples Adventure — is ideal for anyone who is new to the Jakarta EE community and wondering how to make their first contribution.

 We have a great lineup of Jakarta Tech Talks scheduled. To stay up to date on our latest talks, visit the Jakarta Tech Talks webpage.

_________________________________

 

NEW: Friends of Jakarta EE Monthly Call

Exciting news for the community: we have now set up Friends of Jakarta EE monthly calls that will be held on the fourth Wednesday of every month. This call is by the community, for the community. The call plays no formal role in Jakarta EE Working Group activities; it’s simply an opportunity for the community to get together virtually, set their own agenda, and talk once a month.

 On this call, the people who attend are the right people, the topics discussed are the right topics, and the outcomes are the right outcomes.

 Here are the details for our first call:

·      Date: Wednesday, June 24 at 11:00 a.m. EDT

·      Agenda: https://bit.ly/2zkgQWc

·      Zoom: https://eclipse.zoom.us/j/92996495448

·      Calendar: https://bit.ly/2XKpcQa

 _________________________________

 

Join Community Update Calls

Jakarta EE community calls are open to everyone! For upcoming dates and connection details, see the Jakarta EE Community Calendar.

 

The next call will be July 8 at 11:00 a.m. EDT and topics will include:

·      Update on TCK work: Scott Marlow, Cesar Hernandez

●  Jakarta EE 9 release update: Kevin Sutter

●  Tools support for Jakarta EE 9 and help from the community: Neil Patterson

●  Update from the Eclipse Foundation: Ivar Grimstad, Shabnam Mayel, Tanja Obradovic

 We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations:

·      June 10 call and presentation.

·      The complete playlist.

 _____________________________

 

Participate in Upcoming JUG Meetups

Check the list below for the JUG meetup that works best for you:

·      Niš JUG (Niš, Serbia) and MKJUG (Macedonia): Thursday, June 18 at 6:00 p.m. CEST with speaker Tanja Obradovic from the Eclipse Foundation.

·  KCJUG (Kansas City, United States): Thursday, June 18 at 5:30 p.m. CDT with speakers Kevin Sutter and Billy Korando from IBM.

 For the complete list of JUG meetups, locations, and times, click here.

 _________________________________

 

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Subscribe to your preferred channels today:

·  Social media: Twitter, Facebook, LinkedIn Group

·  Mailing lists: jakarta.ee-community@eclipse.org, jakarta.ee-wg@eclipse.org, project mailing lists

·  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

 You can find the complete list of channels here.

 And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

 To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.

 _________________________________

 


by Tanja Obradovic at June 17, 2020 08:23 PM

Eclipse Foundation Support for the Black Community

by Mike Milinkovich at June 10, 2020 02:00 PM

The events of the past several weeks have reminded us yet again that racism remains a reality in our society. It is terribly sad and frustrating to be reminded that in 2020 hate and injustice still rule the lives of so many. It is heartbreaking that we even have to say “Black lives matter”. I and the Eclipse Foundation stand in solidarity with the Black community and will continue our efforts to provide an inclusive, diverse, and welcoming community for all.

I encourage everyone in the Eclipse community to listen and learn from our colleagues who have experienced racism, whether personally or professionally. It is only by opening our hearts and our minds to the experiences of others that we can overcome fear and bias.

For sixteen years the Eclipse Foundation has been home to an open, welcoming, and diverse community. But we all can, and must, do more to reject discrimination and foster mutual understanding and respect. I am committed to furthering the discussion and encourage foundation staff and the broader Eclipse community to contact me with ideas about how we can become more inclusive.


by Mike Milinkovich at June 10, 2020 02:00 PM

SiriusCon 2020: Sirius to the Web with Obeo Cloud Platform

June 10, 2020 10:00 AM

TLDR; On June 18th connect to @SiriusCon Live 2020 to watch a demo of the Obeo Cloud Platform.

Here we go again

“Mamma mia, does it show again
My my, just how much I’ve missed you?“
Mamma Mia

On the 18th of June, I will have the pleasure of presenting the Obeo Cloud Platform, with my co-speaker Stéphane Bégaudeau, at SiriusCon Live 2020. During SiriusCon 2020, we will not only demonstrate OCP, but also make a big announcement about its future. So, attend SiriusCon and be in for a huge surprise!

After such a long time, it is great to be able to connect with the Eclipse Sirius community.

If you have to create a graphical modeling workbench for your own DSL, then you know how powerful Sirius is in helping you design your own modeler based on EMF. In just a few hours, you get a modeling studio dedicated to your own domain. The main issue faced by our customers arises when they need to distribute the bundle to their end users. A truly effective deployment is difficult to come by. Many struggle to either deploy or maintain their solution, and their teams are feeling the pinch.

To solve these challenges, I am excited to demonstrate the Obeo Cloud Platform (OCP), the Cloud-based solution developed by Obeo for deploying modeling tools to the web. With OCP Modeler, modeling tools developed with Sirius can be installed on a Cloud server and are rendered in a web browser. Our purpose is to carry the spirit of Sirius. What our users typically love in Sirius is:

  • the ability to define your modeling workbench in a configuration file,
  • no code generation involved as everything is interpreted at runtime,
  • flexible even for complex models.

So we kept all those principles and now allow you to easily define workbenches running in the cloud. We rebuilt the Sirius runtime from the ground up, for the long haul.

I’ve been talking about it for a long time now! And we’re really proud to show that OCP is available and working today, so our talk will be purely a live demo: no slides (almost), no bullet points, no feature lists. Just us, showing the capabilities of this modeling environment!

OCP Modeler

We will also give you an overview of the context, the roadmap, and how OCP Modeler positions itself relative to Sirius.

Referring to ABBA’s words, take a chance on OCP!

“Honey I’m still free
Take a chance on me
Gonna do my very best, baby can’t you see
Gotta put me to the test, take a chance on me“
Take a chance on me

In the meantime, as a preparation, you can dance, you can jive and most importantly you have to register. And no, even if I am a big #ABBAFan the surprise is not that I will do my talk in a fancy glittering costume… nor will Stéphane, even if it could be very memorable :).

Dancing


June 10, 2020 10:00 AM

Clean Sheet Service Update (0.8)

by Frank Appel at May 23, 2020 09:25 AM

Written by Frank Appel

Thanks to a community contribution we’re able to announce another Clean Sheet Service Update (0.8).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge). Eclipse IDE Look and Feel: Clean Sheet Screenshot For more information please refer to the features landing page at http://fappel.github.io/xiliary/clean-sheet.html, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.

 

Clean Sheet Service Update (0.8)

This service update fixes a rendering issue of ruler numbers. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to the issue #87 for more details.

Clean Sheet Installation

Drag the 'Install' link below to your running Eclipse instance

Drag to your running Eclipse* workspace. *Requires Eclipse Marketplace Client

or

Select Help > Install New Software.../Check for Updates.
P2 repository software site: http://fappel.github.io/xiliary/
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet

 

On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.8) appeared first on Code Affine.


by Frank Appel at May 23, 2020 09:25 AM

Your Voice Matters - Take the IoT Developer Survey!

by Thabang Mashologu at May 21, 2020 01:42 PM

Completing the 2020 IoT Developer Survey takes less than ten minutes of your time. But adding your voice helps the entire IoT ecosystem better understand where IoT solution development is headed, the technology stack being used to get there, and how edge computing fits into the picture. By participating in the IoT industry’s largest developer survey, you have a unique opportunity to influence the direction the ecosystem takes at a time when it’s rapidly evolving.

 

The more responses we receive, the more insight we gain, and the more value IoT developers and other members of the IoT ecosystem will realize from the survey results. Last year we received more than 1,700 survey responses, including more than 1,100 responses from developers working on IoT projects in a professional capacity, and we already have more than 400 responses this year.

 

When that many relevant voices speak, everyone from original equipment manufacturers (OEMs) and software vendors to hardware manufacturers, service providers, and enterprises can learn how the latest IoT product and service development trends affect their strategies and businesses. 

 

The survey is in its sixth year, but this is the first time it includes questions about edge technologies and tools. With your responses, the Eclipse IoT Working Group and the Eclipse Edge Native Working Group will have the insight needed to continue aligning their roadmaps with your priorities and requirements for cloud-to-edge IoT solution development.

 

Add Your Voice Now

The 2020 IoT Developer Survey is open until June 26, but I encourage everyone to take those few minutes and add their voice to the survey now while it’s top of mind. Everyone who completes the survey will receive the findings report once the results are analyzed.

 

To have your say, click here.


by Thabang Mashologu at May 21, 2020 01:42 PM

Getting started with the fabric8 Kubernetes Java client

by Rohan Kumar at May 20, 2020 07:00 AM

Fabric8 has been available as a Java client for Kubernetes since 2015, and today is one of the most popular client libraries for Kubernetes. (The most popular is client-go, which is the client library for the Go programming language on Kubernetes.) In recent years, fabric8 has evolved from a Java client for the Kubernetes REST API to a full-fledged alternative to the kubectl command-line tool for Java-based development.

Fabric8 is much more than a simple Java Kubernetes REST client. Its features include a rich domain-specific language (DSL), a model for advanced code handling and manipulation, extension hooks, a mock server for testing, and many client-side utilities. In addition to hooks for building new extensions, the fabric8 Kubernetes Java client has extensions for Knative, Tekton, Kubernetes Service Catalog, Red Hat OpenShift Service Catalog, and Kubernetes Assertions.

The client and its extensions are available as Maven dependencies:

Kubernetes client

<dependency>  
  <groupId>io.fabric8</groupId>  
  <artifactId>kubernetes-client</artifactId>
  <version>4.10.3</version>
</dependency>

OpenShift client

<dependency>  
  <groupId>io.fabric8</groupId>  
  <artifactId>openshift-client</artifactId>
  <version>4.10.3</version>
</dependency>

Tekton client

<dependency>  
  <groupId>io.fabric8</groupId>  
  <artifactId>tekton-client</artifactId>
  <version>4.10.3</version>
</dependency>

Knative client 

<dependency>  
  <groupId>io.fabric8</groupId>  
  <artifactId>knative-client</artifactId>
  <version>4.10.3</version>
</dependency>

Istio client

<dependency>  
  <groupId>me.snowdrop</groupId>
  <artifactId>istio-client</artifactId>  
  <version>1.6.5-Beta2</version>
</dependency>

Service Catalog client

<dependency> 
  <groupId>io.fabric8</groupId>  
  <artifactId>servicecatalog-client</artifactId>  
  <version>4.10.3</version>  
  <type>bundle</type>
</dependency>

Note: The Istio client is not a direct part of the fabric8 repository, but it is based on fabric8.

Also, many popular projects use fabric8 Kubernetes client extensions, including Quarkus, Apache Camel, Apache Spark, and many more. See which projects work with this Kubernetes and OpenShift Java client here.

Using fabric8 with Kubernetes

Using fabric8 is straightforward, especially because it offers an API for accessing Kubernetes resources. To get started with the Java client, you just add it as a dependency in your Maven pom.xml:

  <dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>kubernetes-client</artifactId>
    <version>4.10.3</version>
  </dependency>

Alternatively, you could use build.gradle:

dependencies {
    compile 'io.fabric8:kubernetes-client:4.10.3'
}

Next, we’ll look at a couple of common examples.

Example 1: Listing pods in a namespace

Here’s an example of listing all of the pods in a namespace:

try (KubernetesClient client = new DefaultKubernetesClient()) {

    client.pods().inNamespace("default").list().getItems().forEach(
            pod -> System.out.println(pod.getMetadata().getName())
    );

} catch (KubernetesClientException ex) {
    // Handle exception
    ex.printStackTrace();
}

Example 2: Server authentication

When you use DefaultKubernetesClient, it will try to read the ~/.kube/config file in your home directory and load information required for authenticating with the Kubernetes API server. You can override this configuration with the system property KUBECONFIG.

If you are using DefaultKubernetesClient from inside a Pod, it will instead use the ServiceAccount credentials (token and CA certificate) mounted inside the Pod. For a more complex configuration, you can simply pass a Config object to DefaultKubernetesClient, like this:

Config config = new ConfigBuilder()
        .withMasterUrl("https://api.rh-idev.openshift.com:443")
        .build();
try (KubernetesClient client = new DefaultKubernetesClient(config)) {

    client.pods().inNamespace("default").list().getItems().forEach(
            pod -> System.out.println(pod.getMetadata().getName())
    );

} catch (KubernetesClientException ex) {
    // Handle exception
    ex.printStackTrace();
}

Example 3: Creating a simple Deployment

Suppose you want to build a Deployment object quickly and apply it to a Kubernetes cluster. You can easily leverage the rich builder classes provided by fabric8 to construct your Kubernetes resources on the fly. Here is an example of building a simple Nginx Deployment:

try (KubernetesClient client = new DefaultKubernetesClient()) {
    Deployment deployment = new DeploymentBuilder()
            .withNewMetadata()
               .withName("nginx-deployment")
               .addToLabels("app", "nginx")
            .endMetadata()
            .withNewSpec()
               .withReplicas(1)
               .withNewSelector()
                   .addToMatchLabels("app", "nginx")
               .endSelector()
               .withNewTemplate()
                   .withNewMetadata()
                      .addToLabels("app", "nginx")
                   .endMetadata()
                   .withNewSpec()
                      .addNewContainer()
                          .withName("nginx")
                          .withImage("nginx:1.7.9")
                          .addNewPort().withContainerPort(80).endPort()
                      .endContainer()
                   .endSpec()
               .endTemplate()
            .endSpec()
            .build();

    client.apps().deployments().inNamespace("default").createOrReplace(deployment);
}

 

Example 4: Loading Kubernetes resource YAML into Java objects

With the fabric8 Kubernetes client, you can easily load your resource manifests into the Java objects provided by its Kubernetes model. Suppose you have a Service YAML like this one:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376

Now, in order to load this YAML into a Kubernetes Service object, you need to do something like this:

Service service = client.services()
        .load(LoadServiceYaml.class.getResourceAsStream("/test-svc.yml"))
        .get();
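
Once the YAML has been loaded, the resulting Service object can be applied to the cluster like any other resource. A minimal follow-up sketch, assuming the same client instance and the default namespace:

// Create the Service on the cluster, or replace it if it already exists.
client.services().inNamespace("default").createOrReplace(service);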

Example 5: CRUD operations on Kubernetes resources using the client

You can easily create, replace, edit, or delete your Kubernetes resources using the fabric8 Kubernetes client API. We provide a rich DSL to achieve these operations. Here is an example of basic CRUD operations on a Deployment object:

    // Create
    client.apps().deployments().inNamespace("default").create(deployment);

    // Get
    Deployment deploy = client.apps().deployments()
            .inNamespace("default")
            .withName("deploy1")
            .get();

    // Update, adding dummy annotation
    Deployment updatedDeploy = client.apps().deployments()
            .inNamespace("default")
            .withName("deploy1")
            .edit()
            .editMetadata().addToAnnotations("foo", "bar").endMetadata()
            .done();

    // Deletion
    Boolean isDeleted = client.apps().deployments()
            .inNamespace("default")
            .withName("deploy1")
            .delete();

    // Deletion with some propagation policy
    Boolean bDeleted = client.apps().deployments()
            .inNamespace("default")
            .withName("deploy1")
            .withPropagationPolicy(DeletionPropagation.BACKGROUND)
            .delete();

 

Learn more about fabric8

Fabric8’s development team consists of mostly Java developers, so a Java developer’s perspective heavily influences this client. In this article, I’ve demonstrated just a few of fabric8’s features for using Kubernetes APIs in a Java environment. For more examples, see the Kubernetes Java client examples repository. And for a deep dive into using fabric8, visit the Fabric8 Kubernetes Java Client Cheat Sheet.


The post Getting started with the fabric8 Kubernetes Java client appeared first on Red Hat Developer.


by Rohan Kumar at May 20, 2020 07:00 AM

Advancing Global Open Source Collaboration From Our New European Base

by Thabang Mashologu at May 12, 2020 05:04 PM

Today, we announced the Eclipse Foundation is establishing itself as a European-based organization. By creating the Eclipse Foundation AISBL, an international non-profit association based in Brussels, we will be in the ideal position to foster global industry collaboration in strategic open source technologies, including cloud, edge computing, IoT, artificial intelligence, connected vehicles, telecommunications, and many more.

 

Our transition to Europe, which we expect to be legally finalized by July, will help us build on our recent international growth. European and global leaders such as Bosch, the German Aerospace Center (DLR), Fraunhofer FOKUS, Fujitsu, Huawei, IBM, Intel, IOTA Foundation, Microsoft, Oracle, Red Hat, SAP, and more than 300 other technology innovators have invested in open source collaboration at the Eclipse Foundation to sharpen their competitive edge.

 

In the first quarter of 2020 alone, the Eclipse Foundation added nearly 40 new member companies, five new working groups, and received 11 new project proposals. 

 

Open Source Collaboration Has High Strategic Value in Europe

The Foundation’s growth in Europe has been particularly significant as increasing numbers of businesses, researchers, academics, and government organizations on the continent realize that industrial open source collaboration is the fastest and easiest way to innovate around complex technologies in a way that enables sustainable value creation.

 

European policy makers and industry representatives have also recognized the need to pool their strengths to achieve goals that individual organizations cannot achieve on their own and to compete more effectively in international markets. The European Commission considers open source initiatives to be strategically important to shaping Europe’s digital future and offers resources to help European organizations understand how they can leverage open source as a business advantage.

 

With 170 member organizations and more than 900 committers in Europe, the Eclipse Foundation is the largest open source organization in Europe. We also have an international reach and an established reputation for enabling well-governed open source software communities that provide a level playing field for all ecosystem members.

 

Together, these factors made it an easy decision to focus more resources on this critical geography while we continue to support and expand our membership and our communities globally.

 

Eclipse Foundation Projects Target European and Global Technology Priorities

The Eclipse Foundation currently hosts a number of projects that align with Europe’s — and the world’s — technology priorities. Here are a few examples:

· Eclipse Kuksa unifies vehicle, IoT, cloud, and security technologies across the complete tooling stack for the connected vehicle domain to enable a standardized approach to Vehicle-To-Cloud (V2C) scenarios across all vehicles.

· Eclipse Che is a Kubernetes-native IDE that makes it much faster and easier to develop enterprise applications that leverage containers and Kubernetes.

· Eclipse Theia is a true open source alternative to Microsoft Visual Studio (VS) Code that gives organizations and developers a single, modern technology stack to build customized IDEs for desktops and browsers. 

· Eclipse Deeplearning4j takes deep learning and AI applications out of the theoretical, academic world and into the real world where they can be applied in useful and meaningful ways across industries.

· Eclipse fog05 is a fog computing platform that provides a decentralized infrastructure for distributing compute, storage, control, and networking functions closer to users along a cloud-to-edge continuum.

· Jakarta EE brings developers the modern enterprise Java technologies needed to develop, deploy, and manage server-side and cloud native applications.

· Eclipse Capella provides an open source solution for model-based systems engineering (MBSE).

· Eclipse IoT Packages develops fully integrated packages that demonstrate how two or more Eclipse IoT projects can be used together to deliver particular functionality or address a particular challenge.

 

Get More Information

If you’re reading this blog, there’s a very good chance you’re already very familiar with the benefits of open source collaboration at the Eclipse Foundation. If that’s not the case, you can learn more about the benefits of membership here.

 

If you would like more insight into the Eclipse Foundation’s role at the center of European open source innovation, read our new white paper Enabling Digital Transformation in Europe Through Global Open Source Collaboration.


by Thabang Mashologu at May 12, 2020 05:04 PM

Using Google's grpc-java for OSGi Remote Services

by Scott Lewis (noreply@blogger.com) at May 06, 2020 12:25 AM


A cool thing about Google's grpc is that a service creator can declare a service via a protocol buffers file (.proto file), and then the protoc compiler (along with the grpc-java compiler plugin) generates many of the Java classes for both implementing and using that service.

OSGi Remote Services require a service interface to represent the service contract, and this service interface is usually created directly by the programmer. Through an additional plugin, protoc can now generate an OSGi service interface along with all grpc classes... from the .proto file service declaration.

For example, consider the following protocol buffers input file:
syntax = "proto3";
package grpc.health.v1;
option java_multiple_files = true;
option java_outer_classname = "HealthProto";
option java_package = "io.grpc.health.v1";
message HealthCheckRequest {
  string message = 1;
}
message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
    SERVICE_UNKNOWN = 3;  // Used only by the Watch method.
  }
  ServingStatus status = 1;
}
service HealthCheck {
  // Unary method
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  // Streaming method
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
Running protoc+grpc-java+grpc-osgi-generator on this file results in generation of a HealthCheckService class along with all message classes (e.g. HealthCheckRequest, HealthCheckResponse, HealthProto, etc).   All of the Java classes in this example directory were created simply by running protoc+grpc-java+grpc-osgi-generator on the above proto file.
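
To give a rough idea of what the generated service interface looks like, here is a sketch for the HealthCheck declaration above. The method signature and the asynchronous return type are assumptions for illustration; the actual output depends on the protoc, grpc-java, and grpc-osgi-generator versions used:

package io.grpc.health.v1;

import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the generated OSGi service interface; the real generator output may differ.
public interface HealthCheckService {

    // Unary method corresponding to "rpc Check(HealthCheckRequest) returns (HealthCheckResponse)".
    CompletableFuture<HealthCheckResponse> check(HealthCheckRequest request);
}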

The generated Java classes can then be used to implement an OSGi Remote Service, with HealthCheckService as the service interface. At runtime, the HealthCheckServiceImpl can be exported (via the Grpc distribution provider), which uses grpc to provide the communication and JSON serialization for the HealthCheckService method calls.

The net effect is that remote service programmers can easily and quickly go from abstract service declaration (in proto file) to a running/functioning OSGi remote service:
  1. Declare a service in proto file -- example proto file
  2. Run protoc+grpc-java+grpc-osgi-generator to generate the Java code for the declared service - example Java generated code
  3. Implement the service API - example service implementation
  4. Use Declarative Services to export using ECF Remote Services + Grpc Distribution Provider - example (see @Component annotation for OSGi Remote Services-required service properties to trigger export)
The remote service programmer writes no communication or serialization code (both are provided by the Grpc distribution provider). See here for the complete generated healthcheck api plugin, here for the impl plugin, and here for a simple remote service consumer.
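
For illustration, exporting the implementation with Declarative Services might look roughly like the sketch below, building on the hypothetical interface sketched above. The service.exported.interfaces property is the standard OSGi Remote Services trigger for export; any additional properties needed to select the Grpc distribution provider are omitted here and should be taken from the linked example:

import java.util.concurrent.CompletableFuture;

import org.osgi.service.component.annotations.Component;

// Hypothetical sketch of exporting the service implementation as an OSGi Remote Service.
@Component(immediate = true, property = { "service.exported.interfaces=*" })
public class HealthCheckServiceImpl implements HealthCheckService {

    @Override
    public CompletableFuture<HealthCheckResponse> check(HealthCheckRequest request) {
        // Always report SERVING in this sketch.
        return CompletableFuture.completedFuture(
                HealthCheckResponse.newBuilder()
                        .setStatus(HealthCheckResponse.ServingStatus.SERVING)
                        .build());
    }
}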


by Scott Lewis (noreply@blogger.com) at May 06, 2020 12:25 AM

How to create/develop an Eclipse Theia IDE plugin

by Jonas Helming and Maximilian Koegel at May 04, 2020 11:18 AM

This article provides an overview of how to develop an Eclipse Theia plugin and thereby extend the Theia IDE with new...

The post How to create/develop an Eclipse Theia IDE plugin appeared first on EclipseSource.


by Jonas Helming and Maximilian Koegel at May 04, 2020 11:18 AM

Announcing Eclipse Ditto Release 1.1.0

April 29, 2020 12:00 AM

Today, approximately 4 months after Eclipse Ditto’s 1.0.0 release, the team is happy to announce the first minor (feature) update of Ditto 1.0:
Eclipse Ditto 1.1.0

The Ditto team was quite busy, 1.1.0 focuses on the following areas:

  • Management of Policies via Ditto Protocol
  • Possibility to search via Ditto Protocol
  • Enrich published Ditto events/messages via additional custom fields of the affected thing
  • Support for establishing managed connections via MQTT 5
  • End-2-end acknowledgements preparing Ditto to enable “at least once” processing
    • Addition of acknowledgement APIs in Ditto Java client
  • Officially documented pre-authenticated authentication mechanism
  • Use of Java 11 for running Ditto containers
  • Deprecation of API version 1 (authorization via ACL mechanism)
  • Use of CBOR as cluster internal replacement for JSON serialization
  • Further improvements on increasing throughput

Please have a look at the 1.1.0 release notes for more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

Also the Ditto Java client’s artifacts were published to Maven central.

The Docker images have been pushed to Docker Hub:

Kubernetes ready: Helm chart

In order to run Eclipse Ditto in a Kubernetes environment, it is best to rely on the official Helm chart and deploy Ditto via the Helm package manager.



Ditto


The Eclipse Ditto team


April 29, 2020 12:00 AM

Clean Sheet Service Update (0.7)

by Frank Appel at April 24, 2020 08:49 AM

Written by Frank Appel

It’s been a while, but today we’re happy to announce a Clean Sheet Service Update (0.7).

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real world setup using the Clean Sheet theme (click on the image to enlarge). Eclipse IDE Look and Feel: Clean Sheet Screenshot For more information please refer to the features landing page at http://fappel.github.io/xiliary/clean-sheet.html, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.

 

Clean Sheet Service Update (0.7)

This service update provides the long overdue JRE 11 compatibility on Windows platforms. Kudos to Pierre-Yves B. for contributing the necessary fixes. Please refer to the issues #88 and #90 for more details.

Clean Sheet Installation

Drag the 'Install' link below to your running Eclipse instance

Drag to your running Eclipse* workspace. *Requires Eclipse Marketplace Client

or

Select Help > Install New Software.../Check for Updates.
P2 repository software site: http://fappel.github.io/xiliary/
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet

 

On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

I’d like to thank all the Clean Sheet adopters for the support! Have fun with the latest update :-)

The post Clean Sheet Service Update (0.7) appeared first on Code Affine.


by Frank Appel at April 24, 2020 08:49 AM

Using the remote OSGi console with Equinox

by Mat Booth at April 23, 2020 02:00 PM

You may be familiar with the OSGi shell you get when you pass the "-console" option to Equinox on the command line. Did you know you can also use this console over Telnet sessions or SSH sessions? This article shows you the bare minimum needed to do so.


by Mat Booth at April 23, 2020 02:00 PM

EclipseCon 2020 CFP is Open

April 16, 2020 08:30 PM

If you are interested in speaking, our call for proposals is now open. Please visit the CFP page for information on how to submit your talk.

April 16, 2020 08:30 PM

Digital twins of devices connected via LoRaWAN to TTN

April 16, 2020 12:00 AM

TTVC logo


A workshop of the 2020 The Things Virtual Conference on April 16th 2020 is/was about how to connect Eclipse Ditto to “The Things Network” via TTN’s MQTT broker in order to automatically update digital twins of devices connected via LoRaWAN to the TTN backend.

You can find the slides here.

This blog post helps with setting up this kind of connection and shall also serve as a step-by-step tutorial during the workshop.

Requirements

You’ll need:

  • an operating system capable of running Docker (best use a Linux distribution)
  • 4 CPU cores and 4GB of RAM are advised (less can work, but the Ditto cluster startup is more fragile in that case)
  • to have installed: curl and git

Also, you’ll need a TTN account and an existing application with at least one device if you want to follow the hands-on part and want to create digital twins of your devices connected to TTN.

Preparation

Please follow these initial preparation steps (if you don’t already have Docker and Docker Compose installed).

When you have access to a Kubernetes cluster and already have worked with Helm (the package manager for Kubernetes), you can alternatively install Ditto via its official Helm chart.

Install Docker

Assumption: You’re running a Debian or Ubuntu based Linux distribution containing the apt package manager.

sudo apt install docker.io
sudo service docker start
sudo usermod -a -G docker <your-username>

Log out and log in again so that your user gets the “docker” group.

Install Docker Compose

Follow the installation guide here, in short:

sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Clone Ditto codebase

That is required to get the docker-compose.yaml file and other resources required to run Ditto with Docker Compose.

git clone --depth 1 https://github.com/eclipse/ditto.git

Startup Ditto cluster

Change directory into the just cloned git repository - optionally adjust the DITTO_EXTERNAL_PORT variable to where Ditto is reachable after the start:

cd ditto/deployment/docker/
export DITTO_EXTERNAL_PORT=80
docker-compose up -d

Verify that Ditto is running:

docker-compose ps

The output should look similar to this:

         Name                       Command               State           Ports         
----------------------------------------------------------------------------------------
docker_concierge_1       /sbin/tini -- java -jar st ...   Up      8080/tcp              
docker_connectivity_1    /sbin/tini -- java -jar st ...   Up      8080/tcp              
docker_gateway_1         /sbin/tini -- java -Dditto ...   Up      0.0.0.0:8081->8080/tcp
docker_mongodb_1         docker-entrypoint.sh mongo ...   Up      27017/tcp             
docker_nginx_1           nginx -g daemon off;             Up      0.0.0.0:80->80/tcp    
docker_policies_1        /sbin/tini -- java -jar st ...   Up      8080/tcp              
docker_swagger-ui_1      nginx -g daemon off;             Up      80/tcp, 8080/tcp      
docker_things-search_1   /sbin/tini -- java -jar st ...   Up      8080/tcp              
docker_things_1          /sbin/tini -- java -jar st ...   Up      8080/tcp

Verify that your Ditto cluster is healthy. Please give it ~1 minute in order to properly start up.

curl -u devops:foobar http://localhost:${DITTO_EXTERNAL_PORT}/status/health

The returned output should start with:

{"label":"roles","status":"UP", ... }

If your Ditto cluster has trouble starting up (e.g. because you have fewer CPU cores than advised), try the following startup command instead:

docker-compose start mongodb; sleep 30; docker-compose start policies things; sleep 60; docker-compose start concierge; sleep 60; docker-compose start things-search; sleep 60; docker-compose start connectivity; sleep 60; docker-compose up -d

Configure connection to TTN MQTT broker

The Things Network provides a built-in MQTT broker which you can connect to using your TTN application credentials. For a more detailed description on that topic, please refer to the TTN MQTT Quick Start.

Eclipse Ditto can establish connections to MQTT brokers. This is a schematic picture of what we now will do:

TTN to Ditto via MQTT

In order to connect to your own TTN application, perform the following steps.

You can find the <AppId> (application ID) and <AppKey> (access key) in the TTN console of your application. For <Region>, choose e.g. 'eu' when your application is handled by the Handler ‘ttn-handler-eu’.

Please export your application’s credentials locally to environment variables:

export TTN_REGION='<Region>'
export TTN_APP_ID='<AppID>'
export TTN_APP_KEY='<AppKey>'

After having done that, you can already create the connection of Ditto to the TTN MQTT broker:

curl -X POST -u devops:foobar -H 'Content-Type: application/json' -d '{
    "targetActorSelection": "/system/sharding/connection",
    "headers": {
        "aggregate": false
    },
    "piggybackCommand": {
        "type": "connectivity.commands:createConnection",
        "connection": {
            "id": "ttn-connection-via-mqtt",
            "name": "TTN-MQTT",
            "connectionType": "mqtt",
            "connectionStatus": "open",
            "uri": "tcp://'"${TTN_APP_ID}"':'"${TTN_APP_KEY}"'@'"${TTN_REGION}"'.thethings.network:1883",
            "failoverEnabled": true,
            "clientCount": 1,
            "validateCertificates": false,
            "sources": [{
                "addresses": [
                    "'"${TTN_APP_ID}"'/devices/+/up"
                ],
                "consumerCount": 1,
                "qos": 0,
                "authorizationContext": [
                  "pre-authenticated:ttn-connection"
                ],
                "enforcement": {
                    "input": "{{ source:address }}",
                    "filters": [
                        "'"${TTN_APP_ID}"'/devices/{{ thing:name }}/up"
                    ]
                },
                "replyTarget": {
                    "enabled": false
                },
                "payloadMapping": [
                    "ttn-demo-mapping"
                ]
            }],
            "mappingDefinitions": {
                "ttn-demo-mapping": {
                     "mappingEngine": "JavaScript",
                     "options": {
                         "incomingScript": "function mapToDittoProtocolMsg(\n  headers,\n  textPayload,\n  bytePayload,\n  contentType\n) {\n\n  let ttnJson = JSON.parse(textPayload);\n  let deviceId = ttnJson['"'"'dev_id'"'"'];\n  let payloadFields = ttnJson['"'"'payload_fields'"'"'];\n  \n  let attributesObj = {\n    hardwareSerial: ttnJson['"'"'hardware_serial'"'"'],\n    ttnCounter: ttnJson['"'"'counter'"'"']\n  };\n  \n  let featuresObj = {\n    temperature: {\n      properties: {\n        value: payloadFields['"'"'temperature_7'"'"']\n      }\n    },\n    pressure: {\n      properties: {\n        value: payloadFields['"'"'barometric_pressure_10'"'"']\n      }\n    },\n    humidity: {\n      properties: {\n        value: payloadFields['"'"'relative_humidity_8'"'"']\n      }\n    }\n  };\n  \n  let thing = {\n    attributes: attributesObj,\n    features: featuresObj\n  };\n  \n  let dittoHeaders = {\n    '"'"'response-required'"'"': false,\n    '"'"'If-Match'"'"': '"'"'*'"'"'\n  };\n\n  return Ditto.buildDittoProtocolMsg(\n    '"'"'org.eclipse.ditto.ttn.demo'"'"',\n    deviceId,\n    '"'"'things'"'"',\n    '"'"'twin'"'"',\n    '"'"'commands'"'"',\n    '"'"'modify'"'"',\n    '"'"'/'"'"',\n    dittoHeaders,\n    thing\n  );\n}",
                         "outgoingScript": "function mapFromDittoProtocolMsg() { return null; }",
                         "loadBytebufferJS": "false",
                         "loadLongJS": "false"
                     }
                }
            }
        }
    }
}' http://localhost:${DITTO_EXTERNAL_PORT}/devops/piggyback/connectivity?timeout=8s

Explanation - what is done here:

  • using curl with the devops (admin) user and its initial password foobar we create a connection of type mqtt (you can find further information on that in Ditto’s MQTT docs)
  • we use the TTN application credentials in the configured "uri", connect via plain TCP (SSL is also possible but in this case a little more complicated as the server certificate of the TTN MQTT broker would have to be imported)
  • we add an entry in "sources":
    • defining the MQTT topic ("addresses") to subscribe to
    • specifying in which "authorizationContext" messages from this connection shall be executed
    • defining in the "enforcement" that, based on the MQTT topic, a device may only update the Ditto twin having the same name
    • declaring that a custom payload mapping shall be applied for each incoming message
  • in the "mappingDefinitions" we define the previously used “ttn-demo-mapping” as JavaScript based mapping:
    • only an “incoming” script is defined as we don’t handle downstream messages to TTN in this example
    • when you want to understand the script in more depth, please take a look at the details about it
Tip: If you have other custom payload_fields for your TTN devices, please adjust the script so that you see your devices’ custom payload fields in your Ditto twins.

Create a common policy for the twins to be created

Eclipse Ditto secures each API access to the managed twins by applying authorization of the authenticated user.
The rules defining which authenticated user may access which twins are defined in Policies.

In order to proceed with our scenario, we create a single Policy which shall be used for all twins we create in a later step:

curl -X PUT -u ditto:ditto -H 'Content-Type: application/json' -d '{
   "policyId": "org.eclipse.ditto.ttn.demo:twin-policy",
   "entries": {
       "USER": {
           "subjects": {
              "nginx:ditto": {
                  "type": "basic auth user authenticated via nginx"
              }
           },
           "resources": {
               "thing:/": {
                   "grant": ["READ", "WRITE"],
                   "revoke": []
               },
               "policy:/": {
                   "grant": ["READ", "WRITE"],
                   "revoke": []
               },
               "message:/": {
                   "grant": ["READ", "WRITE"],
                   "revoke": []
               }
           }
       },
       "TTN": {
           "subjects": {
              "pre-authenticated:ttn-connection": {
                  "type": "used in the connections authorizationContext to the TTN MQTT"
              }
           },
           "resources": {
               "thing:/": {
                   "grant": ["WRITE"],
                   "revoke": []
               }
           }
       }
   }
}' http://localhost:${DITTO_EXTERNAL_PORT}/api/2/policies/org.eclipse.ditto.ttn.demo:twin-policy

Explanation - what is done here:

  • we create a new Policy with the ID "org.eclipse.ditto.ttn.demo:twin-policy"
  • it contains 2 entries:
    • "USER": this Policy entry contains the authorization information of the user of the twin APIs (authenticated via the contained “nginx” acting as reverse proxy). This user may READ+WRITE the things (twins), this created policy and may also send and receive messages.
    • "TTN": this Policy entry contains the authorization information of the connection to the TTN MQTT broker (the subject was configured as "authorizationContext" when we created the connection. This connection may only WRITE (update) the things (twins).

Create digital twins

Now we have everything in place in order to create digital twins for our devices connected to TTN.

Please export all device ids you want to create digital twins for as comma separated environment variable:

export TTN_DEVICE_IDS='<comma-separated-list-of-your-device-ids>'

After having done that, we can already create the twins in Ditto as the ditto user:

for dev_id in ${TTN_DEVICE_IDS//,/ }
do
    # call your procedure/other scripts here below
    echo "Creating digital twin with Thing ID: org.eclipse.ditto.ttn.demo:$dev_id"
    curl -X PUT -u ditto:ditto -H 'Content-Type: application/json' -d '{
       "policyId": "org.eclipse.ditto.ttn.demo:twin-policy"
    }' http://localhost:${DITTO_EXTERNAL_PORT}/api/2/things/org.eclipse.ditto.ttn.demo:$dev_id
done

Explanation - what is done here:

  • we split the passed in TTN_DEVICE_IDS environment variable by , and iterate over all contained device ids
  • for each device ID we create a new Thing (twin) referencing the already previously created Policy

Access your digital twins via API

Congratulations! If you have made it this far, your TTN devices now have digital twin representations in Eclipse Ditto.

Tip: Install the command line tool jq and pipe the output of the below curl commands to it in order to get prettified and colored JSON
Note: Alternatively to curl, you may also use the locally deployed swagger-ui at http://localhost:${DITTO_EXTERNAL_PORT}/apidoc/ in order to try out Ditto’s HTTP API - make sure to select /api/2 - local Ditto in the ‘Servers’ section - when asked for credentials, use username ‘ditto’ and password ‘ditto’

You can now, for example, use Ditto’s HTTP APIs in order

  • to retrieve the latest reported values: curl -u ditto:ditto http://localhost:${DITTO_EXTERNAL_PORT}/api/2/things/org.eclipse.ditto.ttn.demo:<dev_id>
  • to get a live stream of updates to the twins using SSE (Server Sent Events): curl --http2 -u ditto:ditto -H 'Accept:text/event-stream' -N http://localhost:${DITTO_EXTERNAL_PORT}/api/2/things
  • to list all available twins via the search API: curl -u ditto:ditto http://localhost:${DITTO_EXTERNAL_PORT}/api/2/search/things
    • alternatively, use your browser and open http://localhost:${DITTO_EXTERNAL_PORT}/api/2/search/things
    • when asked for credentials, use username “ditto” and password “ditto”
  • formulate a search query, e.g. only searching for twins with a temperature above 24°, sorted by the last modification, the most recent first to get the most active twin as first result:
    • curl -u ditto:ditto "http://localhost:${DITTO_EXTERNAL_PORT}/api/2/search/things?filter=gt(features/temperature/properties/value,24.0)&option=sort(-_modified),size(5)&fields=thingId,policyId,attributes,features,_modified,_revision"

Which other possibilities do we now have?

Now you have all the possibilities Eclipse Ditto as digital twin framework provides, e.g.:

  • directly use your device’s data in a web application consuming Ditto’s HTTP API (see the small Java sketch after this list)
  • directly use your device’s data in a mobile app using Ditto’s bidirectional WebSocket
  • make use of the Eclipse Ditto Java or JavaScript clients which also use the WebSocket to integrate your device’s data
  • create another connection (optionally also applying JavaScript based payload mapping)
    • to e.g. Apache Kafka and forward all the modifications made to your devices to there
    • or using HTTP push in order to call another HTTP API (e.g. insert time series data into an InfluxDB via its HTTP API)
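
As a small illustration of the first item in this list, the twin created above can be read from Ditto’s HTTP API with plain Java (using Java 11’s HttpClient). The port 80 (the DITTO_EXTERNAL_PORT from above) and the device id node0 are assumptions taken from the earlier examples and may differ in your setup:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RetrieveTwin {
    public static void main(String[] args) throws Exception {
        // Basic-auth credentials of the pre-configured demo user (see the nginx setup above).
        String auth = Base64.getEncoder()
                .encodeToString("ditto:ditto".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:80/api/2/things/org.eclipse.ditto.ttn.demo:node0"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        // Print the twin's current JSON representation.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}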



For time reasons we do not go deeper into additional topics; they are possible, however. Please consult the Ditto documentation:

  • the WebSocket channel and subscribing for change notifications
  • sending downward messages to devices
  • live commands (not retrieving persisted data of devices, but live data)
  • a more detailed introduction into authentication mechanisms (OpenID Connect with OAuth2.0 is possible)
  • possibilities to configure your Policies on every resource level, e.g. allowing individuals to only access certain values of a twin
  • and many other things..

Additional resources

Cleanup after the workshop

Simply perform in the ditto/deployment/docker folder:

docker-compose down

And uninstall docker + docker-compose (for docker-compose, just remove the downloaded file) again if you don’t need them.

JavaScript payload mapping script in detail

Similar to the TTN console’s decoding/converting capabilities of “Payload Formats” of a TTN application, Ditto is able to apply a custom JavaScript function to each consumed message.
That is necessary in order to convert the received data into a Ditto Protocol message including the JSON hierarchy of a so-called Thing, which is the representation of a digital twin.

As the above injected JavaScript payload mapping script is formatted in a single line, here is the script we used, pretty-printed and including the jsdoc of the provided function and some other inline comments.

If you need to adjust the script in order to use your own payload_fields, please replace all newlines with \n and escape the single quotes ' in the script with the following replacement: '"'"'. Otherwise the single quotes won’t get correctly escaped in the bash. You can remove the comments before making a single line of the script.

/**
 * Maps the passed parameters to a Ditto Protocol message.
 * @param {Object.<string, string>} headers - The headers Object containing all received header values
 * @param {string} [textPayload] - The String to be mapped
 * @param {ArrayBuffer} [bytePayload] - The bytes to be mapped as ArrayBuffer
 * @param {string} [contentType] - The received Content-Type, e.g. "application/json"
 * @returns {(DittoProtocolMessage|Array<DittoProtocolMessage>)} dittoProtocolMessage(s) -
 *  The mapped Ditto Protocol message,
 *  an array of Ditto Protocol messages or
 *  <code>null</code> if the message could/should not be mapped
 */
function mapToDittoProtocolMsg(
  headers,
  textPayload,
  bytePayload,
  contentType
) {

  let ttnJson = JSON.parse(textPayload);          // we simply parse the incoming TTN message as JSON
  let deviceId = ttnJson['dev_id'];               // and extract some fields we require
  let payloadFields = ttnJson['payload_fields'];  // the 'payload_fields' content is - obviously - different for your application
  
  let attributesObj = {                           // the attributes of a Thing are meant for unstructured data 
    hardwareSerial: ttnJson['hardware_serial'],
    ttnCounter: ttnJson['counter']
  };
  
  let featuresObj = {                             // the features of a Thing e.g. contain sensor data of devices
    temperature: {
      properties: {
        value: payloadFields['temperature_7']
      }
    },
    pressure: {
      properties: {
        value: payloadFields['barometric_pressure_10']
      }
    },
    humidity: {
      properties: {
        value: payloadFields['relative_humidity_8']
      }
    }
  };
  
  let thing = {                                   // a Thing can contain both attributes and features
    attributes: attributesObj,
    features: featuresObj
  };
  
  let dittoHeaders = {
    'response-required': false,     // we don't expect a response sent back to TTN
    'If-Match': '*'                 // we only want to update the thing if it already exists
  };

  return Ditto.buildDittoProtocolMsg(
    'org.eclipse.ditto.ttn.demo',   // this is the namespace used as prefix for Ditto Thing IDs
    deviceId,                       // the TTN device ID is used as "name" part of the Ditto Thing ID 
    'things',
    'twin',
    'commands',
    'modify',
    '/',
    dittoHeaders,
    thing
  );
}

An example message received from the TTN MQTT broker:

{
  "app_id": "iot-campus-be12",
  "dev_id": "node0",
  "hardware_serial": "70B3D5499A2D3954",
  "port": 2,
  "counter": 9449,
  "payload_raw": "B2cA6AhoKwpzJ8oEAwH4",
  "payload_fields": {
    "analog_out_4": 5.04,
    "barometric_pressure_10": 1018.6,
    "relative_humidity_8": 21.5,
    "temperature_7": 23.2
  },
  "metadata": {
    ...
  }
}

would be transformed to the following Ditto Protocol message:

{
  "topic": "org.eclipse.ditto/node0/things/twin/commands/modify",
  "path": "/",
  "value": {
    "attributes": {
      "hardwareSerial": "70B3D5499A2D3954",
      "ttnCounter": 9449
    },
    "features": {
      "temperature": {
         "properties": {
          "value": 23.2
        }
      },
      "pressure": {
        "properties": {
          "value": 1018.6
        }
      },
      "humidity": {
        "properties": {
          "value": 21.5
        }
      }
    }
  }
}



Ditto


The Eclipse Ditto team


April 16, 2020 12:00 AM

Add Your Voice to the 2020 Jakarta EE Developer Survey

April 07, 2020 01:00 PM

Our third annual Jakarta EE Developer Survey is now open and I encourage everyone to take a few minutes and complete the survey before the April 30 deadline.

April 07, 2020 01:00 PM

Red Hat XML language server becomes LemMinX, bringing new release and updated VS Code XML extension

by David Kwon at March 27, 2020 07:00 AM

A new era has begun for Red Hat’s XML language server, which was migrated to the Eclipse Foundation under a new project name: Eclipse LemMinX (a reference to the Lemmings video game). The Eclipse LemMinX project is arguably the most feature-rich XML language server available. Its migration opens more doors for future development and utilization. In addition, shortly after its migration, the Eclipse LemMinX project and Red Hat also released updates: Eclipse LemMinX version 0.11.1 and the Red Hat VS Code XML extension.

Eclipse LemMinX version 0.11.1

Eclipse LemMinX version 0.11.1 mainly focuses on bug fixes that are outlined in the changelog here. For some history, Eclipse LemMinX started as an open source project created by Angelo ZERR in mid-2018. Angelo’s XML language server implementation was well ahead of the game in terms of features and code infrastructure. As Red Hat’s interest in an XML language server continued to grow, Red Hat joined forces with Angelo (who later officially joined Red Hat as a senior software engineer) to create the most feature-rich and easy-to-use XML language server possible.

Thanks to the XML language server’s popularity and functionality, clients like Eclipse (with Wild Web Developer), VS Code (with XML Language Support by Red Hat), and Vim/Neovim (with coc-xml) started consuming the XML language server. In addition, all LSP features (completion, validation, quick fix, etc.) provided by the XML language server are easily extensible. This helped motivate other projects to extend the LSP features, instead of implementing them themselves from scratch.

For example, there are extensions specific for Maven and Liferay. The Maven extension extends the completion feature to manage advanced dependency completion, and the Liferay extension extends the hover feature to fit specific use cases. We hope that the contribution to the Eclipse Foundation facilitates easier consumption from related projects and attracts new contributors beyond people from Red Hat.

Red Hat VS Code XML extension

In addition, we released the Red Hat VS Code XML extension (which, of course, consumes the Eclipse LemMinX XML language server to provide language features). This extension provides an excellent all-in-one package for editing XML, XSD, and DTD files in VS Code, but what makes this extension stand out is the support for XSD and DTD schema validation for XML files.

This new release also focussed on bug fixes, which are outlined in the changelog here.


The post Red Hat XML language server becomes LemMinX, bringing new release and updated VS Code XML extension appeared first on Red Hat Developer.


by David Kwon at March 27, 2020 07:00 AM

Eclipse IoT Website Redesign

March 24, 2020 02:12 PM

The Eclipse IoT website redesign is now live! This project was a huge undertaking for us; we added 5,245 lines of code, closed 15 issues, removed 82,361 lines of code and made 84 commits.

Eclipse IoT Homepage

Yes, you got that right, the end result was -77,116 lines of code because we took the opportunity to clean up our codebase.

To kick off this initiative, the community helped us define the goals for this project:

  1. Improve our information architecture:
    Led by Frédéric Desbiens, the Eclipse IoT community created a new structure for the website.
  2. Contribute to the recruitment of new members and adopters:
    Adopters and Members are now top-level menu items. We also created a new “How to be Listed as an Adopter” page.
  3. Ensure the website caters to both technical and non-technical visitors:
    We made some big improvements to our Community and Resources sections. These sections cater to both technical and non-technical users since you can find Case-Studies, Market Reports, Videos, White Papers and some additional information on how you can stay informed about what’s currently going on with Eclipse IoT.
  4. Drive adoption for our technologies:
    We now fetch project information from the Eclipse PMI each time we push a change to the website. Our stale project page is now a thing of the past!

In an effort to communicate our project plans with our community, we created a public GitHub project with two milestones.

Being open and transparent allows us to naturally inform our communities about our efforts and we think it’s a great way for us to collaborate and share tasks. As the project manager, this workflow allows me to ensure that the project is moving forward as planned.

We also created a set of brand guidelines for the Eclipse IoT Working Group. These guidelines include the brand font (Roboto), logo variations, color swatches, and acceptable logo treatments. This will help us consistently deploy the brand across different digital and print channels as well as Eclipse IoT events.

Overall, I am very happy with this new redesign! A huge thank you to Eric Poirier, Matt Joanisse, a graphic designer hired by the Foundation to work on the site, Christie Witt, Joe Speed, Frédéric Desbiens and Martin Lowe!


March 24, 2020 02:12 PM

Eclipse Oomph: Suppress Welcome Page

by kthoms at March 19, 2020 04:37 PM

I am frequently spawning Eclipse workspaces with Oomph setups, and the first action I take when a new workspace is provisioned is to close Eclipse’s welcome page. I wanted to suppress that for a current project setup, so I started searching for where Eclipse stores the preference that disables the intro page. The location of that preference is within the workspace directory at

.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

The content of the preference file is

eclipse.preferences.version=1
showIntro=false

So, to make Oomph create the preference file before the workspace is started for the first time, use a Resource Creation task and set the Target URL to

${workspace.location|uri}/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

Then put the above-mentioned preference content as the Content value.


by kthoms at March 19, 2020 04:37 PM

WTP 3.17 Released!

March 18, 2020 03:30 PM

The Eclipse Web Tools Platform 3.17 has been released! Installation and updates can be performed using the Eclipse IDE 2020-03 Update Site or through the Eclipse Marketplace. Release 3.17 is included in the 2020-03 Eclipse IDE for Enterprise Java Developers, with selected portions also included in several other packages. Adopters can download the R3.17 build directly and combine it with the necessary dependencies.

More news


March 18, 2020 03:30 PM

MPS’ Quest of the Holy GraalVM of Interpreters

by Niko Stotz at March 11, 2020 11:19 PM

A vision how to combine MPS and GraalVM

Way too long ago, I prototyped a way to use GraalVM and Truffle inside JetBrains MPS. I hope to pick up this work soon. In this article, I describe the grand picture of what might be possible with this combination.

Part I: Get it Working

Step 0: Teach Annotation Processors to MPS

Truffle uses Java Annotation Processors heavily. Unfortunately, MPS doesn’t support them during its internal Java compilation. The feature request doesn’t show any activity.

So, we have to do it ourselves. A little less time ago, I started with an alternative Java Facet to include Annotation Processors. I just pushed my work-in-progress state from 2018. As far as I remember, there were no fundamental problems with the approach.

Optional Step 1: Teach Truffle Structured Sources

For Truffle, all executed programs stem from a Source. However, this Source can only provide Bytes or Characters. In our case, we want to provide the input model. The prototype just put the Node id of the input model as a String into the Source; later steps resolved the id against the MPS API. This approach works and is acceptable; directly passing the input node as an object would be much nicer.
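
For illustration, this is roughly what the prototype’s approach looks like from the caller’s side, using the GraalVM polyglot API. The language id "mps-dsl" and the node id literal are made up; the real code resolves the id against the MPS API inside the language implementation:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;

public class RunMpsNode {
    public static void main(String[] args) {
        // The "program" is nothing but the id of the MPS node we want to interpret.
        Source source = Source.create("mps-dsl", "4128798723497234");
        try (Context context = Context.create()) {
            Value result = context.eval(source);
            System.out.println(result);
        }
    }
}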

Step 2: Implement Truffle Annotations as MPS Language

We have to provide all additional hints as Annotations to Truffle. They are complex enough, so we want to leverage MPS’ language features to directly represent all Truffle concepts.

This might be a simple one-to-one representation of Java Annotations as MPS Concepts, but I’d guess we can add some more semantics and checks. Such feedback within MPS should simplify the next steps: Annotation Processors (and thus, Truffle) have only limited options to report issues back to us.

We use this MPS language to implement the interpreter for our DSL. This results in a TruffleLanguage for our DSL.
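
For orientation, this is roughly the shape of a single interpreter node written directly against the Truffle DSL in plain Java. The node and its operations are invented for this sketch; in the envisioned setup, the MPS language described above would generate or abstract over code of this form:

import com.oracle.truffle.api.dsl.Specialization;
import com.oracle.truffle.api.nodes.Node;

// Hypothetical interpreter node for a "plus" concept of a DSL.
// The Truffle annotation processor generates an optimized PlusNodeGen subclass from it.
public abstract class PlusNode extends Node {

    public abstract Object execute(Object left, Object right);

    @Specialization
    protected long addLongs(long left, long right) {
        return left + right;
    }

    @Specialization
    protected double addDoubles(double left, double right) {
        return left + right;
    }
}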

Step 3: Start Truffle within MPS

At the time when I wrote the proof-of-concept, a TruffleLanguage had to be loaded at JVM startup. To my understanding, Truffle overcame this limitation. I haven’t looked into the current possibilities in detail yet.

I can imagine two ways to provide our DSL interpreter to the Truffle runtime:

  1. Always register MpsTruffleLanguage1, MpsTruffleLanguage2, etc. as placeholders (a rough sketch of such a placeholder follows after this list). This would also work at JVM startup. If required, we can register additional placeholders with one JVM restart.
    All non-colliding DSL interpreters would be MpsTruffleLanguage1 from Truffle’s point of view. This works, as we know the MPS language for each input model, and can make sure Truffle uses the right evaluation for the node at hand. We might suffer a performance loss, as Truffle had to manage more evaluations.

    What are non-colliding interpreters? Assume we have a state machine DSL, an expression DSL, and a test DSL. The expression DSL is used within the state machines; we provide an interpreter for both of them.
    We provide two interpreters for the test DSL: One executes the test and checks the assertions, the other one only marks model nodes that are covered by the test.
    The state machine interpreter, the expression interpreter, and the first test interpreter are non-colliding, as they never want to execute on the same model node. All of them go to MpsTruffleLanguage1.
    The second test interpreter does collide, as it wants to do something with a node also covered by the other interpreters. We put it to MpsTruffleLanguage2.

  2. We register every DSL interpreter as a separate TruffleLanguage. Nice and clean one-to-one relation. In this scenario, we would probably have to get Truffle Language Interop right. I have not yet investigated this topic.
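
To make option 1 a bit more concrete, a registered placeholder language might look roughly like the following sketch. The id, the name, and the way the MPS node id is resolved are assumptions; the parse method simply echoes the node id instead of building a real Truffle AST:

import com.oracle.truffle.api.CallTarget;
import com.oracle.truffle.api.Truffle;
import com.oracle.truffle.api.TruffleLanguage;
import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.RootNode;

@TruffleLanguage.Registration(id = "mpsTruffleLanguage1", name = "MpsTruffleLanguage1")
public final class MpsTruffleLanguage1 extends TruffleLanguage<Object> {

    @Override
    protected Object createContext(Env env) {
        // A real implementation would hold access to the MPS repository here.
        return new Object();
    }

    @Override
    protected CallTarget parse(ParsingRequest request) {
        // The Source carries the MPS node id as plain text (see step 1 above).
        String mpsNodeId = request.getSource().getCharacters().toString();
        return Truffle.getRuntime().createCallTarget(new ConstantRootNode(this, mpsNodeId));
    }

    // A trivial root node that just returns the node id it was created with.
    private static final class ConstantRootNode extends RootNode {
        private final String value;

        ConstantRootNode(TruffleLanguage<?> language, String value) {
            super(language);
            this.value = value;
        }

        @Override
        public Object execute(VirtualFrame frame) {
            return value;
        }
    }
}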

Step 4: Translate Input Model to Truffle Nodes

A lot of Truffle’s magic stems from its AST representation. Thus, we need to translate our input model (a.k.a. DSL instance, a.k.a. program to execute) from MPS nodes into Truffle Nodes.

Ideally, the Truffle AST would dynamically adopt any changes of the input model — like hot code replacement in a debugger, except we don’t want to stop the running program. From Truffle’s point of view this shouldn’t be a problem: It rewrites the AST all the time anyway.

DclareForMPS seems a fitting technology. We define mapping rules from MPS node to Truffle Node. Dclare makes sure they are in sync, and input changes are propagated optimally. These rules could either be generic, or be generated from the interpreter definition.

We need to take care that Dclare doesn’t try to adapt the MPS nodes to Truffle’s optimizing AST changes (no back-propagation).

We require special handling for edge cases of MPS → Truffle change propagation, e.g. the user deletes the currently executed part of the program.

For memory optimization, we might translate only the entry nodes of our input model immediately. Instead of the actual child Truffle Nodes, we’d add special nodes that translate the next part of the AST.
Unloading the not required parts might be an issue. Also, on-demand processing seems to conflict with Dclare’s rule-based approach.

Part II: Adapt to MPS

Step 5: Re-create Interpreter Language

The MPS interpreter framework removes even more boilerplate from writing interpreters than Truffle. The same language concepts should be built again, as an abstraction on top of the Truffle Annotation DSL. This would be a new language aspect.

Step 6: Migrate MPS Interpreter Framework

Once we have the Truffle-based interpreter language, we want to use it! Also, we don’t want to rewrite all our nice interpreters.

I think it’s feasible to automatically migrate at least large parts of the existing MPS interpreter framework to the new language. I would expect some manual adjustment, though. That’s the price we would have to pay for a performance improvement of two orders of magnitude.

Step 7: Provide Plumbing for BaseLanguage, Checking Rules, Editors, and Tests

Using the interpreter should be as easy as possible. Thus, we have to provide the appropriate utilities:

  • Call the interpreter from any BaseLanguage code.
    We would have to make sure we get language / model loading and dependencies right. This should be easier with Truffle than with the current interpreter, as most language dependencies are only required at interpreter build time.
  • Report interpreter results in Checking Rules.
    Creating warnings or errors based on the interpreter’s results is a standard use-case, and should be supported by dedicated language constructs.
  • Show interpreter results in an editor.
    As another standard use-case, we might want to show the interpreter’s results (or a derivative) inside an MPS editor. Especially for long-running or asynchronous calculations, getting this right is tricky. Dedicated editor extensions should take care of the details.
  • Run tests that involve the interpreter.
    Yet another standard use-case: our DSL defines both calculation rules and examples. We want to assure they are in sync, meaning executing the rules in our DSL interpreter and comparing the results with the examples. This must work both inside MPS, and in a headless build / CI test environment.

Step 8: Support Asynchronous Interpretation and/or Caching

The simple implementation of interpreter support accepts a language, parameters, and a program (a.k.a. input model), and blocks until the interpretation is complete.

This working mode is useful in various situations. However, we might want to run long-running interpretations in the background, and notify a callback once the computation is finished.

Example: An MPS editor uses an interpreter to color a rule red if it is not in accordance with a provided example. This interpretation result is very useful, even if it takes several seconds to calculate. However, we don’t want to block the editor (or even whole MPS) for that long.

Extending the example, we might also want to show an error on such a rule. The typesystem runs asynchronously anyways, so blocking is not an issue. However, we now run the same expensive interpretation twice. The interpreter support should provide configurable caching mechanisms to avoid such waste.

Both asynchronous interpretation and caching benefit from proper language extensions.
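
As a minimal sketch in plain Java (all names here, such as InterpreterSupport, are hypothetical and not an existing MPS or Truffle API), such support could combine a background executor with a result cache:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Hypothetical helper for asynchronous, cached interpretation.
public class InterpreterSupport {

    private final Map<String, CompletableFuture<Object>> cache = new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newFixedThreadPool(2);

    // Runs the (possibly long-running) interpretation in the background.
    // Results are cached per key, so e.g. the editor coloring and the typesystem
    // check share one evaluation instead of running the same interpretation twice.
    public CompletableFuture<Object> interpretAsync(String cacheKey, Supplier<Object> interpretation) {
        return cache.computeIfAbsent(cacheKey,
                key -> CompletableFuture.supplyAsync(interpretation, pool));
    }
}

A caller would then chain a callback instead of blocking, e.g. interpretAsync(ruleId, () -> interpret(rule)).thenAccept(result -> colorRule(result)), where interpret and colorRule stand in for the blocking interpreter call and the editor update.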

Step 9: Integrate with MPS Typesystem and Scoping

Truffle needs to know about our DSL’s types, e.g. for resolving overloaded functions or type casting. We already provide this information to the MPS typesystem. I haven’t looked into the details yet; I’d expect we could generate at least part of the Truffle input from MPS’ type aspect.

Truffle requires scoping knowledge to store variables in the right stack frame (and possibly other things I don’t understand yet). I’d expect we could use the resolved references in our model as input to Truffle. I’m less optimistic about re-using MPS’ actual scoping system.

For both aspects, we can amend the missing information in the Interpreter Language, similar to the existing one.

Step 10: Support Interpreter Development

As DSL developers, we want to make sure we implemented our interpreter correctly. Thus, we write tests; they are similar to other tests involving the interpreter.

However, if they fail, we don’t want to debug the program expressed in our DSL, but our interpreter. For example, we might implement the interpreter for a switch-like construct, and have forgotten to handle an implicit default case.

Using a regular Java debugger (attached to our running MPS instance) has only limited use, as we would have to debug through the highly optimized Truffle code. We cannot use Truffle’s debugging capabilities, as they work on the DSL.
There might be ways to attach a regular Java debugger running inside MPS in a different thread to its own JVM. Combining the direct debugger access with our knowledge of the interpreter’s structure, we might be able to provide sensible stepping through the interpreter to the DSL developer.

Simpler ways to support the developers might be providing traces through the interpreter, or shipping test support where the DSL developer can assert that specific evaluators were (not) executed.

Step 11: Create Language for Interop

Truffle provides a framework to describe any runtime in-memory data structure as Shape, and to convert them between languages. This should be a nice extension of MPS’ multi-language support into the runtime space, supported by an appropriate Meta-DSL (a.k.a. language aspect).

Part III: Leverage Programming Language Tooling

Step 12: Connect Truffle to MPS’ Debugger

MPS contains the standard interactive debugger inherited from IntelliJ platform.

Truffle exposes a standard interface for interactive debuggers of the interpreted input. It takes care of the heavy lifting from Truffle AST to MPS input node.

If we ran Truffle in a different thread than the MPS debugger, we should manage to connect both parts.

Step 13: Integrate Instrumentation

Truffle also exposes an instrumentation interface. We could provide standard instrumentation applications like “code” coverage (in our case: DSL node coverage) and tracing out-of-the-box.
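
As a rough sketch of the general shape (assuming Truffle’s instrumentation API; the instrument id and the coverage bookkeeping are made up), a DSL node coverage instrument might look like this:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.instrumentation.EventContext;
import com.oracle.truffle.api.instrumentation.ExecutionEventListener;
import com.oracle.truffle.api.instrumentation.SourceSectionFilter;
import com.oracle.truffle.api.instrumentation.StandardTags;
import com.oracle.truffle.api.instrumentation.TruffleInstrument;
import com.oracle.truffle.api.source.SourceSection;

// Records which source sections (in our case: which DSL nodes) were actually executed.
@TruffleInstrument.Registration(id = "dsl-coverage", name = "DSL Node Coverage")
public final class DslCoverageInstrument extends TruffleInstrument {

    private final Set<SourceSection> covered = ConcurrentHashMap.newKeySet();

    @Override
    protected void onCreate(Env env) {
        SourceSectionFilter filter = SourceSectionFilter.newBuilder()
                .tagIs(StandardTags.StatementTag.class)
                .build();
        env.getInstrumenter().attachExecutionEventListener(filter, new ExecutionEventListener() {
            @Override
            public void onEnter(EventContext context, VirtualFrame frame) {
                covered.add(context.getInstrumentedSourceSection());
            }

            @Override
            public void onReturnValue(EventContext context, VirtualFrame frame, Object result) {
            }

            @Override
            public void onReturnExceptional(EventContext context, VirtualFrame frame, Throwable exception) {
            }
        });
    }
}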

One might think of nice visualizations:

  • Color node background based on coverage
  • Mark the currently executed part of the model
  • Project runtime values inline
  • Show traces in trace explorer

Other possible applications:

  • Snapshot mechanism for current interpreter state
  • Provide traces for offline debugging, and play them back

Part IV: Beyond MPS

Step 14: Serialize Truffle Nodes

If we could serialize Truffle Nodes (before any run-time optimization), we would have an MPS-independent representation of the executable DSL. Depending on the serialization format (implement Serializable, custom binary format, JSON, etc.), we could optimize for use-case, size, loading time, or other priorities.

Step 15: Execute DSL stand-alone without Generator

Assume an insurance calculation DSL.
Usually, we would implement

  • an interpreter to execute test cases within MPS,
  • a Generator to C to execute on the production server,
  • and a Generator to Java to provide a preview for the insurance agent.

With serialized Truffle Nodes, we need only one interpreter.

Part V: Crazy Ideas

Step 16: Step Back Debugger

By combining Instrumentation and debugger, it might be feasible to provide step-back debugging.

In the interpreter, we know the complete global state of the program, and can store deltas (to reduce memory usage). For quite a few DSLs, this might be sufficient to store every intermediate state and thus allow arbitrary debug movement.

Step 17: Side Step Debugger

By stepping back through our execution and following different execution paths, we could explore alternate outcomes. The different execution paths might stem from other input values, or from hot code replacement.

Step 18: Explorative Simulations

If we had a side step debugger, nice support to project interpretation results, and a really fast interpreter, we could run explorative simulations on lots of different execution paths. This might enable legendary interactive development.


by Niko Stotz at March 11, 2020 11:19 PM

Postmortem - February 7 storage and authentication outage

by Denis Roy at February 20, 2020 04:12 PM

On Friday, February 7 2020, Eclipse.org suffered a severe service disruption to many of its web properties when our primary authentication server and file server suffered a hardware failure.

For 90 minutes, our main website, www.eclipse.org, was mostly available, as was our Bugzilla bug tracking tool, but logging in was not possible. Wiki, Eclipse Marketplace and other web properties were degraded. Git and Gerrit were both completely offline for 2 hours and 18 minutes. Authenticated access to Jiro -- our Jenkins+Kubernetes-based CI system -- was not possible, and builds that relied on Git access failed during that time.

There was no data loss, but there were data inconsistencies. A dozen Git repositories and Gerrit code changes were in an inconsistent state due to replication schedules, but thanks to the distributed nature of Git, the code commits were still in local developer Git repositories, as well as on the failed server, which we were eventually able to revive (in an offline environment). Data inconsistencies were more severe in our LDAP accounts database, where dozens of users were unable to log in, and in some isolated cases, users reported that their account was reverted back to old data from years prior.

In hindsight, we feel this outage could have, and should have been avoided. We’ve identified many measures we must enact to prevent such unplanned outages in the future. Furthermore, our communication and incident handling processes proved to be flawed, and will be scrutinized and improved, to ensure our community is better informed during unplanned incidents.

Lastly, we’ve identified aging hardware and Single Points of Failure (SPoF) that must be addressed.

 

File server & authentication setup

At the center of the Eclipse infra is a pair of servers that handle 2 specific tasks:

  • Network Attached Storage (NAS) via NFS

  • User Authentication via OpenLDAP

The server pair consists of a primary system, which handles all the traffic, and a hot spare. Both servers are configured identically for production service, but the spare server sits idly and receives data periodically from the primary. This specific architecture was originally implemented in 2005, with periodical hardware upgrades over time.

 

Timeline of events

Friday Feb 7 - 12:33pm EST: Fred Gurr (Eclipse Foundation IT/Releng team) reports on the Foundation’s internal Slack channel that something is happening to the Infra. Denis observes many “Flaky” status reports on https://status.eclipse.org but is in transit and cannot investigate further. Webmaster Matt Ward investigates.

12:43pm: Matt confirms that our primary nfs/ldap server is not responding, and activates “Plan A: assess and fix”.

12:59pm: Denis reaches a computer and activates “Plan B: prepare for Failover” while Matt works on Plan A. The “Sorry, we are down” page is served for all Flaky services except www.eclipse.org, which continues to be served successfully by our nginx cache.

1:18pm: The standby server is ready to assume the “primary” role.

1:29pm: Matt makes the call for failover, as the severity of the hardware failure is not known, and not easily recoverable.

1:49pm: www.eclipse.org, Bugzilla, Marketplace, Wiki return to stable service on the new primary.

2:18pm: Git and Gerrit return to stable service.

2:42pm: Our Kubernetes/OpenShift cluster is updated to the latest patchlevel and all CI services restarted.

4:47pm: All legacy JIPP servers are restarted, and all other remaining services report functional.  At this time, we are not aware of any issues.

During the weekend, Matt continues to monitor the infra. Authentication issues crop up over the weekend, which are caused by duplicated accounts and are fixed by Matt.

Monday, 4:49am EST: Mikaël Barbero (Eclipse Foundation IT/Releng team) reports that there are more duplicate users in LDAP that cannot log into our systems. This is now a substantial issue. They are fixed systematically with an LDAP duplicate finder, but the process is very slow.

10:37am: First Foundation broadcast on the cross-project mailing list that there is an issue with authentication.

Tuesday, 9:51am: Denis blogs about the incident and posts a message to the eclipse.org-committers mailing list about the ongoing authentication issues. The message, however, is held for moderation and is not distributed until many hours later.

Later that day: Most duplicated accounts have been removed, and just about everything is stabilized. We do not yet understand the source of the duplicates.

Wednesday: duplicate removals continue, as well as investigation into the cause.

Thursday 9:52am: We file a dozen bugs against projects whose Git and Gerrit repos may be out of sync. Some projects had already re-pushed or rebased their missing code patches and resolved the issue as FIXED.

Friday, 2:58pm: All remaining duplicates are removed. Our LDAP database is fully cleaned. The failed server re-enters production as the hot standby - even though its hardware is not reliable. New hardware is sourced and ordered.

 

Hardware failure

The physical servers running our NAS/LDAP setup are server-class hardware: 2U chassis with redundant power supplies, ECC (error checking and correction) memory, and RAID-5 disk arrays with battery-backed RAID controller memory. Both primary and standby servers were put into production in 2011.

On February 7, the primary server experienced a kernel crash from the RAID controller module. The RAID controller detected an unrecoverable ECC memory error. The entire server became unresponsive.

As originally designed in 2005, periodical (batched) data updates from the primary to the hot spare were simple to set up and maintain. This method also had a distinct advantage over live replication: rapid recovery in case of erasure (accidental or malicious) or data tampering. Of course, this came at the cost of possible data loss. However, it was deemed that critical data (in our case, Source Code) susceptible to loss during that short window was also available on developer workstations.


Failover and return to stability

As the standby server was prepared for production service, the reasons for the crash on the primary server were investigated. We assessed the possibility of continuing service on the primary; that course of action would have provided the fastest recovery with the fewest surprises later on.

As the nature of the hardware failure remained unknown, failover was the only option. We confirmed that some data replication tasks had run less than one hour prior to failure, and all data replication was completed no later than 3 hours prior. IP addresses were updated, and one by one, services that depended on NFS and authentication were restarted to flush caches and minimize any potential for an inconsistent state.

At about 4:30pm, or four hours after the failure, both webmasters were confident that the failover was successful, and that very little dust would be left to settle over the weekend.
 

Authentication issues

Throughout the weekend, we had a few reports of authentication issues -- which were expected, since we failed over to a standby authentication source that was at least 12 hours behind the primary. These issues were fixed as they were reported, and nothing seemed out of place.

On Monday morning, Feb 10th, the Foundation’s Releng team reported that several committers had authentication issues to the CI systems. We then suspected that something else was at play with our authentication database, but it was not clear to us what had happened, or what the magnitude was. The common issue was duplicate accounts -- some users had an account in two separate containers simultaneously, which prevented users from being able to authenticate. These duplicates were removed as rapidly as we could, and we wrote scripts to identify old duplicates and purge them -- but with >450,000 accounts, it was time-consuming.

At that time, we got so wrapped up in trying to understand and resolve the issue that we completely underestimated its impact on the community, and we were absolutely silent about it.

 

Problem solved

On Friday afternoon, February 14, we were able to finally clean up all the duplicate accounts and understand why they existed in the first place.

Prior to December, 2011, our LDAP database only contained committer accounts. In December 2011, we imported all the non-committer accounts from Bugzilla and Wiki into an LDAP container we named “Community”. This allowed us to centralize authentication around a single source of truth: LDAP.

All new accounts were, and are, created in the Community container, and are moved into the Committer container if/when they become an Eclipse Committer.

Our primary->secondary LDAP sync mechanism was altered, at that time, to sync the Community container as well -- but it was purely additive. Once you had an account in Community, it was there for life on the standby server, even if you became a committer later on. Or if you’d ever change your email address. This was the source of the duplicate accounts on the standby server.

A new server pair was ordered on February 14, 2020. These servers will be put into production service as soon as possible, and the old hardware will be recommissioned to clustered service. With these new machines, we believe our existing architecture and configuration can continue to serve us well over the coming months and years.

 

Take-aways and proposed improvements

Although the outage didn’t last incredibly long (2 hours from failure to the beginning of restored service), we feel it shouldn’t have occurred in the first place. Furthermore, we’ve identified key areas where our processes can be improved - notably, in how we communicate with you.

Here are the action items we’re committed to implementing in the near term, to improve our handling of such incidents:

  • Communication: Improved Service Status page.  https://status.eclipse.org gives a picture of what’s going on, but with an improved service, we can communicate the nature of outages, the impact, and estimated time until service is restored.

  • Communication: Internally, we will improve communication within our team and establish a maintenance log, whereby members of the team can discover the work that has been done.

  • Staffing: we will explore the possibility of an additional IT hire, thus enhancing our collective skillset, and enabling more overall time on the quality and reliability of the infra.

  • Aging Hardware: we will put top-priority on resolving aging SPoF, and be more strict about not running hardware devices past their reasonable life expectancy.

    • In the longer term, we will continue our investment in replacing SPoF with more robust technologies. This applies to authentication, storage, databases and networking.

  • Process and procedures: we will allocate more time to testing our disaster recovery and business continuity procedures. Such tests would likely have revealed the LDAP sync bug.

We believe that these steps will significantly reduce unplanned outages such as the one that occurred on February 7. They will also help us ensure that, should a failure occur, we recover and return to a state of stability more rapidly. Finally, they will help you understand what is happening, and what the timelines to restore service are, so that you can plan your work tasks and remain productive.


by Denis Roy at February 20, 2020 04:12 PM

Anatomy of a server failure

by Denis Roy at February 11, 2020 02:51 PM

Last Friday, Feb 7 at around 12:30pm (Ottawa time), I received a notification from Fred Gurr (part of our release engineering team) that something was going on with the infra. The multitude of colours on the Eclipse Service Status page confirmed it -- many of our services and tools were either slow, or unresponsive.

After some initial digging, we discovered that the primary backend file server (housing Git, Gerrit, web session data, and a lot of files for our various web properties) was not responding. It was also host to our accounts database -- the center for all user authentication.

Jumping into action

It's a well-rehearsed routine for colleague Matt Ward and me -- he worked on assessing the problem and identifying the fix, while I worked on Plan B - failover to our hot standby. At around 1:35pm, roughly 1 hour into the outage, Matt made the call -- failover is the only option, as a hardware component has failed. 20 minutes later, most services had either recovered or were well on their way.

But the failover is not perfect. Data is sync'ed every 2 hours. Account and authentication info is replicated nightly. This was a by-design strategy decision, as it offers us a recovery window in case of data erasure, corruption or unauthenticated access.

Lessons learned

The failed server was put in service in 2011, celebrating its *gasp* ninth year of 24/7 service. That is a few years too many, and although it (and its standby counterpart) were slated for replacement in 2017, the effort was pushed back to make room for competing priorities. In a moment of bitter irony, the failed hardware was planned to be replaced in the second quarter of this year -- mere months away. We gambled with the house, we lost.

Cleaning up

Today, there is much dust to settle. Our authentication database has some gremlins that we need to fix, and there could be a few missing commits that were not replicated.

We also need to source replacement hardware for the failed component, so that we can re-enable our hot standby. At the same time, we need to immediately source replacement servers for those 2011 dinosaurs. They've served us well, but their retirement is long overdue.


by Denis Roy at February 11, 2020 02:51 PM

Interfacing null-safe code with legacy code

by Stephan Herrmann at February 06, 2020 07:38 PM

When you adopt null annotations like these, your ultimate hope is that the compiler will tell you about every possible NullPointerException (NPE) in your program (except for tricks like reflection or bytecode weaving etc.). Hallelujah.

Unfortunately, most of us use libraries which don’t have the blessing of annotation based null analysis, simply because those are not annotated appropriately (neither in source nor using external annotations). Let’s for now call such code: “legacy”.

In this post I will walk through the options to warn you about the risks incurred by legacy code. The general theme will be:

Can we assert that no NPE will happen in null-checked code?

I.e., if your code consistently uses null annotations, and has passed analysis without warnings, can we be sure that NPEs can only ever be thrown in the legacy part of the code? (NPEs inside legacy code are still to be expected, there’s nothing we can change about that).

With existing Eclipse versions, one category of problems would still go undetected, whereby null-checked code could still throw NPE. This has recently been fixed (see the corresponding bug report).

Simple data flows

Let’s start with simple data flows, e.g., when your program obtains a value from legacy code, like this:

[Screenshot: NullFrom_getProperty]
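
The screenshot is not reproduced here; a minimal sketch of the situation it likely shows, i.e. a legacy value assigned to a @NonNull variable, could look like this:

import java.util.Properties;

import org.eclipse.jdt.annotation.NonNull;

public class GetPropertyExample {
    void demo(Properties props) {
        // Properties.getProperty is "legacy": its return type carries no null annotation,
        // yet it returns null if the property is not found.
        @NonNull String answer = props.getProperty("answer"); // warning: unchecked conversion
        System.out.println(answer.length());                  // potential NPE at runtime
    }
}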

You shouldn’t be surprised, the javadoc even says: “The method returns null if the property is not found.” While the compiler doesn’t read javadoc, it can recognize that a value with unspecified nullness flows into a variable with a non-null type. Hence the warning:

Null type safety (type annotations): The expression of type ‘String’ needs unchecked conversion to conform to ‘@NonNull String’

As we can see, the compiler warned us, so we are urged to fix the problem in our code. Conversely, if we pass any value into a legacy API, anything bad that can happen would happen inside legacy code, so nothing needs to be done for our mentioned goal.

The underlying rule is: legacy values can be safely assigned to nullable variables, but not to non-null variables (example Properties.getProperty()). On the other hand, any value can be assigned to a legacy variable (or method argument).

Put differently: values flowing from null-checked to legacy pose no problems, whereas values flowing the opposite direction must be assumed to be nullable, to avoid problems in null-checked code.

Enter generics

Here be dragons.

As a minimum requirement we now need null annotations with target TYPE_USE (“type annotations”), but we have this since 2014. Good.

[Screenshot: NullFromLegacyList]
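
A sketch of what this example likely looks like (the class and method names are assumptions based on the description below):

import java.util.ArrayList;
import java.util.List;

import org.eclipse.jdt.annotation.NonNullByDefault;

class Legacy {
    static List<String> getNames() {    // no null annotations anywhere: "legacy"
        List<String> names = new ArrayList<>();
        names.add("Alice");
        names.add(null);                 // perfectly legal in legacy code
        return names;
    }
}

@NonNullByDefault
class Checked {
    void demo() {
        List<String> names = Legacy.getNames(); // warning: unchecked conversion to List<@NonNull String>
        System.out.println(names.size());       // works, the list itself is non-null
        for (String name : names)
            System.out.println(name.length());  // NPE for the null element
    }
}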

Here we obtain a List<String> value from a Legacy class, where indeed the list names is non-null (as can be seen by successful output from names.size()). Still things are going south in our code, because the list contained an unexpected null element.

To protect us from this problem, I marked the entire class as @NonNullByDefault, which causes the type of the variable names to become List<@NonNull String>. Now the compiler can again warn us about an unsafe assignment:

Null type safety (type annotations): The expression of type ‘List<String>’ needs unchecked conversion to conform to ‘List<@NonNull String>’

This captures the situation, where a null value is passed from legacy to null-checked code, which is wrapped in a non-null container value (the list).

Here’s a tricky question:

Is it safe to pass a null-checked value of a parameterized type into legacy code?

In the case of simple values, we saw no problem, but the following example tells us otherwise once generics are involved:
[Screenshot: NullIntoNonNullList]
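
Sketched again under the same assumptions: the null-checked code hands its List<@NonNull String> to a legacy method printNames(), which quietly inserts null:

import java.util.ArrayList;
import java.util.List;

import org.eclipse.jdt.annotation.NonNullByDefault;

@NonNullByDefault
class Checked {
    void demo() {
        List<String> names = new ArrayList<>(); // List<@NonNull String> under @NonNullByDefault
        names.add("Alice");
        Legacy.printNames(names);                // the boundary around the parameterized type is breached here
        for (String name : names)
            System.out.println(name.length());   // NPE, although this code is fully null-checked
    }
}

class Legacy {
    static void printNames(List<String> names) {
        names.add(null);                         // legacy code only sees List<String>
        for (String name : names)
            System.out.println(name);
    }
}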

Again we have a list of type List<@NonNull String>, so dereferencing values obtained from that list should never throw NPE. Unfortunately, the legacy method printNames() succeeded in breaking our contract by inserting null into the list, resulting in yet another NPE thrown in null-checked code.

To describe this situation it helps to draw boundaries not only between null-checked and legacy code, but also to draw a boundary around the null-checked value of parameterized type List<@NonNull String>. That boundary is breached when we pass this value into legacy code, because that code will only see List<String> and happily invoke add(null).

This is where I recently invented a new diagnostic message:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<String>’

By passing names into legacy code, we enable a hidden data flow in the opposite direction. In the general case, this introduces the risk of NPE in otherwise null-checked code. Always?

Wildcards

Java would be a much simpler language without wildcards, but a closer look reveals that wildcards actually don’t only help for type safety but also for null-safety. How so?

If the legacy method were written using a wildcard, it would not be (easily) possible to sneak in a null value, here are two attempts:
[Screenshot: SneakAttempts]
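
One plausible reconstruction of the two attempts (assuming the legacy method now declares a List<?> parameter; both offending lines are intentionally left in):

import java.util.List;

class Legacy {
    static void printNames(List<?> names) {
        names.add("world"); // attempt 1: outright Java type error, no String can be added to a List<?>
        names.add(null);    // attempt 2: plain Java accepts this, but Eclipse warns that 'null'
                            // is not compatible to the free type variable '?'
        for (Object name : names)
            System.out.println(name);
    }
}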

The first attempt is an outright Java type error. The second triggers a warning from Eclipse, despite the lack of null annotations:

Null type mismatch (type annotations): ‘null’ is not compatible to the free type variable ‘?’

Of course, compiling the legacy class without null-checking would still bypass our detection, but chances are already better.

If we add an upper bound to the wildcard, like in List<? extends CharSequence>, not much is changed. A lower bound, however, is an invitation for the legacy code to insert null at whim: List<? super String> will cause names.add() to accept any String, including the null value. That’s why Eclipse will also complain against lower bounded wildcards:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<? super String>’

Comparing to raw types

It has been suggested to treat legacy (not null-annotated) types like raw types. Both are types with a part of the contract ignored, thereby causing risks for parts of the program that still rely on the contract.

Interestingly, raw types are more permissive in the parameterized-to-raw conversion. We are generally not protected against legacy code inserting an Integer into a List<String> when passed as a raw List.

More interestingly, using a raw type as a type argument produces an outright Java type error, so my final attempt at hacking the type system failed:

[Screenshot: RawTypeArgument]
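
The screenshot is only paraphrased here; one plausible reconstruction of such a failed attempt, using the raw type List as a type argument, is:

import java.util.ArrayList;
import java.util.List;

class Legacy {
    @SuppressWarnings("rawtypes")
    static void printNames(List<List> names) { // raw List used as a type argument
        names.get(0).add(null);                 // raw access, no protection left
    }
}

class Checked {
    void demo() {
        List<List<String>> names = new ArrayList<>();
        Legacy.printNames(names); // outright Java type error:
                                  // List<List<String>> is not compatible with List<List>
    }
}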

Summary

We have seen several kinds of data flow with different risks:

  • Simple values flowing checked-to-legacy don’t cause any specific headache
  • Simple values flowing legacy-to-checked should be treated as nullable to avoid bad surprises. This is checked.
  • Values of parameterized type flowing legacy-to-checked must be handled with care at the receiving side. This is checked.
  • Values of parameterized type flowing checked-to-legacy add more risks, depending on:
    • nullness of the type argument (@Nullable type argument has no risk)
    • presence of wildcards, unbounded or lower-bounded.

Eclipse can detect all mentioned situations that would cause NPE to be thrown from null-checked code – the capstone to be released with Eclipse 2020-03, i.e., coming soon …


by Stephan Herrmann at February 06, 2020 07:38 PM

Eclipse and Handling Content Types on Linux

by Mat Booth at February 06, 2020 03:00 PM

Getting deep desktop integration on Linux.


by Mat Booth at February 06, 2020 03:00 PM

Setting up e(fx)clipse RCP development for Java11+ and PDE

by Tom Schindl at January 28, 2020 03:00 PM

As I’m currently converting a Java-8 project to AdoptJDK-11 and JavaFX-11+, I thought it would be a good idea to document the steps involved.

Prerequisites

I assume you have installed:

Configure your Eclipse

Java Settings

Make AdoptJDK-11 the default JRE unless it is already the default.

Make sure AdoptJDK-11 is used for the Java-SE-11 EE

e(fx)clipse Settings

Open the JavaFX-Preference Page and point it to your JavaFX-11-SDK

This step is required because JavaFX is not part of AdoptJDK-11 and hence Eclipse won’t find the libraries and your code won’t compile inside the IDE (we’ll revisit this topic below once more)

Setup a target platform

Create your project

Bootstrap your project

Check your project setup

Check if Eclipse correctly recognized the JavaFX libraries and magically added them to your plug-in dependencies

Implement the UI

Add javax.annotation to your MANIFEST.MF

Before you can write the Java code for your UI you need to add the javax.annotation package to your bundle (it used to ship with Java-8 but has been removed since then)

Create a Java-Class

package my.app.app;

import javax.annotation.PostConstruct;

import javafx.scene.control.Label;
import javafx.scene.layout.BorderPane;

public class SamplePart {
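  // Called through Eclipse 4 dependency injection once the part is created;
  // the injected BorderPane is this part's root container.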
  @PostConstruct
  void init(BorderPane root) {
    root.setCenter(
      new Label(System.getProperty("javafx.version"))
    );
  }
}

Adapt your e4xmi

Running your application

While everything happily compiles, running the application would fail, because in the initial steps we only satisfied the Eclipse compiler by magically injecting the JavaFX libraries into your plug-in dependencies (see above).

To run the application we need to decide how we’d like to ship JavaFX:

  • next to your application in a folder
  • as part of your eclipse application inside the plugins directory
  • you jlink yourself a JDK

We’ll not take a look at the 3rd solution as part of this blog post!

Running with an external folder

Open the generated launch configuration and append -Defxclipse.java-modules.dir=PATH_TO_YOUR_JAVAFX_LIBS in the VM arguments-field

Running with bundled javafx-modules

We provide OSGi bundles which contain the original and unmodified JavaFX modules (note you can NOT use them as OSGi dependencies!). You can use them by adding the p2 repository http://downloads.efxclipse.bestsolution.at/p2-repos/openjfx-11/repository/

Add them to your launch configuration

Exporting your application

The project wizard already generated the basic infrastructure for you but we need to make some small changes. We assume you’ve chosen the option to ship the JavaFX modules as part of the plugins directory to keep it simple.

The wizard already added the JavaFX-Standard-Feature into your product-File

It also added the parts to satisfy the compiler in your releng/pom.xml

While most of the stuff is already in place we need to make 2 small modifications:

  • Update the tycho-version property to 1.5.0
  • Change the export environment to match the operation-system(s) you want to target
    • Windows: os=win32, ws=win32, arch=x86_64
    • Linux: os=linux, ws=gtk, arch=x86_64
    • OS-X: os=macosx, ws=cocoa, arch=x86_64

Producing a native launcher

As we have to produce a platform-dependent build anyway, we can also add the creation of a native launcher. For that open your .product-File:

  • Tick the “The product includes native launcher artifacts”
  • Change the application to main-thread-application


by Tom Schindl at January 28, 2020 03:00 PM

JDT without Eclipse

January 16, 2020 11:00 PM

The JDT (Java Development Tools) is an important part of Eclipse IDE but it can also be used without Eclipse.

For example the Spring Tools 4, which is nowadays a cross-platform tool (Visual Studio Code, Eclipse IDE, …), relies heavily on the JDT behind the scenes. If you would like to know more, I recommend you this podcast episode: Spring Tools lead Martin Lippert

A second well-known example is the Java Formatter that is also part of the JDT. For a long time there have been Maven and Gradle plugins that perform the same formatting as the Eclipse IDE, but as part of the build (often with the possibility to break the build when the code is wrongly formatted).

Reusing the JDT has been made easier since 2017, when it was decided to publish each release and its dependencies on Maven Central (with the following groupIds: org.eclipse.jdt, org.eclipse.platform). Stephan Herrmann did a lot of work to achieve this goal. I blogged about this: Use the Eclipse Java Development Tools in a Java SE application, and I have pushed a simple example where the Java Formatter is used in a plain main(String[]) method, built by a classic minimal Maven project: java-formatter.
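
As a rough illustration of that kind of standalone usage (a minimal sketch; a real setup would load a formatter profile instead of the default options):

import java.util.Map;

import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.ToolFactory;
import org.eclipse.jdt.core.formatter.CodeFormatter;
import org.eclipse.jface.text.Document;
import org.eclipse.text.edits.TextEdit;

public class FormatterMain {
    public static void main(String[] args) throws Exception {
        String source = "class C{void m(){System.out.println(  \"hi\" );}}";

        // Create a code formatter with the default Eclipse options.
        Map<String, String> options = JavaCore.getOptions();
        CodeFormatter formatter = ToolFactory.createCodeFormatter(options);

        // The formatter returns a TextEdit describing the changes ...
        TextEdit edit = formatter.format(
                CodeFormatter.K_COMPILATION_UNIT, source, 0, source.length(), 0, System.lineSeparator());

        // ... which is applied to a Document holding the source.
        Document document = new Document(source);
        edit.apply(document);
        System.out.println(document.get());
    }
}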

Workspace or not?

When using the JDT in a headless application, two cases need to be distinguished:

  1. Some features (the parser, the formatter…) can be used in a simple Java main method.

  2. Other features (search index, AST rewriter…) require a workspace. This implies that the code runs inside an OSGi runtime.

To illustrate this aspect, I took some of the examples provided by the site www.programcreek.com in the blog post series Eclipse JDT Tutorials and I adapted them so that each code snippet can be executed inside a JUnit test. This is the Programcreek examples project.

I have split the unit-tests into two projects:

  • programcreek-standalone for the ones that do not require OSGi. The Maven project is really simple (using the default conventions everywhere)

  • programcreek-osgi for the ones that must run inside an OSGi runtime. The bnd maven plugins are configured in the pom.xml to take care of the OSGi stuff.

If you run the tests with Maven, they will work out of the box.

If you would like to run them inside an IDE, you should use one that starts OSGi when executing the tests (in the same way the maven build is doing it). To get a bnd aware IDE, you can use Eclipse IDE for Java Developers with the additional plugin Bndtools installed, but there are other possibilities.

Source code can be found on GitHub: programcreek-examples


January 16, 2020 11:00 PM

Oracle made me a Stackoverflow Guru

by Stephan Herrmann at January 16, 2020 06:40 PM

Just today Oracle helped me to become a “Guru” on Stackoverflow! How did they do it? By doing nothing.

In former times, I was periodically enraged when Oracle didn’t pay attention to the feedback I was giving them during my work on ecj (the Eclipse Compiler for Java) – at least not the attention that I had hoped for (to be fair: there was a lot of good communication, too). At those times I had still hoped I could help make Java a language that is completely and unambiguously defined by specifications. Meanwhile I recognized that Java is at least three languages: the language defined by JLS etc., the language implemented by javac, and the language implemented by ecj (with no chance to make ecj conform to both of the others). I realized that we were not done with Java 8 even 3 years after its release. Three more years later it’s still much the same.

So let’s move on; haven’t things improved in subsequent versions of Java? One of the key new rules in Java 9 is that

“If [a qualified package name] does not name a package that is uniquely visible to the current module (§7.4.3), then a compile-time error occurs”.

Simple and unambiguous. That’s what compilers have to check.

Except: javac doesn’t check for uniqueness if one of the modules involved is the “unnamed module”.

In 2018 there was some confusion about this, and during discussion on stackoverflow I raised this issue to the jigsaw-dev mailing list. A bug was raised against javac, confirmed to be a bug by spec lead Alex Buckley. I summarized the situation in my answer on stackoverflow.

This bug could have been easily fixed in javac version 12, but wasn’t. Meanwhile upvotes on my answer on stackoverflow started coming in. The same for Java 13. The same for Java 14. And yet no visible activity on the javac bug. You need ecj to find out whether your program violates this rule of the JLS.

Today the 40th upvote earned me the “Guru” tag on stackoverflow.

So, please Oracle, keep that bug unresolved, it will earn me a lot of reputation for a bright future – by doing: nothing 🙂


by Stephan Herrmann at January 16, 2020 06:40 PM

Building and running Equinox with maven without Tycho

January 12, 2020 11:00 PM

Eclipse Tycho is a great way to let maven build PDE based projects. But the Plug-in Development Environment (PDE) model is not the only way to work with OSGi.

In particular, for the last 2 or 3 years the Eclipse Platform jars (including the Equinox jars) have been regularly published on Maven Central (check the artifacts having org.eclipse.platform as groupId).

I was looking for an alternative to P2 and to the target-platform mechanism.

bnd and bndtools logo

Bnd and Bndtools are always mentioned as potential alternative to PDE (I attended several talks discussing this at EclipseCon 2018: Migrating from PDE to Bndtools in Practice, From Zero to a Professional OSGi Project in Minutes). So I decided to explore this path.

This StackOverflow question caught my attention: How to start with OSGi. I had a close look at the answer provided by Peter Kriens (the founder of the Bnd and Bndtools projects), where he discusses the different possible setups:

  • Maven Only

  • Gradle Only

  • Eclipse, M2E, Maven, and Bndtools

  • Eclipse, Bndtools, Gradle

Even in the "Maven Only" or "Gradle Only" setups, the proposed solution relies on plugins using bnd under the hood.

How to start?

My project is quite simple, the dependencies are already on maven central. I will not have a complex use-case with multiple versions of the same library or with platform dependent artifacts. So fetching the dependencies with maven is sufficient.

I decided to try the "Maven Only" model.

How to start?

I was not sure I understood how to use the different bnd maven plugins: bnd-maven-plugin, bnd-indexer-maven-plugin, bnd-testing-maven-plugin, bnd-export-maven-plugin

Luckily I found the slides of the Bndtools and Maven: A Brave New World workshop (given at EclipseCon 2017) and the corresponding git repository: osgi-community-event2017.

The corresponding effective-osgi maven archetypes used during the workshop are still working well. I could follow the step-by-step guide (in the readme of the maven archetypes project). I got everything working as described and I could find enough explanations about the generated projects. I think I understood what I did and this is very important when you start.

After some cleanup and a switch from Apache Felix to Eclipse Equinox, I got my running setup and I answered my question: "How to start with OSGi without PDE and Tycho".

The corresponding code is in this folder: effectiveosgi-example.


January 12, 2020 11:00 PM

4 Years at The Linux Foundation

by Chris Aniszczyk at January 03, 2020 09:54 AM

Late last year marked the 4th year anniversary of the formation of the CNCF and me joining The Linux Foundation:

As we enter 2020, it’s amusing for me to reflect on my decision to join The Linux Foundation a little over 4 years ago when I was looking for something new to focus on. I spent about 5 years at Twitter which felt like an eternity (the average tenure for a silicon valley employee is under 2 years), focused on open source and enjoyed the startup life of going from a hundred or so engineers to a couple of thousand. I truly enjoyed the ride, it was a high impact experience where we were able to open source projects that changed the industry for the better: Bootstrap (changed front end development for the better), Twemoji (made emojis more open source friendly and embeddable), Mesos (pushed the state of art for open source infrastructure), co-founded TODO Group (pushed the state of corporate open source programs forward) and more!

When I was looking for change, I wanted to find an opportunity that could impact more than I could just do at one company. I had some offers from FAANG companies and amazing startups but eventually settled on the nonprofit Linux Foundation because I wanted to build an open source foundation from scratch, teach other companies about open source best practices and assumed non profit life would be a bit more relaxing than diving into a new company (I was wrong). Also, I was thoroughly convinced that an openly governed foundation pushing Kubernetes, container specifications and adjacent independent cloud native technologies would be the right model to move open infrastructure forward.

As we enter 2020, I realize that I’ve been with one organization for a long time and that puts me on edge as I enjoy challenges, chaos and dread anything that makes me comfortable or complacent. Also, I have a strong desire to focus on efforts that involve improving the state of security and privacy in a connected world, participatory democracy, climate change; also anything that pushes open source to new industries and geographies.

While I’m always happy to entertain opportunities that align to my goals, the one thing that I do enjoy at the LF is that I’ve had the ability to build a variety of new open source foundations improving industries and communities: CDF, GraphQL Foundation, Open Container Initiative (OCI), Presto Foundation, TODO Group, Urban Computing Foundation and more.

Anyways, thanks for reading and I look forward to another year of bringing open source practices to new industries and places, the world is better when we are collaborating openly.


by Chris Aniszczyk at January 03, 2020 09:54 AM

An update on Eclipse IoT Packages

by Jens Reimann at December 19, 2019 12:17 PM

A lot has happened since I wrote last about the Eclipse IoT Packages project. We had some great discussions at EclipseCon Europe, and started to work together online, having new ideas in the process. Right before the end of the year, I think it is a good time to give an update, and peek a bit into the future.

Homepage

One of the first things we wanted to get started on was a home for the content we plan on creating. An important piece of the puzzle is to explain to people what we have in mind. Not only for people that want to try out the various Eclipse IoT projects, but also for possible contributors. And in the end, an important goal of the project is to attract interested parties. For consuming our ideas, or growing them even further.

Eclipse IoT Packages logo

So we now have a logo and a homepage, built using templates in a continuous build system. We are in a position to start focusing on the actual content, and on the more tricky tasks and questions ahead. And should you want to create a PR for the homepage, you are more than welcome. There is also already some content, explaining the main goals, the way we want to move forward, and a demo of a first package: “Package Zero”.

Community

While the homepage is a good entry point for people to learn about Eclipse IoT and packages, our GitHub repository is the home for the community. Having some great discussions on GitHub quickly brought up the need for a community call and a more direct communication channel.

If you are interested in the project, come and join our bi-weekly community call. It is a quick, 30 minutes call at 16:00 CET, and open to everyone. Repeating every two weeks, starting 2019-12-02.

The URL to the call is: https://eclipse.zoom.us/j/317801130. You can also subscribe to the community calendar to get a reminder.

In between calls, we have a chat room eclipse/packages on Gitter.

Eclipse IoT Helm Chart Repository

One of the earliest discussions we had was around the question of how and where we want to host the Helm charts. We would prefer not to author them ourselves, but let the projects contribute them. After all, the IoT packages project has the goal of enabling you to install a whole set of Eclipse IoT projects with only a few commands. So the focus is on the integration, and the expert knowledge required for creating a project’s Helm chart stays in the actual projects.

On the other side, having a one-stop shop, for getting your Eclipse IoT Helm charts, sounds pretty convenient. So why not host our own Helm chart repository?

Thanks to a company called Kiwigrid, who contributed a CI pipeline for validating charts, we could easily extend our existing homepage publishing job to also publish Helm charts. As a first chart, we published the Eclipse Ditto chart. And, as expected with Helm, installing it is as easy as a single helm install command.

Of course having a single chart is only the first step. Publishing a single Helm chart isn’t that impressive. But getting an agreement in the community, getting the validation and publishing pipeline set up, attracting new contributors, that is definitely a big step in the right direction.

Outlook

I think that we now have a good foundation, for moving forward. We have a place called “home”, for documentation, code and community. And it looks like we have also been able to attract more people to the project.

While our first package, “Package Zero”, still isn’t complete, it should be pretty close. Creating a first, joint deployment of Hono and Ditto is our immediate focus. And we will continue to work towards a first release of “Package Zero”. Finding a better name is still an item on the list.

Having this foundation in place also means, that the time is right, for you to think about contributing your own Eclipse IoT Package. Contributions are always welcome.

The post An update on Eclipse IoT Packages appeared first on ctron's blog.


by Jens Reimann at December 19, 2019 12:17 PM

Xtext 2.20 Release

by Karsten Thoms (thoms@itemis.de) at December 03, 2019 02:38 PM

Right on time for the Eclipse 2019-12 Simultaneous Release, we have shipped Xtext 2.20. This time we focussed more on maintenance work than on features. As with each release, the world around us is spinning fast, and keeping the whole technology stack up-to-date and testing against it is quite time consuming.

Let’s talk about Xtend

For a long time, the Java language missed some features that could make a developer’s life easier. This was one of the reasons that a broad range of languages running on the Java Virtual Machine (JVM) became popular, Xtend being one of them. With its powerful lambda expressions, extension methods, and template support, Xtend had some sweet spots back in 2013, which Java did not have. And even with the availability of lambdas with Java 8, it took some years for projects to catch up with that. Xtend provided this for years, while still being able to produce Java 1.6-compliant code.

Now the (Java) world has changed, and some nice language features have been added to Java, making the gap to Xtend smaller. Back in 2013, we claimed Xtend to be the “Java 10 of today”. We are realistic enough to state that Xtend is not and will not be the “Java 17 of today”. However, there are still areas where we see Xtend as beneficial over other Java and other JVM languages. To be more specific, we still think that Xtend is the most powerful language supporting template expressions. The most common use case for this are code generators. Besides that, writing unit tests with Xtend feels much cleaner than with Java.

However, we decided to encourage using Xtend only for these areas, and not as the primary general-purpose language. And we start doing this with the “New Project” wizard. The configuration that this wizard creates for a new Xtext project will now use Java as the language for generated skeleton classes, so that newly-created projects (and especially new users) are using Java by default. This is just a changed default for the generated MWE2 workflow, and users who still prefer to use Xtend for the generated artifacts can simply modify the workflow file. We expect that those users are advanced anyway. Xtend will stay the default language for the code generator and unit test fragments.

Additionally, we have started to clean up the code base and to refactor some of the Xtend code to Java. As Xtend already is compiled into Java, this basically means that we take those sources and clean them up. This will be an ongoing maintenance work. If you like to contribute to Xtext, this would be a good starting point for refactoring contributions.

New Xtend features

That being said, there is some good news about features that have been added to Xtend’s Eclipse integration. We are very happy about some useful contributions from Vivien Jovet in this area.

A new refactoring has been implemented that allows the user to refactor a call to a static method either as static import or as a static extension. This allows the user to produce more readable and fluent code.

[Screenshot: Xtext_Release_2_20_refactoring_import_static_method]

 

The testing support for Xtend has been improved:

  • An Xtend unit test can now be triggered within the Eclipse IDE when the cursor is located around the test class definition.
  • As known from JDT’s JUnit integration, Xtend now also provides quickfixes if the JUnit library is missing on the classpath. By using the quickfix, the library can be added for either JUnit 4 or 5.

It’s time to get rid off old generator workflows

Already back in 2015, we changed to new Xtend-based generator fragments and deprecated the old Xpand-based language generator. If you still use an old generator workflow based on the org.eclipse.xtext.generator bundle (the new bundle is org.eclipse.xtext.xtext.generator, please note the duplicated .xtext segment), then it is time for you to finally take action!

The old generator is based on the Xpand language, which has been dormant for a while. We are refactoring Xtext to avoid any dependency on Xpand, except for the deprecated generator bundle. Also, we do not change the old generator templates anymore, so we strongly recommend to use the maintained new generator infrastructure. Although it is not scheduled yet, dropping the whole old generator completely is just a matter of time. So, please, if you still have any anciently-structured Xtext projects, migrate them to the recommended infrastructure! If you need help on this, get in contact with us. We have enough experience to help you quickly on that.

Create new projects and files from the toolbar

If you want to allow creation of projects and files for your DSL from the toolbar, then this is good news for you: The fragments for generating the infrastructure for wizards have been extended by an option called generateToolbarButton. As the name already suggests, the generator fragments will generate the button to the toolbar, if this option is enabled in the fragment’s configuration in your generator workflow.

Making our maintenance work easier

With 4 releases per year and 3 milestone releases towards any release, it is quite some effort to make these releases. As we finished our hopefully last build infrastructure change to Eclipse JIRO with the previous release, we were able to invest a bit of time into enhancing our build pipelines again.

As a result, initiating a milestone or final release is mostly a matter of triggering a parameterized build job and then waiting several hours until everything has been built. Actually, while I’m writing this article, the final Xtext release is being built for you, triggered 3.5 hours ago. Yes, it still builds that long. And it is still painful to orchestrate the build over all Xtext repositories. There are still some steps that require manual action (releasing to Maven Central, updating Eclipse Marketplace, sending notifications to the communication channels), but we are slowly adding all automatable tasks to the pipelines.

Also, we interacted with the Eclipse infrastructure staff to get us in the position that our technical build user is able to raise pull requests on GitHub automatically. This enabled us to create a bot update pipeline that lets us automate some frequently occurring update changes. This is, for example, updating the version, versions to use (like Tycho), the Orbit URL, etc. The job raises pull requests for us, so we can safely verify that nothing is missing and that everything is properly built. It is very much like these dependency update bots like Dependabot that are coming up more and more, but tightly tailored to the very specific needs of the Xtext project. We are still at the beginning here. Some first pull requests merged for 2.20 have been created by the bot job. We expect that the bot will be triggered automatically in the future and that the bot user will become one of the most active Xtext contributors then.

Conclusion

Xtext 2.20 is a maintenance release. For users of a recent Xtext version it will be a drop-in replacement. Users of old versions and project structures are recommended to upgrade their projects, in order to keep their projects compatible.

The Xtext project started to discourage the usage of Xtend where the latter’s language features do not have a significant benefit over Java. And internally, the project started to refactor the codebase to follow this recommendation.

For build and release engineering, the project improved towards more automated tasks and benefits from reduced manual maintenance tasks.

The project team is happy about receiving contributions. We are especially grateful for new feature ideas that are actively developed by contributors.

Do you want to know more? Have a look at the release notes for Xtext & Xtend.


by Karsten Thoms (thoms@itemis.de) at December 03, 2019 02:38 PM

Obeo Cloud Platform

December 02, 2019 10:00 AM

TLDR; This is almost the story of my first CTO pregnancy experience, organizational stuff inside.

It’s been almost 2 years since I started operating as Obeo’s CTO, 2 years since I accepted the challenge to take the lead of our R&D. As part of this, almost 1 year ago I started to organize the development of our new generation modeling tool solution. And for the past 9 months my team has been busy working full time on this new product.

When you become pregnant a product manager, you are basically a story trigger for everyone around you: people love to tell their failure-project and their stupid-leader story. It is scary to be at the place where you are the one who decides. It is even more frightening when you are at the point where you redesign your products with a completely different technological layer. This is where I was one year ago. At Obeo, we have been developing for years modeling workbenches based on the Eclipse Platform. Today our customers want more and asked us to modernize the modeling stack by making it cloud compliant. So how do we go from this statement to a first software release?

If you’re a product manager or affiliate, you’re probably aware that the first nine months of a new software product are always a big adventure. CTOs-to-be may have a lot of questions about what they can expect and the changes they’ll go through. Do you know when to expect to feel your product move? When to look for a UI/UX designer or a continuous deployment pipeline? Customer interest on what you are developing? Is that the signs of preterm labor?

1st month - Your imagination has no limit

The beginning of a new product is not easy. Where should we start? Should we rely on the prototypes we already have? Should we build a completely new project? How will you build your team? Who will be in? When? What will be the first scenario supported in your software? How should you be organized? These are the first questions you will come across.

What I will remember from that first month is that at some point you have to make decisions: that is why you are here, that’s your job, and that will remind you why there is a C in CTO. You have the power of choice. There is no single good solution for everything. When every other person has a piece of advice for you, you start to understand that not everything works for everyone. I found what I believe works best for me. Of course, what works for me may not work for you: it depends on your company’s technological background but also on how you want to get your collaborators engaged. Each project, team and context is different.

Take the time to discuss this with others in your company, take the time to find out what they need but also what they are dreaming about for this new product. I developed my product vision, for example, by asking people to fill in a survey with questions such as: in 1, 2 and 5 years, what is the final goal, what are the business needs, the users’ expectations, the success factors, etc.? See the product vision template available on my GitHub for details.

Product Vision Template

Write a summary of all this. And in the end, find your own voice in the ocean of opinions. Build your team, define your first scenario, share what you decided with everyone and do not hesitate to fail! Your first organisation might not be the best one: try, learn, update, reorganize, try something else… You are not alone in this task, this is a collective effort. The entire team cares about its organization and discusses it regularly by holding retrospectives, having a dedicated moment during regular meetings, defining processes to improve collaboration, selecting tools…

Your team will go through the same questioning period you just went through. Which technical bricks will they rely on? Which parts of our existing code will be integrated into this new project? Should we start by building the simplest scenario, without taking into account the whole complexity of the end product we are trying to build? What, in the end, should the software architecture be?

An important point: give your team some time to discuss all those things, but ask them to produce something even during that period. In our case, we wrote documents (AsciiDoc rocks!) and decided to keep a trace of all our decisions. For that, we used ADRs - Architectural Decision Records: reading them is like reading the pocket reference of your project. After several months of use, this has turned out to be really useful. It helps people who work part-time on the project understand what we decided and why, and it helps everyone remember why we took a given decision. It forces us to discuss and validate all the important decisions together. You also need to end this questioning period at some point, by pushing your team to contribute code instead of losing them in the limbo of an imaginary product.
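
To give an idea of what such a record looks like, here is a minimal ADR sketch following the commonly used title/status/context/decision/consequences structure; the headings and numbering are illustrative, not the actual records we wrote:

= ADR 001: <short title of the decision>
Status: accepted (or proposed, superseded, ...)
Context: the situation or constraint that forces us to decide, in one or two sentences.
Decision: what we chose to do, stated in full sentences.
Consequences: what becomes easier or harder because of this decision.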

Mid months - Scared! Believe! Realize!

Pregnancy, giving birth to software and the first 3 months are such an emotional roller coaster with so many changes. As you move forward on your development journey, it is amazing to discover all the things your sweety-software can do before he is born. We went through different themes during these months: Persistence, CRUD, Diagrams, Properties views, Editing, Concurrency, Authentication… Then everything is on track, you are organized as a team, code is being produced, your scenarios are getting richer and richer. And at some point, we realized we were actually growing life within us. It did not happen right when we started the project.

I felt it at three specific moments:

  • When we chose the name - Obeo Cloud Platform,
  • When I had a first ultrasound scan preview of the UI - fortunately we have a Design team to work with,
  • And when he kicked for the first time - I mean his first public live demo.

We will also never forget our baby shower celebration at EclipseCon Europe! This was an amazing moment for us. We were so happy to present our new work to the Sirius community. That day I was feeling the kicks in my belly. As Steve Jobs put it, “A lot of times, people don’t know what they want until you show it to them.” That’s why we decided to give a preview of our new product even before it is polished, and why we launched a beta testing team. The idea is to give them preview access and organize live remote testing sessions to grow the feedback river with real end users. Join us now and share your needs!

8-9th months - You are (almost) releasing!

Today, I can hardly realise we are already in the last months of pregnancy: we are getting closer and closer to our due date. I have this mixed feeling of excitement and nervousness and want our software product to arrive as soon as possible.

Our product is getting ready for birth, and the whole Obeo family is preparing to welcome a new member. You are also invited to join us; stay tuned for the upcoming SiriusCon Live in Q1 2020.

At the end of this year, in his first days, our software product will be tiny and little. It will know only a few things but will do them well. Next we will continue to feed him: baby bottles first, and then we will go through the diversification phase, introducing new kinds of food. My team will help him grow; you, our users and our customers, will be part of his education. We have many projects and plans, but we need you to turn real customer problems into product features.

Sometimes it’s important to stop and look back. It really made me appreciate and comprehend what an amazing experience my company and I went through ❤ and are still going through.

Hope you had a good read.


December 02, 2019 10:00 AM

Eclipse m2e: How to use a WORKSPACE Maven installation

by kthoms at November 27, 2019 09:39 AM

Today a colleague of mine asked me about the Maven Installations preference page in Eclipse. There is an entry WORKSPACE there, which is disabled and shows NOT AVAILABLE. He wanted to know how to enable a workspace installation of Maven.

Since neither of us could find documentation for the feature, I dug into the m2e sources and found the class MavenWorkspaceRuntime. The relevant snippets are the getMavenDistribution() method and the MAVEN_DISTRIBUTION constant:

private static final ArtifactKey MAVEN_DISTRIBUTION = new ArtifactKey(
      "org.apache.maven", "apache-maven", "[3.0,)", null); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$

...

protected IMavenProjectFacade getMavenDistribution() {
  try {
    VersionRange range = VersionRange.createFromVersionSpec(getDistributionArtifactKey().getVersion());
    for(IMavenProjectFacade facade : projectManager.getProjects()) {
      ArtifactKey artifactKey = facade.getArtifactKey();
      if(getDistributionArtifactKey().getGroupId().equals(artifactKey.getGroupId()) //
          && getDistributionArtifactKey().getArtifactId().equals(artifactKey.getArtifactId())//
          && range.containsVersion(new DefaultArtifactVersion(artifactKey.getVersion()))) {
        return facade;
      }
    }
  } catch(InvalidVersionSpecificationException e) {
    // can't happen
  }
  return null;
}

From here you can see that m2e looks through the workspace (Maven) projects and tries to find one that has the coordinates org.apache.maven:apache-maven:[3.0,).

So the answer to how to enable a WORKSPACE Maven installation is: import the apache-maven project into the workspace. Here is how to do it:

  1. Clone Apache Maven from https://github.com/apache/maven.git
  2. Optionally: check out a release tag
    git checkout maven-3.6.3
  3. Perform File / Import / Existing Maven Projects
  4. As Root Directory select the apache-maven subfolder in your Maven clone location

Now you will have the project that m2e searches for in your workspace:

And the Maven Installations preference page lets you now select this distribution:


by kthoms at November 27, 2019 09:39 AM

Modernizing our GitHub Sync Toolset

November 19, 2019 08:10 PM

I am happy to announce that my team is ready to deploy a new version of our GitHub Sync Toolset on November 26, 2019 from 10:00 to 11:00 am EST.

We are not expecting any disruption of service, but it’s possible that some committers may lose write access to their Eclipse project GitHub repositories during this 1-hour maintenance window.

This toolset is responsible for synchronizing Eclipse committers across all our GitHub repositories and, on top of that, this new release will start synchronizing contributors.

In this context, a contributor is a GitHub user with read access to the project GitHub repositories. This new feature will allow committers to assign issues to contributors who currently don’t have write access to the repository. This feature was requested in 2015 via Bug 483563 - Allow assignment of GitHub issues to contributors.

Eclipse committers are responsible for maintaining a list of GitHub contributors from their project page on the Eclipse Project Management Infrastructure (PMI).

To become an Eclipse contributor on GitHub for a project, please make sure to enter your GitHub username in your Eclipse account.


November 19, 2019 08:10 PM

Jakarta Microprofile REST Client in Eclipse

by Christian Pontesegger (noreply@blogger.com) at November 18, 2019 11:19 AM

Today we are going to implement a simple REST client for an Eclipse RCP application. Now with Jakarta @ Eclipse and all these nice Microprofile implementations this should be a piece of cake, right? Let's see...

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Dependencies

The Eclipse Microprofile REST Client repository is a good place to get started. It points to several implementations (at the bottom of the readme). Unfortunately, these implementations do not host any kind of p2 site which we could use directly. So our next stop is Eclipse Orbit, but it is the same situation there. This means we need to collect our dependencies manually.

For my example I used RESTEasy, simply because it was the only one I could get working within a reasonable time. To fetch dependencies, download the latest version of RESTEasy. As the RESTEasy download package does not contain the REST client API, we need to fetch that from another source. I found it in the Apache CXF project, so download the latest version of that too. If you know a better source, please let me know in the comments.

Now create a new Plug-in from Existing JAR Archives. Click on Add External... and add all jars from resteasy-jaxrs-x.y.z.Final/lib/*.jar. Further, add apache-cxf-x.y.z/lib/jakarta.ws.rs-api-x.y.z.jar.
This plug-in now contains all the dependencies we need for our client. Unfortunately it also contains a lot of other stuff we probably do not need, but we will leave the cleanup for later.

Step 2: Define the REST service

For our example we will build a client for the Petstore Service, which can be used for testing purposes. Further, it provides a Swagger interface to test the REST calls online. I recommend checking out the API and playing with some commands online and with curl.

Let's write a simple client for the store with its 4 commands. The simplest seems to be the inventory command, so we will start there. Create a new Java interface:
package com.codeandme.restclient.resteasy;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public interface IStoreService {

    @GET
    @Path("/v2/store/inventory")
    @Produces(MediaType.APPLICATION_JSON)
    Response getInventory();
}
Everything necessary for RESTEasy is provided via annotations:

  • @Path defines the path for the command of the REST service
  • @GET defines that we have to use a GET command (there are also annotations for POST, DELETE, PUT)
  • @Produces finally defines the type of data we get in response from the server.
Step 3: Create an instance of the service

Create a new class StoreServiceFactory:
package com.codeandme.restclient.resteasy;

import java.net.URI;
import java.net.URISyntaxException;

import org.jboss.resteasy.client.jaxrs.ResteasyClient;
import org.jboss.resteasy.client.jaxrs.ResteasyWebTarget;
import org.jboss.resteasy.client.jaxrs.internal.ResteasyClientBuilderImpl;

public class StoreServiceFactory {

    public static IStoreService createStoreService() throws URISyntaxException {
        ResteasyClient client = new ResteasyClientBuilderImpl().build();
        ResteasyWebTarget target = client.target(new URI("https://petstore.swagger.io/"));
        return target.proxy(IStoreService.class);
    }
}

This is the programmatic way to create a client instance. There is also another approach using CDI, which I did not try out in Eclipse.

The service is ready and usable, so give it a try. The returned result object contains some valuable information (see the usage sketch after this list):

  • getStatus() provides the HTTP response status. 200 is expected for a successful getInventory()
  • getEntity() provides an InputStream which contains the JSON encoded response data from the server
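
As an illustration, here is a minimal usage sketch assuming the interface and factory defined above; the class name StoreServiceDemo is just an example, and the actual values printed depend on the live Petstore service:

package com.codeandme.restclient.resteasy;

import java.io.InputStream;
import java.net.URISyntaxException;

import javax.ws.rs.core.Response;

public class StoreServiceDemo {

    public static void main(String[] args) throws URISyntaxException {
        IStoreService service = StoreServiceFactory.createStoreService();

        // performs the GET request against /v2/store/inventory
        Response response = service.getInventory();

        // 200 is expected for a successful call
        System.out.println("HTTP status: " + response.getStatus());

        // the entity is an InputStream containing the JSON encoded payload
        InputStream payload = (InputStream) response.getEntity();
        System.out.println("Payload stream: " + payload);
    }
}
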
Step 4: Response decoding

Our response is encoded as a JSON collection of properties. In Java terms this basically maps to a Map<String, String>. Instead of decoding the data manually, we let the framework do it for us:

Change the IStoreService to:

 Map<String, String> getInventory();
Everything else is done by the framework. Now how easy was that?

Step 5: POST request

To place an order we need order parameters. It is best to encapsulate them in a dedicated Order class. From the definition of the order REST call we can see that we need the following class properties: id, petId, quantity, shipDate, status, complete. Add these parameters as fields to the Order class and create getters/setters for them.
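
A minimal sketch of such an Order class is shown below; the field types are assumptions derived from the Petstore API description and may differ from the tutorial sources:

package com.codeandme.restclient.resteasy;

public class Order {

    // fields mirror the JSON properties of the Petstore order resource
    private long id;
    private long petId;
    private int quantity;
    private String shipDate;
    private String status;
    private boolean complete;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public long getPetId() { return petId; }
    public void setPetId(long petId) { this.petId = petId; }

    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }

    public String getShipDate() { return shipDate; }
    public void setShipDate(String shipDate) { this.shipDate = shipDate; }

    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }

    public boolean isComplete() { return complete; }
    public void setComplete(boolean complete) { this.complete = complete; }
}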

Now we can extend our IStoreService with the fileOrder() call:


@Path("/v2/store")
public interface IStoreService {

@GET
@Path("inventory")
@Produces(MediaType.APPLICATION_JSON)
Map<String, String> getInventory();

@POST
@Path("order")
@Consumes(MediaType.APPLICATION_JSON)
void fileOrder(Order order);
}

The Order automatically gets encoded as a JSON object. No need for us to do the encoding manually!

As parts of the path are the same for both calls, I moved the common component to the class level.
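
To try it out, here is a minimal usage sketch assuming the Order fields sketched above; the values are purely illustrative:

// place a test order via the POST call
IStoreService service = StoreServiceFactory.createStoreService();

Order order = new Order();
order.setId(1);
order.setPetId(42);
order.setQuantity(1);
order.setStatus("placed");
order.setComplete(false);

service.fileOrder(order);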

Step 6: Path parameters

To fetch an order we need to put the orderId in the request path. Such parameters are encoded in curly braces in the path. The parameter on the Java method then gets annotated so the framework knows which parameter value to put into the path:

@GET
@Path("order/{orderId}")
@Produces(MediaType.APPLICATION_JSON)
Order getOrder(@PathParam("orderId") int orderId);

Again the framework takes care of the decoding of the JSON data.

Step 7: DELETE an Order

Deleting needs the orderId as before:

@DELETE
@Path("order/{orderId}")
void deleteOrder(@PathParam("orderId") int orderId);

The REST API does not provide a useful JSON response to the delete call. One option is to set the response type to void. In case the command fails, an exception will be thrown (e.g. when the orderId is not found and the server returns a 404).

Another option is to set the return type to javax.ws.rs.core.Response. Now we get everything the server sends back and no exception is thrown anymore. Sometimes we might only be interested in the status code; this can be fetched by setting the return type to Response.Status. Again, no exception will be thrown on a 404.
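
As a small sketch of that last variant, following the description above (only the return type changes):

// returns just the HTTP status; no exception is thrown on a 404
@DELETE
@Path("order/{orderId}")
Response.Status deleteOrder(@PathParam("orderId") int orderId);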

Optional: Only have required RESTEasy dependencies

Looking at all these jars I could not figure out a good way to get rid of the ones unused by the REST client. So I provided unit tests for all my calls and then removed dependencies step by step until I found the minimal set of required jars.




by Christian Pontesegger (noreply@blogger.com) at November 18, 2019 11:19 AM

Getting to the Source

by Ed Merks (noreply@blogger.com) at November 08, 2019 09:04 AM

As a Java developer using JDT, no doubt you are intimately familiar with Ctrl-Shift-T to launch the Open Type dialog.  You might not even realize this is a shortcut accessible via the Navigate menu.  So you probably will not have noticed that this menu also contains Open Discovered Type:


Eclipse has a huge variety of open source projects maintained in a bewildering collection of Git repositories.  Many are hosted at Eclipse:
https://git.eclipse.org/c/
Others are hosted at Github:
https://github.com/eclipse/

Finding the Git repository that contains a particular Java class is like finding a needle in a haystack.  This is where Open Discovered Type comes to the rescue.  Once a week, Oomph indexes every *.java file in every Git repository hosted by git.eclipse.org and github.com/eclipse.  The Open Discovered Type dialog loads this information to populate a tree view of all these packages and classes.


Please read the help information the first time you use it.  It was written to help you get the most out of this dialog.  Also be patient the first time you launch the dialog; there's a lot of information to download.

Suffice to say, you can use the dialog much like you do Open Type.  So here we search for JavaCore and discover all the classes with that name:


We can select any one of them to discover all the Git repositories containing that class, and we can use the context menu on the link for each repository, or for the specific file in that repository, to open the link where we want it opened.  From that link, you can of course see the full history of the repository or of the specific file.

As a bonus, if this repository provides an Oomph setup, you can easily use that Oomph setup to import the sources for this project into your workspace. If there is no Oomph setup, you'll have to do that manually.

In any case, contributing to Eclipse open source projects has never been easier.

by Ed Merks (noreply@blogger.com) at November 08, 2019 09:04 AM

Eclipse startup up time improved

November 05, 2019 12:00 AM

I’m happy to report that the Eclipse SDK integration build starts in less than 5 seconds (~4900 ms) into an empty workspace on my machine. IIRC this used to be around 9 seconds 2 years ago. 4.13 (which was already quite a bit improved) used around 5800 ms (6887 ms with EGit and Marketplace). For recent improvements in this release see https://bugs.eclipse.org/bugs/show_bug.cgi?id=550136. Thanks to everyone who contributed.

November 05, 2019 12:00 AM

Setup a Github Triggered Build Machine for an Eclipse Project

by Jens v.P. (noreply@blogger.com) at October 29, 2019 12:55 PM

Disclaimer 1: This blog post is literally a "web log", i.e., it is my log about setting up a Jenkins machine with a job that is triggered by a GitHub pull request. A lot of the parts have been described elsewhere, and I link to the sources I used here. I also know that nowadays (e.g., with the new Eclipse build infrastructure) you usually do that via Docker -- but then you need to configure Docker, in which

by Jens v.P. (noreply@blogger.com) at October 29, 2019 12:55 PM

LiClipse 6.0.0 released

by Fabio Zadrozny (noreply@blogger.com) at October 25, 2019 06:59 PM

LiClipse 6.0.0 is now out.

The main change is that many dependencies have been updated:

- it's now based on Eclipse 4.13 (2019-09), which is a pretty nice upgrade (in my day-to-day use I find it appears smoother than previous versions, although I know this sounds pretty subjective).

- PyDev was updated to 7.4.0, so Python 3.8 (which was just released) is now supported.

Enjoy!

by Fabio Zadrozny (noreply@blogger.com) at October 25, 2019 06:59 PM

Qt World Summit 2019 Berlin – Secrets of Successful Mobile Business Apps

by ekkescorner at October 22, 2019 12:39 PM

Qt World Summit 2019

Meet me at Qt World Summit 2019 in Berlin

QtWS19_globe

I’ll speak about development of mobile business apps with

  • Qt 5.13.1+ (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10

ekkes_session_qtws19

Qt World Summit 2019 Conference App

As a little appetizer I developed a conference app. For instructions on how to download it from the Google Play Store or the Apple App Store, plus some more screenshots, see here.

02_sessions_android

sources at GitHub

cu in Berlin


by ekkescorner at October 22, 2019 12:39 PM

A nicer icon for Quick Access / Find Actions

October 20, 2019 12:00 AM

Finally we use a decent icon for Quick Access / Find Actions. This is now a button in the toolbar which allows you to trigger arbitrary commands in the Eclipse IDE.

October 20, 2019 12:00 AM

A Tool for Jakarta EE Package Renaming in Binaries

by BJ Hargrave (noreply@blogger.com) at October 17, 2019 09:26 PM

In a previous post, I laid out my thinking on how to approach the package renaming problem which the Jakarta EE community now faces. Regardless of whether the community chooses big bang or incremental, there are still existing artifacts in the world using the Java EE package names that the community will need to use together with the new Jakarta EE package names.

Tools are always important to take the drudgery away from developers. So I have put together a tool prototype which can be used to transform binaries such as individual class files and complete JARs and WARs to rename uses of the Java EE package names to their new Jakarta EE package names.

The tool is rule-driven, which is nice since the Jakarta EE community still needs to define the actual package renames for Jakarta EE 9. The rules also allow users to control which class files in a JAR/WAR are transformed. Different users may want different rules depending upon their specific needs. And the tool can be used for any package renaming challenge, not just the specific Jakarta EE package renames.

The tool provides an API allowing it to be embedded in a runtime to dynamically transform class files during the class loader definition process. The API also supports transforming JAR files. A CLI is also provided to allow use from the command line. Ultimately, the tool can be packaged as Gradle and Maven plugins to incorporate it in a broader tool chain.

Given that the tool is a prototype, and there is much work to be done in the Jakarta EE community regarding the package renames, I have started a list of TODOs in the project's issues for known work items.

Please try out the tool and let me know what you think. I am hoping that tooling such as this will ease the community cost of dealing with the package renames in Jakarta EE.

PS. Package renaming in source code is also something the community will need to deal with. But most IDEs are pretty good at this sort of thing, so I think there is probably sufficient tooling in existence for handling the package renames in source code.

by BJ Hargrave (noreply@blogger.com) at October 17, 2019 09:26 PM

I’ll never forget that first EclipseCon meeting with you guys and Disney characters all around and…

by Doug Schaefer at October 16, 2019 01:18 AM

I’ll never forget that first EclipseCon meeting with you guys and Disney characters all around and the music. And all the late nights in the Santa Clara bar and summits and meetings talking until no one else was left. Great times indeed. Until we meet again Michael!


by Doug Schaefer at October 16, 2019 01:18 AM

Missing ECE already? Bring back a little of it - take the survey!

by Anonymous at October 15, 2019 09:22 PM

We hope you enjoyed the 2019 version of EclipseCon Europe and OSGi Community Event as much as we did.

Please share your thoughts and feedback by completing the short attendee survey. We read all responses, and we will use them to improve next year's event.

Speakers, please upload your slides to your session page. Attendees really appreciate this!


by Anonymous at October 15, 2019 09:22 PM

Open Source Gerrymandering

by Chris Aniszczyk at October 08, 2019 06:20 PM

Over the years, I have spent a lot of time thinking about and working on open source communities… from bootstrapping projects out of corporations (or broken communities), to starting brand new open source foundations.

I was recently having a conversation with an old colleague about bringing an open source project out of a company into the wild and how to set the project up for success. A key part of that discussion involved setting up the governance for the project and what that means. There was also discussion of how neutral and open governance under a nonprofit foundation can be good for certain projects, as research has shown that neutral foundations can promote growth and community better than other approaches. The conversation also led to a funny side discussion on the concept of gerrymandering and open source.

For those who aren’t familiar with the term, it has become popular in the US political lexicon as a “practice intended to establish a political advantage for a particular party or group by manipulating district boundaries.” A practical example of this is from my town of Austin, TX, which is in district 35, which snakes all the way from Austin to San Antonio for some reason.

The same concept of gerrymandering can apply to open source communities as open source projects can act like mini political institutions (or bigger ones in the case of Kubernetes). I shared some of my favorite examples with my friend so I figured I’d write this down for future reference and share it with folks as you really need to read the “fine print” to find these at times.

Apache Cassandra

The Apache Software Foundation (ASF) is a fantastic open source organization that has been around for a long time (they celebrated their 20th anniversary) and has had a lot of impact across the world. The way projects are governed in the ASF is through the Apache Way, which places a lot of emphasis on “community over code” amongst other principles that are great practices for open source projects to follow.

There have been some interesting governance issues and lessons learned over the years in the ASF; in particular, it can be challenging when you have a strong single vendor associated with a project, as was the case with Cassandra a while ago:

As the ASF board noted in the minutes from its meeting with DataStax representatives, “The Board expressed continuing concern that the PMC was not acting independently and that one company had undue influence over the project.” There was some interesting press around the time this happened:

“Jagielski told me in an interview, echoing what he’d said on the Cassandra mailing list, that undue influence conflicts with project leadership obligations established by the ASF. As he suggested, the ASF tried many times to get a DataStax-heavy Project Management Committee (PMC) to pay attention to alleged trademark and other violations, to no avail. Whatever DataStax’s positive influence on the development of the project—in other words—it failed to exercise equivalent influence on governing the project in ASF fashion.”

The ASF basically forced a reorganization of the Cassandra PMC to be more in line with its values, and this then caused the primary vendor behind the project to pull engineers off the open source project.

Containerd

The containerd project is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. The project was born at Docker, whose open source projects had a governance policy essentially aligned with the BDFL philosophy, centered on one of their project founders.

In the CNCF (which containerd is a project of), project governance documents aren't considered static and evolve over time to meet the needs of the community. For example, when containerd joined the CNCF its governance was geared towards a BDFL approach, but over time it evolved to a more neutral approach that spread authority across maintainers.

Cloud Foundry

Cloud Foundry is an open source community that has a large and mature ecosystem of PaaS-focused projects. The Cloud Foundry Foundation (CFF) has unique governance clauses in regard to how affiliates are treated and how voting works.

Pivotal Platinum Director Voting Power. The Platinum Director appointed by Pivotal (“Pivotal Director”) shall have five (5) votes on any matter submitted to a vote of the Board. (i) On a date one (1) year after the incorporation date set forth in the Certificate, the number of Pivotal Director’s votes will be reduced to three (3). (ii) On a date two (2) years after the incorporation date set forth in the Certificate, the number of Pivotal Director’s votes will be reduced to one (1)

To bootstrap the foundation, the originating company wanted a little bit of control for a couple of years, which can make sense in some situations, as the beginning of a foundation can be a tumultuous time. In my opinion, it’s great to see the extra vote clause expire after 2 years; however, it is still very unfair to the early potential members of the organization.

Another example of open source gerrymandering can be how votes are represented by member companies that are owned by a single entity:

At no time may a Member and its Affiliates have more than one Director who is an employee, officer, director, or consultant of that Member, except that Pivotal, EMC, and VMware, though Affiliates, shall each have one (1) Director on the Board).

This is an interesting tidbit given that Dell owns Pivotal, EMC and VMWare. In some organizations, there is usually legal language that collapses owned entities into one vote.

Personally, I’m not the biggest fan of this approach, as it makes things unfair from the beginning and can be an impediment to wide adoption across the industry. There can definitely be reasons why you need to do this in the formation phase, but it should be done with caution. If you saw the recent news that Pivotal was being spun back into VMWare, along with their woes with adoption, it shouldn’t come as a surprise in my opinion: one company was bearing too much of the burden and not building a diverse community of contributors.

Cloud Native Computing Foundation (CNCF)

If you remember the early days of the container and orchestration wars, there were a lot of different technologies, approaches and corporate politics. When CNCF was founded, the original charter included a clause that upgraded certain startup members, who were important in the ever-evolving cloud native ecosystem, from Silver to Platinum.

“The Governing Board may extend a Platinum membership at the Silver Membership Scale rates on a year-by-year basis for up to 5 years to startup companies with revenues less than $50 million that are deemed strategic technology contributors by the Governing Board.”

In my opinion, that particular piece in the charter was important in bringing together all the relevant startups to the table along with the big established companies at the time.

In terms of projects, the CNCF Technical Oversight Committee (TOC) defines a set of principles to steward the technical community. The most important principle is around a minimum viable governance that enables projects to be self-governing. TOC members are available to provide guidance to the projects but do not control them. 

https://twitter.com/CloudNativeFdn/status/1167455648768045056

Unlike Apache and the Apache Way, CNCF does not require its hosted projects to follow any specific governance model. Instead, CNCF specifies that graduated projects need to “explicitly define a project governance and committer process.” So in reality, CNCF operates under the principle of subsidiarity, encouraging decisions to be made at the lowest project level consistent with their resolution.

GitLab

GitLab is a fantastic open source project AND company that I admire deeply for their transparency. The way the GitLab project is structured is that it’s wholly owned by the GitLab company (they also own the trademark). To the credit of GitLab, they make this clear via their stewardship principles online and discuss what they consider enterprise product work versus project work.

I’d love for them in the future to separate the branding from the company, project and the product as I believe it’s confusing and dilutes the messaging, but that’s just my opinion 🙂

Istio

Istio is a popular service mesh project originated at Google. It has documented its governance model publicly: https://github.com/istio/community/blob/master/STEERING-COMMITTEE.md

However, as you can see, it is heavily tilted towards Google, and there seems to be no limit on the number of steering committee seats one company can hold (such a limit is a common tactic in open governance approaches to keep things fair). On top of that, Google owns the trademark, domains and other project assets, so I’d consider Istio to be heavily gerrymandered in Google’s interest versus the community’s.

JCP

I had the pleasure of serving on the Java Community Process (JCP) Executive Committee for a few years while I was at Twitter. It’s a great organization that drives standardization across the Java ecosystem; some of the fine print is interesting, though:

“The EC is composed of 25 Java Community Process Members whose seats are allocated as follows: 16 Ratified Seats, 6 Elected Seats, and 2 Associate Seats, plus one permanent seat held by Oracle. (Oracle’s representative must not be a member of the PMO.) The EC is led by a non-voting Chair from the PMO.”

This essentially gives Oracle a permanent seat on the Executive Committee.

Here’s another fun clause:

Ballots to approve Umbrella JSRs that define the initial version of a new Platform Edition Specification or JSRs that propose changes to the Java language are approved if (a) at least a two-thirds majority of the votes cast are “yes” votes, (b) a minimum of 5 “yes” votes are cast, and (c) Oracle casts one of the “yes” votes. Ballots are otherwise rejected.

This essentially gives Oracle a veto vote on any JSR.

Note: The coolest thing the JCP has done is contribute the EE specification work to the Eclipse Foundation and form the Jakarta project over there to steward things in an open way.

Knative

Knative, like Istio mentioned above, is an open source project that was born at Google and is controlled by Google. There has been a lot of discussion about this lately, as Google recently decided not to openly govern the project or move it to a neutral foundation:

Kubernetes

Kubernetes operates under the auspices of the CNCF and is openly governed by the Kubernetes Steering Committee (KSC). The Kubernetes project has grown significantly over time, but has done a great job of keeping things openly governed and inclusive in my opinion, especially given the project's size these days. The KSC governs the project along with a variety of sub working groups. Also, the Kubernetes trademark is neutrally owned by the CNCF and openly governed via the Conformance Working Group, which decides how certification works for the community; there are nearly 100 certified solutions out there!

Spinnaker

The Spinnaker project was originally born at Netflix and recently spun out into the Continuous Delivery Foundation (CDF) as an openly governed project. The project assets, from domains to github to trademarks are all neutrally owned by the community through the CDF.

Vault

Vault is a fantastic and widely used secrets management tool from Hashicorp. It’s a single-vendor-controlled open source project with an open core model, offering open source and enterprise versions (see matrix). What this essentially means is that the buck stops with the single vendor on what features/fixes end up in the open source version; most likely that won’t include things that they sell in their enterprise offering.

Conclusion

I hope you learned something new about open source projects, foundations and communities, as these things can get a little more complicated as you dig into the details. It’s really important to note that there is a difference between open source and open governance, and you should always be skeptical of a project that claims it’s truly open if only one for-profit company owns all the assets and control. While there’s nothing wrong with this approach at all, most organizations don’t set expectations up front, which can lead to frustration down the road. Note, there’s nothing wrong with single-vendor-controlled open source projects; I think they are great, but they need to be upfront about it, similar to GitLab’s stewardship principles, which state what goes in open source versus the enterprise version.

In conclusion, as with anything in life, you should always read the fine print of an open source community's charter or legal paperwork to understand how it works. The lesson here is that every organization or project has its own rules and governance, and it’s important that you understand how decisions are made and who has ownership of project assets like trademarks.


by Chris Aniszczyk at October 08, 2019 06:20 PM

JShell in Eclipse

by Jens v.P. (noreply@blogger.com) at October 08, 2019 12:16 PM

Java 9 introduced a new command line tool: JShell. This is a read–eval–print loop (REPL) for Java with some really nice features. For programmers I would assume writing a test is the preferred choice, but for demonstrating something (in a classroom, for example) this is a perfect tool if you are not using a special IDE such as BlueJ (which comes with its own REPL). The interesting thing about

by Jens v.P. (noreply@blogger.com) at October 08, 2019 12:16 PM

Removing “Contact Us”

by tevirselrahc at October 07, 2019 02:17 PM

Unfortunately, because of the large amount of spam, I now have to remove the “Contact Us” page.

If you want to contact us, I would recommend you go through our Twitter account.


by tevirselrahc at October 07, 2019 02:17 PM

Instanceof Type Guards in N4JS

by n4js dev (noreply@blogger.com) at September 30, 2019 08:06 AM

Statically typed languages like Java use instanceof checks to determine the type of an object at runtime. After a successful check, a type cast needs to be done explicitly in most of those languages. In this post we present how N4JS introduced type guards to perform these type casts implicitly. 

No error due to implicit cast in successful instanceof type guard

The example above shows that the strict type rules on the any instance a cause errors to show up when accessing the unknown property pX. However, after asserting that a is an instance of X, the property pX can be accessed without errors. A separate type cast is unnecessary, since type inference now also considers instanceof type guard information.


Hover information on variable access of a shows the inferred type

The resulting type is the intersection of the original type (which here is any) and of all type guards that must hold on a specific variable access (here only type X). Keeping the original type any or Object is not necessary and could be optimised away later. In case the original type is different, it is necessary to include it in the resulting intersection type. The reason is that the type guard could check for an interface only; if so, accesses to properties of the original type would otherwise cause errors.


Re-definition of a type guarded variable

Two distinct differences between type guards and type declarations are (1) their data flow nature and (2) their read-only effects. Firstly, when redefining (in the sense of the data flow) a variable, the type guard information gets lost. Consequently, subsequent accesses to the variable will no longer benefit from the type guard, since the type guard was invalidated by the re-definition. Secondly, only the original type information is considered for a redefinition. That means that the type guard does not change the expected type and, hence, does not limit the set of types that can be assigned to a type guarded variable.


Further examples for instanceof type guards in N4JS

Data flow analysis is essential for type guards and has been presented in a previous post. Based upon this information, type information for each variable access is computed. Since complicated data flows, such as those in for loops or short-circuit evaluation, are also handled correctly, type guard information is already available in composed condition expressions (see functions f3 and f5 above). Aside from being able to nest instanceof type guards (see function f4 above), they can also be used as a filter at the beginning of a function (see function f6 above) or inside a loop: negating a type guard and then exiting the function or block leaves helpful, valid type guard information on all the remaining control flow paths.

by Marcus Mews

by n4js dev (noreply@blogger.com) at September 30, 2019 08:06 AM

Team Sports for Developers! Edge Computing Mini-Hackathon

by Anonymous at September 26, 2019 09:21 PM

Do you like to build gadgets and/or hack? Then get a team together for the Edge Computing Mini-Hackathon, organized by Edgeworx.

Teams will be challenged to integrate at least one other Eclipse IoT project with Eclipse ioFog and showcase what they were able to accomplish. Representatives from all Eclipse projects are welcome to come help guide, coach, and influence participants to make use of their projects. There will be prizes for the standouts, plus giveaways (and fun) for all!

The event is part of Community Night on Tuesday, October 22, from 19:30 - 22:00 in the Theater Stage room at the Forum.


by Anonymous at September 26, 2019 09:21 PM

Blocked by an Eclipse Wizard?

by Wim at September 24, 2019 08:53 AM

Tuesday, September 24, 2019
There is a small but very useful patch in Eclipse 4.12 for people who do not want the UI to be blocked by wizards. There are many cases where it is desirable that the underlying window can be reached WHILE the user is finishing the wizard. That's why it is strange that the Eclipse Wizard demands our full and undivided attention at all times.

Read more


by Wim at September 24, 2019 08:53 AM

How to Render a (Hierarchical) Tree in Asciidoctor

by Niko Stotz at September 21, 2019 03:16 PM

Showing a hierarchical tree, like a file system directory tree, in Asciidoctor is surprisingly hard. We use PlantUML to render the tree on all common platforms.

Example of rendered hierarchical tree

This tree is rendered from the following code:

[plantuml, format=svg, opts="inline"]
----
skinparam Legend {
    BackgroundColor transparent
    BorderColor transparent
    FontName "Noto Serif", "DejaVu Serif", serif
    FontSize 17
}
legend
Root
|_ Element 1
  |_ Element 1.1
  |_ Element 1.2
|_ Element 2
  |_ Element 2.1
end legend
----

It works on all Asciidoctor implementations that support asciidoctor-diagram and renders well in both HTML and PDF. Readers can select the text (i.e. it’s not an image), and we don’t need to ship additional files.

We might want to externalize the boilerplate:

[plantuml, format=svg, opts="inline"]
----
!include asciidoctor-style.iuml
legend
Root
|_ Element 1
  |_ Element 1.1
  |_ Element 1.2
|_ Element 2
  |_ Element 2.1
end legend
----
asciidoctor-style.iuml
skinparam Legend {
    BackgroundColor transparent
    BorderColor transparent
    FontName "Noto Serif", "DejaVu Serif", serif
    FontSize 17
}

Thanks to PlantUML’s impressive reaction time, we soon won’t even need Graphviz installed.

Please find all details in the example repository and example HTML / example PDF rendering.


by Niko Stotz at September 21, 2019 03:16 PM

Let's Do It! Obeo loves The SeaCleaners

by Cédric Brun (cedric.brun@obeo.fr) at September 20, 2019 12:00 AM

I am deeply convinced that a company is not only an economic actor. It has a much wider responsibility, as any decision also has social, environmental or even political implications.

Looking at the state of our environment, its recent evolution and how it is forecast to evolve, the task in front of us is indeed huge. It would be easy to dismiss this as a problem our governments and big organizations should step up to, and indeed those in power have the responsibility, the ability and the leverage to act and maybe bend those charts.

But I have a motto: “Focus on what you can control, then you can act.” And so I do.

Obeo participates in and hosts quite a few events each year, and we are often struck by the nonsensical nature of the “goodies” industry and the global model it promotes: built at the cheapest price, moved across the globe, distributed at the event and then pretty quickly sent to the bin.

Starting now, you won’t get any more goodies from us at conferences or events, but instead we will gladly discuss how we try to do our part, as a company, in this global challenge.

In relation to this initiative to stop producing waste we do not deem necessary: Obeo is partnering with The SeaCleaners organization to reduce plastic waste. The SeaCleaners is building a giant multihull boat designed to retrieve plastic waste from the ocean: the MANTA. The organization's vision is that the preservation of the oceans is a global, long-term and worldwide matter that integrates economic, social, human, educational and scientific perspectives. They pursue it as a dynamic and solidarity-based project. You can learn more about this initiative on Obeo’s website.

The "Manta"

Furthermore, all the designs and blueprints of the Manta boat will be open source, which will enable enhancements and duplication at a global scale, a principle clearly aligned with our values and with what we do within the Eclipse community.

The "Manta" boat technical data

That being said, it is just one step in a very specific part of our activity, but a step that starts a journey, with more to do to improve the way Obeo operates regarding its environmental responsibility. When you start building awareness of the impact of all the ins and outs of what we do, you realize that even a non-industrial software company can contribute.

Let's Do It! Obeo loves The SeaCleaners was originally published by Cédric Brun at CEO @ Obeo on September 20, 2019.


by Cédric Brun (cedric.brun@obeo.fr) at September 20, 2019 12:00 AM

A Language Workbench for Test Languages

by Arne Deutsch (adeutsch@itemis.de) at September 13, 2019 02:12 PM

Do you know the feeling of having experienced all of this before? A déjà vu? That is exactly what crept up on me last week during a first conversation with an automotive manufacturer about the tooling for its in-house test language.

Language workbench for test languages

The problem is the same every time. Years ago the realization matured that it makes no sense to redevelop the huge number of test cases against the almost yearly new vehicle models over and over again; to pour enormous amounts of time and money into programming work that has essentially already been done dozens of times, only "a little bit differently" each time.

The solution to this problem was also fundamentally the right one: a small, dedicated programming language to specify test cases. The actual test code in C is then generated from it.

In this way, meaningful abstractions can be created that are adaptable and parameterizable for different model series, without having to wrestle with technical details such as pointer arithmetic and memory violations.

After some time, however, the downsides of this approach became apparent. While the tooling for mainstream programming languages is excellent and mature, and developers are spoiled with powerful tools for editing their code, the situation for the new test language looks quite different.

Of course the compiler produces more or less helpful error messages, and at least a simple Eclipse plug-in was developed to highlight keywords, but this is far from real tool support. There is no code completion, no automatic formatting, and the integration with the other tools is minimal as well.

Initial estimates point to several person-years of development effort just to get anywhere close to where development with Java or C has long been. And nobody in the company has done anything like this before.

A high effort and a high risk, out of all proportion to the benefit.

So was the custom language a mistake? Or does one simply have to live with the poor tooling?

Not at all!

This is a solved problem. The idea of developing domain-specific languages with language workbenches has existed for decades; the term itself was coined 14 years ago. While back then these were still experiments that were not really production-ready, these tools have since matured and shorten the development of tooling for DSLs by a factor of 10 or more.

With just a few weeks of effort, impressive results can already be achieved; with a bit more work, you get close to the tooling you are used to from Java.

Especially in the open source ecosystem around Eclipse, Xtext is a solution that supports exactly this use case: extending an existing language with excellent tooling at little cost. Why waste time and money reinventing the wheel when you can simply build on the work of others? Do you have a similar problem?

Feel free to share your experiences in the comments or get in touch with us!

P.S.: I never get bored of this… even if I feel like I have experienced it all before. Sometimes that has its advantages, too.


by Arne Deutsch (adeutsch@itemis.de) at September 13, 2019 02:12 PM

The Rising Adoption of Capella

by Cédric Brun (cedric.brun@obeo.fr) at September 04, 2019 12:00 AM

Witnessing an OSS technology coming together with a wide group of users is something I find exhilarating. I have experienced it with Acceleo, EMF Compare and Eclipse Sirius over the years, each time in different contexts and at different scales, but discovering what others are doing with a technology is always a source of excitement to me.

Capella was contributed by Thales to the Eclipse community a few years ago already, and fueled by the growing need to design systems in a better way, by the interest in Model-Based Systems Engineering and by the qualities of the product itself, we can clearly see an acceleration in the last few months.

If you are wondering what Capella is and what it's used for, here is a 2-minute video we prepared for you:

Worldwide awareness of this solution is growing and adoption is rising: organizations from Europe, North America and Asia are now using Capella and experiencing the benefits of a tool which implements a method (coined “Arcadia”) and not only a language.

Capella Forum Activity

Looking at the numbers just for this summer: more than 1200 downloads each month, forum activity which has been growing with a nice-looking curve, and monthly stats on YouTube reaching more than 2000 views. Considering the size of the target audience this is a significant acceleration, and that is without counting the deployment of the System Modeling Workbench provided by Siemens, which includes the technology.

Adopters not only use it but also speak about it, and as with any other tool, having an opportunity to understand how others are using it is highly valuable.

Rolls Royce, ArianeGroup or the Singapore University: they have all shared valuable information through the recent webinars:

More are coming and many are already available through the Resources Page! BTW we can’t always get authorization to keep them available online, so your safest option is to register and attend.

Munich (Germany)

We also make sure to set up “in real life” opportunities to discuss Capella and MBSE: occasions to talk with the team behind Capella and experts from around the world. Next up is Capella Day Munich 2019 in a couple of weeks (the 16th of September), organized by Thales and Obeo in conjunction with the Models Conference 2019. Here is a glimpse of the program:

The agenda is filled with general presentations, feedback from industrial users about their Capella deployments, and specific add-ons/integrations.

The program of Capella Day Munich 2019

You might want to hurry as we are almost sold out and such occasions are pretty unique!

I sincerely hope you’ll enjoy it; we are working hard to make it a success :-). If you can’t make it this time, know that there are more occasions to come: AOSEC in Bangalore, and EclipseCon in Germany (again!), where there might be a workshop focused on “MBSE at Eclipse” (please add your name and interest on the corresponding wiki page).

The Rising Adoption of Capella was originally published by Cédric Brun at CEO @ Obeo on September 04, 2019.


by Cédric Brun (cedric.brun@obeo.fr) at September 04, 2019 12:00 AM

Time for Change

by Doug Schaefer at September 03, 2019 02:56 PM

First, let me get straight to the point. I have left BlackBerry/QNX and will be starting a new job in Ottawa next week. It’s a great opportunity to work on something new for a great company with a bunch of former colleagues I admire. As much as I’m looking forward to that much needed change, it sadly will take me away from the Eclipse community. This message is a goodbye and thank you.

Thinking back all the way to the beginning, I’m quickly overwhelmed by how many great people I have had the opportunity to work with thanks to the Eclipse CDT project. At the very beginning were Sky Matthews and John Prokopenko, who let me weasel my way in as Rational’s technical lead on the project just as it was starting out in 2002, also at a time when I needed a change. Of course, I had a great team of developers at Rational with me that made it fun and easy. Not to mention the original team at QNX who were welcoming and made it easy to get involved. I have a special mention for Sebastien Marineau, CDT’s first project lead, who let me take a leadership role on the project and eventually hired me on at QNX to take over.

Then there were the early years on the CDT where we made our mark. Those early CDT Summits were so much fun and really helped build up a team atmosphere. We had about a dozen companies sending contributors, a few of them competitors in the spirit of co-opetition, and we made it work. Then over the years we started getting independent contributors who just did it for the passion of building great C++ tooling they wanted to use themselves. It’s been a great mix and I am so lucky and proud to have been a part of it.

And of course, it was all topped off with our yearly EclipseCons. I am proud to have attended every one of the EclipseCon North America ones and was able to attend quite a few of the EclipseCon Europes in Germany. I have to thank Anne and Mike and Ralph and Wayne and Sharon and Perri and Donald and Ian and Lynn and all the Eclipse Foundation staff past and present for making me feel a part of the family. I always looked forward to the EclipseCon Express flights out of and return to Ottawa with many of them.

My fellow attendees at these conferences were amazing, from the first one at Disneyland where we had an overflow crowd at the CDT BOF and where I gave my first of many CDT talks, to all the friends I met at the bar or ran into at sessions, many of whom had nothing to do with CDT but made me feel so much a part of the bigger community. I will never forget the late nights in the bars chatting with friends like Michael Scharf and Ian Bull and Eric Cloninger and Gilles and Adrian and Jonah and Tom and so many others. As it turns out, last year in Ludwigsburg was a perfect finale where we had such a great time at the Nestor on Wednesday night. I will never forget you all.

I’m incredibly proud of what we built for the CDT. It still has the best indexer in the business, thanks to the parser we built back at Rational, the database I built at QNX, and then so many hands continually making it better and adjusting to the ever-changing C++ language spec. The Launch Bar achieved what I wanted by simplifying the Eclipse launch experience. CDT’s new Core Build fits naturally with the Launch Bar and makes it much simpler to integrate external build systems like CMake. And we have just started a GDB debug adapter using the debug adapter protocol which will pave the way to simplify integrating debuggers with the CDT.

The current set of active committers on the CDT have lately been pulling almost all the weight evolving it and getting releases out the door. Their great work has made my transition easier and will keep the CDT rolling along for years to come. And hopefully vendors will come back too and help provide funding for all this activity. We have an action plan to transition the project lead role. Follow the cdt-dev mailing list to find out more.

It’s sad to leave, and the memories and friendships will last forever. I will keep my cdtdoug personal gmail account as a reminder of where I came from. But my new role will give me some much-needed energy to keep things going for the next decade. I once questioned why you hardly see any retired engineers helping with open source projects or sharing their passion with the next generation. I promise you this: you will see me again.

Take care, and thank you.


by Doug Schaefer at September 03, 2019 02:56 PM

Redux App Development and Testing in N4JS (Chess Game Part 2)


In large applications, Redux - an implementation of the Flux architecture created by Facebook - is often used to organise application code by enforcing a strict unidirectional data flow. Redux is UI-agnostic and can be used in conjunction with any UI library. As a continuation of our chess game tutorial with React, we show how to extract the entire program state out of the React components, store it with Redux, and test it with N4JS. The full tutorial is available at eclipse.org/n4js and the sources can be found at github.com/Eclipse/n4js-tutorials.


The first part of the chess game tutorial discussed how to develop a chess game app with React and JSX in N4JS. We stored the program state - which, for instance, contains the locations of all chess pieces - directly in the state of the React components. As applications become larger, however, the mix of program state and UI makes the application hard to comprehend and difficult to test. To address these issues, we extract the program state from the UI components in the second part of the tutorial.

When using React with Redux, we store the application state in the Redux store instead of in the state of React components. As a result, React components become stateless UI elements that simply render the UI using the data retrieved from the Redux store. In a Redux architecture, data flows strictly in one direction. The diagram below depicts the action/data flow in a React/Redux app.
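First, to make the "stateless UI element" idea concrete, here is a minimal sketch of a presentational component, written as TypeScript/JSX rather than N4JS; the component name, prop names, and CSS classes are illustrative and not taken from the tutorial. The component holds no state of its own and simply renders the data and callbacks it receives as props from a container connected to the store:

```typescript
// Hypothetical presentational component for a single board square (a .tsx file).
// All game state lives in the Redux store; this component only renders its props.
import * as React from 'react';

interface SquareProps {
    piece: string | null;        // e.g. "white-pawn", or null for an empty square
    isValidDestination: boolean; // true when the currently selected piece may move here
    onSquareClick: () => void;   // supplied by a container that dispatches a Redux action
}

export const Square = (props: SquareProps) => (
    <button
        className={props.isValidDestination ? 'square valid' : 'square'}
        onClick={props.onSquareClick}>
        {props.piece}
    </button>
);
```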

Strict data flow in a Flux architecture application


The action/data flow in the diagram can be roughly understood as follows:
  • When a user interaction is triggered on a React component (e.g. a button is clicked, a text field is edited, etc.), an action is created. The action describes the changes to be made to the application state. For instance, when a text field is edited, the action may contain the new text of that field.
  • The action is then dispatched to the Redux store, which holds the application state, usually as a hierarchical state tree.
  • The reducers take the action and the current application state and create an updated application state.
  • If the changes in the application state are relevant to a certain React component, they are forwarded to the component in the form of props. The change in props causes the component to re-render. A minimal sketch of this cycle follows the list.
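The following TypeScript sketch ties these steps together for the chess example. It is an illustration under assumptions, not the tutorial's actual code: the SELECT_PIECE action type, the GameState shape, and the computeValidMoves helper are all hypothetical.

```typescript
// Minimal sketch of the Redux dispatch cycle; all names are illustrative.
import { createStore, AnyAction } from 'redux';

interface GameState {
    selectedSquare: string | null; // e.g. "e2", or null when nothing is selected
    validDestinations: string[];   // squares the selected piece may move to
}

export const initialState: GameState = { selectedSquare: null, validDestinations: [] };

// Hypothetical helper standing in for the real move-generation logic.
function computeValidMoves(square: string): string[] {
    return square === 'e2' ? ['e3', 'e4'] : [];
}

// A reducer is a pure function: (previous state, action) -> next state.
export function gameReducer(state: GameState = initialState, action: AnyAction): GameState {
    switch (action.type) {
        case 'SELECT_PIECE':
            return {
                selectedSquare: action.square,
                validDestinations: computeValidMoves(action.square),
            };
        default:
            return state;
    }
}

const store = createStore(gameReducer);

// A click handler in a React component would be translated into a dispatch:
store.dispatch({ type: 'SELECT_PIECE', square: 'e2' });

// Components connected to the store re-render from the updated state they receive as props.
console.log(store.getState()); // { selectedSquare: 'e2', validDestinations: ['e3', 'e4'] }
```

The important property is that the reducer never mutates the previous state; it returns a new state object, which is what keeps the data flow predictable and the logic easy to test.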

In the second part of the tutorial we further elaborate on the interaction of React and Redux and migrate the original non-Redux chess app to Redux. The tutorial explains the role of the reducer, and how the game state is stored and maintained in the Redux store. With the game state stored in Redux, the tutorial then shows how to test the game application with the N4JS test library Mangelhaft, for instance by checking that the valid move destination squares are updated after a chess piece is selected.

Note that this way of testing the game logic is completely UI-agnostic; no React components are involved at all. This is thanks to the decoupling of the game logic from the UI with the help of Redux.
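As a rough illustration of such a UI-agnostic test, the sketch below exercises the hypothetical gameReducer from the previous sketch directly. The tutorial itself uses the N4JS test framework Mangelhaft, whose API is not reproduced here; this version uses Node's built-in assert module instead to keep the idea framework-neutral.

```typescript
// UI-agnostic test sketch: exercise the reducer directly, no React components involved.
import * as assert from 'assert';
import { gameReducer, initialState } from './gameReducer'; // hypothetical module path

// Selecting the pawn on e2 should record the selection and update the
// valid destination squares accordingly.
const afterSelection = gameReducer(initialState, { type: 'SELECT_PIECE', square: 'e2' });

assert.strictEqual(afterSelection.selectedSquare, 'e2');
assert.deepStrictEqual(afterSelection.validDestinations, ['e3', 'e4']);
```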


by Minh Quang Tran

by n4js dev (noreply@blogger.com) at August 29, 2019 04:04 PM
