
Eclipse Vert.x 3.9.0 released!

by vietj at April 02, 2020 12:00 AM

We are extremely pleased to announce that the Eclipse Vert.x version 3.9.0 has been released.

SQL Client fluent queries

The query API becomes fluent with the addition of a Query API for query creation and configuration.

Collector queries are now part of the Query interface.

This is a breaking API change made under tech-preview status, since the SQL client is a Vert.x 4 feature back-ported to Vert.x 3.

client.preparedQuery(sql).execute(tuple, ar -> ...);

// With a collector
client.preparedQuery(sql).collecting(Collectors.toList()).execute(tuple, ar -> ...);

You can read more about this new feature here.

Redis client backport

For Vert.x 4.0 we are doing a full reboot of the Redis client. The functionality of the new client is ready to test on the master branch. In order to give users the opportunity to test it and make the upcoming client even better, it has been backported to the 3.9.0 release. The new client allows users to connect to a single node, a sentinel, or a cluster of Redis nodes. The easiest way to start is:

Redis.createClient(
      vertx,
      new RedisOptions()
        .setConnectionString("redis://localhost:7006"))
      .send(Request.cmd(Command.PING), send -> {
        if (send.succeeded()) {
          // Should have received a pong...
        }
      });

Same-site cookies

SameSite Cookie policy has been added to the HTTP Server Cookies.

This is also applicable to Vert.x Web, which allows cookie sessions to use SameSite cookies:

SessionHandler.create(store)
  .setCookieSameSite(CookieSameSite.STRICT);

Same-site cookies let servers require that a cookie shouldn’t be sent with cross-site (where Site is defined by the registrable domain) requests, which provides some protection against cross-site request forgery attacks (CSRF).
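On a plain HTTP server (without Vert.x Web), the policy can be set directly on a cookie. A minimal sketch, with an illustrative cookie name and port:

import io.vertx.core.Vertx;
import io.vertx.core.http.Cookie;
import io.vertx.core.http.CookieSameSite;

public class SameSiteExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.createHttpServer()
      .requestHandler(req -> {
        // Browsers will refuse to send this cookie along with cross-site requests
        Cookie cookie = Cookie.cookie("session", "abc123")
          .setSameSite(CookieSameSite.STRICT);
        req.response().addCookie(cookie).end("cookie set");
      })
      .listen(8080);
  }
}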

Kafka client upgrade

The Kafka client has been upgraded to Kafka 2.4.0.

Our reactive client exposes the Kafka Admin API. As of Kafka 2.4.0 the old Admin API has been replaced by a new one, and the Vert.x Kafka Admin API changes accordingly, e.g. listing consumer groups:

adminClient.listConsumerGroups(ar -> {
    System.out.println("ConsumerGroups= " + ar.result());
});

Future API

The recent addition of Future#onComplete(...), which supports several handlers per Future, made the Future#setHandler(...) method feel awkward in 3.9. Indeed, setHandler conveys the meaning that the Future manages a single handler. In 3.9 we deprecate Future#setHandler in favor of Future#onComplete.
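A minimal sketch of the difference; the values are illustrative:

import io.vertx.core.Future;
import io.vertx.core.Promise;

Promise<String> promise = Promise.promise();
Future<String> future = promise.future();

// Deprecated in 3.9: setHandler suggests the Future manages a single handler
// future.setHandler(ar -> ...);

// Preferred: several handlers can be registered on the same Future
future.onComplete(ar -> System.out.println("first handler: " + ar.result()));
future.onComplete(ar -> System.out.println("second handler: " + ar.result()));

promise.complete("done");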

EDNS disabled by default

EDNS is an extension mechanism for DNS (https://fr.wikipedia.org/wiki/EDNS). It can cause unwanted issues, so it is now disabled by default.

Auth Deprecation warnings

As of 3.9.0, many APIs will start warning about deprecations. There is a big refactoring and some breaking API changes coming in 4.0.0, and these warnings are just to give users a heads-up that the API will change in the upcoming version.

For more information about the upcoming 4.0 changes you can read 4.0.0-Deprecations-and-breaking-changes

Dependency upgrades

In 3.9 we updated a few dependency versions:

  • Netty 4.1.48.Final
  • Jackson 2.10.2
  • Guava 25.1-jre
  • GraphQL Java 14

Finally

The 3.9.0 release notes can be found on the wiki, as well as the list of deprecations and breaking changes.

Docker images are available on Docker Hub.

The Vert.x distribution can be downloaded on the website but is also available from SDKMan and HomeBrew.


The release artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

That’s it! Happy coding and see you soon on our user or dev channels.



Theia: An Open Source Alternative to Visual Studio Code

by Mike Milinkovich at April 01, 2020 04:06 PM

With the release of Eclipse Theia 1.0, organizations and vendors that build cloud and desktop integrated development environments (IDEs) have a production-ready, vendor-neutral, and open source framework for creating customized development environments for both desktops and browsers. Theia is an all-new code base, with governance independent of the Eclipse desktop IDE.

Theia delivers an open source, extensible and adaptable platform that provides all of the capabilities of VS Code but which can be tailored to specific use cases. It can also leverage all of the extensions available for Microsoft Visual Studio (VS) Code — one of the world’s most popular development environments.

Industry Leaders Are Adopting Theia

The Theia project was started in 2016 by Ericsson and TypeFox. In addition to Eclipse Che using Theia as its web IDE, many organizations of all sizes rely on Theia as the foundational building block for their development environments:

  • Arm Mbed Studio builds on the Theia IDE
  • Google Cloud Shell runs Theia as its editor
  • The default IDE for Red Hat CodeReady Workspaces is based on Eclipse Theia
  • The frontend of Arduino Pro IDE is based on Theia
  • Gitpod’s development environment is based on Theia
  • SAP Business Application Studio (the next generation of SAP Web IDE) is based on Theia

Other prominent adopters include D-Wave Systems, EclipseSource, and IBM.

Last year, Theia’s momentum and adoption reached the point where the project team approached the Eclipse Foundation to host the project. With over 375 open source projects, we’ve established a track record of the vendor-neutral governance, processes, and community building needed to further guide Theia’s growth.

Today, Theia is one of the Eclipse projects promoted by the Eclipse Cloud Development Tools Working Group (ECD WG), an industry collaboration focused on delivering development tools for, and in, the cloud.

Theia Goes Beyond VS Code

Theia is being developed by a diverse group of contributors, committers, and supporting companies such as TypeFox, Ericsson, Red Hat, and ARM. With over fifty committers and contributors over the past three months, it is a fast-moving, welcoming, and open community where contributions are accepted from all.

Theia is more than an alternative to VS Code. The main differentiator between Theia and VS Code is that Theia is specifically intended to be adopted by other companies and communities to build and deploy a modern web-based developer experience. VS Code is great, but it is only ever going to be a Microsoft product.

Theia is intended to be modified, extended, and distributed by folks who want to create developer tools that look as great as VS Code (including using the same Monaco Editor) and can make use of the VS Code extension ecosystem. Of course, it is licensed under the EPL 2.0, so it is easy for organizations or individuals to build and ship products using Theia.

Theia also provides important advantages that give IDE developers considerably more freedom and flexibility than VS Code offers. For example, Theia’s architecture is designed to be more modular and extensible than VS Code’s, so developers have a greater ability to customize their solutions for specific requirements. VS Code enables a great extension ecosystem. Theia goes beyond that and is designed to be modified and extended at all levels of its architecture.

In addition, Theia is designed from the ground up to run both in desktop environments and in browser and remote server environments. IDE developers can write the source code for their development environment once, then build a desktop IDE, a cloud IDE, or both, without rewriting any code. With Theia, it’s easy to move between desktop and cloud environments at any time.

Theia uses the Eclipse Open VSX Registry, an open-source alternative to the Microsoft Visual Studio marketplace. In the spirit of a true open source community, the extensions available in this free marketplace can be used in VS Code as well as in Theia.

Just the Beginning for Eclipse Theia

This first official Theia release confirms the technology is mature, stable, and ready for anyone and everyone to use as a foundation for their custom cloud or desktop IDE. Future releases will provide a desktop download for developers who want to use Theia directly as their development tool.

I want to sincerely thank everyone involved in bringing this important advance in open source cloud development tools to this critical point. With the excitement and rapid uptake that Theia has experienced, I’m sure this is just the first of many successful releases.

I encourage IDE developers to join the world-leading organizations that are already using Theia so they can start benefiting from its capabilities and flexibility.

To get involved with the Eclipse Theia Project and begin contributing, please visit https://theia-ide.org/.

For more information about the Eclipse Cloud Development Tools Working Group, view the Charter and ECD Working Group Participation Agreement (WGPA), or email membership@eclipse.org. You can also join the ECD Tools mailing list.



JBoss Tools 4.15.0.AM1 for Eclipse 2020-03

by jeffmaury at April 01, 2020 03:04 PM

Happy to announce the 4.15.0.AM1 (Developer Milestone 1) build for Eclipse 2020-03.

Downloads available at JBoss Tools 4.15.0 AM1.

What is New?

Full info is available on this page. Some highlights are below.

Please note that a regression has been found in the Fuse Tools. This is going to be fixed for the release (4.15.0.Final). Please find more information in this ticket.

Quarkus Tools

Language support for Kubernetes, OpenShift, S2I and Docker properties

There is now completion, hover, documentation and validation for kubernetes.*, openshift.*, s2i.*, and docker.* properties.

quarkus20

Enter kubernetes prefix:

quarkus21

Enter openshift prefix:

quarkus22

Enter s2i prefix:

quarkus23

Language support for MicroProfile REST Client properties

Likewise, there is now completion, hover, documentation and validation for the MicroProfile properties from REST Client.

After registering a REST client using @RegisterRestClient like so:

package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@RegisterRestClient
public interface MyServiceClient {

    @GET
    @Path("/greet")
    Response greet();
}

The related MicroProfile Rest config properties will have language feature support (completion, hover, validation, etc.).

quarkus24
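For reference, the configuration keys follow the MicroProfile REST Client naming convention; a sketch of application.properties with an illustrative URL:

# Default config key: the fully qualified name of the client interface
org.acme.MyServiceClient/mp-rest/url=http://localhost:8081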

Now change the Java code to set a custom configuration key:

package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@RegisterRestClient(configKey = "myclient")
public interface MyServiceClient {

    @GET
    @Path("/greet")
    Response greet();
}

and notice the code assist is changed accordingly:

quarkus25

Language support for MicroProfile Health

Likewise, there is now completion, hover, documentation and validation for the MicroProfile Health artifacts.

So if you have the following Health class:

package org.acme;

import org.eclipse.microprofile.health.Health;

@Health
public class MyHealth {

}

you will get a validation error (as the class does not implement the HealthCheck interface):

quarkus26

Similarly, if you have a class that implements HealthCheck but is not annotated with Health, a similar validation applies:

package org.acme;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;

public class MyHealth implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // TODO Auto-generated method stub
        return null;
    }

}

you will get a validation error (as the class is not annotated with the Health annotation):

quarkus27

As there are several ways to fix the problem, several quick fixes are proposed.

Server Tools

WildFly 19 Server Adapter

A server adapter has been added to work with WildFly 19. It adds support for Java EE 8, Jakarta EE 8, and MicroProfile 3.3.

Please note that while creating a WildFly 19 server adapter, you may get a warning. This will be fixed for the release.

Related JIRA: JBIDE-27092

EAP 7.3 Server Adapter

The server adapter has been adapted to work with EAP 7.3.

Enjoy!

Jeff Maury



The Eclipse Foundation Releases Eclipse Theia 1.0, a True Open Source Alternative to Visual Studio Code

March 31, 2020 12:00 PM

The Eclipse Foundation, one of the world’s largest open source foundations, today announced the release of Theia 1.0, a true open source alternative to Microsoft’s popular Visual Studio Code (VS Code) software.


EclipseCon 2020 to Be Held as Virtual Event

March 27, 2020 02:00 PM

The Eclipse Foundation is announcing today that it will hold its annual EclipseCon conference as a virtual event.


Red Hat XML language server becomes LemMinX, bringing new release and updated VS Code XML extension

by David Kwon at March 27, 2020 07:00 AM

A new era has begun for Red Hat’s XML language server, which was migrated to the Eclipse Foundation under a new project name: Eclipse LemMinX (a reference to the Lemmings video game). The Eclipse LemMinX project is arguably the most feature-rich XML language server available. Its migration opens more doors for future development and utilization. In addition, shortly after its migration, the Eclipse LemMinX project and Red Hat also released updates: Eclipse LemMinX version 0.11.1 and the Red Hat VS Code XML extension.

Eclipse LemMinX version 0.11.1

Eclipse LemMinX version 0.11.1 mainly focuses on bug fixes that are outlined in the changelog here. For some history, Eclipse LemMinX started as an open source project created by Angelo ZERR in mid-2018. Angelo’s XML language server implementation was well ahead of the game in terms of features and code infrastructure. As Red Hat’s interest in an XML language server continued to grow, Red Hat joined forces with Angelo (who later officially joined Red Hat as a senior software engineer) to create the most feature-rich and easy-to-use XML language server possible.

Thanks to the XML language server’s popularity and functionality, clients like Eclipse (with Wild Web Developer), VS Code (with XML Language Support by Red Hat), and Vim/Neovim (with coc-xml) started consuming the XML language server. In addition, all LSP features (completion, validation, quick fix, etc.) provided by the XML language server are easily extensible. This helped motivate other projects to extend the LSP features, instead of implementing them themselves from scratch.

For example, there are extensions specific to Maven and Liferay. The Maven extension extends the completion feature to manage advanced dependency completion, and the Liferay extension extends the hover feature to fit specific use cases. We hope that the contribution to the Eclipse Foundation facilitates easier consumption from related projects and attracts new contributors beyond people from Red Hat.

Red Hat VS Code XML extension

In addition, we released the Red Hat VS Code XML extension (which, of course, consumes the Eclipse LemMinX XML language server to provide language features). This extension provides an excellent all-in-one package for editing XML, XSD, and DTD files in VS Code, but what makes this extension stand out is the support for XSD and DTD schema validation for XML files.

This new release also focused on bug fixes, which are outlined in the changelog here.





Eclipse IoT Website Redesign

March 24, 2020 02:12 PM

The Eclipse IoT website redesign is now live! This project was a huge undertaking for us; we added 5,245 lines of code, closed 15 issues, removed 82,361 lines of code and made 84 commits.

Eclipse IoT Homepage

Yes, you got that right, the end result was -77,116 lines of code because we took the opportunity to clean up our codebase.

To kick off this initiative, the community helped us define the goals for this project:

  1. Improve our information architecture:
    Led by Frédéric Desbiens, the Eclipse IoT community created a new structure for the website.
  2. Contribute to the recruitment of new members and adopters:
    Adopters and Members are now top-level menu items. We also created a new “How to be Listed as an Adopter” page.
  3. Ensure the website caters to both technical and non-technical visitors:
    We made some big improvements to our Community and Resources sections. These sections cater to both technical and non-technical users, since you can find Case Studies, Market Reports, Videos, White Papers and some additional information on how you can stay informed about what’s currently going on with Eclipse IoT.
  4. Drive adoption for our technologies:
    We now fetch project information from the Eclipse PMI each time we push a change to the website. Our stale project page is now a thing of the past!

In an effort to communicate our project plans with our community, we created a public GitHub project with two milestones.

Being open and transparent allows us to naturally inform our communities about our efforts, and we think it’s a great way for us to collaborate and share tasks. As the project manager, this workflow allows me to ensure that the project is moving forward as planned.

We also created a set of brand guidelines for the Eclipse IoT Working Group. These guidelines include the brand font (Roboto), logo variations, color swatches, and acceptable logo treatments. This will help us consistently deploy the brand across different digital and print channels as well as Eclipse IoT events.

Overall, I am very happy with this new redesign! A huge thank you to Eric Poirier, Matt Joanisse, a graphic designer hired by the Foundation to work on the site, Christie Witt, Joe Speed, Frédéric Desbiens and Martin Lowe!



What I learned about COVID-19 from Acute Myeloid Leukemia

by Donald Raab at March 22, 2020 05:35 PM

Survival tactics of a germaphobe and spouse of an AML Warrior

Day 0 — This is the end
When everyone actually is out to get you, paranoia is just good thinking.
— Woody Allen

I’ve never written about this story before

I am not superstitious, except for when it comes to the story of my wife and family and Acute Myeloid Leukemia (AML). I am active on social media and write technical blogs regularly, but have never written anything personal about my wife’s six-plus-year war against the boogeyman — Acute Myeloid Leukemia. I am going to tell some of the story today, because an AML Warrior on Twitter has been sharing his story the past three months and has given me the courage to write out loud.

I am concerned about his well-being as he prepares to get a stem cell transplant, and the well-being of all those around him. He needs his spouse, his family, and many heroes in health services and the larger community to be healthy to help him fight this war and every battle he will fight.

So many people in this world need the support of others to survive. COVID-19 is an unknown threat to us all. We need to respect it and have an abundance of caution about it, even if we ourselves are not at a particular risk of dying from exposure to it.

This is not about me, and COVID-19 is not about you

I am not an AML Warrior. My wife is. My wife has been an AML warrior since July 2013. I lost track of the hospital visits, blood draws, rounds of chemo, blood transfusions, platelet transfusions and sleepless nights watching my wife in audible and visible pain hitting the emergency dispense button on the morphine drip.

I am my wife’s shield. I am the first line of defense. I am the barrier for all visitors at the door. I am a shower of Purell at the first hint of a germ. I will kill every invisible germ in a six foot radius of my wife with complete disregard for anyone’s personal feelings. The boogeyman is real, and although I can’t see him, I will do everything in my power to be his kryptonite and keep him away from my wife.

This story is about the warriors and the heroes who help our loved ones fight and survive the horrible boogeymen that threaten them every day. Our actions today with respect to COVID-19 will impact many of them in ways that may not be immediately obvious.

Let me flash back to July 2013, when my life and the life of my family changed forever.

July 2013 — I don’t need your sympathy

This is a battle in a continuous war. If you step into my wife’s hospital room then you have entered the front lines and we do not need friendly fire. You need to listen and respect my wishes as someone whose primary focus is doing whatever is necessary to increase the chances my wife will survive. I am her shield. You shall not pass unless I say so and you will follow my rules. You say you’re the new doctor here to take her vitals? I don’t care.

  • I need you to stay away if you have a cough, or are exhibiting anything that may only be an “allergy”. Not feeling like you’re ready to run a marathon and clean enough to perform heart surgery? Please just stay away.
  • Ready for a marathon, freshly showered, and wearing your cleanest clothes? Please wash your hands.
  • I need you to ignore any instincts you have. Ask me whether you should do something, before you instinctively do it. Going to shake hands? No. Hug? No. Kiss? WTF? No. Use my wife’s dedicated bathroom? No. Bring my wife flowers? No. Any other instincts you have? No.
  • Please wash your hands.
  • You need to perform any standard procedure on my wife? Wash your hands. Put on a mask. Put on gloves. Oops, accidentally touched your face instinctively with the gloves on? Take the gloves off, wash your hands and put on new gloves.

Repeat this several times a day for many years. I am proudly a germaphobe. The boogeyman is real. He is resilient. He will wait for you to let down your guard. Being a germaphobe is just good thinking when it comes to AML.

But I was not always a germaphobe. Let me flash back to my teenage years.

I am Unbreakable

When I was a teenager, I was in two near-death car accidents.

When I was 16, I went head first through the passenger window of a Lincoln Town car at full speed on my ten-speed bicycle and broke my jaw and clavicle. I was not wearing a helmet. I was in ICU for a week, and my mouth was wired shut for 8 weeks and I ate everything through a straw.

Note: Do not ever try and blend pizza and eat it through a straw. Ever.

When I was 17, I was thrown out of the rear-windshield of a Ford Mustang. I was in the rear left passenger seat, and was not wearing a seat belt. I recall being thrown up against the back of the driver-seat, and then having the wind knocked out of me as I hit the ground. I picked myself up off the ground and wandered around looking for my shoe. Except for a few stitches to my forehead where I had a laceration, I was miraculously fine.

My friends and I told these stories so many times at parties. The stories are fun to tell and are embellished with humor to elevate them to the level of myth and legend. They sound unbelievable and have a happy ending — I’m able to tell them.

When you hear and tell stories like this enough times, you start to believe you might actually be a superhero. I am not Superman. I was just lucky. These were just two random points in time where dice were rolled, and I didn’t crap out.

Fast forward to my 40’s. I’ve learned who the real heroes are, and they give more of themselves than any fun sounding story can at a party, no matter how many times you tell it.

AML humbled me

I spent a lot of time partying and going to bars in my early twenties. After my two near-death car accidents in my teens, I wanted to enjoy life as much as I could.

I did not know in my 20’s what Acute Myeloid Leukemia was. I didn’t understand what chemo does to the body. Chemo kills cancer cells, but it also kills the person taking it. Maybe not immediately, but it renders the person taking the chemo unable to defend themselves against common germs by killing their white blood cells. Chemo also kills red blood cells so you will have low energy. Chemo also destroys platelets so blood doesn’t clot and you bruise and bleed very easily. The picture above shows examples of the ease of bruising when unsuccessfully inserting an IV or taking blood when platelets are low.

AML is not like a car accident. AML is like being on a rowboat with a leak in the middle of the ocean with hungry sharks swimming all around. After several years of fighting off sharks and paddling non-stop toward any shore you can find, you might get lucky. Or, as my wife and I discovered, you might wind up back in the middle of the ocean for round two.

I have never told this story to more than close friends and family, because it terrifies me. The boogeyman is real. He is invisible. And he is waiting for you to let your shield down.

I see you COVID-19. Shield’s up!

I am not an expert or a hero, I am just a shield

Listen to the doctors and scientists and experts on disease control. You may not have an immediate risk of dying from COVID-19. The percentages might sound ridiculously low to you.

The percentages for other folks who are compromised are much scarier. Without the threat of COVID-19, the percentage chance my wife was given initially to survive given her risk assessment was 46%. We had the option to go with chemo only or a Stem Cell Transplant, and we went the chemo only route, due to another complication… My wife has Celiac Disease. Every medication, food, ointment, pain treatment, etc. had to be verified Gluten Free directly by the manufacturer. The Stem Cell transplant was going to require even more potential drugs and treatments. After the chemo only route was unsuccessful, we only had one option left. We then went with the Stem Cell Transplant option. The percentage survival chance we were given for Stem Cell Transplant after trying the chemo only route was 20%.

When the percentages are this challenging, you need the help of all the heroes you can find, and many you will never meet.

My family has been fortunate and blessed as it has been six and a half years since we began this war against AML. My heart and thoughts are with every family who has to fight their own war against their own boogeymen.

The Real Heroes

Remember, this story is not about me, and it is not about you. This story is about the real heroes of COVID-19 and AML and every other medical challenge we face as humans.

Who are the heroes?

  • The kind and generous folks that donate their blood, platelets and stem cells to help others that they will most likely never meet.
  • The nurses, doctors, pharmacists, and hospital staff, and all of the medical support staff who give their day in and day out to provide our loved ones with the care they need to survive.
  • Your family, friends, and colleagues who help you and your family daily in numerous ways while you are on auto-pilot trying to shield your loved one, and provide for your family and make sure you have health insurance, a place to sleep, and food to eat.

The heroes are real people with their own lives. They are forever part of our family. Thank you for all you have done and continue to do. We love you all.

This part is about you and me and choices we make

Are you going to a bar or event tonight to celebrate life? Do you know if you will pick up an invisible boogeyman to give to one of the heroes above?

I’m sure the heroes will be fine. They’ll stick it out at home for a couple of weeks if you get them sick, and then get back to their jobs when they’re well. But if a nurse or doctor or pharmacist can’t go to work, and can’t help a patient in their fight with AML or some other horrible cancer or disease, maybe their germaphobe shields won’t matter. Maybe it will only mean a half hour or hour longer to get the blood transfusion ordered from the new intern who is on call because the doctor couldn’t make it. Maybe the on-call pharmacist will miss the note about making sure all medications are gluten free. Maybe the healthy person who would otherwise be donating life-saving blood, platelets or stem cells will just have to wait a month or two before they can give them again.

Maybe this will mean the difference between life or death for someone else.

The night out or event you really want to attend can wait a few weeks or months. Have a drink at home and celebrate life with friends online and don’t risk those immediately around you. Take the appropriate precautions as indicated by the CDC and health professionals.

If you just happen to be hanging out temporarily with the boogeyman, please keep him to yourself. The person you may unintentionally harm with your personal disregard may have a face, a name and a family who loves them and wants them around for as many years as possible.

Thank you!

My beautiful wife and AML Warrior who is the love and inspiration of my life — July 2013
Me and the love of my life in June 2019 — Thank you to all the heroes who made this picture possible!


Eclipse Oomph: Suppress Welcome Page

by kthoms at March 19, 2020 04:37 PM

I am frequently spawning Eclipse workspaces with Oomph setups, and the first thing I do when a new workspace is provisioned is close Eclipse’s welcome page. So I wanted to suppress that for a current project setup and started searching for where Eclipse stores the preference that disables the intro page. The location of that preference is within the workspace directory at

.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

The content of the preference file is

eclipse.preferences.version=1
showIntro=false

So, to make Oomph create the preference file before the workspace is started for the first time, use a Resource Creation task and set the Target URL to

${workspace.location|uri}/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.ui.prefs

Then put the above-mentioned preference content as the Content value.



JBoss Tools and Red Hat CodeReady Studio for Eclipse 2019-12

by jeffmaury at March 19, 2020 01:04 PM

JBoss Tools 4.14.0 and Red Hat CodeReady Studio 12.14 for Eclipse 2019-12 are here waiting for you. Check it out!

crstudio12

Installation

Red Hat CodeReady Studio comes with everything pre-bundled in its installer. Simply download it from our Red Hat CodeReady product page and run it like this:

java -jar codereadystudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) CodeReady Studio require a bit more:

This release requires at least Eclipse 4.14 (2019-12), but we recommend using the latest Eclipse 4.14 (2019-12) JEE bundle, since it comes with most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat CodeReady Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/photon/stable/updates/

What is new?

Our main focus for this release was new tooling for the Quarkus framework, improvements for container-based development, and bug fixing. Eclipse 2019-12 itself has a lot of cool new stuff, but let me highlight just a few updates in both Eclipse 2019-12 and the JBoss Tools plugins that I think are worth mentioning.

OpenShift

OpenShift Container Platform 4.3 support

With the new OpenShift Container Platform (OCP) 4.3 now available (see this article), JBoss Tools is compatible with this major release in a transparent way. Just define your connection to your OCP 4.3 based cluster as you did before for an OCP 3 cluster, and use the tooling!

New OpenShift Application Explorer view

A new OpenShift Application Explorer view has been added in addition to the OpenShift Explorer. It is based on OpenShift Do. It provides a different and simplified user experience, allowing easy and rapid feedback through the inner loop, as well as debugging.

Let’s see it in action.

Opening the OpenShift Application Explorer view

If you opened a brand new workspace, you should see the OpenShift Application Explorer view in the list of views available on the bottom part of the screen:

application explorer

If you don’t see the view listed, you can open it through the Window → Show View → Other menu; enter open in the filter text box and then select OpenShift Application Explorer:

application explorer1
application explorer2

Expanding the root node will display the list of projects available on the cluster:

application explorer3
Java-based microservice

We will show how to deploy a Java-based microservice and how to use the various features. But first we need to load the component source code into our workspace. Thanks to the launcher wizard, we can do that easily. Press Ctrl+N and select the Launcher project wizard:

application explorer4

Then click the Next button:

Select rest-http in the Mission field, vert.x community in the Runtime field, myservice in the Project name field:

application explorer5

Then click the Finish button: a new project will be added to your workspace. Once dependency resolution has completed, we’re ready to start playing with the cluster.

Create the component

Now that we have the source code, we can create the component. From the OpenShift Application Explorer view, right-click the project (myproject), then click the New → Component menu:

application explorer6

Enter myservice in the Name field, click the Browse button to select the project we have just created, select java in the Component type field, select 8 in the Component version field, enter myapp in the Application field and uncheck the Push after create check-box:

application explorer7

Then click the Finish button. The component will be created and expanding the project node will now show the application that contains our component:

application explorer8

Expanding the application will now display our component:

application explorer9

The component has been created, but it is not yet deployed on the cluster (as we unchecked the Push after create check-box). In order to deploy it, right-click the component and click the Push menu. The deployment will be created and then a build will be launched. A new window will be created in the Console view. After a while, you should see the following output:

application explorer10

The component is now deployed to the cluster, but we cannot access it yet, as we need to define a URL for external access. Right-click the component and click the New → URL menu:

application explorer11

Enter url1 in the Name field and select 8080 in the Port field:

application explorer12

Then click on the Finish button. The URL is created, but not yet on the cluster, so we need to push the component again so that the local configuration is synchronized with the configuration on the cluster. The Console window will display a message indicating that a push is now required:

application explorer13

So push the component again (component → Push).

Let’s check that we can now access the service. Expand the component level so that we can see the URL we have just created:

application explorer14

Right-click the URL and click the Open in Browser menu; you should see a new browser window:

application explorer15

You can test the service: enter demo in the text box and click the Invoke button:

application explorer16
Feedback loop

We will now see how we can get fast feedback on code changes. So let’s modify the application code and see how we can synchronize the changes to the cluster.

In the Project Explorer view, locate the HttpApplication.java file:

application explorer17

Double click on the file to open the editor:

application explorer18

On line 14, change the line:

  protected static final String template = "Hello, %s!";

to

  protected static final String template = "Hello, %s!, we modified the code";

and press the Ctrl+S key in order to save the file.

From the OpenShift Application Explorer, right-click the component (myservice) and click the Push menu to send the changes to the cluster: the component will be built again on the cluster with the new changes, and after a few seconds it will be available again:

application explorer19

Select the browser window again, enter demo1 in the textbox (we need to change the value we used before in order to make sure cache is not involved) and click the Invoke button again:

application explorer20

We’ve seen that, through a sequence of code modifications followed by a synchronize action (push) to the cluster, we can get very fast feedback. If you don’t want to manually synchronize with the cluster (push), you can opt to synchronize automatically with the Watch action: each time a code modification is made locally on your workstation, a new build is automatically launched on the cluster.

Going further: debug your application on the cluster

Testing an application through code changes is a great achievement so far, but it may not be enough for complex applications, where we need to understand how the code behaves without having to go through the UI. That’s why the next step is to be able to debug our application live on the cluster.

The new OpenShift Application Explorer allows such a scenario. We will first set a breakpoint in our application code. Select the HttpApplication.java file again and scroll down to the greeting method:

application explorer21

On line 41, double click in the left ruler column so that a breakpoint is set:

application explorer22

We are now ready to debug our application. In order to do that, we need to launch a local (Java, in our case) debugger that will be connected to our application on the cluster. This is what the Debug action does: right-click the component (myservice) and click the Debug menu. You will see that port forwarding has been started so that our local (Java) debugger can connect to the remote Java virtual machine:

application explorer23

and then a local (Java) debugger is launched and connected to that port. Let’s check now that we can debug our application:

Select the browser window again, enter demo2 in the textbox (we need to change the value we used before in order to make sure the cache is not involved) and click the Invoke button again. As our breakpoint is hit, you will be asked if you want to switch to the Debug perspective (this may not be displayed if you previously checked the Remember my decision checkbox):

application explorer24

Click the Switch button and you will see the Debug perspective:

application explorer25

You are now debugging a Java component running on a remote cluster just as if it were running locally on your workstation. Please note that we demoed this feature using a Java-based component, but we also support the same feature for NodeJS-based components.

Quarkus

Starting with this release, we’ve added a new Quarkus tooling for applications built on top of the Supersonic Subatomic Java Quarkus framework.

Quarkus project creation wizard

A new wizard has been added to create a new Quarkus application project in your workspace. In order to launch it, first press Ctrl+N to get the list of available wizards:

quarkus1

In the filter text box, enter the qu characters to filter the list of wizards:

quarkus2

Select the Quarkus Project wizard and click the Next button:

quarkus3

The Project type combo allows you to choose between Maven and Gradle as the tool used to manage your project. We’ll go with Maven for this tutorial.

Enter a project name (we will use code-with-quarkus) and click the Next button:

quarkus4

This dialog allows you to choose various parameters for your project, like the project coordinates (group id, artifact id, and version) along with the base REST endpoint information. We’ll use the defaults, so click the Next button:

quarkus5

This dialog allows you to select which Quarkus extensions you want to add to your project. The extensions are grouped by category, so first select a specific category in the left table. We will choose the Web one:

quarkus6 1

You should have noticed that the middle table has been updated. In order to add an extension, double click on the extension in the middle table. We will add RESTEasy JAX-RS and RESTEasy Qute (a templating engine):

quarkus7 1

You should have noticed that the extensions that you double clicked on are now being added to the right table. If you want to remove an extension from the list of selected ones, double click again either in the center table or in the right table.

We are now all set, so click the Finish button to launch the project creation. The project creation job is launched, dependencies are retrieved, and after a while the new project will appear in the Project Explorer window:

quarkus8

We have successfully created our first Quarkus project. Let’s see now how we can launch this application and debug it.

Running the Quarkus application

Running a Quarkus application can be done from the workbench Run configurations. Select the Run → Run Configurations… menu to display the dialog that allows you to create a Run configuration.

quarkus9

Scroll down until the Quarkus Application is visible and select it:

quarkus10 1

Click on the New configuration button (top left):

quarkus11 1

A workspace project needs to be associated with the configuration so click on the Browse button to see the project selection dialog:

quarkus12 1

As the workspace contains a single project, it is automatically selected and we can click on the OK button:

quarkus13 1

The configuration is now ready to be used. So let’s start our Quarkus application by clicking on the Run button:

You should see a new Console being displayed.

quarkus14

The application is being built and after a while, it will be started:

quarkus15

Debugging the Quarkus application

Debugging a Quarkus application is just as simple as launching, in Debug mode, the configuration we’ve just created. You just need to open the Run → Debug Configurations… menu and click the Debug button.

It will start the Quarkus application as in the previous paragraph, but it will also connect a remote JVM debug configuration to your running Quarkus application. So if you have set breakpoints in your application source files, the execution will automatically stop there.

application.properties content assist

Every Quarkus application is configured through a configuration file called application.properties.

The content of this configuration file depends on the set of Quarkus extensions that your application is using. Some settings are mandatory, some are not, and the possible values are specific to the nature of each setting: boolean, integer, a limited set of values (enumerations).

So, as a developer, you need to look at various guides and documentation (the core Quarkus one and the extension-specific ones).

So Quarkus Tools provides content assist for those specific files, which:

  • validates the content of the application.properties files

  • provides you with the possible setting names and values

Let’s see it in action.

Go to src/main/resources/application.properties in the project, right-click, and select Open With → Generic Text Editor:

quarkus16

Go to the third line of the file and invoke code completion (Ctrl + Space):

quarkus17

For each setting, documentation is displayed when you hover over the setting. Let’s try to add quarkus.http.port to the file and hover over this name:

quarkus18

If we enter a wrong value (false instead of a numeric value), then the error will be highlighted:

quarkus19
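For reference, a minimal application.properties might look like this (values are illustrative):

# HTTP port the application listens on; a non-numeric value is flagged as shown above
quarkus.http.port=8080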

This is the first set of features that will be integrated into the next version of JBoss Tools. We encourage you to use it, and if you are missing features and/or enhancements, don’t hesitate to report them here: JBoss Tools issue tracker

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

Runtime Provider Updates

The Hibernate 5.4 runtime provider now incorporates Hibernate Core version 5.4.12.Final and Hibernate Tools version 5.4.12.Final.

The Hibernate 5.3 runtime provider now incorporates Hibernate Core version 5.3.15.Final and Hibernate Tools version 5.3.15.Final.

Deprecation

The Forge, Java Server Tools (JST) and Visual Page Editor (VPE) have been deprecated. Even though they received an update with this release, we have no plans to maintain them or add new features, and they may be removed in the future.

In addition, the adapters for Red Hat JBoss Enterprise Application Server 4.3 and 5.0 have also been deprecated.

Platform

Views, Dialogs and Toolbar

New view menu icon

The view menu chevron icon (▽) is replaced by a modern equivalent, the vertical ellipsis (⠇).

Almost every view has a menu that may contain additional configuration settings like filters, layout settings, and so on. The view menu was often overlooked and we expect that this change will help users to find it.

view menu
Find Actions: The improved Quick Access

The action formerly called Quick Access has been retitled to Find Actions to better emphasize its goal.

The related UI has changed a bit to improve its usage and accessibility:

  • The widget item is now a regular toolbar item (button-like)

  • An icon is shown

  • Right-clicking on the tool item works and shows typical actions, including Hide

  • The proposals are now a regular dialog, centered on the workbench

These changes will greatly improve the experience if you’re using a screen reader, as it relies on a more standardized focus state. This also leverages all the default and usual accessibility features of dialogs (movable, resizable, etc.).

Loading the proposals has also been improved to avoid UI freezes.

Find Actions finds text in file contents

Find Actions is now extended by the Quick Text Search feature to also show potential text matches from file contents in the proposals.

file content find action

If the Quick Text Search bundle wasn’t started yet, you may miss those matches. In this case, you can use Find Actions itself to activate the Quick Text Search by finding and selecting the Activate bundle for 'File content' proposals entry.

activate file content
Find Actions lists workspace files

Find Actions can now list matching file names from the workspace (similar to the Open Resource dialog). Upon selection the file is opened in the editor.

find actions resources
Inline rename for simple resources while in Project Explorer

In the Project Explorer, renaming (with the F2 shortcut or Rename context menu) will start an inline rename for normal resources when other files aren’t affected by the rename.

project explorer inline renaming

In cases where other files are affected by the rename, or the rename operation is customized, the rename dialog will appear as it previously did.

Text Editors

Show problem markers inline

You can now see the errors, warnings, and info markers inline in most of the text editors. No more mousing around to see the actual error message!

annotation code mining jdt

You can see the available quick fixes by clicking on the message.

annotation code mining quickfix

You can enable it on the General > Editors > Text Editors preference page and set Show Code Minings for Annotations to:

  • None (default)

  • Errors only

  • Errors and Warnings

  • Errors, Warnings and Info

Backspace/delete can treat spaces as tabs

If you use the Insert spaces for tabs option, you can now also change the behavior of the backspace and delete keys to remove multiple spaces at once, as if they were a tab.

The new setting is called Remove multiple spaces on backspace/delete and is found on the General > Editors > Text Editors preference page.

delete spaces as tabs

Debug

Collapse All Button in the Debug View

In the Debug View, you can now use the new Collapse All button to collapse all the launches.

Before collapsing:

collapse all debug view before

After collapsing:

collapse all debug view after
Control character interpretation in Console View

The Console View can now interpret the control characters backspace (\b) and carriage return (\r).

This feature is disabled by default. You can enable it on the Run/Debug > Console preference page.

animated progress in console
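With the interpretation enabled, a program can render a single-line progress indicator by overwriting the same console line. A minimal sketch:

public class Progress {
  public static void main(String[] args) throws InterruptedException {
    for (int i = 0; i <= 100; i += 10) {
      // \r moves the caret back to the start of the line,
      // so each update overwrites the previous one
      System.out.print("Progress: " + i + "%\r");
      Thread.sleep(200);
    }
    System.out.println();
  }
}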

Themes and Styling

Improvements in UI Forms Styling

CSS customization of ExpandableComposite and Section was reworked to give you more control over their styling. In dark mode, those elements now integrate better with other Form elements.

Old:

pom dark old

New:

pom dark new
Perspective switcher gets aligned with normal toolbar styling

The special styling of the perspective switcher has been removed to make the toolbar look consistent. This also reduces OS-specific styling issues with the perspective switcher.

Old:

old perspective switcher

New:

new perspective switcher
Usage of consistent colors for the dark theme

The usage of different shades of gray in the dark theme was reduced.

The styling of the widgets is also not based on the selected view anymore, which makes the UI more consistent.

General Updates

Ant 1.10.7

Eclipse has adopted Ant version 1.10.7.

Support for the Ant include task added

The Ant include task (available in the Ant library since 1.8.0) is now fully recognized by the ant-ui-plugin and validated accordingly.

Java Development Tools (JDT)

Java 13 Support

Java 13

Java™ 13 is available and Eclipse JDT supports Java 13 for the Eclipse 4.14 release.

The release notably includes the following Java 13 features:

  • JEP 354: Switch Expressions (Preview).

  • JEP 355: Text Blocks (Preview).

Please note that these are preview language features, and hence the enable preview option should be on. For an informal introduction to the support, please refer to the Java 13 Examples wiki.
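As a hedged sketch of the two preview features (requires Java 13 with preview enabled; names are illustrative):

public class Java13Demo {

  // JEP 354: switch expression with arrow labels
  static String describe(int day) {
    return switch (day) {
      case 6, 7 -> "weekend";
      default -> "weekday";
    };
  }

  public static void main(String[] args) {
    // JEP 355: text block
    String html = """
        <html>
          <body>Hello</body>
        </html>
        """;
    System.out.println(describe(6));
    System.out.println(html);
  }
}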

Keyboard shortcut for Text Block creation

A keyboard shortcut Ctrl + Shift + ' is added in the Java Editor for text blocks, which is a preview feature added in Java 13.

Conditions under which this keyboard shortcut works are:

  • The Java Project should have a compliance of 13 or above and the preview features should be enabled.

  • The selection in the editor should not be part of a string or a comment or a text block.

Examples:

textblock pre creation1

Pressing the shortcut gives:

textblock post creation1

You can also encompass a selected text in text block as below:

textblock pre creation2

On pressing the shortcut, you get this:

textblock post creation2

Java Editor

Remove unnecessary array creation

A new cleanup action Remove unnecessary array creation has been added. It will remove explicit array creation for varargs parameters.

unnecessary array creation option

For the given code:

unnecessary array creation before

After cleanup, you get this:

unnecessary array creation after
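As a sketch, the cleanup turns an explicit array passed to a varargs method into plain arguments:

import java.util.Arrays;
import java.util.List;

// Before: explicit array creation for the varargs parameter
List<String> before = Arrays.asList(new String[] { "a", "b", "c" });

// After the cleanup: the array creation is removed
List<String> after = Arrays.asList("a", "b", "c");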
Push negation down in expression

A new Java cleanup/save action Push down negation has been added. It removes double negations by inverting the affected expressions.

For instance:

!!isValid; becomes isValid;

!(a != b); becomes (a == b);

push down negation
Provide templates for empty Java source files

When dealing with empty Java source files, some basic templates (class, interface, enum) will now be available from the content assist popup.

templates empty java file
Postfix completion proposal category

Postfix completion allows certain kinds of language constructs to be applied to the previously entered text.

For example, entering "input text".var and selecting the var - Creates a new variable proposal will result in String name = "input text".

postfix completion
try-with-resources quickfix

A quickfix has been added to create a try-with-resources statement based on the selected lines. Lines that are selected must start with declarations of objects that implement AutoCloseable. These declarations are added as the resources of the try-with-resources statement.

If there are selected statements that are not eligible resources (such as Objects that don’t implement AutoCloseable), then the first such statement and all the following selected statements will be placed in the try-with-resources body.

Method before applying try-with-resources:

tryWithResource1

Select all the lines inside the method, then use the shortcut Ctrl+1 and click Surround with try-with-resources in the list:

tryWithResource2

This results in:

tryWithResource3
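A hedged before/after sketch of the transformation (file names and method bodies are illustrative):

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

// Before: resources declared at the start of the selected lines
void copy() throws IOException {
    FileInputStream in = new FileInputStream("in.txt");
    FileOutputStream out = new FileOutputStream("out.txt");
    out.write(in.read());
}

// After "Surround with try-with-resources": both declarations become resources
void copyAfter() throws IOException {
    try (FileInputStream in = new FileInputStream("in.txt");
            FileOutputStream out = new FileOutputStream("out.txt")) {
        out.write(in.read());
    }
}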
Javadoc tag checking for modules

Support has been added to check the Javadoc of a module-info.java file to report missing and duplicate @uses and @provides tags depending on the compiler settings (Preferences > Java > Compiler > Javadoc).

checkModuleJavadoc
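A sketch of a module-info.java whose Javadoc can now be checked; the module and type names are hypothetical:

/**
 * Application module.
 *
 * @uses com.example.spi.Codec
 * @provides com.example.spi.Codec
 */
module com.example.app {
    uses com.example.spi.Codec;
    provides com.example.spi.Codec with com.example.internal.JsonCodec;
}

A missing or duplicate @uses/@provides tag in this comment is then reported according to the configured severity.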

Java Formatter

Formatting of text blocks

The code formatter can now handle text blocks, which is a preview feature added in Java 13. It’s controlled by the Text block indentation setting, found right in the Indentation section of the Profile Editor (Preferences > Java > Code Style > Formatter > Edit…​).

By default, text block lines are indented the same way as wrapped code lines, that is with two extra tabs relative to the starting indentation (or whatever is set as Default indentation for wrapped lines in the Line Wrapping section). You can also set it to use only one tab for indentation (Indent by one), align all lines to the position of the opening quotes (Indent on column), or preserve the original formatting (Do not touch).

formatter text block
Blank lines between Javadoc tags

The code formatter can now divide Javadoc tags into groups (by type, for example @param, @throws, @return) and separate these groups with blank lines. This feature can be turned on in the Comments > Javadocs section by checking the Blank lines between tags of different type box, as illustrated below.
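With the option enabled, the formatter would, for example, arrange an illustrative Javadoc comment like this:

/**
 * Transfers the given amount between two accounts.
 *
 * @param from the source account
 * @param to the target account
 *
 * @return the resulting balance of the source account
 *
 * @throws IllegalArgumentException if the amount is negative
 */
long transfer(String from, String to, long amount);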

Space after not operator

A new setting has been added to control whether a space should be added after the not (!) operator, independently from other unary operators. To find it, expand the Whitespace > Expressions > Unary operators sections and go to the last checkbox.

formatter space after not

JUnit

BREE update for org.eclipse.jdt.junit.runtime

The Bundle Required Execution Environment (BREE) for the org.eclipse.jdt.junit.runtime bundle is now J2SE-1.5.

Debug

No suspending on exception recurrence

A new workspace preference has been added for exception breakpoints: Suspend policy for recurring exception instances controls whether the same exception instance may cause the debugger to suspend more than once.

preference exception recurrence

This option is relevant when debugging an application that has try blocks at several levels of the architecture. In this situation an exception breakpoint may fire multiple times for the same actual exception instance: A throw statement inside a catch block may re-throw the same exception. The same holds for each finally block or try-with-resources block.
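A sketch of the situation, with illustrative names: an IOException breakpoint fires in repository(), and the debugger previously suspended again when the same instance was re-thrown in service():

import java.io.IOException;

public class Recurrence {

  void service() throws IOException {
    try {
      repository();
    } catch (IOException e) {
      System.err.println("logging: " + e);
      throw e; // the same exception instance is re-thrown: the breakpoint fires again
    }
  }

  void repository() throws IOException {
    throw new IOException("boom"); // an IOException breakpoint fires here first
  }
}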

When the debugger stops due to an exception breakpoint, you may want to continue your debug session by pressing Resume (F8), but all that catching and re-throwing will force you to observe all locations where the same exception will surface again and again. Suspending at all try blocks on the call stack may also spoil your context of open Java editors, by opening more editors of classes that are likely irrelevant for the debugging task at hand.

The JDT Debugger will now detect this situation, and the first time it notices the same exception instance recurring at the surface, a new question dialog is shown:

dialog exception recurrence

If you select Skip in this dialog, the current exception instance will be dismissed for good. Still, new instances of the same exception type will cause suspending when they are thrown.

If you check Remember my decision your choice will be stored in the mentioned workspace preference to be effective for all exception breakpoints.

Even after choosing Skip (or Only once in the preferences), you can get the old behavior simply by pressing Step Return (F7) instead of Resume.

JDT Developers

Flag whether content assist extension needs to run in UI thread

The existing org.eclipse.jdt.ui.javaCompletionProposalComputer, org.eclipse.jdt.ui.javadocCompletionProposalComputer and org.eclipse.jdt.ui.javaCompletionProposalSorters extension points now accept a new attribute, requiresUIThread, that lets a developer declare whether running in the UI thread is required.

This information will be used by the Content Assist operation to allow some optimizations and prevent UI freezes by reducing the amount of work happening in the UI thread.

To preserve backward compatibility, the default value for this attribute (if unset) is true, meaning the extension is expected to run in the UI thread.

And more…​

You can find more noteworthy updates on this page.

What is next?

With JBoss Tools 4.14.0 and Red Hat CodeReady Studio 12.14 out, we are already working on the next release for Eclipse 2020-03.

Enjoy!

Jeff Maury


by jeffmaury at March 19, 2020 01:04 PM

WTP 3.17 Released!

March 18, 2020 03:30 PM

The Eclipse Web Tools Platform 3.17 has been released! Installation and updates can be performed using the Eclipse IDE 2020-03 Update Site or through the Eclipse Marketplace. Release 3.17 is included in the 2020-03 Eclipse IDE for Enterprise Java Developers, with selected portions also included in several other packages. Adopters can download the R3.17 build directly and combine it with the necessary dependencies.

More news


March 18, 2020 03:30 PM

EMF Forms and EMF Client Platform 1.24.0 released!

by Jonas Helming and Maximilian Koegel at March 18, 2020 02:43 PM

We are happy to announce that with the Eclipse Release 2020-03, we have also shipped EMF Forms and EMF Client Platform...

The post EMF Forms and EMF Client Platform 1.24.0 released! appeared first on EclipseSource.


by Jonas Helming and Maximilian Koegel at March 18, 2020 02:43 PM

Easy SSO for Vert.x with Keycloak

by thomasdarimont at March 16, 2020 12:00 AM

TL;DR

In this blog post you’ll learn:

  • How to implement Single Sign-on with OpenID Connect
  • How to use Keycloak’s OpenID Discovery to infer OpenID provider configuration
  • How to obtain user information
  • How to check for authorization
  • How to call a Bearer protected service with an Access Token
  • How to implement a form-based logout

Hello Blog

This is my first post in the Vert.x Blog and I must admit that, up until now, I have never used Vert.x in a real project. “Why are you here?”, you might ask… Well, I currently have two main hobbies: learning new things and securing apps with Keycloak. A few days ago I stumbled upon the Introduction to Vert.x video series on YouTube by Deven Phillips and I was immediately hooked. Vert.x was a new thing for me, so the next logical step was to figure out how to secure a Vert.x app with Keycloak.

For this example I built a small web app with Vert.x that shows how to implement Single Sign-on (SSO) with Keycloak and OpenID Connect, obtain information about the current user, check for roles, call bearer-protected services, and properly handle logout.

Keycloak

Keycloak is an Open Source Identity and Access Management solution which, among many other things, provides support for OpenID Connect based Single Sign-on. I briefly looked for ways to secure a Vert.x app with Keycloak and quickly found an older Vert.x Keycloak integration example in this very blog. Whilst this is a good start for beginners, the example has a few shortcomings, e.g.:

  • It uses a hardcoded OpenID provider configuration
  • It features a very simplistic integration (for the sake of simplicity)
  • It does not use any user information
  • It does not show logout functionality

That nerd-sniped me a bit, and so, after a long day of consulting work, I sat down to create an example of a complete Keycloak integration based on Vert.x OpenID Connect / OAuth2 support.

So let’s get started!

Keycloak Setup

To secure a Vert.x app with Keycloak we of course need a Keycloak instance. Although Keycloak has a great getting-started guide, I wanted to make it a bit easier to put everything together, so I prepared a local Keycloak Docker container, as described here, that you can start easily and that comes with all the required configuration in place.

The preconfigured Keycloak realm named vertx contains a demo-client for our Vert.x web app and a set of users for testing.

docker run \
  -it \
  --name vertx-keycloak \
  --rm \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  -e KEYCLOAK_IMPORT=/tmp/vertx-realm.json \
  -v $PWD/vertx-realm.json:/tmp/vertx-realm.json \
  -p 8080:8080 \
  quay.io/keycloak/keycloak:9.0.0

Vert.x Web App

The simple web app consists of a single Verticle, runs on http://localhost:8090 and provides a few routes with protected resources. You can find the complete example here.

The web app contains the following routes with handlers:

  • / - The unprotected index page
  • /protected - The protected page, which shows a greeting message; users need to log in to access pages beneath this path.
  • /protected/user - The protected user page, which shows some information about the user.
  • /protected/admin - The protected admin page, which shows some information for admins; only users with the role admin can access this page.
  • /protected/userinfo - The protected userinfo page, which obtains user information from the bearer-token-protected userinfo endpoint in Keycloak.
  • /logout - The protected logout resource, which triggers the user logout.

Running the app

To run the app, we need to build our app via:

cd keycloak-vertx
mvn clean package

This creates a runnable jar, which we can run via:

java -jar target/*.jar

Note that Keycloak needs to be running, since our app will try to fetch its configuration from Keycloak on startup.

If the application is running, just browse to: http://localhost:8090/.

An example interaction with the app can be seen in the following gif: Vert.x Keycloak Integration Demo

Router, SessionStore and CSRF Protection

We start the configuration of our web app by creating a Router where we can add custom handler functions for our routes. To properly handle the authentication state we need to create a SessionStore and attach it to the Router. The SessionStore is used by our OAuth2/OpenID Connect infrastructure to associate authentication information with a session. By the way, the SessionStore can also be clustered if you need to distribute the server-side state.

Note that if you want to keep your server stateless but still want to support clustering, then you could provide your own implementation of a SessionStore which stores the session information as an encrypted cookie on the Client.
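
As a minimal sketch of the clustering option mentioned above, assuming a clustered Vert.x instance, only the store creation changes (ClusteredSessionStore ships with vertx-web):

// io.vertx.ext.web.sstore.ClusteredSessionStore, requires a clustered Vertx
SessionStore clusteredStore = ClusteredSessionStore.create(vertx);
router.route().handler(SessionHandler.create(clusteredStore));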

Router router = Router.router(vertx);

// Store session information on the server side
SessionStore sessionStore = LocalSessionStore.create(vertx);
SessionHandler sessionHandler = SessionHandler.create(sessionStore);
router.route().handler(sessionHandler);

In order to protect against CSRF attacks it is good practice to protect HTML forms with a CSRF token. We need this for the logout form that we’ll see later.

To do this we configure a CSRFHandler and add it to our Router:

// CSRF handler setup required for logout form
String csrfSecret = "zwiebelfische";
CSRFHandler csrfHandler = CSRFHandler.create(csrfSecret);
router.route().handler(ctx -> {
            // Ensures that the csrf token request parameter is available for the CsrfHandler
            // after the logout form was submitted.
            // See "Handling HTML forms" https://vertx.io/docs/vertx-core/java/#_handling_requests
            ctx.request().setExpectMultipart(true);
            ctx.request().endHandler(v -> csrfHandler.handle(ctx));
        }
);

Keycloak Setup via OpenID Connect Discovery

Our app is registered as a confidential OpenID Connect client with Authorization Code Flow in Keycloak, thus we need to configure client_id and client_secret. Confidential clients are typically used for server-side web applications, where the client_secret can be stored securely. You can find out more about the different Client Access Types in the Keycloak documentation.

Since we don’t want to configure things like OAuth2 / OpenID Connect endpoints ourselves, we use Keycloak’s OpenID Connect discovery endpoint to infer the necessary OAuth2 / OpenID Connect endpoint URLs.

String hostname = System.getProperty("http.host", "localhost");
int port = Integer.getInteger("http.port", 8090);
String baseUrl = String.format("http://%s:%d", hostname, port);
String oauthCallbackPath = "/callback";

OAuth2ClientOptions clientOptions = new OAuth2ClientOptions()
    .setFlow(OAuth2FlowType.AUTH_CODE)
    .setSite(System.getProperty("oauth2.issuer", "http://localhost:8080/auth/realms/vertx"))
    .setClientID(System.getProperty("oauth2.client_id", "demo-client"))
    .setClientSecret(System.getProperty("oauth2.client_secret", "1f88bd14-7e7f-45e7-be27-d680da6e48d8"));

KeycloakAuth.discover(vertx, clientOptions, asyncResult -> {

    OAuth2Auth oauth2Auth = asyncResult.result();

    if (oauth2Auth == null) {
        throw new RuntimeException("Could not configure Keycloak integration via OpenID Connect Discovery Endpoint. Is Keycloak running?");
    }

    AuthHandler oauth2 = OAuth2AuthHandler.create(oauth2Auth, baseUrl + oauthCallbackPath)
        .setupCallback(router.get(oauthCallbackPath))
        // Additional scopes: openid for OpenID Connect
        .addAuthority("openid");

    // session handler needs access to the authenticated user, otherwise we get an infinite redirect loop
    sessionHandler.setAuthProvider(oauth2Auth);

    // protect resources beneath /protected/* with oauth2 handler
    router.route("/protected/*").handler(oauth2);

    // configure route handlers
    configureRoutes(router, webClient, oauth2Auth);
});

getVertx().createHttpServer().requestHandler(router).listen(port);

Route handlers

We configure our route handlers via configureRoutes:

private void configureRoutes(Router router, WebClient webClient, OAuth2Auth oauth2Auth) {

    router.get("/").handler(this::handleIndex);

    router.get("/protected").handler(this::handleGreet);
    router.get("/protected/user").handler(this::handleUserPage);
    router.get("/protected/admin").handler(this::handleAdminPage);

    // extract discovered userinfo endpoint url
    String userInfoUrl =  ((OAuth2AuthProviderImpl)oauth2Auth).getConfig().getUserInfoPath();
    router.get("/protected/userinfo").handler(createUserInfoHandler(webClient, userInfoUrl));

    router.post("/logout").handler(this::handleLogout);
}

The index handler exposes an unprotected resource:

private void handleIndex(RoutingContext ctx) {
    // unprotected landing page with a link into the protected area
    respondWithOk(ctx, "text/html",
            "<h1>Welcome to Vert.x Keycloak Example</h1>"
            + "<a href=\"/protected\">Protected</a>");
}

Extract User Information from the OpenID Connect ID Token

Our app exposes a simple greeting page which shows some information about the user and provides links to other pages.

The user greeting handler is protected by the Keycloak OAuth2 / OpenID Connect integration. To show information about the current user, we first call the ctx.user() method to get a user object we can work with. To access the OAuth2 token information, we need to cast it to OAuth2TokenImpl.

We can extract user information such as the username from the IDToken exposed by the user object via user.idToken().getString("preferred_username"). Note that there are many more claims available, like name, email, given_name, and family_name. The OpenID Connect Core Specification contains a list of available claims.

We also generate a list with links to the other pages which are supported:

private void handleGreet(RoutingContext ctx) {

    OAuth2TokenImpl oAuth2Token = (OAuth2TokenImpl) ctx.user();

    String username = oAuth2Token.idToken().getString("preferred_username");

    // greeting plus a list of links to the other supported pages
    String greeting = String.format(
            "<h1>Hi %s @%s</h1><ul>"
            + "<li><a href=\"/protected/user\">User Page</a></li>"
            + "<li><a href=\"/protected/admin\">Admin Page</a></li>"
            + "<li><a href=\"/protected/userinfo\">User Info</a></li>"
            + "</ul>", username, Instant.now());

    String logoutForm = createLogoutForm(ctx);

    respondWithOk(ctx, "text/html", greeting + logoutForm);
}

The user page handler shows information about the current user:

private void handleUserPage(RoutingContext ctx) {

    OAuth2TokenImpl user = (OAuth2TokenImpl) ctx.user();

    String username = user.idToken().getString("preferred_username");
    String displayName = user.idToken().getString("name");

    String content = String.format(
            "<h1>User Page: %s (%s) @%s</h1>"
            + "<a href=\"/protected\">Protected Area</a>",
            username, displayName, Instant.now());

    respondWithOk(ctx, "text/html", content);
}

Authorization: Checking for Required Roles

Our app exposes a simple admin page which shows some information for admins, which should only be visible for admins. Thus we require that users must have the admin realm role in Keycloak to be able to access the admin page.

This is done via a call to user.isAuthorized("realm:admin", cb). The handler function cb exposes the result of the authorization check via the AsyncResult res. If the current user has the admin role the result is true, otherwise false:

private void handleAdminPage(RoutingContext ctx) {

    OAuth2TokenImpl user = (OAuth2TokenImpl) ctx.user();

    // check for realm-role "admin"
    user.isAuthorized("realm:admin", res -> {

        if (!res.succeeded() || !res.result()) {
            respondWith(ctx, 403, "text/html", "<h1>Forbidden</h1>");
            return;
        }

        String username = user.idToken().getString("preferred_username");

        String content = String.format(
                "<h1>Admin Page: %s @%s</h1>"
                + "<a href=\"/protected\">Protected Area</a>",
                username, Instant.now());

        respondWithOk(ctx, "text/html", content);
    });
}

Call Services protected with Bearer Token

Often we need to call other services from our web app that are protected via Bearer Authentication. This means that we need a valid access token to access a resource provided on another server.

To demonstrate this, we use Keycloak’s /userinfo endpoint as a stand-in for a bearer-protected backend service.

We can obtain the current valid access token via user.opaqueAccessToken(). Since we use a WebClient to call the protected endpoint, we need to pass the access token via the Authorization header by calling bearerTokenAuthentication(user.opaqueAccessToken()) in the current HttpRequest object:

private Handler<RoutingContext> createUserInfoHandler(WebClient webClient, String userInfoUrl) {

    return (RoutingContext ctx) -> {

        OAuth2TokenImpl user = (OAuth2TokenImpl) ctx.user();

        URI userInfoEndpointUri = URI.create(userInfoUrl);
        webClient
            .get(userInfoEndpointUri.getPort(), userInfoEndpointUri.getHost(), userInfoEndpointUri.getPath())
            // use the access token for calls to other services protected via JWT Bearer authentication
            .bearerTokenAuthentication(user.opaqueAccessToken())
            .as(BodyCodec.jsonObject())
            .send(ar -> {

                if (!ar.succeeded()) {
                    respondWith(ctx, 500, "application/json", "{}");
                    return;
                }

                JsonObject body = ar.result().body();
                respondWithOk(ctx, "application/json", body.encode());
            });
    };
}

Handle logout

Now that we have a working SSO login with authorization, it would be great to allow users to log out again. For this we can leverage the built-in OpenID Connect logout functionality, which can be called via oAuth2Token.logout(cb).

The handler function cb exposes the result of the logout action via the AsyncResult res. If the logout was successful we destroy our session via ctx.session().destroy() and redirect the user to the index page.

The logout form is generated via the createLogoutForm method.

As mentioned earlier, we need to protect our logout form with a CSRF token to prevent CSRF attacks.

Note: If we had endpoints that accepted data sent to the server, then we’d need to guard those endpoints with a CSRF token as well.

We need to obtain the generated CSRF token and render it into a hidden form input field that is transferred via HTTP POST when the logout form is submitted:

private void handleLogout(RoutingContext ctx) {

    OAuth2TokenImpl oAuth2Token = (OAuth2TokenImpl) ctx.user();
    oAuth2Token.logout(res -> {

        if (!res.succeeded()) {
            // the user might not have been logged out, to know why:
            respondWith(ctx, 500, "text/html",
                    String.format("<h1>Logout failed %s</h1>", res.cause()));
            return;
        }

        ctx.session().destroy();
        ctx.response().putHeader("location", "/?logout=true").setStatusCode(302).end();
    });
}

private String createLogoutForm(RoutingContext ctx) {

    String csrfToken = ctx.get(CSRFHandler.DEFAULT_HEADER_NAME);

    return "<form action=\"/logout\" method=\"post\">"
            + String.format("<input type=\"hidden\" name=\"%s\" value=\"%s\">",
                    CSRFHandler.DEFAULT_HEADER_NAME, csrfToken)
            + "<button>Logout</button></form>";
}

Some additional plumbing:

private void respondWithOk(RoutingContext ctx, String contentType, String content) {
    respondWith(ctx, 200, contentType, content);
}

private void respondWith(RoutingContext ctx, int statusCode, String contentType, String content) {
    ctx.request().response() //
            .putHeader("content-type", contentType) //
            .setStatusCode(statusCode)
            .end(content);
}

More examples

This concludes the Keycloak integration example.

Check out the complete example in keycloak-vertx Examples Repo.

Thank you for your time, stay tuned for more updates! If you want to learn more about Keycloak, feel free to reach out to me. You can find me as thomasdarimont on Twitter.

Happy Hacking!


by thomasdarimont at March 16, 2020 12:00 AM

MPS’ Quest of the Holy GraalVM of Interpreters

by Niko Stotz at March 11, 2020 11:19 PM

A vision of how to combine MPS and GraalVM

Way too long ago, I prototyped a way to use GraalVM and Truffle inside JetBrains MPS. I hope to pick up this work soon. In this article, I describe the grand picture of what might be possible with this combination.

Part I: Get it Working

Step 0: Teach Annotation Processors to MPS

Truffle uses Java Annotation Processors heavily. Unfortunately, MPS doesn’t support them during its internal Java compilation. The feature request doesn’t show any activity.

So, we have to do it ourselves. A little less time ago, I started with an alternative Java Facet to include Annotation Processors. I just pushed my work-in-progress state from 2018. As far as I remember, there were no fundamental problems with the approach.

Optional Step 1: Teach Truffle Structured Sources

For Truffle, all executed programs stem from a Source. However, this Source can only provide Bytes or Characters. In our case, we want to provide the input model. The prototype just put the Node id of the input model as a String into the Source; later steps resolved the id against MPS API. This approach works and is acceptable; directly passing the input node as object would be much nicer.

Step 2: Implement Truffle Annotations as MPS Language

We have to provide all additional hints as Annotations to Truffle. They are complex enough, so we want to leverage MPS’ language features to directly represent all Truffle concepts.

This might be a simple one-to-one representation of Java Annotations as MPS Concepts, but I’d guess we can add some more semantics and checks. Such feedback within MPS should simplify the next steps: Annotation Processors (and thus, Truffle) have only limited options to report issues back to us.

We use this MPS language to implement the interpreter for our DSL. This results in a TruffleLanguage for our DSL.

Step 3: Start Truffle within MPS

At the time when I wrote the proof-of-concept, a TruffleLanguage had to be loaded at JVM startup. To my understanding, Truffle overcame this limitation. I haven’t looked into the current possibilities in detail yet.

I can imagine two ways to provide our DSL interpreter to the Truffle runtime:

  1. Always register MpsTruffleLanguage1, MpsTruffleLanguage2, etc. as placeholders. This would also work at JVM startup. If required, we can register additional placeholders with one JVM restart.
    All non-colliding DSL interpreters would be MpsTruffleLanguage1 from Truffle’s point of view. This works, as we know the MPS language for each input model, and can make sure Truffle uses the right evaluation for the node at hand. We might suffer a performance loss, as Truffle would have to manage more evaluations.

    What are non-colliding interpreters? Assume we have a state machine DSL, an expression DSL, and a test DSL. The expression DSL is used within the state machines; we provide an interpreter for both of them.
    We provide two interpreters for the test DSL: One executes the test and checks the assertions, the other one only marks model nodes that are covered by the test.
    The state machine interpreter, the expression interpreter, and the first test interpreter are non-colliding, as they never want to execute on the same model node. All of them go to MpsTruffleLanguage1.
    The second test interpreter does collide, as it wants to do something with a node also covered by the other interpreters. We put it to MpsTruffleLanguage2.

  2. We register every DSL interpreter as a separate TruffleLanguage. Nice and clean one-to-one relation. In this scenario, we would probably have to get Truffle Language Interop right. I have not yet investigated this topic.

Step 4: Translate Input Model to Truffle Nodes

A lot of Truffle’s magic stems from its AST representation. Thus, we need to translate our input model (a.k.a. DSL instance, a.k.a. program to execute) from MPS nodes into Truffle Nodes.

Ideally, the Truffle AST would dynamically adapt to any changes of the input model — like hot code replacement in a debugger, except we don’t want to stop the running program. From Truffle’s point of view this shouldn’t be a problem: It rewrites the AST all the time anyway.

DclareForMPS seems a fitting technology. We define mapping rules from MPS node to Truffle Node. Dclare makes sure they are in sync, and input changes are propagated optimally. These rules could either be generic, or be generated from the interpreter definition.

We need to take care that Dclare doesn’t try to adapt the MPS nodes to Truffle’s optimizing AST changes (no back-propagation).

We require special handling for edge cases of MPS → Truffle change propagation, e.g. the user deletes the currently executed part of the program.

For memory optimization, we might translate only the entry nodes of our input model immediately. Instead of the actual child Truffle Nodes, we’d add special nodes that translate the next part of the AST.
Unloading the not required parts might be an issue. Also, on-demand processing seems to conflict with Dclare’s rule-based approach.

Part II: Adapt to MPS

Step 5: Re-create Interpreter Language

The MPS interpreter framework removes even more boilerplate from writing interpreters than Truffle. The same language concepts should be built again, as abstraction on top of the Truffle Annotation DSL. This would be a new language aspect.

Step 6: Migrate MPS Interpreter Framework

Once we had the Truffle-based interpreter language, we want to use it! Also, we don’t want to rewrite all our nice interpreters.

I think it’s feasible to automatically migrate at least large parts of the existing MPS interpreter framework to the new language. I would expect some manual adjustment, though. That’s the price we’d have to pay for two orders of magnitude of performance improvement.

Step 7: Provide Plumbing for BaseLanguage, Checking Rules, Editors, and Tests

Using the interpreter should be as easy as possible. Thus, we have to provide the appropriate utilities:

  • Call the interpreter from any BaseLanguage code.
    We have to make sure we get language / model loading and dependencies right. This should be easier with Truffle than with the current interpreter, as most language dependencies are only required at interpreter build time.
  • Report interpreter results in Checking Rules.
    Creating warnings or errors based on the interpreter’s results is a standard use-case, and should be supported by dedicated language constructs.
  • Show interpreter results in an editor.
    As another standard use-case, we might want to show the interpreter’s results (or a derivative) inside an MPS editor. Especially for long-running or asynchronous calculations, getting this right is tricky. Dedicated editor extensions should take care of the details.
  • Run tests that involve the interpreter.
    Yet another standard use-case: our DSL defines both calculation rules and examples. We want to assure they are in sync, meaning executing the rules in our DSL interpreter and comparing the results with the examples. This must work both inside MPS, and in a headless build / CI test environment.

Step 8: Support Asynchronous Interpretation and/or Caching

The simple implementation of interpreter support accepts a language, parameters, and a program (a.k.a. input model), and blocks until the interpretation is complete.

This working mode is useful in various situations. However, we might want to run long-running interpretations in the background, and notify a callback once the computation is finished.

Example: An MPS editor uses an interpreter to color a rule red if it is not in accordance with a provided example. This interpretation result is very useful, even if it takes several seconds to calculate. However, we don’t want to block the editor (or even whole MPS) for that long.

Extending the example, we might also want to show an error on such a rule. The typesystem runs asynchronously anyways, so blocking is not an issue. However, we now run the same expensive interpretation twice. The interpreter support should provide configurable caching mechanisms to avoid such waste.

Both asynchronous interpretation and caching benefit from proper language extensions.

Step 9: Integrate with MPS Typesystem and Scoping

Truffle needs to know about our DSL’s types, e.g. for resolving overloaded functions or type casting. We already provide this information to the MPS typesystem. I didn’t look into the details yet; I’d expect we could generate at least part of the Truffle input from MPS’ type aspect.

Truffle requires scoping knowledge to store variables in the right stack frame (and possibly other things I don’t understand yet). I’d expect we could use the resolved references in our model as input to Truffle. I’m less optimistic to re-use MPS’ actual scoping system.

For both aspects, we can amend the missing information in the Interpreter Language, similar to the existing one.

Step 10: Support Interpreter Development

As DSL developers, we want to make sure we implemented our interpreter correctly. Thus, we write tests; they are similar to other tests involving the interpreter.

However, if they fail, we don’t want to debug the program expressed in our DSL, but our interpreter. For example, we might implement the interpreter for a switch-like construct and forget to handle an implicit default case.

Using a regular Java debugger (attached to our running MPS instance) is of only limited use, as we would have to debug through the highly optimized Truffle code. We cannot use Truffle’s debugging capabilities, as they work on the DSL.
There might be ways to attach a regular Java debugger running inside MPS in a different thread to its own JVM. Combining the direct debugger access with our knowledge of the interpreter’s structure, we might be able to provide sensible stepping through the interpreter to the DSL developer.

Simpler ways to support the developers might be providing traces through the interpreter, or ship test support where the DSL developer can assure specific evaluators were (not) executed.

Step 11: Create Language for Interop

Truffle provides a framework to describe any runtime in-memory data structure as Shape, and to convert them between languages. This should be a nice extension of MPS’ multi-language support into the runtime space, supported by an appropriate Meta-DSL (a.k.a. language aspect).

Part III: Leverage Programming Language Tooling

Step 12: Connect Truffle to MPS’ Debugger

MPS contains the standard interactive debugger inherited from IntelliJ platform.

Truffle exposes a standard interface for interactive debuggers of the interpreted input. It takes care of the heavy lifting from Truffle AST to MPS input node.

If we ran Truffle in a different thread than the MPS debugger, we should manage to connect both parts.

Step 13: Integrate Instrumentation

Truffle also exposes an instrumentation interface. We could provide standard instrumentation applications like “code” coverage (in our case: DSL node coverage) and tracing out-of-the-box.

One might think of nice visualizations:

  • Color node background based on coverage
  • Mark the currently executed part of the model
  • Project runtime values inline
  • Show traces in trace explorer

Other possible applications:

  • Snapshot mechanism for current interpreter state
  • Provide traces for offline debugging, and play them back

Part IV: Beyond MPS

Step 14: Serialize Truffle Nodes

If we could serialize Truffle Nodes (before any run-time optimization), we would have an MPS-independent representation of the executable DSL. Depending on the serialization format (implement Serializable, custom binary format, JSON, etc.), we could optimize for use-case, size, loading time, or other priorities.

Step 15: Execute DSL stand-alone without Generator

Assume an insurance calculation DSL.
Usually, we would implement

  • an interpreter to execute test cases within MPS,
  • a Generator to C to execute on the production server,
  • and a Generator to Java to provide a preview for the insurance agent.

With serialized Truffle Nodes, we need only one interpreter.

Part V: Crazy Ideas

Step 16: Step Back Debugger

By combining Instrumentation and debugger, it might be feasible to provide step-back debugging.

In the interpreter, we know the complete global state of the program, and can store deltas (to reduce memory usage). For quite some DSLs, this might be sufficient to store every intermediate state and thus arbitrary debug movement.

Step 17: Side Step Debugger

By stepping back through our execution and following different execution paths, we could explore alternate outcomes. The different execution path might stem from other input values, or hot code replacement.

Step 18: Explorative Simulations

If we had a side step debugger, nice support to project interpretation results, and a really fast interpreter, we could run explorative simulations on lots of different executions paths. This might enable legendary interactive development.


by Niko Stotz at March 11, 2020 11:19 PM

Jakarta EE Community Update March 2020

by Tanja Obradovic at March 11, 2020 04:47 PM

Welcome to the latest Jakarta EE update. We have a number of initiatives underway and many great opportunities to get involved with Jakarta EE, so I’ll get right to it.

The Adopt-A-Spec Program for Java User Groups Is Now Live

We encourage all Java User Groups (JUGs) to adopt a Jakarta EE spec. You’ll find the instructions to sign up, along with more information about the program, here.

We’re very pleased to tell you that the Madras JUG in India and the SouJava JUG in Brazil are the first JUGs to adopt a Jakarta EE specification.

It’s Now Even Easier to Get Involved with Jakarta EE!

We welcome you to get involved and we made it simpler for you to join! Please see below to learn more about the steps to become a contributor and a committer. 

For details about specification project committer agreements, check out Wayne Beaton’s blog post on the topic.

We welcome everyone who wants to get involved with Jakarta EE!

Great progress on Jakarta EE 9

Work on Jakarta EE 9 is now underway and you can check the progress we’re making here. We will attempt to get an RC out this week for the Platform and Web Profile APIs!

For additional insight into Jakarta EE 9, check out the:

·  Jakarta EE Platform specification page

·  GitHub page

Alignment With MicroProfile Specs Is up to Us

After a recent MicroProfile Hangout discussion, it was decided that MicroProfile will produce specs and other communities, including Jakarta EE, can determine how they want to adopt them.

 You can find a summary of the discussion by John Clingan in the thread MicroProfile Working Group discussion – Push vs pull on the MicroProfile mailing list.

 If you’d like to join MicroProfile discussions, check out the calendar here.

 CN4J Day at KubeCon Amsterdam Is Postponed

With the postponement of the KubeCon + CloudNativeCon event, we’ve also postponed CN4J Day, which was originally planned for March 30. We’ll let you know when the event is rescheduled as soon as we can.

 In the meantime, you can follow updates about KubeCon rescheduling and get more information about what the postponement means for your involvement here.

Jakartification of Oracle Specs: We always welcome your help!

Thanks to everyone who has been helping us Jakartify the Oracle specifications. We’re making progress, but we still need helping hands. Now that we have the copyright for all of the specifications that Oracle contributed, there’s a lot of work to do.

To help you get started:

·  The Specification Committee created a document that explains how to convert Java EE specifications to Jakarta EE.

Ivar Grimstad provided a demo during the community call in October. You can view it here.

Join Community Update Calls

Every month, the Jakarta EE community holds a community call for everyone in the Jakarta EE community. For upcoming dates and connection details, see the Jakarta EE Community Calendar.

Our next call is Wednesday, March 18 at 10:00 a.m. EST using this meeting link.

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations:

·  The complete playlist.

·  February 12 call and presentation, featuring Wayne Beaton’s update on enabling individual participation in Jakarta EE, Shabnam Mayel’s update on enabling JUG participation, and Ivar Grimstad’s update on Jakarta EE 9.

February Event Summary

February was a busy month for events:

·  FOSDEM. Eclipse Foundation Executive Director, Mike Milinkovich, presented to a full room at this unique, free event in Brussels. For more insight, read Ivar’s blog.

·  Devnexus. We hosted a Cloud Native for Java Meetup for more than 100 participants at this conference organized by the Atlanta JUG. We also had a Jakarta EE booth in the community corner of the exhibition hall. This is an awesome event for Java developers with 2,400 attendees and world-class speakers. Here’s a photo to inspire you to attend next year.

 

·  JakartaOne Livestream - Japan. The first Livestream event in Japanese was a success with 211 registered participants. You can watch the replay here.

·  ConFoo. Ivar spoke at the 18th edition of this event in Montreal, Canada. For more information, read Ivar’s blog.

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog summarizes the community engagement plan, which includes:

·  Social media: Twitter, Facebook, LinkedIn Group

·  Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org

·  Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on “how are you involved with Jakarta EE”

·  Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

Bookmark the Jakarta EE Community Calendar to learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk.

 


by Tanja Obradovic at March 11, 2020 04:47 PM

Release 4.3

March 11, 2020 12:00 AM

New version 4.3 has been released.

Release is of Type B

Features

  • Calculated properties support
  • SAP Cloud Foundry tailored package
  • Content v4 API
  • Template Engine
  • Configurable Internal Jobs
  • Multiple Schema Support in OData

Fixes

  • Add Execute Button in the SQL View
  • Add tooltips for all icons in the Entity Data Modeller
  • Debugger via https
  • Minor fixes

Statistics

  • 68K+ Sessions
  • 47K+ Users
  • 182 Countries
  • 360 Repositories in DirigibleLabs

Operational

Enjoy!


March 11, 2020 12:00 AM

Real-World IoT Adoption

by Mike Milinkovich at March 10, 2020 04:57 PM

The results of our first IoT Commercial Adoption survey tell a clear story about what organizations are doing with IoT right now and their plans for production deployments. The goal of the survey was to go beyond the IoT Developer Survey we’ve conducted for the last six years to gain insight into the industry landscape from the point of view of a broader spectrum of IoT ecosystem stakeholders.

Here are some of the key findings from the survey:

  • IoT may well live up to the hype, if somewhat slower than expected. Just under 40 percent of survey respondents are deploying IoT solutions today. Another 22 percent plan to start deploying IoT within the next two years.
  • IoT investment is on the rise, with 40 percent of organizations planning to increase their IoT spending in the next fiscal year.
  • Open source pervades IoT as a key enabler with 60 percent of companies factoring open source into their IoT deployment plans.
  • Hybrid clouds lead the way for IoT deployments. Overall, AWS, Azure, and GCP are the leading cloud platforms for IoT implementations.

The Commercial Perspective Is Crucial

The survey asked respondents to identify the requirements, priorities, and challenges they’re facing as they deploy and start using commercial IoT solutions, including those based on open source technologies. The survey ran for two months last fall and received responses from more than 360 individuals from a wide range of industries and organizations. You can read the full survey results here, and I would encourage IoT ecosystem players to do that.

IoT Ecosystem Players Must Focus on Real-World Requirements

As our survey results revealed, each player in the IoT ecosystem has an important role in driving IoT adoption. Here are some key takeaways broken down by stakeholder group.

  • Software vendors should incorporate open source technologies into their solutions to give customers the flexibility and control they need.
  • IoT platform vendors should build offerings that support hybrid cloud environments to become more responsive to customer requirements. At least part of the reason multi-cloud adoption is still in its early stages is because — not surprisingly — the leading cloud providers don’t offer their IoT platform services on other cloud platforms.
  • IoT solution providers should be prepared for extensive and intensive proofs of concept and pilot projects before they get to the stage of full production rollouts. With companies reluctant to invest heavily in IoT before they’re confident in the return on investment, these practical and tangible demonstrations will be key to encouraging broader adoption.
  • Manufacturing organizations should implement IoT solutions that tie automation, asset management, and logistics together. The most innovative organizations will also rely on IoT technologies to improve their value proposition to customers, for example, by including preventive maintenance features on manufacturing equipment.

Get Involved in Eclipse IoT

It will take a diverse community co-developing a uniform set of building blocks based on open source and open standards to drive broad IoT adoption. If you’re interested in participating in the industry-scale collaboration happening at the Eclipse IoT Working Group or contributing to Eclipse IoT projects, please visit https://iot.eclipse.org/.

As an added benefit of membership, Eclipse IoT Working Group members receive exclusive access to detailed industry research findings, including those from the annual IoT Developer Survey, which are leveraged by the entire industry.

The 2020 IoT Developer Survey is coming soon. If you’d like to contribute questions or provide feedback, join the working group mailing list here.


by Mike Milinkovich at March 10, 2020 04:57 PM

ECF 3.14.7 released

by Scott Lewis (noreply@blogger.com) at March 04, 2020 12:54 AM

ECF 3.14.7 has been released and may be downloaded here.

In concert with this bug fix release, a number of additions have been made to ECF's GitHub projects for Remote Services development.

Distribution and Discovery Providers

Enhanced:  Hazelcast-based Distribution Provider v1.7.0.  Upgraded to use Hazelcast 4
Enhanced:  System and Service-Properties docs for Distribution Providers and Discovery Providers

Bndtools Development

Enhanced:  Bndtools Workspace template with new Bndrun templates for Remote Services Development
Enhanced:  Tutorial for using Bndtools for Remote Services Development


by Scott Lewis (noreply@blogger.com) at March 04, 2020 12:54 AM

Measuring bundle startup time with JFR

February 22, 2020 10:20 PM

Several years ago, I gave a presentation at EclipseCon 2016 on “Optimising Eclipse Plug-ins” in which I talked about a few different ways of making Eclipse faster, at least from a start-up and memory usage perspective.

The presentation wasn’t recorded, but is available on SpeakerDeck as well as a narrated copy I recorded subsequently at Vimeo.

One of the techniques I discussed was using Java Flight Recorder as a means to record how long bundles take to start up, since the main cost of starting up Eclipse (or other OSGi applications) is how long it takes for the bundles to move into the STARTED state. Bundles that have a Bundle-Activator can slow this down, and since the majority of OSGi frameworks start bundles sequentially, a single slow bundle can lead to a lesser user experience.

At the original time of writing, JFR was still a commercial feature (requiring -XX:+UnlockCommercialFeatures setting and the wrath of licensing) in Java 8, and the API was evolving in Java 9 and onwards. However, now that Java 11 has been released, we have a free (both as in beer and in speech) implementation that we can code against in order to work as expected.

A number of API changes meant that the code needed some TLC, and so I recently uploaded a new version of the JFRBundleListener project to GitHub. The project can be downloaded and installed into an OSGi runtime (or Eclipse application) to find out where the start-up costs lie. At the present time, I have not published it anywhere for binary consumption.

The README.md contains more information about how to use it, but the TL;DR is:

  • Build and install the bundle with a low start level (e.g. 1) so that it tracks all subsequent bundle activation
  • Start your OSGi framework or Eclipse application with the magic invocation to start JFR recording, e.g. -XX:StartFlightRecording=filename=/tmp/dump.jfr,dumponexit=true,disk=true
  • Load and inspect the recording in Java Mission Control, or use the embedded converter via java -jar JFRBundleListener.jar < /tmp/dump.jfr > /tmp/stacks.txt to generate a stacks list that can be processed by FlameGraph.pl
  • Use Brendan Gregg’s flamegraph tool to create an interactive SVG: flamegraph.pl --hash --countname ms < /tmp/stacks.txt > /tmp/bundles.svg

The resulting SVG looks something like this:

[flame graph: bundle startup times]

Custom JFR events

The API for interacting with JFR has improved significantly since its first inception in Java 8. I am looking forward to the backport of JFR from Java 11 to Java 8, which should improve the experience there significantly.

To create a custom event, subclass the jdk.jfr.Event class. Provided that it has public fields of a known Java primitive or String, it will be written out when commanded. The JFRBundleEvent class is used to surface an OSGi BundleEvent, and so persists pertinent data to the JFRBundleEvent class upon creation.

These are created when receiving (synchronous) events with the SynchronousBundleListener implemented in the JFRBundleListener class. To mark a span, the event has begin() called at the start of a bundle coming on-line, and end() once it has finished. The commit() call ensures that the bundle event is recorded; it can happen at any later point, provided the event object is not reused in the meantime.

The event information is recorded with specific key fields; they are identified using the field name in the class, or the one specified with the @Name annotation as defined in the event itself. For simplicity, I have chosen the OSGi bundle headers as the identifiers in the event, which may make post-processing it easier.
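
As an illustration, a minimal custom event with a span could look like this (a sketch, not the actual JFRBundleEvent source; the event name and field are made up):

import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("org.example.BundleStart")
@Label("Bundle Start")
public class BundleStartEvent extends Event {

    @Label("Bundle Symbolic Name")
    public String symbolicName; // identified by its field name, or via @Name
}

// usage, e.g. inside a SynchronousBundleListener:
BundleStartEvent event = new BundleStartEvent();
event.symbolicName = bundle.getSymbolicName();
event.begin();  // bundle starts coming on-line
// ... bundle reaches the STARTED state ...
event.end();    // span is finished
event.commit(); // record the event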

Processing JFR files

Similarly, the API for processing a JFR file has become significantly better. There is a helper method RecordingFile.readAllEvents() which translates a dump file at a given path into a list of RecordedEvent objects, or it can be used as an iterator to read through all events.

Events are identified by type; if not specified then the fully qualified name of the event class is used. This makes it easy to filter out specific events from the log, although it may be desirable that they are combined with other JFR events (e.g. thread sleeping, class loading, garbage collections etc.) in the log.
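
Reading such events back could look like this (a sketch using the same made-up event and field names as above):

import java.nio.file.Paths;

import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class PrintBundleStarts {
    public static void main(String[] args) throws Exception {
        // read the complete recording and keep only our custom events
        for (RecordedEvent event : RecordingFile.readAllEvents(Paths.get("/tmp/dump.jfr"))) {
            if ("org.example.BundleStart".equals(event.getEventType().getName())) {
                System.out.println(event.getString("symbolicName")
                        + " took " + event.getDuration().toMillis() + " ms");
            }
        }
    }
}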

The JFRFlameGraph implementation provides a means to go through the log file and generate a stack of bundle startups, allowing investigation into why something is taking so long.

It prunes the list of events in the JFR recording to include only the JFRBundleEvents of interest, and then builds up a sorted data structure based on the earliest recorded start-up time. These are then iteratively popped off to write out events before the end time, so that the resulting file contains a list of the bundles starting up:

...
Start-Level...;org.eclipse.ui.trace;org.eclipse.core.runtime;org.eclipse.equinox.preferences 28
Start-Level...;org.eclipse.ui.trace;org.eclipse.core.runtime;org.eclipse.core.contenttype;org.eclipse.equinox.registry 877
Start-Level...;org.eclipse.ui.trace;org.eclipse.core.runtime;org.eclipse.core.contenttype 6
Start-Level...;org.eclipse.ui.trace;org.eclipse.core.runtime;org.eclipse.equinox.app 28
Start-Level...;org.eclipse.ui.trace;org.eclipse.core.runtime 14
Start-Level...;org.eclipse.ui.trace 10
Start-Level...;org.eclipse.update.configurator 977
...

These can then be post-processed with FlameGraph.pl to generate the interactive SVG above.

The code prepends the Java thread name to the stack shown; this allows some threads (such as background Worker-n threads) to be ignored, while focussing on specific threads (such as the main thread and the Start-Level... thread). Since the SVG is interactive, clicking on the thread name will focus the view on just those bundles, which may provide more detailed analysis.

Summary

Java Flight Recorder is available as open source in Java 11, and can be used to export timing information using a custom event format. With sufficient post-processing of the data, it can generate flame graphs that can be more expressive than just looking at numbers.

The API for programmatically interacting with JFR recordings has been significantly improved, and being able to log timely information in conjunction with other events can be a significant bonus in performance analysis.


February 22, 2020 10:20 PM

Postmortem - February 7 storage and authentication outage

by Denis Roy at February 20, 2020 04:12 PM

On Friday, February 7 2020, Eclipse.org suffered a severe service disruption to many of its web properties when our primary authentication server and file server suffered a hardware failure.

For 90 minutes, our main website, www.eclipse.org, was mostly available, as was our Bugzilla bug tracking tool, but logging in was not possible. Wiki, Eclipse Marketplace and other web properties were degraded. Git and Gerrit were both completely offline for 2 hours and 18 minutes. Authenticated access to Jiro, our Jenkins+Kubernetes-based CI system, was not possible, and builds that relied on Git access failed during that time.

There was no data loss, but there were data inconsistencies. A dozen Git repositories and Gerrit code changes were in an inconsistent state due to replication schedules, but thanks to the distributed nature of Git, the code commits were still in local developer Git repositories, as well as on the failed server, which we were eventually able to revive (in an offline environment). Data inconsistencies were more severe in our LDAP accounts database, where dozens of users were unable to log in, and in some isolated cases, users reported that their account was reverted back to old data from years prior.

In hindsight, we feel this outage could have, and should have been avoided. We’ve identified many measures we must enact to prevent such unplanned outages in the future. Furthermore, our communication and incident handling processes proved to be flawed, and will be scrutinized and improved, to ensure our community is better informed during unplanned incidents.

Lastly, we’ve identified aging hardware and Single Points of Failure (SPoF) that must be addressed.

 

File server & authentication setup

At the center of the Eclipse infra is a pair of servers that handle 2 specific tasks:

  • Network Attached Storage (NAS) via NFS

  • User Authentication via OpenLDAP

The server pair consists of a primary system, which handles all the traffic, and a hot spare. Both servers are configured identically for production service, but the spare server sits idly and receives data periodically from the primary. This specific architecture was originally implemented in 2005, with periodical hardware upgrades over time.

 

Timeline of events

Friday Feb 7 - 12:33pm EST: Fred Gurr (Eclipse Foundation IT/Releng team) reports on the Foundation’s internal Slack channel that something is happening to the Infra. Denis observes many “Flaky” status reports on https://status.eclipse.org but is in transit and cannot investigate further. Webmaster Matt Ward investigates.

12:43pm: Matt confirms that our primary nfs/ldap server is not responding, and activates “Plan A: assess and fix”.

12:59pm: Denis reaches a computer and activates “Plan B: prepare for Failover” while Matt works on Plan A. The “Sorry, we are down” page is served for all Flaky services except www.eclipse.org, which continues to be served successfully by our nginx cache.

1:18pm: The standby server is ready to assume the “primary” role.

1:29pm: Matt makes the call for failover, as the severity of the hardware failure is not known, and not easily recoverable.

1:49pm: www.eclipse.org, Bugzilla, Marketplace, Wiki return to stable service on the new primary.

2:18pm: Git and Gerrit return to stable service.

2:42pm: Our Kubernetes/OpenShift cluster is updated to the latest patchlevel and all CI services restarted.

4:47pm: All legacy JIPP servers are restarted, and all other remaining services report functional.  At this time, we are not aware of any issues.

During the weekend, Matt continues to monitor the infra. Authentication issues crop up over the weekend, which are caused by duplicated accounts and are fixed by Matt.

Monday, 4:49am EST: Mikaël Barbero (Eclipse Foundation IT/Releng team) reports that there are more duplicate users in LDAP that cannot log into our systems. This is now a substantial issue. They are fixed systematically with an LDAP duplicate finder, but the process is very slow.

10:37am: First Foundation broadcast on the cross-project mailing list that there is an issue with authentication.

Tuesday, 9:51am: Denis blogs about the incident and posts a message to the eclipse.org-committers mailing list about the ongoing authentication issues. The message, however, is held for moderation and is not distributed until many hours later.

Later that day: Most duplicated accounts have been removed, and just about everything is stabilized. We do not yet understand the source of the duplicates.

Wednesday: duplicate removals continue, as well as investigation into the cause.

Thursday 9:52am: We file a dozen bugs against projects whose Git and Gerrit repos may be out of sync. Some projects had already re-pushed or rebased their missing code patches and resolved the issue as FIXED.

Friday, 2:58pm: All remaining duplicates are removed. Our LDAP database is fully cleaned. The failed server re-enters production as the hot standby - even though its hardware is not reliable. New hardware is sourced and ordered.

 

Hardware failure

The physical servers assuming our NAS/LDAP setup are server-class hardware, 2U chassis with redundant power supplies, ECC (error checking and correction) memory, RAID-5 disk arrays with a battery-backup RAID controller memory. Both primary and standby servers were put into production in 2011.

On February 7, the primary server experienced a kernel crash from the RAID controller module. The RAID controller detected an unrecoverable ECC memory error. The entire server became unresponsive.

As originally designed in 2005, periodical (batched) data updates from the primary to the hot spare were simple to set up and maintain. This method also had a distinct advantage over live replication: rapid recovery in case of erasure (accidental or malicious) or data tampering. Of course, this came at a cost of possible data loss. However, it was deemed that critical data (in our case, Source Code) susceptible to loss during the short time was also available on developer workstations.


Failover and return to stability

As the standby server was prepared for production service, the reasons for the crash on the primary server were investigated. We assessed the possibility of continuing service on the primary; that course of action would have provided the fastest recovery with the fewest surprises later on.

As the nature of the hardware failure remained unknown, failover was the only option. We confirmed that some data replication tasks had run less than one hour prior to failure, and all data replication was completed no later than 3 hours prior. IP addresses were updated, and one by one, services that depended on NFS and authentication were restarted to flush caches and minimize any potential for an inconsistent state.

At about 4:30pm, or four hours after the failure, both webmasters were confident that the failover was successful, and that very little dust would settle over the weekend.
 

Authentication issues

Throughout the weekend, we had a few reports of authentication issues -- which were expected, since we failed over to a standby authentication source that was at least 12 hours behind the primary. These issues were fixed as they were reported, and nothing seemed out of place.

On Monday morning, Feb 10th, the Foundation’s Releng team reported that several committers had authentication issues to the CI systems. We then suspected that something else was at play with our authentication database, but it was not clear to us what had happened, or what the magnitude was. The common issue was duplicate accounts -- some users had an account in two separate containers simultaneously, which prevented users from being able to authenticate. These duplicates were removed as rapidly as we could, and we wrote scripts to identify old duplicates and purge them -- but with >450,000 accounts, it was time-consuming.

At that time, we got so wrapped up in trying to understand and resolve the issue that we completely underestimated its impact on the community, and we were absolutely silent about it.

 

Problem solved

On Friday afternoon, February 14, we were able to finally clean up all the duplicate accounts and understand why they existed in the first place.

Prior to December, 2011, our LDAP database only contained committer accounts. In December 2011, we imported all the non-committer accounts from Bugzilla and Wiki into an LDAP container we named “Community”. This allowed us to centralize authentication around a single source of truth: LDAP.

All new accounts were, and are created in the Community container, and are moved into the Committer container if/when they became an Eclipse Committer.

Our primary->secondary LDAP sync mechanism was altered, at that time, to sync the Community container as well -- but it was purely additive. Once you had an account in Community, it was there for life on the standby server, even if you became a committer later on. Or if you’d ever change your email address. This was the source of the duplicate accounts on the standby server.

A new server pair was ordered on February 14, 2020. These servers will be put into production service as soon as possible, and the old hardware will be recommissioned to clustered service. With these new machines, we believe our existing architecture and configuration can continue to serve us well over the coming months and years.

 

Take-aways and proposed improvements

Although the outage didn’t last incredibly long (2 hours from failure to the beginning of restored service), we feel it shouldn’t have occurred in the first place. Furthermore, we’ve identified key areas where our processes can be improved - notably, in how we communicate with you.

Here are the action items we’re committed to implementing in the near term, to improve our handling of such incidents:

  • Communication: Improved Service Status page.  https://status.eclipse.org gives a picture of what’s going on, but with an improved service, we can communicate the nature of outages, the impact, and estimated time until service is restored.

  • Communication: Internally, we will improve communication within our team and establish a maintenance log, whereby members of the team can discover the work that has been done.

  • Staffing: we will explore the possibility of an additional IT hire, thus enhancing our collective skillset, and enabling more overall time on the quality and reliability of the infra.

  • Aging Hardware: we will put top priority on resolving aging single points of failure (SPoF), and be more strict about not running hardware devices past their reasonable life expectancy.

    • In the longer term, we will continue our investment in replacing SPoF with more robust technologies. This applies to authentication, storage, databases and networking.

  • Process and procedures: we will allocate more time to testing our disaster recovery and business continuity procedures. Such tests would likely have revealed the LDAP sync bug.

We believe that these steps will significantly reduce unplanned outages such as the one that occurred on February 7. They will also help us ensure that, should a failure occur, we recover and return to a state of stability more rapidly. Finally, they will help you understand what is happening, and what the timelines to restore service are, so that you can plan your work tasks and remain productive.


by Denis Roy at February 20, 2020 04:12 PM

Git Contribution Activity Charts for Eclipse Projects

by waynebeaton at February 18, 2020 05:57 PM

Eclipse EGit: Git Integration for Eclipse

This post is brought to you today by the Eclipse EGit™ Project. Eclipse EGit is the Eclipse open source project that provides Git integration for the Eclipse Platform.

On Eclipse open source project pages (which we often refer to as the Project Management Interface or “PMI”), the “Who’s Involved” pages include some commit activity graphs.

The Contribution Activity chart shows the absolute number of commits made on project repositories over the last twelve months. Eclipse open source projects may have multiple Git repositories; this chart shows commit activity across all branches on all repositories owned by the project team.

The Individual Contribution Activity chart shows the commits attributed to individual developers over the last three months (across all repositories and branches). The identity of the individual committer comes directly from the author field in the Git commit. For project committers, we map author credentials to the information that committers provide in their Eclipse Foundation Account, so–for committers–the name shown here will match what’s in their account, not what’s on the commit record.

Note that any developer listed as an “Also-by” in the commit message will get credit for the commit (“Some Bodyelse”, from the example shown below, would share author credit for the commit). Because of the “Also-by” folks being counted as authors/contributors, some commits may be represented multiple times in the Individual Contribution Activity and Organization Contribution Activity charts (once for each author).
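As a sketch of what such a commit message looks like (the subject line, email addresses, and the Signed-off-by trailer are illustrative, not from a real commit):

Fix rendering of the activity chart

Also-by: Some Bodyelse <somebodyelse@example.com>
Signed-off-by: Some Body <somebody@example.com>

Here both the commit author and “Some Bodyelse” would be credited as authors in the charts.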

If you click on a committer’s picture on the “Who’s Involved” page, you’ll see a chart of lifetime contributions to the project. If the chart is missing, that means that the person hasn’t authored any commits.

The Organization Contribution Activity chart shows relative contributions from Eclipse Foundation member companies. Commits made by all project committers affiliated with the company (across all project Git repositories and branches) are grouped together.

On this chart, “Other Committer” groups together contributions from committers who do not work for a member company, and “Contributor” refers to activity by contributors (i.e., developers who are not yet committers).

Active Member Companies are those Eclipse Foundation member companies that have at least one committer who has made at least one commit on the project in the last three months (note that the order is random).

These charts take a very narrow view of “contribution”. There are many ways to contribute to an open source project. You can answer questions on forums or mailing lists, open and comment on issues, present at conferences, …


by waynebeaton at February 18, 2020 05:57 PM

Developer Surveys Survey: Including a Spotlight on Java Results

by Alex Blewitt at February 17, 2020 06:00 PM

JRebel and Snyk have recently published their Java/JVM technology reports, and Codingame and Tiobe have published reports on language usage and adoption. InfoQ looks at the state of play of these reports, and at what is happening in the Java and wider ecosystems today.

By Alex Blewitt

by Alex Blewitt at February 17, 2020 06:00 PM

Eclipse Collections 10.2 Released

by Donald Raab at February 16, 2020 06:47 PM

Thank you to the contributors and our user community.

Eclipse Collections Website — Now published in Greek!

Release on Demand

It’s been about two and a half months since we released Eclipse Collections 10.1. We had a user request for a new release with a specific feature, so we decided to release another minor version. Version 10.2 is now released.

You can now download the release from Maven Central.

<dependency>
  <groupId>org.eclipse.collections</groupId>
  <artifactId>eclipse-collections-api</artifactId>
  <version>10.2.0</version>
</dependency>

<dependency>
  <groupId>org.eclipse.collections</groupId>
  <artifactId>eclipse-collections</artifactId>
  <version>10.2.0</version>
</dependency>

A new website translation is available

After the New Year, we received a contribution of a Greek translation of our Eclipse Collections website. This is an amazing addition to our growing number of website translations. Our website is now available in nine different languages. We have offers for two more language translations in the works. Thank you!

New Features

  • Exposed the allocateTable method as protected in Primitive Maps and Primitive Sets.

Bug Fixes

  • Fixed size edge case issues in Interval and IntInterval.

Tech Debt Reduction

  • Optimized removeIf on UnifiedMap.
  • Implemented removeIf as a default method on MutableMapIterable (see the sketch after this list).
  • Replaced usages of Comparators.nullSafeEquals() with Objects.equals().
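For a quick flavor of the new removeIf default method, here is a minimal sketch (assuming the Predicate2-based signature on MutableMapIterable; treat this as illustrative rather than authoritative):

import org.eclipse.collections.api.map.MutableMap;
import org.eclipse.collections.impl.factory.Maps;

public class RemoveIfExample {
  public static void main(String[] args) {
    MutableMap<String, Integer> map = Maps.mutable.of("a", 1, "b", 2, "c", 3);
    // Remove all entries with even values
    map.removeIf((key, value) -> value % 2 == 0);
    System.out.println(map); // "b" is gone; "a" and "c" remain
  }
}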

Thank you

From all the contributors and committers… thank you for using Eclipse Collections!

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


Eclipse Collections 10.2 Released was originally published in Oracle Groundbreakers on Medium, where people are continuing the conversation by highlighting and responding to this story.


by Donald Raab at February 16, 2020 06:47 PM

How to deploy Eclipse Dirigible in the SAP Cloud Platform Neo environment

by Yordan Pavlov at February 15, 2020 12:00 AM

Eclipse Dirigible is an open-source cloud development platform that provides capabilities for the end-to-end development process: from database modeling and management, through RESTful services using server-side JavaScript, to pattern-based user interface generation, role-based security, external services integration, testing, debugging, operations and monitoring …

To read the whole article go to: How to deploy Eclipse Dirigible in the SAP Cloud Platform Neo environment


by Yordan Pavlov at February 15, 2020 12:00 AM

Specification Project Committer Agreements

by waynebeaton at February 12, 2020 06:36 PM

We identified a hole in our committer agreement process that excluded individuals with a certain employment status from participating in Eclipse Foundation open source specification projects operating under the Eclipse Foundation Specification Process (EFSP).

I’ll start by saying that you don’t need to be a committer to contribute to an open source specification project (in fact, notwithstanding the initial bootstrapping of a new open source project team, you cannot become a committer without first demonstrating merit as a contributor). If you just want to make a contribution in the form (for example) of a chunk of text in a specification document or a new method in an API, you can sign the Eclipse Contributor Agreement, create a pull request (the column in grey below) for review by a committer (the pull request can only be merged by a committer), and you’re off to the races!

After you have made a habit of delivering high-quality contributions, an existing project committer may (I might argue should or must) nominate you to be a committer via election.

[Image: Specification Project Agreement Selection Flow]

Elections generally run for a full week (they end immediately when all existing project committers vote in the affirmative). Following a successful election, you will be put into our committer agreement workflow: the Eclipse Foundation’s system will send you, the newly elected committer, an email with the instructions you need to provide us with the agreements required to instate you as a committer (we refer to this as the “committer paperwork” process).

If you work for an Eclipse Foundation member company that is a participant in the working group, then we already have all of the agreements that we need and you’re good-to-go.

If you’re self employed, unemployed, or a student, we need you to sign the individual working group participation agreement (IWGPA). This agreement embeds the Individual Committer Agreement and Membership Agreement that are required to participate in a specification project. Once you’ve signed the IWGPA and have provided it to the EMO Records team, they’ll set you up as a committer and you’re good-to-go.

If you work for an Eclipse Foundation member company but that company is not a participant in the working group, or you work for a company that is not a member of the Eclipse Foundation at all, then you need to sign the individual working group participation agreement (IWGPA), and your employer needs to sign the Employer Consent Agreement for Eclipse Foundation Specification Projects (which we shorten to “Employer Consent Agreement” or just “Consent Agreement”). This is the part highlighted in red in the above Specification Project Agreement Selection Flow chart.

Our committer provisioning process is automated. As a new committer, you will—following your successful election—be contacted by the EMO Records team by email to engage in our agreement workflow which guides you through signing those agreements that you need. These agreements are all available for your review on our Legal Resources page. The working group-specific agreements are on our Explore our Working Groups page.

Note that in order to get committer rights on a project hosted on GitHub, you need to set your GitHub Id in your Eclipse Foundation account.

The Employer Consent Agreement is the new bit. Due to the differences in the way that intellectual property flows are managed in a specification project (vs. a software project), we need this employer agreement. This agreement must be signed on behalf of the company, so—if you need one of these—you have to identify somebody within your organization with the authority to sign on behalf of the organization (your manager will likely be able to help you with this).

So…

  • Step 1: Sign the ECA;
  • Step 2: Establish a pattern of contributing high-quality pull requests to an Eclipse open source specification project;
  • Step 3: Get voted in to join the project by the existing committers;
  • Step 4: Complete the necessary working group participation agreements; and
  • Step 5: Take your place in history.


by waynebeaton at February 12, 2020 06:36 PM

Anatomy of a server failure

by Denis Roy at February 11, 2020 02:51 PM

Last Friday, Feb 7 at around 12:30pm (Ottawa time), I received a notification from Fred Gurr (part of our release engineering team) that something was going on with the infra. The multitude of colours on the Eclipse Service Status page confirmed it -- many of our services and tools were either slow or unresponsive.

After some initial digging, we discovered that the primary backend file server (housing Git, Gerrit, web session data, and a lot of files for our various web properties) was not responding. It was also host to our accounts database -- the center for all user authentication.

Jumping into action

It's a well-rehearsed routine for colleague Matt Ward and me -- he worked on assessing the problem and identifying the fix, while I worked on Plan B - failover to our hot standby. At around 1:35pm, roughly 1 hour into the outage, Matt made the call -- failover is the only option, as a hardware component has failed. 20 minutes later, most services had either recovered or were well on their way.

But the failover is not perfect. Data is sync'ed every 2 hours. Account and authentication info is replicated nightly. This was a by-design strategy decision, as it offers us a recovery window in case of data erasure, corruption or unauthorized access.

Lessons learned

The failed server was put in service in 2011, celebrating its *gasp* ninth year of 24/7 service. That is a few years too many, and although it (and its standby counterpart) were slated for replacement in 2017, the effort was pushed back to make room for competing priorities. In a moment of bitter irony, the failed hardware was planned to be replaced in the second quarter of this year -- mere months away. We gambled with the house; we lost.

Cleaning up

Today, there is much dust to settle. Our authentication database has some gremlins that we need to fix, and there could be a few missing commits that were not replicated.

We also need to source replacement hardware for the failed component, so that we can re-enable our hot standby. At the same time, we need to immediately source replacement servers for those 2011 dinosaurs. They've served us well, but their retirement is long overdue.


by Denis Roy at February 11, 2020 02:51 PM

Interfacing null-safe code with legacy code

by Stephan Herrmann at February 06, 2020 07:38 PM

When you adopt null annotations like these, your ultimate hope is that the compiler will tell you about every possible NullPointerException (NPE) in your program (except for tricks like reflection or bytecode weaving etc.). Hallelujah.

Unfortunately, most of us use libraries which don’t have the blessing of annotation based null analysis, simply because those are not annotated appropriately (neither in source nor using external annotations). Let’s for now call such code: “legacy”.

In this post I will walk through the options to warn you about the risks incurred by legacy code. The general theme will be:

Can we assert that no NPE will happen in null-checked code?

I.e., if your code consistently uses null annotations, and has passed the analysis without warnings, can we be sure that NPEs can only ever be thrown in the legacy part of the code? (NPEs inside legacy code are still to be expected; there’s nothing we can change about that.)

Using existing Eclipse versions, one category of problems would still go undetected, whereby null-checked code could still throw NPE. This has recently been fixed (see the bug report).

Simple data flows

Let’s start with simple data flows, e.g., when your program obtains a value from legacy code, like this:

[Image: NullFrom_getProperty]

You shouldn’t be surprised, the javadoc even says: “The method returns null if the property is not found.” While the compiler doesn’t read javadoc, it can recognize that a value with unspecified nullness flows into a variable with a non-null type. Hence the warning:

Null type safety (type annotations): The expression of type ‘String’ needs unchecked conversion to conform to ‘@NonNull String’

As we can see, the compiler warned us, so we are urged to fix the problem in our code. Conversely, if we pass any value into a legacy API, all the bad that can happen would happen inside legacy code, so nothing needs to be done for our stated goal.

The underlying rule is: legacy values can be safely assigned to nullable variables, but not to non-null variables (example Properties.getProperty()). On the other hand, any value can be assigned to a legacy variable (or method argument).

Put differently: values flowing from null-checked to legacy pose no problems, whereas values flowing the opposite direction must be assumed to be nullable, to avoid problems in null-checked code.
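To make the screenshot’s example concrete, here is a minimal sketch (assuming the org.eclipse.jdt.annotation annotations on the classpath and annotation-based null analysis enabled):

import org.eclipse.jdt.annotation.NonNull;
import org.eclipse.jdt.annotation.Nullable;

public class SimpleFlows {
  void demo() {
    // Legacy value into a nullable variable: safe, no complaint
    @Nullable String ok = System.getProperty("some.key");
    // Legacy value into a non-null variable: "needs unchecked conversion" warning
    @NonNull String risky = System.getProperty("some.key");
    // If the property is missing, this dereference throws NPE at runtime
    System.out.println(risky.length());
  }
}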

Enter generics

Here be dragons.

As a minimum requirement we now need null annotations with target TYPE_USE (“type annotations”), but we have this since 2014. Good.

[Image: NullFromLegacyList]

Here we obtain a List<String> value from a Legacy class, where indeed the list names is non-null (as can be seen by the successful output from names.size()). Still, things go south in our code, because the list contains an unexpected null element.

To protect us from this problem, I marked the entire class as @NonNullByDefault, which causes the type of the variable names to become List<@NonNull String>. Now the compiler can again warn us about an unsafe assignment:

Null type safety (type annotations): The expression of type ‘List<String>’ needs unchecked conversion to conform to ‘List<@NonNull String>’

This captures the situation where a null value, wrapped in a non-null container value (the list), is passed from legacy to null-checked code.
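The screenshot is not reproduced here, but the shape of the example is roughly this sketch (assuming @NonNullByDefault from org.eclipse.jdt.annotation):

import java.util.ArrayList;
import java.util.List;
import org.eclipse.jdt.annotation.NonNullByDefault;

class Legacy { // not annotated
  static List<String> getNames() {
    List<String> names = new ArrayList<>();
    names.add("Alice");
    names.add(null); // perfectly legal on the legacy side
    return names;
  }
}

@NonNullByDefault
class Checked {
  void demo() {
    // Warning: needs unchecked conversion to conform to List<@NonNull String>
    List<String> names = Legacy.getNames();
    System.out.println(names.size()); // fine: the list itself is non-null
    for (String name : names) {
      System.out.println(name.length()); // NPE on the null element
    }
  }
}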

Here’s a tricky question:

Is it safe to pass a null-checked value of a parameterized type into legacy code?

In the case of simple values, we saw no problem, but the following example tells us otherwise once generics are involved:
[Image: NullIntoNonNullList]

Again we have a list of type List<@NonNull String>, so dereferencing values obtained from that list should never throw NPE. Unfortunately, the legacy method printNames() succeeded in breaking our contract by inserting null into the list, resulting in yet another NPE thrown in null-checked code.

To describe this situation it helps to draw boundaries not only between null-checked and legacy code, but also to draw a boundary around the null-checked value of parameterized type List<@NonNull String>. That boundary is breached when we pass this value into legacy code, because that code will only see List<String> and happily invoke add(null).

This is where I recently invented a new diagnostic message:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<String>’
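A sketch of the kind of code that triggers this diagnostic (reconstructed, since the original screenshot is not shown here):

import java.util.ArrayList;
import java.util.List;
import org.eclipse.jdt.annotation.NonNullByDefault;

class LegacyPrinter { // not annotated: sees a plain List<String>
  static void printNames(List<String> names) {
    names.add(null); // legal on the legacy side, breaks the caller's contract
  }
}

@NonNullByDefault
class CheckedCaller {
  void demo() {
    List<String> names = new ArrayList<>(); // means List<@NonNull String> here
    names.add("Alice");
    LegacyPrinter.printNames(names); // warning: made accessible using the less-annotated type
    String last = names.get(names.size() - 1); // statically @NonNull ...
    System.out.println(last.length());         // ... yet NPE at runtime
  }
}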

By passing names into legacy code, we enable a hidden data flow in the opposite direction. In the general case, this introduces the risk of NPE in otherwise null-checked code. Always?

Wildcards

Java would be a much simpler language without wildcards, but a closer look reveals that wildcards actually help not only with type safety but also with null-safety. How so?

If the legacy method were written using a wildcard, it would not be (easily) possible to sneak in a null value; here are two attempts:
[Image: SneakAttempts]

The first attempt is an outright Java type error. The second triggers a warning from Eclipse, despite the lack of null annotations:

Null type mismatch (type annotations): ‘null’ is not compatible to the free type variable ‘?’
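Reconstructed sketches of the two attempts (the first method intentionally does not compile):

import java.util.List;

class LegacyWildcard {
  static void attempt1(List<?> names) {
    names.add("x");  // outright Java type error: cannot add a String to List<capture-of-?>
  }
  static void attempt2(List<?> names) {
    names.add(null); // plain Java allows this, but Eclipse warns that 'null'
                     // is not compatible to the free type variable '?'
  }
}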

Of course, compiling the legacy class without null-checking would still bypass our detection, but chances are already better.

If we add an upper bound to the wildcard, as in List<? extends CharSequence>, not much changes. A lower bound, however, is an invitation for the legacy code to insert null at whim: List<? super String> will cause names.add() to accept any String, including the null value. That’s why Eclipse will also complain about lower-bounded wildcards:

Unsafe null type conversion (type annotations): The value of type ‘List<@NonNull String>’ is made accessible using the less-annotated type ‘List<? super String>’
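A sketch of that lower-bounded case:

import java.util.List;

class LegacyLowerBound {
  static void printNames(List<? super String> names) {
    names.add(null); // compiles in plain Java; breaches a List<@NonNull String> argument
  }
}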

Comparing to raw types

It has been suggested to treat legacy (not null-annotated) types like raw types. Both are types with a part of the contract ignored, thereby causing risks for parts of the program that still rely on the contract.

Interestingly, raw types are more permissive in the parameterized-to-raw conversion. We are generally not protected against legacy code inserting an Integer into a List<String> when passed as a raw List.

More interestingly, using a raw type as a type argument produces an outright Java type error, so my final attempt at hacking the type system failed:

[Image: RawTypeArgument]

Summary

We have seen several kinds of data flow with different risks:

  • Simple values flowing checked-to-legacy don’t cause any specific headache
  • Simple values flowing legacy-to-checked should be treated as nullable to avoid bad surprises. This is checked.
  • Values of parameterized type flowing legacy-to-checked must be handled with care at the receiving side. This is checked.
  • Values of parameterized type flowing checked-to-legacy add more risks, depending on:
    • nullness of the type argument (@Nullable type argument has no risk)
    • presence of wildcards, unbounded or lower-bounded.

Eclipse can detect all mentioned situations that would cause NPE to be thrown from null-checked code – the capstone to be released with Eclipse 2020-03, i.e., coming soon …


by Stephan Herrmann at February 06, 2020 07:38 PM

Eclipse and Handling Content Types on Linux

by Mat Booth at February 06, 2020 03:00 PM

Getting deep desktop integration on Linux.


by Mat Booth at February 06, 2020 03:00 PM

Jakarta EE Community Update February 2020

by Tanja Obradovic at February 05, 2020 07:21 PM

With the Jan 16 announcement that we’re targeting a mid-2020 release for Jakarta EE 9, the year is off to a great start for the entire Jakarta EE community. But, the Jakarta EE 9 release announcement certainly wasn’t our only achievement in the first month of 2020.

Here’s a look at the great progress the Jakarta EE community made in January, along with some important reminders and details about events you won’t want to miss.

____________________________________________________________

The Java EE Guardians Are Now the Jakarta EE Ambassadors

The rebranding is complete and the website has been updated to reflect the evolution. Also note that the group’s:

  • Twitter handle has been renamed to @jee_ambassadors

  • Google Group has been renamed to https://groups.google.com/forum/#!forum/jakartaee-ambassadors

Everyone at the Eclipse Foundation and in the Jakarta EE community is thrilled the Java EE Guardians took the time and effort to rebrand themselves for Jakarta EE. I’d like to take this opportunity to thank everyone involved with the Jakarta EE Ambassadors for their contributions to the advancement of Jakarta EE. As Eclipse Foundation Executive Director, Mike Milinkovich, noted, “The new Jakarta EE Ambassadors are an important part of our community, and we very much appreciate their support and trust.”

I look forward to collaborating with the Jakarta EE Ambassadors to drive the success and growth of the Jakarta EE community. I’d also like to encourage all Jakarta EE Ambassadors to start using the new logo to associate themselves with the group.

____________________________________________________________

Java User Groups Will Be Able to Adopt a Jakarta EE Specification

We’re working to enable Java User Groups (JUGs) to become actively involved in evolving the Jakarta EE Specification through our adopt-a-spec program.

In addition to being Jakarta EE contributors and committers, JUG members that adopt-a-spec will be able to:

  • Blog about the Specification

  • Tweet about the Specification

  • Write an end-to-end test web application, such as Pet Store for Jakarta EE

  • Review the specification and comment on unclear content

  • Write additional tests to supplement those we already have

We’ll share more information and ideas for JUG groups, organizers, and individuals to get involved as we finalize the adopt-a-spec program details and sign up process.

____________________________________________________________ 

We’re Improving Opportunities for Individuals in the Jakarta EE Working Group

Let me start by saying we welcome everyone who wants to get involved with Jakarta EE! We’re fully aware there’s always room for improvement, and that there are issues we don’t yet know about. If you come across a problem, please get in touch and we’ll be happy to help.

We recently realized we’ve made it very difficult (read: impossible) for individuals employed by companies that are neither Jakarta EE Working Group participants nor Eclipse Foundation Members to become committers in Jakarta EE Specification projects.

We’re working to address the problem for these committers and are aiming to have a solution in place in the coming weeks. In the meantime, these individuals can become contributors.

We’ve provided the information below to help people understand the paperwork that must be completed to become a Jakarta EE contributor or a committer. Please look for announcements in the next week or so.

 ______________________________________________________ 

It’s Time to Start Working on Jakarta EE 9               

Now that the Jakarta EE 9 Release Plan is approved, it’s time for everyone in the Jakarta EE community to come together and start working on the release.

Here are links that can help you get informed and motivate you to get involved!

  • Start with the Jakarta EE Platform specification page.

  • Access the files you need on the Jakarta EE Platform GitHub page.

  • Monitor release progress here.

____________________________________________________________

All Oracle Contributed Specifications Are Now Available for Jakartification

We now have the copyright for all of the Java EE specifications that Oracle contributed, so we need the community’s help with Jakartification more than ever. This is the only way the Java EE specifications can be contributed to Jakarta EE.

To help you get started:

  • The Specification Committee has created a document that explains how to convert Java EE specifications to Jakarta EE.

  • Ivar Grimstad provided a demo during the community call in October. You can view it here.

______________________________________________

Upcoming Events

Here’s a brief look at two upcoming Jakarta EE events to mark on your calendar.

  • JakartaOne Livestream – Japan, February 26

This event builds on the success of the JakartaOne Livestream event in September 2019. Registration for JakartaOne Livestream – Japan is open and can be accessed here. Please keep in mind this entire event will be presented in Japanese. For all the details, be sure to follow the event on Twitter @JakartaOneJPN.

  • Cloud Native for Java Day at KubeCon + CloudNativeCon Amsterdam, March 30

Cloud Native for Java (CN4J) Day will be the first time the best and brightest minds from the Java ecosystem and the Kubernetes ecosystem come together at one event to collaborate and share their expertise. And, momentum is quickly building.

To learn more about this ground-breaking event, get a sense of the excitement surrounding it, and access the registration page, check out these links:

    • Eclipse Foundation’s official announcement

    • Mike Milinkovich’s blog

    • Reza Rahman’s blog

In addition to CN4J Day at KubeCon, the Eclipse Foundation will have a booth (#S73) featuring innovations from our Jakarta EE and Eclipse MicroProfile communities. Be sure to drop by to meet community experts in person and check out the demos.

________________________________________________________

Join Community Update Calls

Every month, the Jakarta EE community holds a community call for everyone in the Jakarta EE community. For upcoming dates and connection details, see the Jakarta EE Community Calendar.

Our next call is Wednesday, February 12, at 11:00 a.m. EST using this meeting ID.

We know it’s not always possible to join calls in real time, so here are links to the recordings and presentations from previous calls:

  • The complete playlist

  • January 15 call and presentation, featuring:

    • Updates on the Jakarta EE 9 release from Steve Millage

    • A call for action to help with Jakartifying specifications from Ivar Grimstad

    • A review of the Jakarta EE 2020 Marketing Plan and budget from Neil Paterson

    • A retrospective on Jakarta EE 8 from Ed Bratt

    • A heads up for the CN4J and JakartaOne Livestream events from Tanja Obradovic

  ____________________________________________________________

Stay Connected With the Jakarta EE Community

The Jakarta EE community is very active and there are a number of channels to help you stay up to date with all of the latest and greatest news and information. Tanja Obradovic’s blog summarizes the community engagement plan, which includes:

• Social media: Twitter, Facebook, LinkedIn Group

• Mailing lists: jakarta.ee-community@eclipse.org and jakarta.ee-wg@eclipse.org

• Newsletters, blogs, and emails: Eclipse newsletter, Jakarta EE blogs, monthly update emails to jakarta.ee-community@eclipse.org, and community blogs on “how are you involved with Jakarta EE”

• Meetings: Jakarta Tech Talks, Jakarta EE Update, Jakarta Town Hall, and Eclipse Foundation events and conferences

Subscribe to your preferred channels today. And, get involved in the Jakarta EE Working Group to help shape the future of open source, cloud native Java.

To learn more about Jakarta EE-related plans and check the date for the next Jakarta Tech Talk, be sure to bookmark the Jakarta EE Community Calendar.

 



by Tanja Obradovic at February 05, 2020 07:21 PM

CN4J Day Is an Important Industry First

by Thabang Mashologu at January 29, 2020 02:44 AM

Mark Cloud Native for Java (CN4J) Day on March 30 on your calendars. This all-day event at KubeCon + CloudNativeCon Europe will be the first time the best and brightest minds from the Java and the Kubernetes communities join forces at a single event to collaborate and share their expertise.
 
CN4J Day attendees will benefit from expert talks, demos, and thought-provoking sessions focused on building cloud native enterprise applications using Jakarta EE-based microservices on Kubernetes.

CN4J Day is a major milestone in open source software development. It’s also a significant day for everyone involved with the Eclipse Foundation because it confirms the Jakarta EE and MicroProfile communities are at the forefront of open source innovation, delivering on the promise of cloud native Java. Given that the Eclipse Foundation is Europe’s largest open source organization, it makes perfect sense to hold this first-of-its-kind event in Amsterdam.

The timing of CN4J Day could not be better. As our Executive Director, Mike Milinkovich, noted in his blog, momentum toward Jakarta EE 9 is building. This event gives all of us an important and truly unique opportunity to:

  • Learn more about the future of cloud native Java development from industry and community leaders
  • Gain deeper insight into key aspects of Jakarta EE, MicroProfile, and Kubernetes technologies
  • Meet and share ideas with global Java and Kubernetes ecosystem innovators 

Leading minds from the global Java ecosystem will be on hand at CN4J Day to share their expertise. Along with keynote talks from our own Mike Milinkovich and IBM Java CTO, Tim Ellison, CN4J Day offers technical insights from well-known Java experts and Eclipse Foundation community members, including:

  • Adam Bien, an internationally recognized Java architect, developer, workshop leader, and author
  • Sebastian Daschner, lead Java developer advocate at IBM
  • Clement Escoffier, principal software engineer at Red Hat
  • Ken Finnegan, senior principal engineer at Red Hat
  • Emily Jiang, Liberty architect for MicroProfile and CDI at IBM
  • Dmitry Kornilov, Jakarta EE and Helidon Team Leader at Oracle
  • Tomas Langer, Helidon Architect & Developer at Oracle

Leading players in the Java ecosystem have signed on to sponsor CN4J activities and show their support for the initiative. Our sponsors include the Cloud Native Computing Foundation (CNCF), IBM, Oracle, and Red Hat.

Since we have limited capacity, this is a first-come, first-served event. To register today, just include CN4J Day when you’re registering for KubeCon + CloudNativeCon Europe. Thanks to the generous support of our sponsors, a limited number of discounted CN4J Day add-on registrations will be made available to Jakarta EE and MicroProfile community members. For more information about CN4J Day and a link to the registration page, click here.
 
For additional questions regarding this event, please reach out to events-info@eclipse.org. We’ll let you know when we add speakers and sponsors for CN4J Day. Watch our blogs and newsletters for updates.

 


by Thabang Mashologu at January 29, 2020 02:44 AM

Setting up e(fx)clipse RCP development for Java11+ and PDE

by Tom Schindl at January 28, 2020 03:00 PM

As I’m currently converting a Java-8 project to AdoptJDK-11 and JavaFX-11+, I thought it would be a good idea to document the steps involved.

Prerequisites

I assume you have installed AdoptJDK-11, a JavaFX-11 SDK, and an Eclipse IDE with the e(fx)clipse tooling.

Configure your Eclipse

Java Settings

Make AdoptJDK-11 the default JRE unless it is already the default.

Make sure AdoptJDK-11 is used for the JavaSE-11 execution environment

e(fx)clipse Settings

Open the JavaFX-Preference Page and point it to your JavaFX-11-SDK

This step is required because JavaFX is not part of AdoptJDK-11 and hence Eclipse won’t find the libraries and your code won’t compile inside the IDE (we’ll revisit this topic below once more)

Setup a target platform

Create your project

Bootstrap your project

Check your project setup

Check if Eclipse correctly recognized the JavaFX libraries and magically added them to your plug-in dependencies

Implement the UI

Add javax.annotation to your MANIFEST.MF

Before you can write the Java code for your UI, you need to add the javax.annotation package to your bundle (this used to ship with Java-8 but has been removed since then)

Create a Java-Class

package my.app.app;

import javax.annotation.PostConstruct;

import javafx.scene.control.Label;
import javafx.scene.layout.BorderPane;

public class SamplePart {
  @PostConstruct
  void init(BorderPane root) {
    root.setCenter(
      new Label(System.getProperty("javafx.version"))
    );
  }
}

Adapt your e4xmi

Running your application

While everything happily compiles, running the application would fail, because in the initial steps we only satisfied the Eclipse compiler by magically injecting the JavaFX libraries into your plug-in dependencies (see above).

To run the application we need to decide how we’d like to ship JavaFX:

  • next to your application in a folder
  • as part of your eclipse application inside the plugins-directory
  • you jlink yourself a JDK

We’ll not take a look at the 3rd solution as part of this blog post!

Running with an external folder

Open the generated launch configuration and append -Defxclipse.java-modules.dir=PATH_TO_YOUR_JAVAFX_LIBS in the VM arguments-field

Running with bundled javafx-modules

We provide OSGi bundles that contain the original and unmodified JavaFX modules (note: you can NOT use them as OSGi dependencies!). You can use them by adding http://downloads.efxclipse.bestsolution.at/p2-repos/openjfx-11/repository/ to your target platform

Add them to your launch configuration

Exporting your application

The project wizard already generated the basic infrastructure for you, but we need to make some small changes. We assume you’ve chosen the option to ship the JavaFX modules as part of the plugins-directory, to keep it simple.

The wizard already added the JavaFX-Standard-Feature into your product-File

It also added the parts to satisfy the compiler in your releng/pom.xml

While most of the stuff is already in place we need to make 2 small modifications:

  • Update the tycho-version property to 1.5.0
  • Change the export environment to match the operation-system(s) you want to target
    • Windows: os=win32, ws=win32, arch=x86_64
    • Linux: os=linux, ws=gtk, arch=x86_64
    • OS-X: os=macosx, ws=cocoa, arch=x86_64

Producing a native launcher

As we have to produce a platform-dependent export anyway, we can also add the creation of a native launcher. For that, open your .product-File:

  • Tick the “The product includes native launcher artifacts” option
  • Change the application to main-thread-application


by Tom Schindl at January 28, 2020 03:00 PM

Introduction to Eclipse JKube: Java tooling for Kubernetes and Red Hat OpenShift

by Rohan Kumar at January 28, 2020 08:00 AM

As Java developers, we are often busy working on our applications, optimizing memory use, speed, and so on. In recent years, encapsulating our applications into lightweight, independent units called containers has become quite a trend, and almost every enterprise is trying to shift its infrastructure onto container technologies like Docker and Kubernetes.

Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications, but it has a steep learning curve, and an application developer with no background in DevOps can find this system a bit overwhelming. In this article, I will talk about tools that can help when deploying your Maven applications to Kubernetes/Red Hat OpenShift.

Background: Eclipse JKube

This project was not built from scratch. It’s a refactored and rebranded version of the Fabric8 Maven plugin, which was used in the Fabric8 ecosystem. Although the Fabric8 project was liked and appreciated by many people in the open source community, for unfortunate reasons it could not become successful, and the idea of Fabric8 as an integrated development platform on top of Kubernetes died. Although the main project is archived, there are still active repositories used by the community, such as the Fabric8 Docker Maven plugin, the Fabric8 Kubernetes client, and of course the Fabric8 Maven plugin.

As maintainers of the Fabric8 Maven plugin, we started decoupling the Fabric8 ecosystem-related pieces from the plugin to make a general-purpose Kubernetes/OpenShift plugin. We also felt there was a need for rebranding, because most people were confused about whether this plugin had something to do with Fabric8. Hence, we decided to rebrand it, and fortunately, someone from the Eclipse Foundation approached us about taking in our project. Now the project has been renamed to Eclipse JKube and can be found in the Eclipse Foundation repos on GitHub.

Eclipse JKube can be seen as a reincarnation of the Fabric8 Maven plugin. It contains the good parts of this plugin and offers a clean and smooth workflow with the tooling it provides. We refactored this plugin into three components:

  • The JKube Kit
  • The Kubernetes Maven plugin
  • The OpenShift Maven plugin

The JKube Kit contains the core logic for building Docker images, generating Kubernetes/OpenShift manifests, and applying them onto Kubernetes/OpenShift clusters. Plugins consume this library for their operations. In the future, we also plan to add support for Gradle plugins.

Example

Now, let’s have a look at Eclipse JKube in action. For the demo, I will deploy a simple Spring Boot project onto Kubernetes using the Eclipse Kubernetes Maven plugin. Let’s walk through this process:

  1. Add the Kubernetes Maven plugin as a dependency in your pom.xml file, as shown in Figure 1:

Figure 1: The Kubernetes Maven plugin added as a dependency in pom.xml.
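For readers who cannot see the figure, a minimal sketch of the plugin declaration (the version shown is illustrative; check Maven Central for the current release):

<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>kubernetes-maven-plugin</artifactId>
  <version>0.1.0</version>
</plugin>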

  2. Build your Docker images. The Eclipse JKube Kubernetes Maven plugin offers a zero-config mode, in which it builds your Docker image with opinionated defaults. However, you can customize it by providing an image configuration in the plugin configuration. In order to build a Docker image, you just need to run the following (the results are shown in Figure 2):
mvn k8s:build

Figure 2: Building the Docker image.

  3. Generate your Kubernetes resource manifests. Eclipse JKube plugins have a powerful and configurable resource generation mechanism in which they can generate Kubernetes resources in zero-config mode. This feature can also be configured using XML configuration or by placing customized resource fragments in the src/main/jkube directory. The results are merged with the finally generated resource fragments. In order to generate resources, run the following (the results are shown in Figure 3):
mvn k8s:resource

Figure 3: Creating the Kubernetes resource manifests.

  4. Apply the generated Kubernetes resources onto the Kubernetes cluster. In order to apply resources onto this cluster, you just need to run one of the following (the results are shown in Figure 4):
mvn k8s:apply

or:

mvn k8s:deploy

Figure 4: Applying the generated Kubernetes resources onto the Kubernetes cluster.

  5. Undeploy your Maven application from Kubernetes. We have a cleanup goal, too, which just deletes all resources created during the deploy phase. To use this feature, run the following (the results are shown in Figure 5):
mvn k8s:undeploy

Figure 5: Undeploying your Maven application from Kubernetes.

  6. Debug your Java application inside Kubernetes. Apart from these goals, we also have a goal for remote debugging. Suppose that you see a bug inside your application that’s running inside Kubernetes and you want to debug its behavior. You can simply run our debug goal, which does port forwarding for debugging:
mvn k8s:debug
  7. Configure your IDE in order to connect to this open port for debugging, as shown in Figure 6:

Figure 6: Configuring your IDE to debug your Java application inside Kubernetes via Maven.

  8. Set a breakpoint in the application code and hit the application endpoint. We can see the breakpoint being hit in the IDE, as shown in Figure 7:

Figure 7: The application’s breakpoint in your IDE.

With this result, I wrap up this article. We do have more in our pipeline, so stay tuned for new updates. If you want to get involved, please reach out to us via our mailing list at jkube-dev@eclipse.org, or our Gitter channel at https://gitter.im/eclipse/jkube#.


The post Introduction to Eclipse JKube: Java tooling for Kubernetes and Red Hat OpenShift appeared first on Red Hat Developer.


by Rohan Kumar at January 28, 2020 08:00 AM

Eclipse 2019-12 Now Available From Flathub

by Mat Booth at January 20, 2020 04:00 PM

How to get the latest release of Eclipse from Flathub.


by Mat Booth at January 20, 2020 04:00 PM

JDT without Eclipse

January 16, 2020 11:00 PM

The JDT (Java Development Tools) is an important part of the Eclipse IDE, but it can also be used without Eclipse.

For example, Spring Tools 4, which is nowadays a cross-platform tool (Visual Studio Code, Eclipse IDE, …), makes heavy use of the JDT behind the scenes. If you would like to know more, I recommend this podcast episode: Spring Tools lead Martin Lippert

A second well-known example is the Java Formatter, which is also part of the JDT. For a long time there have been Maven and Gradle plugins that perform the same formatting as the Eclipse IDE, but as part of the build (often with the possibility to break the build when the code is wrongly formatted).

Reusing the JDT has been made easier since 2017, when it was decided to publish each release and its dependencies on Maven Central (with the following groupIds: org.eclipse.jdt, org.eclipse.platform). Stephan Herrmann did a lot of work to achieve this goal. I blogged about this: Use the Eclipse Java Development Tools in a Java SE application, and I have pushed a simple example where the Java Formatter is used in a simple main(String[]) method built by a classic minimal Maven project: java-formatter.
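As a flavor of that standalone usage, here is a minimal sketch of invoking the formatter from a plain main method (assuming org.eclipse.jdt:org.eclipse.jdt.core and its dependencies from Maven Central on the classpath):

import org.eclipse.jdt.core.ToolFactory;
import org.eclipse.jdt.core.formatter.CodeFormatter;
import org.eclipse.jface.text.Document;
import org.eclipse.text.edits.TextEdit;

public class FormatterMain {
  public static void main(String[] args) throws Exception {
    String source = "public class Foo{void bar(){int x=1;}}";
    // null options -> the formatter's built-in defaults
    CodeFormatter formatter = ToolFactory.createCodeFormatter(null);
    TextEdit edit = formatter.format(CodeFormatter.K_COMPILATION_UNIT,
        source, 0, source.length(), 0, System.lineSeparator());
    if (edit != null) { // null would indicate the source could not be formatted
      Document document = new Document(source);
      edit.apply(document);
      System.out.println(document.get());
    }
  }
}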

Workspace or not?

When using the JDT in a headless application, two cases need to be distinguished:

  1. Some features (the parser, the formatter…) can be used in a simple Java main method.

  2. Other features (search index, AST rewriter…) require a workspace. This implies that the code runs inside an OSGi runtime.

To illustrate this aspect, I took some of the examples provided by the site www.programcreek.com in the blog post series Eclipse JDT Tutorials and I adapted them so that each code snippet can be executed inside a JUnit test. This is the Programcreek examples project.

I have split the unit-tests into two projects:

  • programcreek-standalone for the ones that do not require OSGi. The Maven project is really simple (using the default conventions everywhere)

  • programcreek-osgi for the ones that must run inside an OSGi runtime. The bnd Maven plugins are configured in the pom.xml to take care of the OSGi stuff.

If you run the tests with Maven, they will work out of the box.

If you would like to run them inside an IDE, you should use one that starts OSGi when executing the tests (in the same way the Maven build does it). To get a bnd-aware IDE, you can use Eclipse IDE for Java Developers with the additional Bndtools plugin installed, but there are other possibilities.

Source code can be found on GitHub: programcreek-examples


January 16, 2020 11:00 PM

Oracle made me a Stackoverflow Guru

by Stephan Herrmann at January 16, 2020 06:40 PM

Just today Oracle helped me to become a “Guru” on Stackoverflow! How did they do it? By doing nothing.

In former times, I was periodically enraged, when Oracle didn’t pay attention to the feedback I was giving them during my work on ecj (the Eclipse Compiler for Java) – at least not the attention that I had hoped for (to be fair: there was a lot of good communication, too). At those times I had still hoped I could help make Java a language that is completely and unambiguously defined by specifications. Meanwhile I recognized that Java is at least three languages: the language defined by JLS etc., the language implemented by javac, and the language implemented by ecj (and no chance to make ecj to conform to both others). I realized that we were not done with Java 8 even 3 years after its release. Three more years later it’s still much the same.

So let’s move on, haven’t things improved in subsequent versions of Java? One of the key new rules in Java 9 is, that

“If [a qualified package name] does not name a package that is uniquely visible to the current module (§7.4.3), then a compile-time error occurs”.

Simple and unambiguous. That’s what compilers have to check.

Except: javac doesn’t check for uniqueness if one of the modules involved is the “unnamed module”.

In 2018 there was some confusion about this, and during discussion on stackoverflow I raised this issue to the jigsaw-dev mailing list. A bug was raised against javac, confirmed to be a bug by spec lead Alex Buckley. I summarized the situation in my answer on stackoverflow.

This bug could have been easily fixed in javac version 12, but wasn’t. Meanwhile, upvotes on my answer on Stackoverflow started coming in. The same for Java 13. The same for Java 14. And yet no visible activity on the javac bug. You need ecj to find out whether your program violates this rule of the JLS.

Today the 40th upvote earned me the “Guru” tag on stackoverflow.

So, please Oracle, keep that bug unresolved, it will earn me a lot of reputation for a bright future – by doing: nothing 🙂


by Stephan Herrmann at January 16, 2020 06:40 PM

Building and running Equinox with maven without Tycho

January 12, 2020 11:00 PM

Eclipse Tycho is a great way to let maven build PDE based projects. But the Plug-in Development Environment (PDE) model is not the only way to work with OSGi.

In particular, for the last 2 or 3 years the Eclipse Platform jars (including the Equinox jars) have been regularly published on Maven Central (check the artifacts having org.eclipse.platform as groupId).

I was looking for an alternative to P2 and to the target-platform mechanism.

[Image: bnd and Bndtools logos]

Bnd and Bndtools are always mentioned as potential alternatives to PDE (I attended several talks discussing this at EclipseCon 2018: Migrating from PDE to Bndtools in Practice, From Zero to a Professional OSGi Project in Minutes). So I decided to explore this path.

This StackOverflow question caught my attention: How to start with OSGi. I had a close look at the answer provided by Peter Kriens (the founder of the Bnd and Bndtools projects), where he discusses the different possible setups:

  • Maven Only

  • Gradle Only

  • Eclipse, M2E, Maven, and Bndtools

  • Eclipse, Bndtools, Gradle

Even in the "Maven Only" or "Gradle Only" setups, the proposed solution relies on plugins using bnd under the hood.

How to start?

My project is quite simple, and the dependencies are already on Maven Central. I will not have a complex use case with multiple versions of the same library or with platform-dependent artifacts. So fetching the dependencies with Maven is sufficient.

I decided to try the "Maven Only" model.

How to start?

I was not sure I understood how to use the different bnd Maven plugins: bnd-maven-plugin, bnd-indexer-maven-plugin, bnd-testing-maven-plugin, bnd-export-maven-plugin

Luckily I found the slides of the Bndtools and Maven: A Brave New World workshop (given at EclipseCon 2017) and the corresponding git repository: osgi-community-event2017.

The corresponding effective-osgi Maven archetypes used during the workshop still work well. I could follow the step-by-step guide (in the readme of the Maven archetypes project). I got everything working as described, and I could find enough explanations about the generated projects. I think I understood what I did, and this is very important when you start.

After some cleanup and a switch from Apache Felix to Eclipse Equinox, I got my running setup and I answered my question: "How to start with OSGi without PDE and Tycho".

The corresponding code is in this folder: effectiveosgi-example.


January 12, 2020 11:00 PM

4 Years at The Linux Foundation

by Chris Aniszczyk at January 03, 2020 09:54 AM

Late last year marked the fourth anniversary of the formation of the CNCF and of my joining The Linux Foundation:

As we enter 2020, it’s amusing for me to reflect on my decision to join The Linux Foundation a little over 4 years ago, when I was looking for something new to focus on. I spent about 5 years at Twitter, which felt like an eternity (the average tenure for a silicon valley employee is under 2 years), focused on open source, and enjoyed the startup life of going from a hundred or so engineers to a couple of thousand. I truly enjoyed the ride; it was a high-impact experience where we were able to open source projects that changed the industry for the better: Bootstrap (changed front-end development for the better), Twemoji (made emojis more open source friendly and embeddable), Mesos (pushed the state of the art for open source infrastructure), co-founded the TODO Group (pushed the state of corporate open source programs forward) and more!

When I was looking for change, I wanted to find an opportunity where I could have more impact than I could at just one company. I had some offers from FAANG companies and amazing startups but eventually settled on the nonprofit Linux Foundation because I wanted to build an open source foundation from scratch, teach other companies about open source best practices and assumed nonprofit life would be a bit more relaxing than diving into a new company (I was wrong). Also, I was thoroughly convinced that an openly governed foundation pushing Kubernetes, container specifications and adjacent independent cloud native technologies would be the right model to move open infrastructure forward.

As we enter 2020, I realize that I’ve been with one organization for a long time, and that puts me on edge, as I enjoy challenges and chaos, and dread anything that makes me comfortable or complacent. Also, I have a strong desire to focus on efforts that involve improving the state of security and privacy in a connected world, participatory democracy, and climate change; also anything that pushes open source to new industries and geographies.

While I’m always happy to entertain opportunities that align to my goals, the one thing that I do enjoy at the LF is that I’ve had the ability to build a variety of new open source foundations improving industries and communities: CDF, GraphQL Foundation, Open Container Initiative (OCI), Presto Foundation, TODO Group, Urban Computing Foundation and more.

Anyways, thanks for reading and I look forward to another year of bringing open source practices to new industries and places, the world is better when we are collaborating openly.


by Chris Aniszczyk at January 03, 2020 09:54 AM

Eclipse Theia IDE – FAQ

by Jonas Helming and Maximilian Koegel at December 24, 2019 12:00 AM

In this article we’ll answer the most frequently asked questions about the Eclipse Theia IDE, the open source platform for building...

The post Eclipse Theia IDE – FAQ appeared first on EclipseSource.


by Jonas Helming and Maximilian Koegel at December 24, 2019 12:00 AM

An update on Eclipse IoT Packages

by Jens Reimann at December 19, 2019 12:17 PM

A lot has happened since I last wrote about the Eclipse IoT Packages project. We had some great discussions at EclipseCon Europe, and started to work together online, having new ideas in the process. Right before the end of the year, I think it is a good time to give an update, and peek a bit into the future.

Homepage

One of the first things we wanted to get started was a home for the content we plan on creating. An important piece of the puzzle is to explain to people what we have in mind. Not only to people who want to try out the various Eclipse IoT projects, but also to possible contributors. And in the end, an important goal of the project is to attract interested parties, for consuming our ideas or growing them even further.

Eclipse IoT Packages logo

So we now have a logo and a homepage, built using templates in a continuous build system. We are in a position to start focusing on the actual content, and on the more tricky tasks and questions ahead. And should you want to create a PR for the homepage, you are more than welcome. There is also already some content explaining the main goals, the way we want to move forward, and a demo of a first package: “Package Zero”.

Community

While the homepage is a good entry point for people to learn about Eclipse IoT and packages, our GitHub repository is the home for the community. And having some great discussions on GitHub quickly brought up the need for a community call and a more direct communication channel.

If you are interested in the project, come and join our bi-weekly community call. It is a quick, 30-minute call at 16:00 CET, open to everyone, repeating every two weeks, starting 2019-12-02.

The URL to the call is: https://eclipse.zoom.us/j/317801130. You can also subscribe to the community calendar to get a reminder.

In between calls, we have a chat room eclipse/packages on Gitter.

Eclipse IoT Helm Chart Repository

One of the earliest discussions we had was around the question of how and where we want to host the Helm charts. We would prefer not to author them ourselves, but let the projects contribute them. After all, the IoT Packages project has the goal of enabling you to install a whole set of Eclipse IoT projects with only a few commands. So the focus is on the integration, and the expert knowledge required for creating a project Helm chart is in the actual projects.

On the other side, having a one-stop shop, for getting your Eclipse IoT Helm charts, sounds pretty convenient. So why not host our own Helm chart repository?

Thanks to a company called Kiwigrid, who contributed a CI pipeline for validating charts, we could easily extend our existing homepage publishing job to also publish Helm charts. As a first chart, we published the Eclipse Ditto chart. And, as expected with Helm, installing it is as easy as:
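A sketch of what that looks like (the repository alias and URL here are illustrative assumptions, and the single-argument install is Helm 2 syntax):

# Add the Eclipse IoT chart repository (URL is an assumption for illustration)
helm repo add eclipse-iot https://www.eclipse.org/packages/charts
# Install the Eclipse Ditto chart from it
helm install eclipse-iot/ditto

With Helm 3 you would name the release explicitly, e.g. helm install my-ditto eclipse-iot/ditto.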

Of course, having a single chart is only the first step; publishing a single Helm chart isn’t that impressive. But getting an agreement in the community, getting the validation and publishing pipeline set up, attracting new contributors: that is definitely a big step in the right direction.

Outlook

I think that we now have a good foundation, for moving forward. We have a place called “home”, for documentation, code and community. And it looks like we have also been able to attract more people to the project.

While our first package, “Package Zero”, still isn’t complete, it should be pretty close. Creating a first, joint deployment of Hono and Ditto is our immediate focus. And we will continue to work towards a first release of “Package Zero”. Finding a better name is still an item on the list.

Having this foundation in place also means that the time is right for you to think about contributing your own Eclipse IoT Package. Contributions are always welcome.

The post An update on Eclipse IoT Packages appeared first on ctron's blog.


by Jens Reimann at December 19, 2019 12:17 PM

WTP 3.16 Released!

December 18, 2019 10:42 PM

The Eclipse Web Tools Platform 3.16 has been released! Installation and updates can be performed using the Eclipse IDE 2019-12 Update Site or through the Eclipse Marketplace. Release 3.16 is included in the 2019-12 Eclipse IDE for Enterprise Java Developers, with selected portions also included in several other packages. Adopters can download the R3.16 build directly and combine it with the necessary dependencies.

More news


December 18, 2019 10:42 PM

Announcing Eclipse Ditto Release 1.0.0

December 12, 2019 12:00 AM

Today the Eclipse Ditto team is thrilled to announce the availability of Eclipse Ditto's first major release, 1.0.0.

Maturity

The initial code contribution was done in October 2017. Two years and two releases (0.8.0 and 0.9.0) later, we think it's time to graduate from the Eclipse “incubation” phase and officially declare the project as mature.

Recent adoptions and contributions from our community show us that Eclipse Ditto solves problems that other companies have as well. Adopters add Eclipse Ditto as a central part of their own IoT platforms.

API stability

Having reached 1.0.0, some additional promises towards “API stability” now apply:

HTTP API stability

Ditto uses schema versioning (currently schema versions 1 and 2) at the HTTP API level in order to be able to evolve APIs. It is backward compatible with the prior versions 0.8.0 and 0.9.0.

JSON API stability

Ditto kept its main JSON APIs (regarding things, policies and search) backwards compatible with the 0.8.0 and 0.9.0 releases. The JSON format of “connections” has changed since 0.9.0 and will be kept backwards compatible from 1.0.0 on as well.

Java API stability

The Java APIs will be kept backwards compatible for the 1.x releases, so only non-breaking additions to the APIs will be made. This is enforced by Maven tooling (a sketch of one way to do this follows after the module list below).

The following Java modules are treated as API for which compatibility is enforced:

  • ditto-json
  • ditto-model-*
  • ditto-signals-*
  • ditto-protocol-adapter
  • ditto-utils
  • ditto-client
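
One common way to enforce such compatibility with Maven is the japicmp plugin, comparing the current build against the last released version; a sketch of such a configuration (whether exactly this plugin and setup is used here is an assumption):

<plugin>
  <groupId>com.github.siom79.japicmp</groupId>
  <artifactId>japicmp-maven-plugin</artifactId>
  <version>0.14.3</version>
  <configuration>
    <oldVersion>
      <dependency>
        <!-- compare against the previous release of an API module -->
        <groupId>org.eclipse.ditto</groupId>
        <artifactId>ditto-json</artifactId>
        <version>0.9.0</version>
      </dependency>
    </oldVersion>
    <parameter>
      <!-- fail the build on binary-incompatible changes -->
      <breakBuildOnBinaryIncompatibleModifications>true</breakBuildOnBinaryIncompatibleModifications>
    </parameter>
  </configuration>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>cmp</goal>
      </goals>
    </execution>
  </executions>
</plugin>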

Scalability

The focus of the 0.9.0 and 1.0.0 releases regarding non-functional requirements was on horizontal scalability.

With Eclipse Ditto 1.0.0 we are confident to face production-grade scalability requirements, being capable of handling millions of managed things.

Changelog

The main changes compared to the last release, 0.9.0, are:

  • addition of a Java and a JavaScript client SDK in separate GitHub repo
  • configurable OpenID Connect authorization servers
  • support for OpenID Connect / OAuth2.0 based authentication in Ditto Java Client
  • invoking custom foreign HTTP endpoints as a result of events/messages
  • ability to reflect Eclipse Hono’s device connection state in Ditto’s things
  • configurable throttling of max. consumed WebSocket commands / time interval
  • addition of a “definition” field in things at the model level, containing the model ID a thing may follow
  • improved connection response handling/mapping

Please have a look at the 1.0.0 release notes for more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Docker images have been pushed to Docker Hub:





The Eclipse Ditto team


December 12, 2019 12:00 AM

Xtext 2.20 Release

by Karsten Thoms (thoms@itemis.de) at December 03, 2019 02:38 PM

Right on time for the Eclipse 2019-12 Simultaneous Release, we have shipped Xtext 2.20. This time we focussed more on maintenance work than on features. As with each release, the world around us is spinning fast, and keeping the whole technology stack up to date and testing against it is quite time-consuming.

Let’s talk about Xtend

For a long time, the Java language lacked some features that could make a developer's life easier. This was one of the reasons that a broad range of languages running on the Java Virtual Machine (JVM) became popular, Xtend being one of them. With its powerful lambda expressions, extension methods, and template support, Xtend had some sweet spots back in 2013 which Java did not have. And even with the availability of lambdas in Java 8, it took some years for projects to catch up with that. Xtend provided this for years, while still being able to produce Java 1.6-compliant code.

Now the (Java) world has changed, and some nice language features have been added to Java, making the gap to Xtend smaller. Back in 2013, we claimed Xtend to be the “Java 10 of today”. We are realistic enough to state that Xtend is not, and will not be, the “Java 17 of today”. However, there are still areas where we see Xtend as beneficial over Java and other JVM languages. To be more specific, we still think that Xtend is the most powerful language supporting template expressions. The most common use case for this is code generators. Besides that, writing unit tests with Xtend feels much cleaner than with Java.

However, we decided to encourage using Xtend only for these areas, and not as the primary general-purpose language. And we are starting to do this with the “New Project” wizard. The configuration that this wizard creates for a new Xtext project will now use Java as the language for generated skeleton classes, so that newly created projects (and especially new users) use Java by default. This is just a changed default for the generated MWE2 workflow, and users who still prefer to use Xtend for the generated artifacts can simply modify the workflow file. We expect that those users are advanced anyway. Xtend will stay the default language for the code generator and unit test fragments.

Additionally, we have started to clean up the code base and to refactor some of the Xtend code to Java. As Xtend is already compiled into Java, this basically means that we take those sources and clean them up. This will be ongoing maintenance work. If you would like to contribute to Xtext, this would be a good starting point for refactoring contributions.

New Xtend features

That being said, there is good news about some features that have been added to Xtend's Eclipse integration. We are very happy about some useful contributions from Vivien Jovet in this area.

A new refactoring has been implemented that allows the user to refactor a call to a static method either into a static import or into a static extension. This allows the user to produce more readable and fluent code.

(Screencast: refactoring a static method call into a static import)

The testing support for Xtend has been improved:

  • An Xtend unit test can now be triggered from within the Eclipse IDE when the cursor is located on the test class definition.
  • As known from JDT’s JUnit integration, Xtend now also provides quickfixes if the JUnit library is missing on the classpath. By using the quickfix, the library can be added for either JUnit 4 or 5.

It’s time to get rid of old generator workflows

Already back in 2015, we changed to the new Xtend-based generator fragments and deprecated the old Xpand-based language generator. If you still use an old generator workflow based on the org.eclipse.xtext.generator bundle (the new bundle is org.eclipse.xtext.xtext.generator; please note the duplicated .xtext segment), then it is time for you to finally take action!

The old generator is based on the Xpand language, which has been dormant for a while. We are refactoring Xtext to avoid any dependency on Xpand, except for the deprecated generator bundle. Also, we do not change the old generator templates anymore, so we strongly recommend using the maintained new generator infrastructure. Although it is not scheduled yet, dropping the whole old generator completely is just a matter of time. So, please, if you still have any anciently structured Xtext projects, migrate them to the recommended infrastructure! If you need help with this, get in contact with us. We have enough experience to help you quickly on that.

Create new projects and files from the toolbar

If you want to allow creation of projects and files for your DSL from the toolbar, then this is good news for you: the fragments for generating the infrastructure for wizards have been extended by an option called generateToolbarButton. As the name already suggests, the generator fragments will generate the button in the toolbar if this option is enabled in the fragment's configuration in your generator workflow.
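
In the MWE2 generator workflow this could look roughly like the following sketch, assuming the standard language setup (the exact property names are an assumption on my part):

language = StandardLanguage {
    name = "org.xtext.example.mydsl.MyDsl"
    fileExtensions = "mydsl"

    // generate the "New Project" wizard with a toolbar button
    projectWizard = {
        generate = true
        generateToolbarButton = true
    }

    // generate the "New File" wizard with a toolbar button
    fileWizard = {
        generate = true
        generateToolbarButton = true
    }
}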

Making our maintenance work easier

With 4 releases per year and 3 milestone releases towards each release, it is quite some effort to make these releases. As we finished our hopefully last build infrastructure change, to Eclipse JIRO, with the previous release, we were able to invest a bit of time into enhancing our build pipelines again.

As a result, initiating a milestone or final release now mostly means triggering a parameterized build job and then waiting several hours until everything has been built. Actually, while I'm writing this article, the final Xtext release is being built for you, triggered 3.5 hours ago. Yes, it still builds that long. And it is still painful to orchestrate the build over all Xtext repositories. There are still some steps that require manual action (releasing to Maven Central, updating Eclipse Marketplace, sending notifications to the communication channels), but we are slowly adding all automatable tasks to the pipelines.

Also, we worked with the Eclipse infrastructure staff to put us in the position that our technical build user is able to raise pull requests on GitHub automatically. This enabled us to create a bot update pipeline that lets us automate some frequently occurring update changes: for example, updating the version, the versions of tools to use (like Tycho), the Orbit URL, etc. The job raises pull requests for us, so we can safely verify that nothing is missing and that everything is properly built. It is very much like the dependency update bots such as Dependabot that are coming up more and more, but tightly tailored to the very specific needs of the Xtext project. We are still at the beginning here. Some first pull requests merged for 2.20 were created by the bot job. We expect that the bot will be triggered automatically in the future, and that the bot user will then become one of the most active Xtext contributors.

Conclusion

Xtext 2.20 is a maintenance release. For users of a recent Xtext version it will be a drop-in replacement. Users of old versions and project structures are recommended to upgrade their projects in order to keep them compatible.

The Xtext project has started to discourage the usage of Xtend where the latter's language features do not have a significant benefit over Java. Internally, the project has started to refactor the codebase to follow this recommendation.

For build and release engineering, the project improved towards more automated tasks and benefits from reduced manual maintenance tasks.

The project team is happy to receive contributions. We are especially grateful for new feature ideas that are actively developed by contributors.

Do you want to know more? Have a look at the release notes for Xtext & Xtend.


by Karsten Thoms (thoms@itemis.de) at December 03, 2019 02:38 PM

Obeo Cloud Platform

December 02, 2019 10:00 AM

TLDR; This is almost the story of my first CTO pregnancy experience, organizational stuff inside.

It’s been almost 2 years since I started operating as Obeo’s CTO, 2 years since I accepted the challenge to take the lead of our R&D. As part of this, almost 1 year ago I started to organize the development of our new generation modeling tool solution. And for the past 9 months my team has been busy working full time on this new product.

When you become “pregnant” with a new product as a product manager, you are basically a story trigger for everyone around you: people love to tell their failed-project and stupid-leader stories. It is scary to be in the place where you are the one who decides. It is even more frightening when you are at the point where you redesign your products with a completely different technological layer. This is where I was one year ago. At Obeo, we have been developing modeling workbenches based on the Eclipse Platform for years. Today our customers want more and have asked us to modernize the modeling stack by making it cloud compliant. So how do we go from this statement to a first software release?

If you’re a product manager or affiliate, you’re probably aware that the first nine months of a new software product are always a big adventure. CTOs-to-be may have a lot of questions about what they can expect and the changes they’ll go through. Do you know when to expect to feel your product move? When to look for a UI/UX designer or a continuous deployment pipeline? Customer interest in what you are developing? Are those the signs of preterm labor?

1st month - Your imagination has no limit

The beginning of a new product is not easy. Where should we start? Should we rely on the prototypes we already have? Should we build a completely new project? How will you build your team? Who will be in? When? What will be the first scenario supported in your software? How should you be organized? These are all the first questions you will come across.

What I will remember from that first month is that at some point you have to make decisions: that is why you are here, that’s your job, and that will remind you why there is a C in CTO. You have the power of choice. There is not a unique good solution for everything. When every other person has a piece of advice for you, you start to understand that not everything works for everyone. I found what I believe works best for me. Of course, what works for me may not work for you: it depends on your company’s technological background, but also on how you want to get your collaborators engaged. Each project, team and context is different.

Take the time to discuss this with others in your company; take the time to find out what they need, but also what they dream about for this new product. I developed my product vision, for example, by asking people to fill in a survey with questions such as: in 1, 2, or 5 years, what is the final goal, the business needs, the users’ expectations, the success factors, etc. See the product vision template available on my GitHub for details.

Product Vision Template

Write a summary of all this. And in the end, find your own voice in the ocean of opinions. Build your team, define your first scenario, share what you decided with everyone, and do not hesitate to fail! Your first organisation might not be the best one: try, learn, update, reorganize, try something else… You are not alone in this task; this is a collective effort. The entire team cares about its organization and discusses it regularly by holding retrospectives, having a dedicated moment during regular meetings, defining processes to improve collaboration, selecting tools…

The questioning period you just went through, your team will go through as well. Which technical bricks will they rely on? Which parts of our existing code will be integrated into this new project? Should we start by building the simplest scenario, without taking into account the whole complexity of the end product we are trying to build? What should the software architecture be in the end?

Important point: give your team some time to discuss all those things, but ask them to produce something even during that period. In our case, we wrote documents (AsciiDoc rocks!) and decided to keep a trace of all our decisions. For that, we used ADRs - Architectural Decision Records: it is like reading the pocket reference of your project. After several months of use, it has turned out to be really useful. It helps people who are working part time on the project to understand what we decided and why, and it helps us remember why we took a given decision. It forces us to discuss and validate all the important decisions together. You also need to end this questioning period by pushing your team to contribute code, and not lose them in the limbo of the imagined product.

Mid months - Scared! Believe! Realize!

Pregnancy, giving software birth and the first 3 months are such an emotional roller coaster with so many changes. As you move forward on your development journey, it is amazing to discover all the things your sweety-software can do before he’s born. We went through different themes during these months: persistence, CRUD, diagrams, properties views, edition, concurrency, authentication… Then everything is on track: you are organized as a team, code is being produced, your scenarios are getting richer and richer. And at some point, we realized we were actually growing life within us. It did not come right when we started the project.

I felt it in 3 specific moments —

  • When we chose the name - Obeo Cloud Platform,
  • When I had a first ultrasound scan preview of the UI - fortunately we have a Design team to work with,
  • And when he kicked for the first time - I mean his first public live demo.

We will also never forget our baby shower celebration at EclipseCon Europe! This was an amazing moment for us. We were so happy to present our new work to the Sirius community. That day I was feeling the kicks in my belly. As Steve Jobs put it, “A lot of times, people don’t know what they want until you show it to them.” That’s why we decided to give a preview of our new product even before it is polished, and why we launched a beta testing team. The idea is to give them preview access and organize live remote testing sessions to grow the feedback river with real end users. Join us now and share your needs!

8-9th months - You are (almost) releasing!

Today, I hardly realise we are already in the last months of pregnancy: we are getting closer and closer to our due date. I have this mixed feeling of excitement and nervousness, and I want our software product to arrive as soon as possible.

Our product is getting ready for birth, and the whole Obeo family is preparing to welcome a new member. You are also invited to join us, stay tuned for the upcoming SiriusCon Live Q1 2020.

At the end of this year, in his first days, our software product will be tiny and little. He will know only a few things, but will do them well. Next we will continue to feed him: baby bottles first, and then we will go through the diversification phase, introducing new kinds of food. My team will help him grow; you, our users, our customers, will be part of his education. We have many projects and plans, but we need you to turn true customer problems into product features.

Sometimes it’s important to stop and look back. It really made me appreciate and comprehend what an amazing experience my company and I went through ❤ and still are going through.

Hope you had a good read.


December 02, 2019 10:00 AM

Eclipse m2e: How to use a WORKSPACE Maven installation

by kthoms at November 27, 2019 09:39 AM

Today a colleague of mine asked me about the Maven Installations preference page in Eclipse. There is an entry WORKSPACE there, which is disabled and shows NOT AVAILABLE. He wanted to know how to enable a workspace installation of Maven.

Since neither of us could find the documentation for the feature, I dug into the m2e sources and found the class MavenWorkspaceRuntime. The relevant snippets are the method getMavenDistribution() and the MAVEN_DISTRIBUTION constant:

private static final ArtifactKey MAVEN_DISTRIBUTION = new ArtifactKey(
      "org.apache.maven", "apache-maven", "[3.0,)", null); //$NON-NLS-1$ //$NON-NLS-2$ //$NON-NLS-3$

...

protected IMavenProjectFacade getMavenDistribution() {
  try {
    VersionRange range = VersionRange.createFromVersionSpec(getDistributionArtifactKey().getVersion());
    for(IMavenProjectFacade facade : projectManager.getProjects()) {
      ArtifactKey artifactKey = facade.getArtifactKey();
      if(getDistributionArtifactKey().getGroupId().equals(artifactKey.getGroupId()) //
          && getDistributionArtifactKey().getArtifactId().equals(artifactKey.getArtifactId())//
          && range.containsVersion(new DefaultArtifactVersion(artifactKey.getVersion()))) {
        return facade;
      }
    }
  } catch(InvalidVersionSpecificationException e) {
    // can't happen
  }
  return null;
}

From here you can see that m2e looks for workspace (Maven) projects and tries to find one that has the coordinates org.apache.maven:apache-maven:[3.0,).

So the answer to how to enable a WORKSPACE Maven installation is: import the project apache-maven into the workspace. And here is how to do it:

  1. Clone Apache Maven from https://github.com/apache/maven.git
  2. Optionally: check out a release tag
    git checkout maven-3.6.3
  3. Perform File / Import / Existing Maven Projects
  4. As Root Directory select the apache-maven subfolder in your Maven clone location

Now you will have the project that m2e searches for in your workspace:

And the Maven Installations preference page lets you now select this distribution:


by kthoms at November 27, 2019 09:39 AM

Modernizing our GitHub Sync Toolset

November 19, 2019 08:10 PM

I am happy to announce that my team is ready to deploy a new version of our GitHub Sync Toolset on November 26, 2019 from 10:00 to 11:00 am EST.

We are not expecting any disruption of service but it’s possible that some committers may lose write access to their Eclipse project GitHub repositories during this 1 hour maintenance window.

This toolset is responsible for synchronizing Eclipse committers across all our GitHub repositories, and on top of that, this new release will start synchronizing contributors.

In this context, a contributor is a GitHub user with read access to the project GitHub repositories. This new feature will allow committers to assign issues to contributors who currently don’t have write access to the repository. This feature was requested in 2015 via Bug 483563 - Allow assignment of GitHub issues to contributors.

Eclipse committers are responsible for maintaining a list of GitHub contributors from their project page on the Eclipse Project Management Infrastructure (PMI).

To become an Eclipse contributor on GitHub for a project, please make sure to tell us your GitHub username in your Eclipse account.


November 19, 2019 08:10 PM

Jakarta Microprofile REST Client in Eclipse

by Christian Pontesegger (noreply@blogger.com) at November 18, 2019 11:19 AM

Today we are going to implement a simple REST client for an Eclipse RCP application. Now with Jakarta @ Eclipse and all these nice MicroProfile implementations, this should be a piece of cake, right? Let's see...

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Dependencies

The Eclipse MicroProfile REST Client repository is a good place to get started. It points to several implementations (at the bottom of the readme). Unfortunately, these implementations do not host any kind of p2 sites which we could use directly. Our next stop is Eclipse Orbit, but it is the same situation there. This means we need to collect our dependencies manually.

For my example I used RESTEasy, simply because it was the only one I could get working within a reasonable time. To fetch dependencies, download the latest version of RESTEasy. As the RESTEasy download package does not contain the REST client API, we need to fetch that from another source. I found it in the Apache CXF project, so download the latest version of that too. If you know a better source, please let me know in the comments.

Now create a new Plug-in from Existing JAR Archives. Click on Add External... and add all jars from resteasy-jaxrs-x.y.z.Final/lib/*.jar. Further, add apache-cxf-x.y.z/lib/jakarta.ws.rs-api-x.y.z.jar.
This plug-in now contains all the dependencies we need for our client. Unfortunately it also contains a lot of other stuff we probably do not need, but we will leave the cleanup for later.

Step 2: Define the REST service

For our example we will build a client for the Petstore Service, which can be used for testing purposes. Further it provides a swagger interface to test the REST calls online. I recommend to check out the API and play with some commands online and with curl.

Let's write a simple client for the store with its 4 commands. The simplest seems to be the inventory command, so we will start there. Create a new Java interface:

package com.codeandme.restclient.resteasy;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public interface IStoreService {

    @GET
    @Path("/v2/store/inventory")
    @Produces(MediaType.APPLICATION_JSON)
    Response getInventory();
}
Everything necessary for RESTEasy is provided via annotations:

  • @Path defines the path for the command of the REST service
  • @GET defines that we have to use a GET command (there exist annotations for POST, DELETE, PUT)
  • @Produces finally defines the type of data we get in response from the server.
Step 3: Create an instance of the service

Create a new class StoreServiceFactory:
package com.codeandme.restclient.resteasy;

import java.net.URI;
import java.net.URISyntaxException;

import org.jboss.resteasy.client.jaxrs.ResteasyClient;
import org.jboss.resteasy.client.jaxrs.ResteasyWebTarget;
import org.jboss.resteasy.client.jaxrs.internal.ResteasyClientBuilderImpl;

public class StoreServiceFactory {

    public static IStoreService createStoreService() throws URISyntaxException {
        ResteasyClient client = new ResteasyClientBuilderImpl().build();
        ResteasyWebTarget target = client.target(new URI("https://petstore.swagger.io/"));
        return target.proxy(IStoreService.class);
    }
}

This is the programmatic way to create a client instance. There is also another approach using CDI, which I did not try out in Eclipse.

The service is ready and usable, so give it a try (a small sketch follows below the list). The result object returned contains some valuable information:

  • getStatus() provides the HTTP response status. 200 is expected for a successful getInventory()
  • getEntity() provides an InputStream which contains the JSON encoded response data from the server
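
For instance, a first call could look like the following minimal sketch (the class name is made up for illustration):

package com.codeandme.restclient.resteasy;

import javax.ws.rs.core.Response;

public class StoreServiceExample {

    public static void main(String[] args) throws Exception {
        IStoreService service = StoreServiceFactory.createStoreService();

        Response response = service.getInventory();
        System.out.println("Status: " + response.getStatus()); // 200 expected
        System.out.println("Body: " + response.readEntity(String.class)); // raw JSON
        response.close();
    }
}
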
Step 4: Response decoding

Our response is encoded as a JSON collection of properties. In Java terms this basically maps to a Map<String, String>. Instead of decoding the data manually, we let the framework do it for us:

Change the IStoreService to:

    Map<String, String> getInventory();

Everything else is done by the framework. Now how easy was that?

Step 5: POST request

To place an order we need order parameters. It is best to encapsulate them in a dedicated Order class. From the definition of the order REST call we can see that we need the following class properties: id, petId, quantity, shipDate, status, complete. Add these parameters as fields to the Order class and create getters/setters for them (a sketch follows below).
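
A sketch of such a class (the field types are assumptions derived from the Petstore API; the remaining getters and setters follow the same pattern):

package com.codeandme.restclient.resteasy;

public class Order {

    private long id;
    private long petId;
    private int quantity;
    private String shipDate;
    private String status;
    private boolean complete;

    public long getId() { return id; }
    public void setId(long id) { this.id = id; }

    public long getPetId() { return petId; }
    public void setPetId(long petId) { this.petId = petId; }

    public int getQuantity() { return quantity; }
    public void setQuantity(int quantity) { this.quantity = quantity; }

    // shipDate, status and complete accessors omitted for brevity
}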

Now we can extend our IStoreService with the fileOrder() call:


@Path("/v2/store")
public interface IStoreService {

@GET
@Path("inventory")
@Produces(MediaType.APPLICATION_JSON)
Map<String, String> getInventory();

@POST
@Path("order")
@Consumes(MediaType.APPLICATION_JSON)
void fileOrder(Order order);
}

The Order automatically gets encoded as a JSON object. No need for us to do the encoding manually!

As parts of the path are the same for both calls, I moved the common component to the class level.

Step 6: Path parameters

To fetch an order we need to put the orderId into the request path. Such parameters are coded in curly braces in the path. The parameter of the Java call then gets annotated, so the framework knows which parameter value to put into the path:

@GET
@Path("order/{orderId}")
@Produces(MediaType.APPLICATION_JSON)
Order getOrder(@PathParam("orderId") int orderId);

Again the framework takes care of the decoding of the JSON data.

Step 7: DELETE an Order

Deleting needs the orderId as before:

@DELETE
@Path("order/{orderId}")
void deleteOrder(@PathParam("orderId") int orderId);

The REST API does not provide a useful JSON response to the delete call. One option is to set the response type to void. In case the command fails, an exception will be thrown (e.g. when the orderId is not found and the server returns 404).

Another option is to set the return type to javax.ws.rs.core.Response. Now we get everything the server sends back, and no exception is thrown anymore. Sometimes we might only be interested in the status code. This can be fetched by setting the return type to Response.Status. Again, no exception will be thrown on a 404.
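
Sketched as alternative signatures (you would pick one; the method names are mine):

@DELETE
@Path("order/{orderId}")
Response deleteOrder(@PathParam("orderId") int orderId); // full response, no exception on 404

@DELETE
@Path("order/{orderId}")
Response.Status deleteOrderStatus(@PathParam("orderId") int orderId); // status code only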

Optional: Only have required RESTEasy dependencies

Looking at all these jars I could not figure out a good way to get rid of the ones unused by the REST client. So I provided unit tests for all my calls and then removed dependencies step by step until I found the minimal set of required jars.




by Christian Pontesegger (noreply@blogger.com) at November 18, 2019 11:19 AM

Getting to the Source

by Ed Merks (noreply@blogger.com) at November 08, 2019 09:04 AM

As a Java developer using JDT, no doubt you are intimately familiar with Ctrl-Shift-T to launch the Open Type dialog.  You might not even realize this is a shortcut accessible via the Navigate menu.  So you probably will not have noticed that this menu also contains Open Discovered Type:


Eclipse has a huge variety of open source projects maintained in a bewildering collection of Git repositories.  Many are hosted at Eclipse:
https://git.eclipse.org/c/
Others are hosted at Github:
https://github.com/eclipse/

Finding the Git repository that contains a particular Java class is like finding a needle in a haystack.  This is where Open Discovered Type comes to the rescue.  Once a week, Oomph indexes every *.java file in every Git repository hosted by git.eclipse.org and github.com/eclipse.  The Open Discovered Type dialog loads this information to populate a tree view of all these packages and classes.


Please read the help information the first time you use it.  It was written to help you get the most out of this dialog.  Also be patient the first time you launch the dialog; there's a lot of information to download.

Suffice to say, you can use the dialog much like you do Open Type.  So here we search for JavaCore and discover all the classes with that name:


We can select any one of them and discover all the Git repositories containing that class and we can use the context menu for each link for each repository or for the specific file in that repository to open the link where we want it opened.  From that link, you can of course see the full history of the repository or specific file.

As a bonus, if this repository provides an Oomph setup, you can easily use that Oomph setup to import the sources for this project into your workspace. If there is no Oomph setup, you'll have to do that manually.

In any case, contributing to Eclipse open source projects has never been easier.

by Ed Merks (noreply@blogger.com) at November 08, 2019 09:04 AM

Eclipse startup up time improved

November 05, 2019 12:00 AM

I’m happy to report that the Eclipse SDK integration build starts in less than 5 seconds (~4900 ms) on my machine into an empty workspace. IIRC this used to be around 9 seconds 2 years ago. 4.13 (which was already quite a bit improved) used around 5800 ms (6887 ms with EGit and Marketplace). For recent improvements in this release see https://bugs.eclipse.org/bugs/show_bug.cgi?id=550136. Thanks to everyone who contributed.

November 05, 2019 12:00 AM

Announcing Ditto Milestone 1.0.0-M2

November 04, 2019 12:00 AM

The second and last milestone of the upcoming release 1.0.0 was released today.

Have a look at the Milestone 1.0.0-M2 release notes for what changed in detail.

The main changes and new features since the last release 1.0.0-M1a (see its release notes) are:

  • invoking custom foreign HTTP endpoints as a result of events/messages
  • ability to reflect Eclipse Hono’s device connection state in Ditto’s things
  • support for OpenID Connect / OAuth2.0 based authentication in Ditto Java Client
  • configurable throttling of max. consumed WebSocket commands / time interval

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Docker images have been pushed to Docker Hub:





The Eclipse Ditto team


November 04, 2019 12:00 AM

Setup a Github Triggered Build Machine for an Eclipse Project

by Jens v.P. (noreply@blogger.com) at October 29, 2019 12:55 PM

Disclaimer 1: This blog post literally is a "web log", i.e., it is my log about setting up a Jenkins machine with a job that is triggered on a GitHub pull request. A lot of parts have been described elsewhere, and I link to the sources I used here. I also know that nowadays (e.g., on the new Eclipse build infrastructure) you usually do that via docker -- but then you need to configure docker, in which

by Jens v.P. (noreply@blogger.com) at October 29, 2019 12:55 PM

LiClipse 6.0.0 released

by Fabio Zadrozny (noreply@blogger.com) at October 25, 2019 06:59 PM

LiClipse 6.0.0 is now out.

The main change is that many dependencies have been updated:

- it's now based on Eclipse 4.13 (2019-09), which is a pretty nice upgrade (in my day-to-day use I find it appears smoother than previous versions, although I know this sounds pretty subjective).

- PyDev was updated to 7.4.0, so Python 3.8 (which was just released) is now supported.

Enjoy!

by Fabio Zadrozny (noreply@blogger.com) at October 25, 2019 06:59 PM

Qt World Summit 2019 Berlin – Secrets of Successful Mobile Business Apps

by ekkescorner at October 22, 2019 12:39 PM

Qt World Summit 2019

Meet me at Qt World Summit 2019 in Berlin


I’ll speak about development of mobile business apps with

  • Qt 5.13.1+ (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10


Qt World Summit 2019 Conference App

As a little appetizer I developed a conference app. For how to download it from the Google Play Store or Apple App Store, and some more screenshots, see here.


sources at GitHub

cu in Berlin


by ekkescorner at October 22, 2019 12:39 PM

A nicer icon for Quick Access / Find Actions

October 20, 2019 12:00 AM

Finally we use a decent icon for Quick Access / Find Actions. This is now a button in the toolbar which allows you to trigger arbitrary commands in the Eclipse IDE.

October 20, 2019 12:00 AM

A Tool for Jakarta EE Package Renaming in Binaries

by BJ Hargrave (noreply@blogger.com) at October 17, 2019 09:26 PM

In a previous post, I laid out my thinking on how to approach the package renaming problem which the Jakarta EE community now faces. Regardless of whether the community chooses big bang or incremental, there are still existing artifacts in the world using the Java EE package names that the community will need to use together with the new Jakarta EE package names.

Tools are always important to take the drudgery away from developers. So I have put together a tool prototype which can be used to transform binaries such as individual class files and complete JARs and WARs to rename uses of the Java EE package names to their new Jakarta EE package names.

The tool is rule driven, which is nice since the Jakarta EE community still needs to define the actual package renames for Jakarta EE 9. The rules also allow users to control which class files in a JAR/WAR are transformed. Different users may want different rules depending upon their specific needs. And the tool can be used for any package renaming challenge, not just the specific Jakarta EE package renames.
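
For illustration, such rules could be captured in a simple properties format mapping old package names to new ones (the format and file name here are assumptions of mine, not the tool's documented syntax):

# rename-rules.properties (hypothetical example)
javax.servlet=jakarta.servlet
javax.servlet.http=jakarta.servlet.http
javax.ws.rs=jakarta.ws.rs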

The tool provides an API allowing it to be embedded in a runtime to dynamically transform class files during the class loader definition process. The API also supports transforming JAR files. A CLI is also provided to allow use from the command line. Ultimately, the tool can be packaged as Gradle and Maven plugins to incorporate it in a broader tool chain.

Given that the tool is a prototype, and there is much work to be done in the Jakarta EE community regarding the package renames, I have started a list of TODOs in the project's issues for known work items.

Please try out the tool and let me know what you think. I am hoping that tooling such as this will ease the community cost of dealing with the package renames in Jakarta EE.

PS. Package renaming in source code is also something the community will need to deal with. But most IDEs are pretty good at this sort of thing, so I think there is probably sufficient tooling in existence for handling the package renames in source code.

by BJ Hargrave (noreply@blogger.com) at October 17, 2019 09:26 PM

I’ll never forget that first EclipseCon meeting with you guys and Disney characters all around and…

by Doug Schaefer at October 16, 2019 01:18 AM

I’ll never forget that first EclipseCon meeting with you guys and Disney characters all around and the music. And all the late nights in the Santa Clara bar and summits and meetings talking until no one else was left. Great times indeed. Until we meet again Michael!


by Doug Schaefer at October 16, 2019 01:18 AM

Missing ECE already? Bring back a little of it - take the survey!

by Anonymous at October 15, 2019 09:22 PM

We hope you enjoyed the 2019 version of EclipseCon Europe and OSGi Community Event as much as we did.

Please share your thoughts and feedback by completing the short attendee survey. We read all responses, and we will use them to improve next year's event.

Speakers, please upload your slides to your session page. Attendees really appreciate this!


by Anonymous at October 15, 2019 09:22 PM

A Committer’s View of Our New ECD Tools Working Group

by Thabang Mashologu at October 10, 2019 01:48 PM

When you look at the very impressive list of founding members for our new Eclipse Cloud Development (ECD) Tools Working Group, it’s clear that world-leading technology companies strongly believe that open source, cloud native development tools are needed. Our Eclipse Foundation developer community has also enthusiastically embraced the initiative.

To get an insider’s view of why the ECD Tools Working Group initiative is so important, I recently talked to Carlos Andres De La Rosa, an active Eclipse Foundation committer, about why he is getting involved in the Working Group. Here’s an edited version of our conversation.

 

Q. How did you first become involved in the open source communities at the Eclipse Foundation?

 A. I was looking for something interesting from which I could learn something new related to cloud technologies. I found the Eclipse MicroProfile project about two years ago. That was a very interesting topic for me, especially the fault tolerance spec. I added my email address to the mailing list and started joining weekly calls. After a while, I started to contribute to the spec with different things, like improving the spec testing, adding and modifying documentation, and also debating about how to evolve the spec. I became a committer and an entire technological world opened up for me. Jakarta EE was another interesting project that got my attention, so then I became involved with the Jakarta EE community.

I like the fact that Eclipse projects are from the people, for the people, so anyone can learn and contribute to solve problems that can help a lot of developers around the world. It’s shared knowledge and that’s the most important thing for me.

 

Q. Why did you expand your involvement to include the ECD Tools Working Group?

 A. When you’re part of the community, you have access to beta versions of the specifications and projects. I was looking around and I learned about this new project for cloud development tools. I saw the charter and what people were trying to achieve and I knew I wanted to be part of it. It was that simple.

I think cloud technologies will be the biggest and most important topic for the next 10 years. Everything is moving to the cloud, so every day we have more applications, from banking to AI medical analysis services. And, people have easy access to all this thanks to the cloud. I want to be there at the forefront and be working for the community when cloud really starts to grow.

 

Q. What benefits do you expect to gain from your participation in the ECD Tools Working Group?

A. It’s a huge opportunity to learn from the best engineers in the world because there are a lot of great professionals at the Eclipse Foundation and they have a lot of experience driving technology forward. If you’re new to this field and you’re trying to learn new things, this is a big chance to do it. Also, I’m currently working as a cloud consultant and this project is very related to my day-to-day job, so I can use what I learn in this project to improve my consultancy services.

 

Q. What impact do you think the ECD Tools Working Group will have on open source, cloud native development?

A. I think the impact will be huge because this group will deliver all of the tools that will be used every day in cloud development. From deploying, scaling, and debugging to managing Cloud Foundry applications, everything is integrated with the Eclipse IDE, making the job of developers more productive.

 

Q. How does your participation in the ECD Tools Working Group fit with the other cloud-related projects you’re involved with at the Eclipse Foundation?

A. Everything is related. MicroProfile provides implementations and specs for capabilities such as fault tolerance that are needed in a microservices architecture to make it easier for developers to create microservices. And the Jakarta EE specs are basically the foundation for the MicroProfile framework because MicroProfile depends on the Java enterprise specs. The Eclipse Cloud Development Tools is a complementary project that helps to complete the framework for cloud native applications. So, they are all part of the cloud ecosystem. 

I really think that MicroProfile, Jakarta EE, and now the Eclipse Cloud Development Tools Working Group are the three most important projects at the Eclipse Foundation right now. They will drive the future of cloud technologies and everything that is handled and developed by the community.

 

Q. What would you say to people who are considering joining the ECD Tools Working Group?

A. I would say “join now!” The most important thing I can tell people is get involved if you want to learn about a cutting-edge technology that will be driving peoples’ lives for a long time. And contribute to help create something meaningful. It’s a community, so it’s for everyone.


by Thabang Mashologu at October 10, 2019 01:48 PM

Open Source Gerrymandering

by Chris Aniszczyk at October 08, 2019 06:20 PM

Over the years, I have spent a lot of time thinking about and working on open source communities… from bootstrapping projects out of corporations (or broken communities), to starting brand new open source foundations.

I was recently having a conversation with an old colleague about bringing an open source project out of a company into the wild and how to set up the project for success. A key part of that discussion involved setting up the governance for the project and what that means. There was also discussion about how neutral and open governance under a nonprofit foundation can be good for certain projects, as research has shown that neutral foundations can promote growth and community better than other approaches. The conversation also led to a funny side discussion on the concept of gerrymandering and open source.

For those who aren’t familiar with the term, it’s become popular in the US political lexicon as a “practice intended to establish a political advantage for a particular party or group by manipulating district boundaries.” A practical example of this is from my town of Austin, TX, which is in district 35, which snakes all the way from Austin to San Antonio for some reason.

The same concept of gerrymandering can apply to open source communities as open source projects can act like mini political institutions (or bigger ones in the case of Kubernetes). I shared some of my favorite examples with my friend so I figured I’d write this down for future reference and share it with folks as you really need to read the “fine print” to find these at times.

Apache Cassandra

The Apache Software Foundation (ASF) is a fantastic open source organization that has been around for a long time (they celebrated their 20th anniversary) and has had a lot of impact across the world. The way projects are governed in the ASF are through the Apache Way, which places a lot of emphasis on “community over code” amongst some other principles which are great practices for open source projects to follow.

There have been some interesting governance issues and lessons learned over the years in the ASF; in particular, it can be challenging when you have a strong single vendor associated with a project, as was the case with Cassandra awhile ago:

As the ASF board noted in the minutes from its meeting with DataStax representatives, “The Board expressed continuing concern that the PMC was not acting independently and that one company had undue influence over the project.” There was some interesting press around the time this happened:

“Jagielski told me in an interview, echoing what he’d said on the Cassandra mailing list, that undue influence conflicts with project leadership obligations established by the ASF. As he suggested, the ASF tried many times to get a DataStax-heavy Project Management Committee (PMC) to pay attention to alleged trademark and other violations, to no avail. Whatever DataStax’s positive influence on the development of the project—in other words—it failed to exercise equivalent influence on governing the project in ASF fashion.”

The ASF basically forced a reorganization of the Cassandra PMC to be more in line with its values, which then caused the primary vendor behind the project to pull engineers off the open source project.

Containerd

The containerd project is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. The project was born at Docker, whose open source projects had a governance policy essentially aligned with the BDFL philosophy, centered on one of the project founders.

In the CNCF (which containerd is a project of), project governance documents aren't considered static and evolve over time to meet the needs of the community. For example, when containerd joined the CNCF its governance was geared towards a BDFL approach, but over time it evolved to a more neutral approach that spread authority across maintainers.

Cloud Foundry

Cloud Foundry is an open source community that has a large and mature ecosystem of PaaS-focused projects. The Cloud Foundry Foundation (CFF) has some unique governance clauses in regards to how affiliates are treated and how voting works.

Pivotal Platinum Director Voting Power. The Platinum Director appointed by Pivotal (“Pivotal Director”) shall have five (5) votes on any matter submitted to a vote of the Board. (i) On a date one (1) year after the incorporation date set forth in the Certificate, the number of Pivotal Director’s votes will be reduced to three (3). (ii) On a date two (2) years after the incorporation date set forth in the Certificate, the number of Pivotal Director’s votes will be reduced to one (1)

To bootstrap the foundation, the originating company wanted a little bit of control for a couple of years, which can make sense in some situations as the beginning of a foundation can be a tumultuous time. In my opinion, it’s great to see the extra vote clause expire after 2 years, however, it’s still very unfair to the early potential members of the organization.

Another example of open source gerrymandering can be how votes are represented by member companies that are owned by a single entity:

At no time may a Member and its Affiliates have more than one Director who is an employee, officer, director, or consultant of that Member, except that Pivotal, EMC, and VMware, though Affiliates, shall each have one (1) Director on the Board).

This is an interesting tidbit given that Dell owns Pivotal, EMC and VMWare. In some organizations, usually there is legal language that collapses owned entities into one vote.

I personally am not the biggest fan of this approach, as it makes things unfair from the beginning and can be an impediment to wide adoption across the industry. There can definitely be reasons why you need to do this in the formation phase, but it should be done with caution. If you saw the recent news that Pivotal was being spun back into VMware, and their woes with adoption, it shouldn’t come as a surprise: in my opinion one company was bearing too much of the burden and not building a diverse community of contributors.

Cloud Native Computing Foundation (CNCF)

If you remember the early days of the container and orchestration wars, there was a lot different technologies, approaches and corporate politics. When CNCF was founded, the original charter included a clause that upgraded certain startup members from Silver to Platinum that were important in the ever evolving cloud native ecosystem.

“The Governing Board may extend a Platinum membership at the Silver Membership Scale rates on a year-by-year basis for up to 5 years to startup companies with revenues less than $50 million that are deemed strategic technology contributors by the Governing Board.”

In my opinion, that particular piece in the charter was important in bringing together all the relevant startups to the table along with the big established companies at the time.

In terms of projects, the CNCF Technical Oversight Committee (TOC) defines a set of principles to steward the technical community. The most important principle is around a minimum viable governance that enables projects to be self-governing. TOC members are available to provide guidance to the projects but do not control them. 

https://twitter.com/CloudNativeFdn/status/1167455648768045056

Unlike Apache and the Apache Way, CNCF does not require its hosted projects to follow any specific governance model. Instead, CNCF specifies that graduated projects need to “explicitly define a project governance and committer process.” So in reality, CNCF operates under the principle of subsidiarity, encouraging decisions to be made at the lowest project level consistent with their resolution.

GitLab

GitLab is a fantastic open source project AND company that I admire deeply for their transparency. The way the GitLab project is structured is that it’s wholly owned by the GitLab company (they also own the trademark). To the credit of GitLab, they make this clear via their stewardship principles online and discuss what they consider enterprise product work versus project work.

I’d love for them in the future to separate the branding from the company, project and the product as I believe it’s confusing and dilutes the messaging, but that’s just my opinion 🙂

Istio

Istio is a popular service mesh project originated at Google. It has documented its governance model publicly: https://github.com/istio/community/blob/master/STEERING-COMMITTEE.md

However, as you can see, it’s heavily tilted towards Google, and there seem to be no limits on the number of spots on the steering committee from one company, which limits are a common tactic in open governance approaches to keep things fair. On top of that, Google owns the trademark, domains and other project assets, so I’d consider Istio to be heavily gerrymandered in Google’s interest versus the community’s.

JCP

I had the pleasure of serving on the Java Community Process (JCP) Executive Committee for a few years while I was at Twitter. It’s a great organization that drives standardization across the Java ecosystem, some of the fine print is interesting though:

“The EC is composed of 25 Java Community Process Members whose seats are allocated as follows: 16 Ratified Seats, 6 Elected Seats, and 2 Associate Seats, plus one permanent seat held by Oracle. (Oracle’s representative must not be a member of the PMO.) The EC is led by a non-voting Chair from the PMO.”

This essentially gives Oracle a permanent seat on the Executive Committee.

Here’s another fun clause:

Ballots to approve Umbrella JSRs that define the initial version of a new Platform Edition Specification or JSRs that propose changes to the Java language are approved if (a) at least a two-thirds majority of the votes cast are “yes” votes, (b) a minimum of 5 “yes” votes are cast, and (c) Oracle casts one of the “yes” votes. Ballots are otherwise rejected.

This essentially gives Oracle a veto vote on any JSR.

Note: The coolest thing the JCP has done is contribute the EE specification work to the Eclipse Foundation and form the Jakarta project over there to steward things in an open way.

Knative

Knative, like Istio mentioned above, is an open source project that was born at Google and is controlled by Google. There has been a lot of discussion about this lately, as Google recently decided not to openly govern the project and not to move it to a neutral foundation.

Kubernetes

Kubernetes operates under the auspices of the CNCF and is openly governed by the Kubernetes Steering Committee (KSC). The Kubernetes project has grown significantly over time, but has done a great job of keeping things openly governed and inclusive, especially given its project size these days. The KSC governs the project along with a variety of sub working groups. Also, the Kubernetes trademark is neutrally owned by the CNCF and openly governed via the Conformance Working Group, which decides how certification works for the community; there are nearly 100 certified solutions out there!

Spinnaker

The Spinnaker project was originally born at Netflix and recently spun out into the Continuous Delivery Foundation (CDF) as an openly governed project. The project assets, from domains to github to trademarks are all neutrally owned by the community through the CDF.

Vault

Vault is a fantastic and widely used secrets management tool from HashiCorp. It’s a single-vendor-controlled open source project with an open core model, offering open source and enterprise versions (see matrix). What this essentially means is that the buck stops with the single vendor on which features/fixes end up in the open source version; most likely that won’t include things that they sell in their enterprise offering.

Conclusion

I hope you learned something new about open source projects, foundations and communities, as these things can be a bit more complicated than they appear once you dig into the details. It’s really important to note that there is a difference between open source and open governance, and you should always be skeptical of a project that claims it’s truly open if only one for-profit company owns all the assets and control. There’s nothing wrong with single-vendor-controlled open source projects, and I think they can be great, but most organizations don’t set expectations up front, which can lead to frustrations down the road. They need to be upfront, similar to GitLab’s stewardship principles on what they will put in open source versus their enterprise version.

In conclusion, as with anything in life, you should always read the fine print of an open source community’s charter or legal paperwork to understand how it works. The lesson here is that every organization or project has its own rules and governance, and it’s important that you understand how decisions are made and who has ownership of project assets like trademarks.


by Chris Aniszczyk at October 08, 2019 06:20 PM

JShell in Eclipse

by Jens v.P. (noreply@blogger.com) at October 08, 2019 12:16 PM

Java 9 introduced a new command line tool: JShell. This is a read–eval–print loop (REPL) for Java with some really nice features. For programmers I would assume writing a test is the preferred choice, but for demonstrating something (in a class room for example) this is a perfect tool if you are not using a special IDE such as BlueJ (which comes with its own REPL). The interesting thing about

by Jens v.P. (noreply@blogger.com) at October 08, 2019 12:16 PM

Removing “Contact Us”

by tevirselrahc at October 07, 2019 02:17 PM

Unfortunately, because of the large amount of spam, I now have to remove the “Contact Us” page.

If you want to contact us, I recommend going through our Twitter account.


by tevirselrahc at October 07, 2019 02:17 PM

Instanceof Type Guards in N4JS

by n4js dev (noreply@blogger.com) at September 30, 2019 08:06 AM

Statically typed languages like Java use instanceof checks to determine the type of an object at runtime. After a successful check, a type cast needs to be done explicitly in most of those languages. In this post we present how N4JS introduced type guards to perform these type casts implicitly. 

No error due to implicit cast in successful instanceof type guard

The example above shows that strict type rules on the any-typed instance a cause errors to show up when the unknown property pX is accessed. However, after asserting that a is an instance of X, the property pX can be accessed without errors. A separate type cast is unnecessary, since type inference now also takes the instanceof type guard information into account.
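
For readers who cannot see the screenshots, here is a minimal sketch of the pattern. It uses TypeScript-style syntax for illustration only (the post is about N4JS, whose syntax is very close; TypeScript's unknown plays the role of N4JS's strictly checked any here):

class X {
    pX: string = "pX value";
}

function f(a: unknown): void {
    // console.log(a.pX);  // error: pX is not known on 'a' before the check
    if (a instanceof X) {
        // inside the guard, the type of 'a' is intersected with X,
        // so pX can be accessed without an explicit cast
        console.log(a.pX);
    }
}

f(new X());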


Hover information on variable access of a shows the inferred type

The resulting type is the intersection of the original type (here any) and all type guards that must hold on the specific variable access (here only type X). Keeping the original type any or Object in the intersection is not strictly necessary and could be optimised away later. If the original type is something else, however, it must be included in the resulting intersection type: the type guard could check for an interface only, and in that case accesses to properties of the original type would otherwise cause errors.


Re-definition of a type guarded variable

Two differences between type guards and type declarations are (1) their data-flow nature and (2) their read-only effect. Firstly, when a variable is redefined (in the data-flow sense), the type guard information is lost: subsequent accesses to the variable no longer benefit from the type guard, since it was invalidated by the re-definition. Secondly, only the original type information is considered for a redefinition. That means the type guard does not change the expected type and hence does not limit the set of types that can be assigned to a type-guarded variable.
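
The invalidation can be sketched in the same TypeScript-style notation, reusing class X from the sketch above (again an illustration, not the post's actual N4JS code):

function g(a: unknown): void {
    if (a instanceof X) {
        console.log(a.pX);    // ok: the type guard is active here
        a = {};               // re-definition of 'a' in the data flow
        // console.log(a.pX); // error again: the guard has been invalidated
    }
}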


Further examples for instanceof type guards in N4JS

Data flow analysis is essential for type guards and was presented in a previous post. Based on this information, type information is computed for each variable access. Since even complicated data flows, such as for loops or short-circuit evaluation, are handled correctly, type guard information is already available in composed condition expressions (see functions f3 and f5 above). Aside from being nestable (see function f4 above), instanceof type guards can also be used as a filter at the beginning of a function (see function f6 above) or inside a loop: negating a type guard and then exiting the function or block leaves valid type guard information on all remaining control flow paths.

by Marcus Mews

by n4js dev (noreply@blogger.com) at September 30, 2019 08:06 AM

Team Sports for Developers! Edge Computing Mini-Hackathon

by Anonymous at September 26, 2019 09:21 PM

Do you like to build gadgets and/or hack? Then get a team together for the Edge Computing Mini-Hackathon, organized by Edgeworx.

Teams will be challenged to integrate at least one other Eclipse IoT project with Eclipse ioFog and showcase what they were able to accomplish. Representatives from all Eclipse projects are welcome to come help guide, coach, and influence participants to make use of their projects. There will be prizes for the standouts, plus giveaways (and fun) for all!

The event is part of Community Night on Tuesday, October 22, from 19:30 - 22:00 in the Theater Stage room at the Forum.


by Anonymous at September 26, 2019 09:21 PM

Blocked by an Eclipse Wizard?

by Wim at September 24, 2019 08:53 AM

Tuesday, September 24, 2019
There is a small but very useful patch in Eclipse 4.12 for people who do not want the UI to be blocked by wizards. There are many cases where it is desirable that the underlying window can be reached WHILE the user is finishing the wizard. That's why it's strange that the Eclipse Wizard demands our full and utter attention.

Read more


by Wim at September 24, 2019 08:53 AM

How to Render a (Hierarchical) Tree in Asciidoctor

by Niko Stotz at September 21, 2019 03:16 PM

Showing a hierarchical tree, like a file system directory tree, in Asciidoctor is surprisingly hard. We use PlantUML to render the tree on all common platforms.

Example of rendered hierarchical tree

This tree is rendered from the following code:

[plantuml, format=svg, opts="inline"]
----
skinparam Legend {
    BackgroundColor transparent
    BorderColor transparent
    FontName "Noto Serif", "DejaVu Serif", serif
    FontSize 17
}
legend
Root
|_ Element 1
  |_ Element 1.1
  |_ Element 1.2
|_ Element 2
  |_ Element 2.1
end legend
----

It works on all Asciidoctor implementations that support asciidoctor-diagram and renders well in both HTML and PDF. Readers can select the text (i.e. it’s not an image), and we don’t need to ship additional files.

We might want to externalize the boilerplate:

[plantuml, format=svg, opts="inline"]
----
!include asciidoctor-style.iuml
legend
Root
|_ Element 1
  |_ Element 1.1
  |_ Element 1.2
|_ Element 2
  |_ Element 2.1
end legend
----
asciidoctor-style.iuml
skinparam Legend {
    BackgroundColor transparent
    BorderColor transparent
    FontName "Noto Serif", "DejaVu Serif", serif
    FontSize 17
}

Thanks to PlantUML’s impressive reaction time, we soon won’t even need Graphviz installed.

Please find all details in the example repository and example HTML / example PDF rendering.


by Niko Stotz at September 21, 2019 03:16 PM

Let's Do It! Obeo loves The SeaCleaners

by Cédric Brun (cedric.brun@obeo.fr) at September 20, 2019 12:00 AM

I am deeply convinced that a company is not only an economic actor: it has a much wider responsibility, as any decision also has social, environmental or even political implications.

Looking at the state of our environment, its recent evolution and how it is forecast to evolve, the task in front of us is clearly huge. It would be easy to dismiss this as a problem our governments and big organizations should step up to, and indeed those in power have the responsibility, the ability and the leverage to act and maybe bend those charts.

But my motto is "Focus on what you can control, then act", and so I do.

Obeo participates in and hosts quite a few events each year, and we are often struck by the nonsensical nature of the "goodies" industry and the global model it promotes: built at the cheapest price, moved across the globe, distributed at the event and then pretty quickly binned.

Starting now, you won’t get any more goodies from us at conferences or events, but instead we will gladly discuss how we try to do our part, as a company, in this global challenge.

In line with this initiative to stop producing waste we do not deem necessary, Obeo is partnering with The SeaCleaners organization to reduce plastic waste. The SeaCleaners is building a giant multihull boat, the MANTA, designed to retrieve plastic waste from the ocean. The organization's vision is that the preservation of the oceans is a global, long-term and worldwide matter that integrates economic, social, human, educational and scientific perspectives, pursued as a dynamic and solidarity-based project. You can learn more about this initiative on Obeo's website.

The "Manta"

Furthermore, all the designs and blueprints of the Manta boat will be open source, which enables enhancements and duplication at a global scale, a principle clearly aligned with our values and with what we do within the Eclipse community.

The "Manta" boat technical data

That being said, this is just one step on a very specific part of our activity, but it is the start of a journey, with more to do to improve the way Obeo operates regarding its environmental responsibility. When you start building awareness of your impact on all the ins and outs of what you do, you realize that even a non-industrial software company can contribute.

Let's Do It! Obeo loves The SeaCleaners was originally published by Cédric Brun at CEO @ Obeo on September 20, 2019.


by Cédric Brun (cedric.brun@obeo.fr) at September 20, 2019 12:00 AM

A Language Workbench for Test Languages

by Arne Deutsch (adeutsch@itemis.de) at September 13, 2019 02:12 PM

Do you know the feeling? The feeling of having experienced all of this before? A déjà vu? That is what crept up on me last week during a first conversation with a car manufacturer about the tooling for its in-house test language.

A language workbench for test languages

The problem is the same every time. Years ago, the insight matured that it makes no sense to develop the huge amount of test cases anew for the almost yearly new car models, again and again pouring vast amounts of time and money into programming work that has actually been done dozens of times already. Just "a tiny bit differently" each time.

The solution to this problem was also fundamentally the right one: a small, dedicated programming language for specifying test cases. The actual test code in C is then generated from it.

This way, useful abstractions can be created which are adaptable and parameterizable for different model series, without having to wrestle with technical aspects such as pointer arithmetic and memory violations.

After some time, however, the downsides of this approach became apparent. While the tooling for mainstream programming languages is excellent and mature, and developers are spoiled with powerful tools for editing their code, the situation for the new test language looks quite different.

Of course the compiler produces more or less helpful error messages, and at least a simple Eclipse plug-in was developed to highlight keywords, but this can hardly be called real tool support. There is no code completion, no automatic formatting, and the integration with the other tools is minimal as well.

Initial estimates point to several person-years of development effort just to get anywhere close to where development with Java or C has long been. And nobody in the company has done anything like it before.

A high effort and a high risk, out of all proportion to the benefit.

So was the home-grown language a mistake? Or does one have to live with the poor tooling?

Not at all!

This is a solved problem. The idea of developing domain-specific languages with language workbenches has existed for decades; the term itself was coined 14 years ago. But while back then these were experiments that were not yet truly production-ready, the tools have matured in the meantime and shorten the development of tooling for DSLs by a factor of 10 or more.

With just a few weeks of effort, impressive results can already be achieved; with a little more work, you get close to the tooling you are used to from Java.

In the open source ecosystem around Eclipse in particular, Xtext is a solution that optimally supports exactly this use case: extending an existing language with excellent tooling at little cost. Why waste time and money reinventing the wheel yet again when you can simply build on the work of others? Do you have a similar problem?

Feel free to share your experiences in the comments or get in touch with us!

P.S.: I never get bored of this … even if I feel like I have experienced it all before. Sometimes that has its advantages, too.


by Arne Deutsch (adeutsch@itemis.de) at September 13, 2019 02:12 PM

From building blocks to IoT solutions

by Jens Reimann at September 10, 2019 07:07 AM

Eclipse IoT

The Eclipse IoT ecosystem consists of around 40 different projects, ranging from embedded devices, to IoT gateways and up to cloud scale solutions. Many of those projects stand alone as “building blocks”, rather than ready to run solutions. And there is a good reason for that: you can take these building blocks, and incorporate them into your own solution, rather than adopting a complete, pre-built solution.

This approach however comes with a downside. Most people will understand the purpose of building blocks, like “Paho” (an MQTT protocol library) and “Milo” (an OPC UA protocol library) and can easily integrate them into their solution. But on the cloud side of things, building blocks become much more complex to integrate, and harder to understand.

Of course, the "getting started" experience is extremely important. You can simply download an Eclipse IDE package tailored towards your context (Java, Modelling, Rust, …) and be up and running within minutes. We don't want you to design your deployment descriptors first and then let you figure out how to start up your distributed cluster; otherwise "getting started" becomes a week-long task. And a rather frustrating one.

Getting started. Quickly!

Eclipse IoT building blocks

During the Eclipse IoT face-to-face meeting in Berlin early this year, the Eclipse IoT working group discussed various ideas: how can we enable interested parties to get started with as little effort as possible, and still give them full control? Not only with a single component, which doesn't provide much benefit on its own, but with a complete solution which solves actual IoT related problems.

The goal was simple. Take an IoT use case, which is easy to understand by IoT related people. And provide some form of deployment, which gets people up and running in less than 15 minutes. With as little as possible external requirements. At best, run everything on your local laptop. Still, create everything in a close-to-production style of deployment. Not something completely stripped down. But use a way of deployment, that you could actually use as a basis for extending it further.

Kubernetes & Helm

We quickly agreed on Kubernetes as the runtime platform and Helm as the way to perform the actual deployments. With Kubernetes being available even on a local machine (using minikube on Linux, Windows and Mac) and, at the same time, in several enterprise-ready environments, it seemed like an ideal choice. Helm seemed like an ideal choice as well: it is designed directly for Kubernetes, and it also allows you to generate plain YAML files from the Helm charts, so that a deployment only requires you to apply a bunch of YAML files. Maintaining the charts is still far easier than authoring those YAML files directly.

Challenges, moving towards an IoT solution

A much tougher question was how to structure this from a project perspective. During the meeting it soon turned out that there would be two good initial candidates for "stacks", or "groups of projects", which we would like to create.

It also turned out that we would need some "glue" components for a package like that, even if it is only a script here or a "readme" file there. Some artifacts just don't fit into either of those projects. And what about "in development" versions of the projects? How can you point people towards a stable deployment, using only a stable (released) group of projects, when scripts and readmes are spread all over the place in different branches?

A combination of “Hono, Ditto & Hawkbit” seemed like an ideal IoT solution to start with. People from various companies already work across those three projects, using them in combination for their own purpose. So, why not build on that?

But in addition to all those technical challenges, the governance of this effort is an aspect to consider. We did not want to exclude other Eclipse IoT projects, simply by starting out with “Hono, Ditto, and Hawkbit”. We only wanted to create “an” Eclipse IoT solution, and not “the” Eclipse IoT solution. The whole Eclipse IoT ecosystem is much too diverse, to force our initial idea on everyone else. So what if someone comes up with an additional group of Eclipse IoT projects? Or what if someone would like to add a new project to an existing deployment?

A home for everyone

Luckily, creating an Eclipse Foundation project solves all those issues, and the Eclipse Packaging project already proves that this approach works. We played with the idea of creating some kind of "meta" project: not a real project in the sense of having a huge code base, but a project which makes use of the Eclipse Foundation's governance framework, allowing multiple, even competing, companies to work upstream in a joint effort, and giving all the bits and pieces which are specific to the integration of the projects a dedicated home.

A home, not only for the package of “Hono, Ditto and Hawkbit”, but hopefully for other packages as well. If other projects would like to present their IoT solution, by combining multiple Eclipse IoT projects, this is their chance. You can easily become a contributor to this new project, and publish your scripts, documentation and walk-throughs, alongside the other packages.

Of course everything will be open source, licensed under the EPL. So go ahead and fork it, add your custom application on top of it, or replace an existing component with something you think is even better than what we put in. We want to enable you to deploy what we provide in a few minutes, offer you an explanation of what to expect and what this IoT solution can do for you, and encourage you to play around with it. And to extend it, and build something bigger.

Let’s get started

EclipseCon Europe 2019

We created a new project proposal for the Eclipse IoT packages project. The project is currently in the community review phase. Once we pass the creation review, we will start publishing the content for the first package we have.

The Eclipse IoT working group will also meet at the IoT community day of EclipseCon Europe 2019. Our goal is to present a first version of the initial package. Ready to run!

But even more important, we would like to continue our discussions around this effort. All contributions are welcome: code, documentation, additional packages … your ideas, thoughts, and feedback!

The post From building blocks to IoT solutions appeared first on ctron's blog.


by Jens Reimann at September 10, 2019 07:07 AM

The Rising Adoption of Capella

by Cédric Brun (cedric.brun@obeo.fr) at September 04, 2019 12:00 AM

Witnessing an OSS technology coming together with a wide group of users is something I find exhilarating. I have experienced it with Acceleo, EMF Compare and Eclipse Sirius over the years, each time in different contexts and at different scales, but discovering what others are doing with a technology is always a source of excitement to me.

Capella was contributed by Thales to the Eclipse community a few years ago already, and fueled by the growing need to design systems in a better way, by the interest in Model Based System Engineering and by the qualities of the product itself, we can clearly see an acceleration over the last few months.

If you are wondering what Capella is and what it's used for, here is a 2-minute video we prepared for you:

Worldwide awareness of this solution grows and adoption rises: organizations from Europe, North America and Asia are now using Capella and experiencing the benefits of using a tool which implements a method (coined "Arcadia") and not only a language.

Capella Forum Activity

Looking at the numbers, just for this summer: more than 1200 downloads each month, forum activity growing with a nice-looking curve, and monthly stats on YouTube reaching more than 2000 views. Considering the size of the target audience this is a significant acceleration, and that is without counting the deployment of System Modeling Workbench provided by Siemens, which includes the technology.

Adopters not only use it but speak about it, and as with any other tool, having an opportunity to understand how others are using it is highly valuable.

Rolls Royce, ArianeGroup and the Singapore University have all shared valuable information through the recent webinars:

More are coming, and many are already available through the Resources Page! By the way, we can't always get the authorization to keep them available online, so your safest option is to register and attend.

Munich (Germany)

We also make sure to set up "in real life" opportunities to discuss Capella and MBSE: occasions to talk with the team behind Capella and with experts around the world. Next up is Capella Day Munich 2019 in a couple of weeks (on the 16th of September), organized by Thales and Obeo in conjunction with the Models Conference 2019. Here is a glimpse of the program:

The agenda is filled with general presentations and feedback from industrial users about their Capella deployments or specific add-ons/integrations.

The program of Capella Day Munich 2019

You might want to hurry, as we are almost sold out and such occasions are pretty unique!

I sincerely hope you'll enjoy it; we are working hard to make it a success :-). If you can't make it this time, know that there are more occasions to come: AOSEC in Bangalore, and EclipseCon in Germany (again!), where there might be a workshop focused on "MBSE at Eclipse" (please add your name and interest on the corresponding wiki page).

The Rising Adoption of Capella was originally published by Cédric Brun at CEO @ Obeo on September 04, 2019.


by Cédric Brun (cedric.brun@obeo.fr) at September 04, 2019 12:00 AM

Time for Change

by Doug Schaefer at September 03, 2019 02:56 PM

First, let me get straight to the point. I have left BlackBerry/QNX and will be starting a new job in Ottawa next week. It’s a great opportunity to work on something new for a great company with a bunch of former colleagues I admire. As much as I’m looking forward to that much needed change, it sadly will take me away from the Eclipse community. This message is a goodbye and thank you.

Thinking back all the way to the beginning, I'm quickly overwhelmed by how many great people I have had the opportunity to work with thanks to the Eclipse CDT project. At the very beginning were Sky Matthews and John Prokopenko, who let me weasel my way in as Rational's technical lead on the project just as it was starting out in 2002, at a time when I also needed a change. Of course, I had a great team of developers at Rational with me that made it fun and easy, not to mention the original team at QNX, who were welcoming and made it easy to get involved. A special mention goes to Sebastien Marineau, CDT's first project lead, who let me take a leadership role on the project and eventually hired me on at QNX to take over.

Then there were the early years on the CDT where we made our mark. Those early CDT Summits were so much fun and really helped build up a team atmosphere. We had about a dozen companies sending contributors, a few of them competitors in the spirit of co-opetition, and we made it work. Then over the years we started getting independent contributors who did it simply for the passion of building great C++ tooling they wanted to use themselves. It's been a great mix, and I am so lucky and proud to have been a part of it.

And of course, it was all topped off with our yearly EclipseCons. I am proud to have attended every one of the EclipseCon North America ones and was able to attend quite a few of the EclipseCon Europes in Germany. I have to thank Anne and Mike and Ralph and Wayne and Sharon and Perri and Donald and Ian and Lynn and all the Eclipse Foundation staff past and present for making me feel a part of the family. I always looked forward to the EclipseCon Express flights out of and return to Ottawa with many of them.

My fellow attendees at these conferences were amazing, from the first one at Disneyland where we had an overflow crowd at the CDT BOF and where I gave my first of many CDT talks, to all the friends I met at the bar or ran into at sessions, many of whom had nothing to do with CDT but made me feel so much a part of the bigger community. I will never forget the late nights in the bars chatting with friends like Michael Scharf and Ian Bull and Eric Cloninger and Gilles and Adrian and Jonah and Tom and so many others. As it turns out, last year in Ludwigsburg was a perfect finale where we had such a great time at the Nestor on Wednesday night. I will never forget you all.

I’m incredibly proud of what we built for the CDT. It still has the best indexer in the business, thanks to the parser we built back at Rational and the database I built at QNX and then with so many hands continually making it better and adjusting to the now ever changing C++ language spec. The Launch Bar achieved what I wanted by simplifying the Eclipse launch experience. CDT’s new Core Build fits naturally with the Launch Bar and makes it much simpler to integrate external build systems like CMake. And we have just started a GDB debug adapter using the debug adapter protocol which will pave the way to simplify integrating debuggers with the CDT.

The current set of active committers on the CDT have lately been pulling almost all the weight evolving it and getting releases out the door. Their great work has made my transition easier and will keep the CDT rolling along for years to come. And hopefully vendors will come back too and help provide funding for all this activity. We have an action plan to transition the project lead role. Follow the cdt-dev mailing list to find out more.

It’s sad to leave and the memories and friendships will be forever. I will keep my cdtdoug personal gmail account as a reminder of where I came from. But my new role will give me some much needed energy to keep things going for the next decade. I once questioned why you hardly see any retired engineers helping with open source projects or sharing their passion with the next generation. I promise you this, you will see me again.

Take care, and thank you.


by Doug Schaefer at September 03, 2019 02:56 PM

Redux App Development and Testing in N4JS (Chess Game Part 2)

by n4js dev (noreply@blogger.com) at August 29, 2019 04:04 PM

In large applications, Redux - an implementation of Flux architecture created by Facebook - is often used to organise application code by using a strict data flow in one direction only. Redux is UI agnostic, and can be used in conjunction with any UI library. As a continuation of our chess game tutorial with React, we show how to extract the entire program state out of React components, store it with Redux, and test it with N4JS. The full tutorial is available at eclipse.org/n4js and the sources can be found at github.com/Eclipse/n4js-tutorials.


The first part of the chess game tutorial discussed how to develop a chess game app with React and JSX in N4JS. We have stored the program state - which for instance contains information about the locations of all chess pieces - in the state of the React components directly. As applications become larger, however, the mix of program state and UI makes the application hard to comprehend and difficult to test. To address these issues, we extract the program state from the UI components in the second part of the tutorial.

When using React with Redux, we store the application state in the Redux store instead of in the state of React components. As a result, React components become stateless UI elements and simply render the UI using the data retrieved from the Redux store. In a Redux architecture, data flows strictly in one direction. The diagram below depicts the action/data flow in a React/Redux app.

Strict data flow of a Flux architecture application


The action/data flow in the diagram can be roughly understood as follows (a minimal code sketch follows the list):
  • When a user interaction is triggered on a React component (e.g. a button is clicked or a text field is edited), an action is created. The action describes the changes that need to be applied to the application state. For instance, when a text field is edited, the action created may contain the new string of the text field.
  • The action is then dispatched to the Redux store, which stores the application state, usually as a hierarchical state tree.
  • The reducers take the action and the current application state and create an updated application state.
  • If the changes in the application state are relevant to a certain React component, they are forwarded into the component in the form of props. The change in props causes the component to re-render.
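
To make this flow concrete, here is a minimal, framework-free sketch of a reducer in TypeScript. All names (GameState, SELECT_SQUARE, and so on) are invented for illustration and are not taken from the tutorial sources:

// Illustrative sketch of the Redux data flow, independent of React.
interface GameState {
    selectedSquare: string | null; // e.g. "e2"
}

interface SelectSquareAction {
    type: "SELECT_SQUARE";
    square: string;
}

type Action = SelectSquareAction;

// A reducer takes the current state plus an action and returns the new state.
function reducer(state: GameState, action: Action): GameState {
    switch (action.type) {
        case "SELECT_SQUARE":
            return { ...state, selectedSquare: action.square };
        default:
            return state;
    }
}

// Dispatching an action produces the next application state.
let state: GameState = { selectedSquare: null };
state = reducer(state, { type: "SELECT_SQUARE", square: "e2" });
console.log(state.selectedSquare); // prints "e2"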

In the second part of the tutorial we further elaborate on the interaction of React and Redux and migrate the original non-Redux chess app. The tutorial explains the role of the reducer, and how the game state is stored and maintained in the Redux store. Based on storing the game state with Redux, the tutorial shows how to test the game application with the N4JS test library Mangelhaft, for instance by checking that the valid move destination squares are updated after a chess piece is selected.

Note that this way of testing the game logic is completely UI-agnostic: no React components are involved at all, thanks to the decoupling of the game logic from the UI with the help of Redux. A rough sketch of this style of test follows.
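
As a rough illustration of this idea, the reducer sketch above can be exercised directly, with no React components involved. Note that Mangelhaft's actual API differs; the assertEquals helper below is invented for this sketch:

// Framework-free sketch of a UI-agnostic test of the game logic.
function assertEquals<T>(expected: T, actual: T): void {
    if (JSON.stringify(expected) !== JSON.stringify(actual)) {
        throw new Error("expected " + JSON.stringify(expected)
                + ", got " + JSON.stringify(actual));
    }
}

const before: GameState = { selectedSquare: null };
const after = reducer(before, { type: "SELECT_SQUARE", square: "e2" });

assertEquals("e2", after.selectedSquare);  // the new state reflects the action
assertEquals(null, before.selectedSquare); // the previous state is untouched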


by Minh Quang Tran

by n4js dev (noreply@blogger.com) at August 29, 2019 04:04 PM

And Now for Something Completely Different

by Ed Merks (noreply@blogger.com) at August 29, 2019 09:23 AM

It's been 5 years since I last blogged, so I had to delete 500 SPAM posts when getting started again. Much has happened over the past years, some of it not so great. When you start to get old like me, you must deal with the older generation gradually passing on, and health problems, such as coronary re-plumbing, can become an ugly fact of life.


I've been working with itemis for the past 11 years, but that now draws to a close.  I wish to thank them for their generous support over all these years.  Many of you might thank them as well because much of what I've contributed is thanks to their funding.  Though admittedly I have the nasty habit of working like a maniac, beyond any reasonable number of working hours, regardless of whether or not there is financial reward.  Cool things are just so compelling. But the worst habit then is not bothering to document or advertise all these cool new features as they become available, but rather to dive into the next obsession because somehow that's more compelling.  Compulsion is a bit of a Merks' family trait, e.g., my sister has more than 20 dogs, though it's easy to lose count...

In any case, most of my obsession over the last year has been related to working with Airbus.  I don't normally talk about my customers, but given they were gracious enough to allow me to demo at last year's EclipseCon the software being developed at Airbus, it's common knowledge anyway.  My goodness but that was a creative and cool project! Unfortunately that too has, as is the case for all good things, come to an end.

I immediately dove into generating a quality report for the Eclipse SimRel p2 repository; there's no rest for the wicked nor for the compelled. I used EMF's Java Emitter Templates (JET) to implement it, just as I did for generating the full site information for EMF's Update Sites as part of migrating the build to Maven/Tycho.

Speaking of which, did you know that you can make it trivially easy for your contributors to set up a development environment? Just have a look at EMF's build page. Also, did you know that such a thing exists for the complete Eclipse Platform SDK as well? Of course not, because I never bothered to tell you!

What's really supergeil (yes, I live in Germany and speak fluent Denglish) about installing an environment with the full Platform SDK, or some subset thereof, is that you can easily see all the Git history of each source file, so you can see what exactly has changed and evolved. Also, when developing new applications, you can easily search for how the Platform implements those things; then you can snarf and barf out your own solutions, with all due respect for the EPL of course. You can even find all the places where a constant is actually used; you cannot do that against binaries, because constants get in-lined. Also, if you see some label in the IDE, you can search for where it comes from, some *.properties file somewhere no doubt, and then you will know the name of that property and can easily find where it is defined and how it is used in the code. You might even contribute fixes via Gerrit commits!

But I digress. I was using JET to generate a nice helpful web page with information about all the problems in the SimRel repo, or in any arbitrary repo actually, i.e., invalid licenses, unsigned content, missing pack200 files, duplicate bundles, inappropriate providers, and so on. But then I got frustrated that JET templates eventually get really hard to read, especially as they become more complicated. Then, when you need them the most, all the nice features of JDT are missing while editing the scriptlets and expressions in such a template. So, as I am wont to do, I digressed further and spent the better part of the last two months working on a rich editor for JET. I'm sorry (not!) that I had to violate a few API rules to do so, but alas, API rights activists are a topic for another blog because that's a long digression. The good thing is that the JET editor is finished now; it will be in 2019-09's M3. Here's a sneak preview:


Yes, that's content assist, just as if it were in a real Java editor! Not only that, this time I wrote documentation for it in EMF's doc bundle. And, to top that off with icing, this time I blog about it. Perhaps only three people in the world will ever use it, but I am one of those three people, and I love it and need it, even for working with EMF's code generation templates. So now I can pop this off my digression stack and go back to generating that p2 repo quality report. I've been doing that for the past week, and it's almost ready for prime time.

But then, at this point, I must ask myself: where is the financial gain in all this? My local neighborhood fox, whom I've named Fergus, might be trying to tell me something.



Perhaps you should be a little more sly.  Perhaps the endless free goodness too must come to an end...

by Ed Merks (noreply@blogger.com) at August 29, 2019 09:23 AM

GSoC 2019 Summary: Dart support for the Eclipse IDE

August 19, 2019 12:00 AM

Summary of my Summer of Code 2019

This post is a summary of my Summer of Code 2019. My project was to bring Dart development support to the Eclipse IDE. To do so I created a plugin that consumes the Dart analysis server using LSP4E. It also provides syntax highlighting using TM4E and many more features listed below.

Features

The following list showcases the most significant features of the plugin I (and other contributors) added during GSoC 2019.

  • Running Dart programs directly from Eclipse - eclipse/dartboard#1

    • Standard & error output in a dedicated console
    • Support for multiple Launch configurations
    • Running single files from the context menu
  • Dart preference page - eclipse/dartboard#10

    • Set the location for the Dart SDK that should be used for all actions
    • Choose whether to automatically synchronize the dependencies of project
  • First class Pub support - eclipse/dartboard#107

    • Automatic synchronization of Pub dependencies when changing the pubspec.yaml file
    • Shortcut for running $ pub get manually
  • Creating new Dart projects and files - eclipse/dartboard#115

    • Stagehand templates are also supported
  • Usage of the Dart logo - eclipse/dartboard@a23fc1f

  • Import existing Dart projects directly into the workspace (+ automatic dependency synchronization) - eclipse/dartboard#116

Upstream Bug Fixes

During development I encountered many issues with the libraries and tools I was using. Since I was already aware of the causes, I took the time to fix them directly and provide a patch or PR for the corresponding library.

Things Left to do

I have completed all of the goals I set in my initial proposal for GSoC. However, a few things and features have come up during development that I plan on taking care of in the near future.

  • Flutter support - Flutter apps need to be run using the special $ flutter command suite, instead of the default SDK

Other things that could be enhanced include:

  • Pub console - Currently there is no way to see the output of the pub commands
  • Webdev support - Dart apps that should run in the browser need to be run using the $ webdev command line tools

I hope to be able to work on them but community contributions are always welcome.

A full list of commits and issues can be found on the project's GitHub repository. Installation instructions can also be found there.

Appreciation

In the early days, Lakshminarayana Nekkanti joined the project as a committer. He has been extremely helpful ever since, fixing bugs in the Eclipse platform that had been open for years (Bug 513034) and contributing a lot of features and knowledge to the plugin. Thank you, Lakshminarayana!

I would also like to thank Lars Vogel, who has been my mentor and helped tremendously whenever I was unsure what to do.


August 19, 2019 12:00 AM

Dartboard: Pub support, Stagehand templates & better theming

August 03, 2019 12:00 AM

Automatic pub.dev dependency synchronization, stagehand templates and theme improvements for TM4E.

Pub dependency synchronization

pub.dev is the main source of packages for Dart projects. Dependencies are added to a pubspec.yaml file, and the $ pub get command is used to download the packages from the registry. Since most projects require at least a few dependencies, this step must be taken before the project compiles without errors.

To ease this process a little, the command is now automatically run when saving the pubspec.yaml file in Eclipse. With this, required dependencies are automatically downloaded when a project is imported or when a new package is added to the pubspec.yaml file.

If you don't want this behavior the feature can be disabled from the preference page. Manually running the synchronization is also supported from the context menu of a project.

Pub context handler

The current progress of the synchronization is reported to the Progress view.

Pub progress

Stagehand

Stagehand is a CLI tool, written in Dart, that generates new projects from a list of templates. There are different project types to choose from, like Flutter, AngularDart or just console applications. After a template is generated, the project contains a pubspec.yaml file with all necessary dependency information and the various entry files required by the type of project.

This tool is now supported graphically by the New Dart Project wizard provided by Dartboard. Under the hood, the plugin uses the exact executable from pub.dev, and by automatically updating it we make sure that new templates can be used immediately.

This is how the New Dart Project wizard looks now: Stagehand

On first selection of the wizard (after a fresh IDE start), Stagehand is updated and its templates are fetched. After that job has finished, the available templates are added to the combo box. If no template should be used, the Use Stagehand template checkbox can be unchecked and an empty project is generated.

Subsequent usages of the wizard use cached versions of the template list so as not to strain your network or the pub.dev registry too much.

Project import

Not every Dart project was created from Eclipse, though. To be able to use Dartboard with existing Dart projects, it is necessary to import them into the workspace. For this task we now provide a shortcut to import Dart projects from the Import Project context menu entry.

Currently it simply opens the Projects from Folder or Archive wizard. This wizard, however, allows specifying a ProjectConfigurator that takes care of selecting the folders that should be imported as projects. In our own configurator we traverse all children and look for pubspec.yaml files. Every folder that contains such a file is considered to be a separate project.

Eclipse light theme for TM4E

TM4E can be used to apply syntax highlighting in the editor. We provide a configuration file that specifies how certain words inside the editor should look. It also gives the option to select which theme to use. There are different light and dark themes available, like Solarized Light or the classic Eclipse Java editor theme.

I provided a patch that adds some missing styles to the latter to make it look more like the classic theme. This is what it looks like now: Eclipse light theme

See eclipse/tm4e#215.

Wrap up

With two weeks to go until the end of Summer of Code 2019, there is not much left to do for me to fulfill my proposed timeline. One major thing that is still missing is automated tests. I have started work on them now and will continue for the last two weeks.

This will also be my last update post on Summer of Code as the next post will be a summary of my whole summer.


August 03, 2019 12:00 AM

Pimping the status line

by Andrey Loskutov (noreply@blogger.com) at July 28, 2019 04:50 PM

This weekend I tried to write a test for the Eclipse debug hover that required knowing the exact position of the selected text, somewhere in the middle of the editor. If you think this is easy - go figure out in Eclipse at which offset your cursor is - surprisingly, there is no obvious way to do so!

So I used a 3rd-party editor that was kind enough to provide this information in the status line. Why shouldn't this be offered by Eclipse itself?

So I created an enhancement request and pushed a patch that adds both features to Eclipse. By default, the status line now shows the cursor position and, if the editor has something selected, the number of characters in the selection (this also works in block selection mode). Both new additions to the status line can be disabled via preferences.


If there is no selection, the cursor offset is shown:

by Andrey Loskutov (noreply@blogger.com) at July 28, 2019 04:50 PM

Incompatible Eclipse workspaces

by Andrey Loskutov (noreply@blogger.com) at July 28, 2019 04:35 PM


Eclipse has a mechanism to recognize whether the workspace being opened was created with an older Eclipse version.
In such a case, to be safe, Eclipse shows a dialog like this:




As of today (Eclipse 4.12 M1), if you click on the "Cancel" button, Eclipse will behave differently depending on the use case's "history":

A. If the workbench was not started yet:

  1. If Eclipse was started without the "-data" argument and the user selects an incompatible workspace, Eclipse will show the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will offer the workspace selection dialog.
  2. If Eclipse was started with a "-data" argument pointing to an incompatible workspace, Eclipse will show the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will terminate (instead of offering to select another workspace).

B. If the workbench was started:

  1. If the user selects a compatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts fine.
  2. If the user selects an incompatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts, shows the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will terminate (instead of offering to select another workspace).

This behavior is inconvenient (at least), so we have bug 538830.

Fix Proposal #1

The proposal is that, independently of the way Eclipse was started, if the user clicks the "Cancel" button in the "Older Workspace Version" dialog, we always show the default workspace selection dialog (instead of terminating):



In the dialog above the user has two choices: launch any workspace, or finally terminate Eclipse via "Cancel".

Proposal #1 Matrix

A1. If the workbench was not started yet:

  1. If Eclipse was started with or without the "-data" argument and the user selects an incompatible workspace, Eclipse will show the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will offer the workspace selection dialog. To terminate Eclipse, the user has to click "Cancel" in the workspace selection dialog.

B1. If the workbench was started:

  1. If the user selects a compatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts fine.
  2. If the user selects an incompatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts, shows the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will offer to select another workspace.

Fix Proposal #2

The proposal is that, depending on the way Eclipse was started, if the user clicks the "Cancel" button in the "Older Workspace Version" dialog, we may or may not show the default workspace selection dialog. So what happens after the "Older Workspace Version" dialog is shown is not predictable just by looking at the dialog - it depends on how it was reached.

Proposal #2 Matrix

A2. If the workbench was not started yet:

  1. If Eclipse was started without the "-data" argument and the user selects an incompatible workspace, Eclipse will show the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will offer the workspace selection dialog.
  2. If Eclipse was started with a "-data" argument pointing to an incompatible workspace, Eclipse will show the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will terminate (instead of offering to select another workspace).

B2. If the workbench was started:

  1. If the user selects a compatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts fine.
  2. If the user selects an incompatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts, shows the "Older Workspace Version" dialog above, and by clicking on "Cancel" it will offer to select another workspace.

Similarities

Both proposals fix the second bullet of use case B.

Differences

We see that Proposal #1 has no second bullet for the A case, and it is always consistent in how the UI behaves after clicking "Cancel" in the "Older Workspace Version" dialog. Proposal #2 fixes only the B use case; the inconsistency in UI behavior for the second part of the A use case remains.

Technical (biased) notes:

  1. Proposal #1 is implemented and the patch is available, along with a demo video. To test it live, one has to build Eclipse - but here I have SDK binaries with the patch applied. The patch is relatively simple and only affects Platform UI internal code.
  2. Proposal #2 is not implemented yet. I assume it will require more work compared to patch #1. We would need a new command line argument for Eclipse to differentiate between "please do not terminate even if an incompatible -data is supplied, because I'm calling you from the UI" and "please terminate if incompatible data is supplied, because I'm calling you from the command line". A new command line argument for Eclipse means not just a Platform UI internal change; it also requires changes in Equinox and Help, and it is a public interface change.

Question to the masses!

We want to know your opinion - which proposal should be implemented?

Please reply here or on the bug 538830.

Update:

This version was implemented and is available in 4.13 M1:



by Andrey Loskutov (noreply@blogger.com) at July 28, 2019 04:35 PM

Papyrus SysML 1.6 available from the Eclipse Marketplace.

by tevirselrahc at July 12, 2019 02:03 PM

I should have mentioned yesterday that Papyrus SysML 1.6 is available from the Eclipse Marketplace at https://marketplace.eclipse.org/content/papyrus-sysml-16


by tevirselrahc at July 12, 2019 02:03 PM

Building UIs with EASE

by Christian Pontesegger (noreply@blogger.com) at July 08, 2019 06:54 PM

You have probably used EASE before to automate daily tasks in your IDE or to augment toolbars and menus with custom functionality. But so far, scripts could not be used to build UIs. This changed with the recent contribution of the UI Builder module.

What it is all about
The UI Builder module allows you to create views and dialogs with pure script code in your IDE. It is great for custom views that developers do not want to put into their products, for rapid prototyping, and even for mocking.

The aim of EASE is to hide layout complexity as much as possible and provide a simple, yet flexible way to implement typical UI tasks.

Example 1: Input Form
We will start by creating a simple input form for address data.

loadModule("/System/UI Builder");
createView("Create Contact");

setColumnCount(2);
createLabel("First Name:");
var txtFirstName = createText();
createLabel("Last Name:");
var txtLastName = createText();

This snippet will create a dynamic view as shown below:

The renderer used will apply a GridLayout. By setting the columnCount to 2 we may simply add our elements without providing any additional layout information - a simple way to create basic layouts.

If needed, EASE provides more control by accepting layout information when creating components:

createView("Create Contact");
createLabel("First Name:", "1/1 >x");
var txtFirstName = createText("2-4/1 o!");
createLabel("Last Name:", "1/2 >x");
var txtLastName = createText("2-4/2 o!");

This creates the same view as above, but now with detailed layout information. As an example, "1/2 >x" means: first column, second row, horizontal align right, vertical align middle. Full documentation of the syntax is provided in the module documentation (hover over the UI Builder module in the Modules Explorer view).

Now let's create a combo viewer to select a country for the address:

cmbCountry = createComboViewer(["Austria", "Germany", "India", "USA"])

Simple, isn't it?

So far we did not need to react to any of our UI elements. The next step is to create a button, which needs some kind of callback action:

createButton("Save 1", 'print("you hit the save button")')
createButton("Save 2", saveAddress)

function saveAddress() {
    print("This is the save method");
}

The first version of the button passes the callback code as a string argument. When the button gets pressed, the callback code is executed. You may call any script code that the engine is capable of interpreting.

The second version looks a bit cleaner: it defines a function saveAddress() which is called on a button click. Note that we pass a function reference to createButton().

View the full example of this script in our script repository. In addition to some more layouting, it also contains a working implementation of the save action that stores addresses as JSON data files.

Interacting with SWT controls

The saveAddress() method needs to read data from the input fields of our form. This could be done using:

var firstName = txtFirstName.getText();

Unfortunately, SWT controls can only be queried in the UI thread, while the script engine is executed in its own thread. To move code execution to the UI thread, the UI module provides the function executeUI(). By default this functionality is disabled, as a badly written script executed in the UI thread might stall your Eclipse IDE. To enable it, you need to set a checkbox in Preferences/Scripting. The full call then looks like this:

loadModule("/System/UI")
var firstName = executeUI('txtFirstName.getText();');

Example 2: A viewer for our phone numbers

Now that we are able to create some sample data, we also need a viewer for our phone numbers. Assuming we can load all our addresses into an array, the only thing we need is a table viewer to visualize our entries. The following two lines will do the job:

var addresses = readAddresses();
var tableViewer = createTableViewer(addresses)

Here readAddresses() collects our *.address files and stores them in an array.

So the viewer works; however, we still need to define how our columns shall be rendered:

createViewerColumn(tableViewer, "Name", createLabelProvider("getProviderElement().firstName + ' ' + getProviderElement().lastName"))
createViewerColumn(tableViewer, "Phone", createLabelProvider("getProviderElement().phone"))

Whenever a callback needs the viewer element, getProviderElement() holds the actual element. We are done! Three lines of code for a TableViewer does not sound too bad, right? Again, a full example is available in our script repository. It automatically loads *.address files from your workspace and displays them in the view.

Example 3: A workspace viewer

We had a TableViewer before; now let's try a TreeViewer. As a tree needs structure, we have to provide a callback that calculates the child elements of a given parent:

var viewer = createTreeViewer(getWorkspace().getProjects(), getChildren);

function getChildren() {
    if (getProviderElement() instanceof org.eclipse.core.resources.IContainer)
        return getProviderElement().members();

    return null;
}

So simple! The full example looks like this:
Example 4: Math function viewer

The last example demonstrates how to add a custom Control to a view. For plotting we use the Charting module that is shipped with EASE. The source code should be pretty much self-explanatory.

Some Tips & Tricks

  • Layouting is dynamic.
    Unlike the Java GridLayout, you do not need to fill all cells of your layout. The EASE renderer takes care of automatically filling empty cells with placeholders.
  • Elements can be replaced.
    If you use coordinates when creating controls, you may easily replace a given control with another one. This simplifies the process of layouting (e.g. if you experiment with alignments) and even allows a view to dynamically change its components depending on external data/events.
  • Full control.
    While some SWT methods do not have a corresponding script function, all SWT calls may still be used, as the create* methods expose the underlying SWT instances.
  • Layout help.
    To simplify layouting, use the showGrid() function. It displays cell borders that help you to see row/column boundaries.



by Christian Pontesegger (noreply@blogger.com) at July 08, 2019 06:54 PM
