New Eclipse IoT Charter and Steering Committee

by Ian Skerrett at May 25, 2017 05:48 PM

It is hard to believe the Eclipse IoT Working Group was launched over 5 years ago, on November 1, 2011; at the time we called it Eclipse M2M.  A lot has changed over these 5 years, including the name, and IoT has matured into one of the dominant trends in the technology industry. The good news is the Eclipse IoT Working Group has been a huge success. We have a thriving open source community that includes 30 different projects, more than 200 developers and 30 member companies. Eclipse IoT is well known and well positioned in the industry and continues to see momentum and growth.

Given this community growth, we felt it was time to take a fresh look at the Eclipse IoT Working Group charter and the Steering Committee. After a number of drafts and revisions, we have updated and published the new working group charter.  Most of the changes were made to reflect the current focus on IoT runtimes and frameworks and to add more clarity to the roles and responsibilities of the Steering Committee.

Now that the new charter has been approved, I am thrilled to have Red Hat, Bosch and Eurotech volunteer to participate in the Eclipse IoT Steering Committee. All three companies are active leaders in the Eclipse IoT community and the general IoT industry. They each bring a unique perspective on IoT and open source to our community:

  • Bosch is a world-leading industrial company and a recognized leader in providing industrial IoT solutions. Their commitment to the Eclipse IoT community is evident in their involvement in projects like Eclipse Leshan, Eclipse Hono, Eclipse Vorto, Eclipse Hawkbit, and Eclipse Ditto.
  • Eurotech is a well-known industrial gateway vendor that was one of the founding members of Eclipse M2M. They have experienced incredible success with Eclipse Kura and are on the path to success with Eclipse Kapua.
  • Red Hat has deep roots in open source and enterprise IT. In the last 2 years they have become deeply involved in projects like Kapua, Hono and others. They have also been instrumental in helping launch our Eclipse IoT Open Testbeds.  Red Hat understands that for IoT to be successful it needs to integrate OT and IT. They are on the path to being a leader in this space.

The next 2-3 years are going to be very exciting for the IoT industry and in particular the Eclipse IoT community. We have the technology and individuals that are making a difference and delivering real and valuable technology for IoT solution developers. It is very exciting to have these 3 companies help lead the way to our continued success and momentum.

 



by Ian Skerrett at May 25, 2017 05:48 PM

Woohoo! Java 9 has a REPL! Getting Started with JShell and Eclipse January

by Jonah Graham at May 25, 2017 02:12 PM

With Java 9 just around the corner, we explore one of its most exciting new features – the Java 9 REPL (Read-Eval-Print Loop). This REPL is called JShell and it’s a great addition to the Java platform. Here’s why.

With JShell you can easily try out new features and quickly check the behaviour of a section of code. You don’t have to create a long-winded dummy main or JUnit test – simply type away.  To demonstrate the versatility of JShell, I am going to use it in conjunction with the Eclipse January package for data structures. Eclipse January is a set of libraries for handling numerical data in Java, think of it as a ‘numpy for Java’.

Install JShell

JShell is part of Java 9, currently available in an Early Access version from Oracle and other sources. Download and install Java 9 JDK from http://jdk.java.net/9/ or, if you have it available on your platform, you can install with your package manager (e.g. sudo apt-get install openjdk-9-jdk).

Start a terminal and run JShell:
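A short session might look like this (the prompts and the generated $-variable names are illustrative):

jshell> int x = 40
x ==> 40

jshell> x + 2
$2 ==> 42

jshell> "The answer is " + $2
$3 ==> "The answer is 42"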

As you can see, JShell allows you to type normal Java statements, leave off semi-colons, run expressions, access expressions from previous outputs, and achieve many other short-cuts. (You can exit JShell with Ctrl-D.)

Using JShell with Eclipse January

To use Eclipse January, you need to:

1. Download January:

Get the January 2.0.1 jar.

2. Download the dependency jars:

The January dependencies are available from Eclipse Orbit. They are:

  • org.apache.commons.lang (2.6.0)
  • org.apache.commons.math3 (3.5.0)
  • org.slf4j.api (1.7.10)
  • org.slf4j.binding.nop (1.7.10)

3. Run JShell again, but add to the classpath all the jars you downloaded (remember to be in the directory you downloaded the jars to):

Windows:

"c:\Program Files\Java\jdk-9\bin\jshell.exe"  --class-path org.eclipse.january_2.0.1.v201703300842.jar;org.apache.commons.lang_2.6.0.v201404270220.jar;org.apache.commons.math3_3.5.0.v20160301-1110.jar;org.slf4j.api_1.7.10.v20170428-1633.jar;org.slf4j.binding.nop_1.7.10.v20160301-1109.jar

Linux:

jshell --class-path org.eclipse.january_2.0.1.v201703300842.jar:org.apache.commons.lang_2.6.0.v201404270220.jar:org.apache.commons.math3_3.5.0.v20160301-1110.jar:org.slf4j.api_1.7.10.v20170428-1633.jar:org.slf4j.binding.nop_1.7.10.v20160301-1109.jar

Some notes:

  • In some versions of JShell the command-line argument is called -classpath instead of --class-path.
  • If you are using Git Bash as your shell on Windows, add winpty before calling jshell and use colons to separate the path elements.


Then you can run through the different types of January commands. Note that JShell supports completions using the ‘Tab’ key. You can also use /! to rerun the last command.
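For example, /! re-runs whatever you last evaluated (the output shown here is approximate):

jshell> Math.max(3, 7)
$1 ==> 7

jshell> /!
Math.max(3, 7)
$2 ==> 7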

Import classes

Start by importing the needed classes:

import org.eclipse.january.dataset.*

(No need for semi-colons and you can use the normally ill-advised * import)

Array Creation

Eclipse January supports straightforward creation of arrays. Let’s say we want to create a 2-dimensional array with the following data:

[1.0, 2.0, 3.0,
 4.0, 5.0, 6.0,
 7.0, 8.0, 9.0]

First we can create a new dataset:

Dataset dataset = DatasetFactory.createFromObject(new double[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 })
System.out.println(dataset.toString(true))

This gives us a 1-dimensional array with 9 elements, as shown below:

[1.0000000, 2.0000000, 3.0000000, 4.0000000, 5.0000000, 6.0000000, 7.0000000, 8.0000000, 9.0000000]

We then need to reshape it to be a 3×3 array:

dataset = dataset.reshape(3, 3)
System.out.println(dataset.toString(true))

The reshaped dataset:

 [[1.0000000, 2.0000000, 3.0000000],
 [4.0000000, 5.0000000, 6.0000000],
 [7.0000000, 8.0000000, 9.0000000]]

Or we can do it all in just one step:

Dataset another = DatasetFactory.createFromObject(new double[] { 1, 1, 2, 3, 5, 8, 13, 21, 34 }).reshape(3, 3)
System.out.println(another.toString(true))

Another dataset:

 [[1.0000000, 1.0000000, 2.0000000],
 [3.0000000, 5.0000000, 8.0000000],
 [13.000000, 21.000000, 34.000000]]

There are methods for obtaining the shape and number of dimensions of datasets:

dataset.getShape()
dataset.getRank()

Which gives us:

jshell> dataset.getShape()
$8 ==> int[2] { 3, 3 }

jshell> dataset.getRank()
$9 ==> 2

Datasets also provide range and random functions that allow easy creation of arrays:

Dataset dataset = DatasetFactory.createRange(15, Dataset.INT32).reshape(3, 5)
System.out.println(dataset.toString(true))

[[0, 1, 2, 3, 4],
 [5, 6, 7, 8, 9],
 [10, 11, 12, 13, 14]]


import org.eclipse.january.dataset.Random //specify Random class (see this is why star imports are normally bad)
Dataset another = Random.rand(new int[]{3,5})
System.out.println(another.toString(true))

[[0.27243843, 0.69695728, 0.20951172, 0.13238926, 0.82180144],
 [0.56326222, 0.94307839, 0.43225034, 0.69251040, 0.22602319],
 [0.79244049, 0.15865358, 0.64611131, 0.71647195, 0.043613393]]

Array Operations

The org.eclipse.january.dataset.Maths class provides rich functionality for operating on the Dataset classes. For instance, here’s how you could add two Dataset arrays:

Dataset add = Maths.add(dataset, another)
System.out.println(add.toString(true))

Or you could do it as an in-place addition. The example below creates a new 3×3 array and then adds 100 to each element of the array.

Dataset inplace = DatasetFactory.createFromObject(new double[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 }).reshape(3, 3)
inplace.iadd(100)
System.out.println(inplace.toString(true))

[[101.0000000, 102.0000000, 103.0000000],
 [104.0000000, 105.0000000, 106.0000000],
 [107.0000000, 108.0000000, 109.0000000]]

Slicing

Datasets simplify extracting portions of the data, known as ‘slices’. For instance, given the array below, let’s say we want to extract the data 2, 3, 5 and 6.

[1, 2, 3,
 4, 5, 6,
 7, 8, 9]

This data resides in the first and second rows and the second and third columns. For slicing, indices for rows and columns are zero-based. A basic slice consists of a start and stop index, where the start index is inclusive and the stop index is exclusive. An optional increment may also be specified. So this example would be expressed as:

Dataset dataset = DatasetFactory.createFromObject(new double[] { 1, 2, 3, 4, 5, 6, 7, 8, 9 }).reshape(3, 3)
System.out.println(dataset.toString(true))
Dataset slice = dataset.getSlice(new Slice(0, 2), new Slice(1, 3))
System.out.println(slice.toString(true))

slice of dataset:

[[2.0000000, 3.0000000],
 [5.0000000, 6.0000000]]
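The optional increment mentioned above is passed as a third argument to Slice; assuming that constructor takes (start, stop, step), a step of 2 picks out every other row:

// Rows 0 and 2 (start 0, stop 3, step 2), all three columns
Dataset everyOther = dataset.getSlice(new Slice(0, 3, 2), new Slice(0, 3))
System.out.println(everyOther.toString(true))
// Expected result: the first and third rows, i.e. [[1, 2, 3], [7, 8, 9]]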

Slicing and array manipulation functionality is particularly valuable when dealing with 3-dimensional or n-dimensional data.

Wrap-Up

For more on Eclipse January see the following examples and give them a go in JShell:

  • NumPy Examples shows how common NumPy constructs map to Eclipse Datasets.
  • Slicing Examples demonstrates slicing, including how to slice a small amount of data out of a dataset too large to fit in memory all at once.
  • Error Examples demonstrates applying an error to datasets.
  • Iteration Examples demonstrates a few ways to iterate through your datasets.
  • Lazy Examples demonstrates how to use datasets which are not entirely loaded in memory.

Eclipse January is a ‘numpy for Java’, but until now users have not really been able to play around with it in the same way you would with NumPy in Python.

JShell provides a great way to test drive libraries like Eclipse January. There are a couple of features that would be nice to have, such as a magic variable for the last result (maybe $_ or $!) and perhaps a shorter way to print a result (maybe /p :-). But overall, it is great to have, and it finally gives Java the REPL and interactive workflow that many have come to expect from other programming languages.

In fact, we will be making good use of JShell for the Eclipse January workshop being held at EclipseCon France; see details and register here:  https://www.eclipsecon.org/france2017/session/eclipse-january




by Jonah Graham at May 25, 2017 02:12 PM

The State of Android and Eclipse

by kingargyle at May 25, 2017 12:55 PM

So it has been a while since I posted an update on where things stand.  Honestly, not a lot has changed on the Eclipse front.  We still don’t have built-in AAR support, there is no integrated Gradle support between Buildship and Andmore, we are behind on Nougat support, and O is fast approaching release status.   With that said, there have been a few bug fixes contributed by the community, which were released as a maintenance release, but the larger corporate adoption… is pretty much non-existent from a contribution standpoint.

The latter I’m not sure how to fix, as I wholeheartedly believe that Andmore needs a corporate backer and sponsor to fund things.   This could probably be avoided if several people who really have an itch to fix the base wanted to scratch it.   There is so much to work on that it can be a bit overwhelming to figure out where to start.

When Google announced a couple of years ago that Android Development Tools would not be maintained any longer, I had that itch to see if I could at least get it and the MOTODEV Studio tools to a place where they would have a chance to survive.  I managed to scratch that itch and, over the next year, with the help of several other committers, bring you Andmore 0.5.0.   Believe me, the most difficult work has been done, and that is getting everything through the IP process; what is there is clean from that standpoint.

So the general question, now that Andmore is at the Eclipse Foundation, is: is there a dire need from the community for Android tooling?   There was much outrage and yelling when Google made the decision to move to IntelliJ, some of it rightfully deserved, some of it just background noise.   Regardless of the reasons, Android Tooling was always a Google-controlled and sponsored project; it was really open source in name only (it took nearly 2 years for Doug’s CDT integration to be integrated into the core).   The same is happening with Android Studio: it is controlled and dominated by Google and is really open source in name only.  If Google decides for whatever reason that they want to move all development to the cloud and abandon Android Studio and IntelliJ, there is nothing that anyone will be able to do to prevent it.  The difference, though, is that JetBrains will pick up and continue developing Android support themselves.  Why? Because they have a financial interest in making sure their IDE supports it for their corporate customers.

This brings us back to the question… does the community really want Android tooling built off the Eclipse platform?   If so, how do we improve it, given that right now this is largely a volunteer effort?

 



by kingargyle at May 25, 2017 12:55 PM

The Containerization of Dev Environments

by Doug Schaefer at May 24, 2017 03:38 PM

As a veteran tools architect working for a veteran embedded systems platform vendor, I can say we’ve gotten pretty good at building cross development environments. You get all the speed and integration with other native tools that today’s rich host platforms can provide. Combine that with a good installer and software maintenance tool, and it’s super simple for users to get set up and keep their installs updated with the latest fixes and patches. It’s worked for many years.

So of course, I was a bit taken aback by recent talk about delivering development environments in containers and distributing them to users for use with cloud IDEs. The claim is that the installation process is simpler. But I have to ask: while yes, it is simpler for the provider, is it also simpler for the user?

I work with embedded software engineers. Their systems are complex and the last thing they want to do is fight with their tools. That doesn’t pay the bills. And that’s why we work so hard to make that management simpler. And if you don’t have the experience creating cross development environments, it is certainly appealing to only have to worry about one host platform, 64-bit Linux, as you do with Docker which, BTW, just happens to be the easiest to support especially relative to Windows.

But do I really have to teach my embedded developer customers about Docker? How to clean up images as updates are delivered? How to start and stop containers, and in the case of Windows and Mac, the VMs that run those containers? And that’s not to mention cloud environments which are a whole new level requiring server management, especially as the developer community scales. Embedded development tools require a lot of horsepower. How many users can a server actually support and how do customers support the burstiness of demand?

So, while I get it, and as vendors take this path and as users do get used to it, I do need to be prepared to support such environments. I’ll just feel a bit sad that we are giving up on providing our users the great experiences that native cross development tools provide.


by Doug Schaefer at May 24, 2017 03:38 PM

Fuse and BRMS Tooling Maintenance Release for Neon.3

by pleacu at May 24, 2017 02:03 PM

Try our complete Eclipse-Neon capable, Devstudio 10.4.0 compatible integration tooling.


JBoss Tools Integration Stack 4.4.3.Final / JBoss Developer Studio Integration Stack 10.3.0.GA

All of the Integration Stack components have been verified to work with the same dependencies as JBoss Tools 4.4 and Developer Studio 10.

What’s new for this release?

This release syncs up with Devstudio 10.4.0, JBoss Tools 4.4.4 and Eclipse Neon.3. It is also a maintenance release for Fuse Tooling, SwitchYard and the BRMS tooling.

Released Tooling Highlights

JBoss Fuse Development Highlights

Fuse Tooling Highlights

See the Fuse Tooling 9.2.0.Final Resolved Issues Section of the Integration Stack 10.3.0.GA release notes.

SwitchYard Highlights

See the SwitchYard 2.3.1.Final Resolved Issues Section of the Integration Stack 10.3.0.GA release notes.

JBoss Business Process and Rules Development

BPMN2 Modeler Known Issues

See the BPMN2 1.3.3.Final Known Issues Section of the Integration Stack 10.3.0.GA release notes.

Drools/jBPM6 Known Issues

Data Virtualization Highlights

Teiid Designer Known Issues

See the Teiid Designer 11.0.1.Final Resolved Issues Section of the Integration Stack 10.1.0.GA release notes.

What’s an Integration Stack?

Red Hat JBoss Developer Studio Integration Stack is a set of Eclipse-based development tools. It further enhances the IDE functionality provided by JBoss Developer Studio, with plug-ins specifically for use when developing for other Red Hat JBoss products. It’s where the Fuse Tooling, DataVirt Tooling and BRMS tooling are aggregated. The following frameworks are supported:

JBoss Fuse Development

  • Fuse Tooling - JBoss Fuse Development provides tooling for Red Hat JBoss Fuse. It features the latest versions of the Fuse Data Transformation tooling, Fuse Integration Services support, SwitchYard and access to the Fuse SAP Tool Suite.

  • SwitchYard - A lightweight service delivery framework providing full lifecycle support for developing, deploying, and managing service-oriented applications.

JBoss Business Process and Rules Development

JBoss Business Process and Rules Development plug-ins provide design, debug and testing tooling for developing business processes for Red Hat JBoss BRMS and Red Hat JBoss BPM Suite.

  • BPEL Designer - Orchestrating your business processes.

  • BPMN2 Modeler - A graphical modeling tool which allows creation and editing of Business Process Modeling Notation diagrams using Graphiti.

  • Drools - A Business Logic integration Platform which provides a unified and integrated platform for Rules, Workflow and Event Processing including KIE.

  • jBPM6 - A flexible Business Process Management (BPM) suite.

JBoss Data Virtualization Development

JBoss Data Virtualization Development plug-ins provide a graphical interface to manage various aspects of Red Hat JBoss Data Virtualization instances, including the ability to design virtual databases and interact with associated governance repositories.

  • Teiid Designer - A visual tool that enables rapid, model-driven definition, integration, management and testing of data services without programming using the Teiid runtime framework.

JBoss Integration and SOA Development

JBoss Integration and SOA Development plug-ins provide tooling for developing, configuring and deploying BRMS, SwitchYard and Fuse applications to Red Hat JBoss Fuse and Fuse Fabric containers, Apache ServiceMix, and Apache Karaf instances.

  • All of the Business Process and Rules Development plugins, plus…​

  • Fuse Apache Camel Tooling - A graphical tool for integrating software components that works with Apache ServiceMix, Apache ActiveMQ, Apache Camel and the FuseSource distributions.

  • SwitchYard - A lightweight service delivery framework providing full lifecycle support for developing, deploying, and managing service-oriented applications.

The JBoss Tools website features tab

Don’t miss the Features tab for up to date information on your favorite Integration Stack components.

Installation

The easiest way to install the Integration Stack components is through the stand-alone installer. If you’re interested specifically in Fuse we have the all-in-one installer JBoss Fuse Tooling + JBoss Fuse/Karaf runtime.

For a complete set of Integration Stack installation instructions, see Integration Stack Installation Instructions

Give it a try!

Paul Leacu.


by pleacu at May 24, 2017 02:03 PM

JBoss Tools and Red Hat Developer Studio Maintenance Release for Eclipse Neon.3

by jeffmaury at May 24, 2017 12:22 PM

JBoss Tools 4.4.4 and Red Hat JBoss Developer Studio 10.4 for Eclipse Neon.3 are here waiting for you. Check it out!


Installation

JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from the Red Hat developers site and run it like this:

java -jar devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.6.3 (Neon.3) but we recommend using the latest Eclipse 4.6.3 Neon JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/neon/stable/updates/

What is new?

Our main focus for this release was improvements for container based development and bug fixing.

Improved OpenShift 3 and Docker Tools

We continue to work on providing better experience for container based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here.

OpenShift Server Adapter enhanced flexibility

The OpenShift server adapter is a great tool that lets developers synchronize local changes in the Eclipse workspace with running pods in the OpenShift cluster. It also allows you to remote-debug those pods when the server adapter is launched in Debug mode. The supported stacks are Java and NodeJS.

As pods are ephemeral OpenShift resources, the server adapter definition was based on an OpenShift service resource, and the pods were then dynamically computed from the service selector.

This has a major drawback: the feature can only be used for pods that are part of a service. That may be logical for Web-based applications, as a route (and thus a service) is required in order to access the application.

So, it is now possible to create a server adapter from the following OpenShift resources:

  • service (as before)

  • deployment config

  • replication controller

  • pod

If a server adapter is created from a pod, it will be created from the associated OpenShift resource, in the preferred order:

  • service

  • deployment config

  • replication controller

The OpenShift explorer, which used to display only OpenShift resources linked to a service, has been enhanced as well. It now also displays resources linked to a deployment config or replication controller. Here is an example of a deployment with no service, i.e. just a deployment config:

server adapter enhanced

So, as an OpenShift server adapter can be created from different kinds of resources, the kind of associated resource is displayed when creating the OpenShift server adapter:

server adapter enhanced1

Once created, the kind of OpenShift resource adapter is also displayed in the Servers view:

server adapter enhanced2

This information is also available from the server editor:

server adapter enhanced3

Security vulnerability fixed in certificate validation database

When you use the OpenShift tooling to connect to an OpenShift API server, the certificate of the OpenShift API server is first validated. If the issuer authority is a known one, the connection is established. If the issuer is unknown, a validation dialog is first shown to the user with the details of the OpenShift API server certificate as well as the details of the issuer authority. If the user accepts it, the connection is established. There is also an option to store the certificate in a database, so that the next time a connection is attempted to the same OpenShift API server, the certificate will be considered valid and no validation dialog will be shown again.

certificate validation dialog

We found a security vulnerability because the certificate was incorrectly stored: it was only partially stored (not all attributes were stored), so a different certificate could be interpreted as validated when it should not be.

We had to change the format of the certificate database. As the certificates stored in the previous database were not entirely stored, there was no way to provide a migration path. As a result, after the upgrade, the certificate database will be empty. So if you had previously accepted some certificates, you will need to accept them again and fill the certificate database again.

CDK 3 Server Adapter

The CDK 3 server adapter has been around for quite a long time. It used to be Tech Preview, as CDK 3 was not officially released. It is now officially available. While the server adapter itself has limited functionality, it is able to start and stop the CDK virtual machine via its minishift binary. Simply hit Ctrl+3 (Cmd+3 on OSX) and type CDK; that will bring up a command to set up and/or launch the CDK server adapter. You should see the old CDK 2 server adapter along with the new CDK 3 one (labeled Red Hat Container Development Kit 3).

cdk3 server adapter5

All you have to do is set the credentials for your Red Hat account, the location of the CDK’s minishift binary file, and the type of virtualization hypervisor.

cdk3 server adapter1

Once you’re finished, a new CDK Server adapter will then be created and visible in the Servers view.

cdk3 server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new OpenShift application and begin developing their AwesomeApp in a highly replicable environment.

cdk3 server adapter3
cdk3 server adapter4

OpenShift Container Platform 3.5 support

OpenShift Container Platform (OCP) 3.5 has been announced by Red Hat. JBossTools 4.4.4.Final has been validated against OCP 3.5.

OpenShift server adapter extensibility

The OpenShift server adapter has long supported EAP/Wildfly and NodeJS based deployments. It does a great deal of work synchronizing local workspace changes to remote deployments on OpenShift, which have been standardized through image metadata (labels). But each runtime has its own specifics. As an example, Wildfly/EAP deployments require that a re-deploy trigger be sent after the files have been synchronized.

In order to reduce the technical debt and allow support for other runtimes (there are lots of them in the microservices world), we have refactored the OpenShift server adapter so that each runtime’s specifics are now isolated and it will be easy and safe to add support for new runtimes.

For a full in-depth description, see the following wiki page.

Pipeline builds support

Pipeline-based builds are now supported by the OpenShift tooling. When creating an application from a template, if one of the builds is pipeline-based, you can view the details of the pipeline:

pipeline wizard

When your application is deployed, you can see the details of the build configuration for the pipeline based builds:

pipeline details

More to come as we are improving the pipeline support in the OpenShift tooling.

Update of Docker Client

The level of the underlying com.spotify.docker.client plug-in used to access the Docker daemon has been upgraded to 3.6.8.

Run Image Network Support

A new page has been added to the Docker Run Image Wizard and the Docker Run Image launch configuration that allows the end user to specify the network mode to use. A user can choose from Default, Bridge, Host, None, Container, or Other. If Container is selected, the user must choose an active Container whose network mode will be shared. If Other is specified, a named network can be specified.

Network Mode
Network Mode Configuration

Refresh Connection

Users can now refresh the entire connection from the Docker Explorer View. Refresh can be performed two ways:

  1. using the right-click context menu from the Connection

  2. using the Refresh menu button when the Connection is selected

Refresh Connection

Server Tools

API Change in JMX UI’s New Connection Wizard

While hardly something most users will care about, extenders may need to be aware that the API for adding connection types to the 'New JMX Connection' wizard in the 'JMX Navigator' has changed. Specifically, the 'org.jboss.tools.jmx.ui.providerUI' extension point has been changed. Where it previously had a child element called 'wizardPage', it now requires a 'wizardFragment'.

A 'wizardFragment' is part of the 'TaskWizard' framework first used in WTP’s ServerTools, which has for many years been used throughout JBossTools. This framework allows wizard workflows where the set of pages to be displayed can change based on what selections are made on previous pages.

This change was made as a direct result of a bug caused by the addition of the Jolokia connection type in which some standard workflows could no longer be completed.

This change only affects adopters and extenders, and should cause no noticeable change for the user, other than that the bug below has been fixed.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

Hibernate Runtime Provider Updates

The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.12.Final and Hibernate Tools version 5.0.5.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.4.Final and Hibernate Tools version 5.1.3.Final.

The Hibernate 5.2 runtime provider now incorporates Hibernate Core version 5.2.8.Final and Hibernate Tools version 5.2.2.Final.

Forge Tools

Forge Runtime updated to 3.6.1.Final

The included Forge runtime is now 3.6.1.Final. Read the official announcement here.


What is next?

Having JBoss Tools 4.4.4 and Developer Studio 10.4 out we are already working on the next release for Eclipse Oxygen.

Enjoy!

Jeff Maury


by jeffmaury at May 24, 2017 12:22 PM

Registration opens now! – Eclipse DemoCamp Oxygen 2017

by Maximilian Koegel and Jonas Helming at May 24, 2017 12:00 PM

Eclipse DemoCamp Oxygen 2017 on June 28th 2017 – Registration opens now!

We are pleased to invite you to participate in the Eclipse DemoCamp Oxygen 2017. The DemoCamp Munich is one of the biggest DemoCamps worldwide and therefore an excellent opportunity to showcase all the cool, new and interesting technology being built by the Eclipse community. This event is open to Eclipse enthusiasts who want to show demos of what they are doing with Eclipse.

Registration open now!
Please click here for detailed information and the registration.

Seating is limited, so please register soon if you plan to attend.
We look forward to welcoming you to the Eclipse DemoCamp Oxygen 2017!

IMPORTANT NOTE
The last DemoCamps have always been sold out. Due to legal reasons, we have a fixed limit for the room and cannot overbook. However, every year some places unfortunately remain empty. Please unregister if you find you are unable to attend so we can invite participants from the waiting list. If you are not attending and have not unregistered, we kindly ask for a donation of 10 Euros to “Friends of Eclipse”. Thank you in advance for your understanding!

If the event is sold out, you will be placed on the waiting list. You will be informed if a place becomes available, and you will need to confirm your attendance after we contact you.
If you are interested in giving a talk, please send your presentation proposal to matthias.zimmermann@bsi-software.com for consideration. There are always more proposals than slots in the agenda, so we will make a selection from the submissions.

We look forward to meeting you at the Eclipse DemoCamp Oxygen 2017!





by Maximilian Koegel and Jonas Helming at May 24, 2017 12:00 PM

Eclipse Newsletter - Language Server Protocol 101

May 24, 2017 09:35 AM

Everything you need to know about the Language Server Protocol (aka LSP) is in this month's newsletter!

May 24, 2017 09:35 AM

What is it like to work in Open-source?

by maggierobb at May 24, 2017 09:19 AM

Open-source software (OSS) is computer software with its source code made available with a license in which the copyright holder provides the rights to study, change, and distribute the software to anyone and for any purpose. – Wikipedia


I am Yannick Mayeur, a French computer science student currently gaining work experience at Kichwa Coders in the UK, and this is how I feel about working with Open-source.

Why Open-source ?

Let me tell you a story. A company asks someone on their software team to build some software to do a certain task. It takes him a lot of time, but he manages to do it. He is the only one working on the project, so there are no comments in the code nor any documentation to help maintain it. He later leaves the company, and the software slowly becomes useless as nobody else knows how to use it.

If this company had created an Open-source project instead, this problem wouldn’t have occurred.

Help spread Open-source – or ensure a job for life by using this guerrilla guide on how to write unmaintainable code. But seriously, don’t.

Get started

Getting started with Open-source is fairly easy because a lot of people write guides or blog posts to help people tackle the tricky stuff like git. All you need is a computer and motivation!

My feeling on the subject

I am now into my fifth week at Kichwa Coders, during which time I have worked on two different projects for the Eclipse Foundation:

  • Eclipse January, which is a set of libraries to help process large amounts of data. It is mainly used by Diamond Light Source scientists to process the data they get from their particle accelerator.
  • Eclipse CDT, which is an IDE for C and C++ used by a lot of programmers.

Knowing that I contributed to two big projects that a lot of people use every day makes me kinda proud. And the truth is it wasn’t even that hard. With my not-that-big knowledge of how to work on big projects, I have contributed some changes that will have a lot of visibility, such as bug 515296 (for more details on the bug and on how Pierre and I solved it, you can read his blog post about it). If you are experiencing a problem, the chances are that other people within the community are also experiencing it – and their knowledge of it can help you solve it for everyone.

Conclusion

Knowing that other people can see exactly what code you wrote puts pressure on you – but in a good way. In my opinion it pushes you to give the best of yourself. But this community can also help you out when you are stuck with a problem. All this and more makes being part of an Open-source project a very satisfying experience.



by maggierobb at May 24, 2017 09:19 AM

Generate Traced Code with Xtext

by Miro Spönemann at May 24, 2017 08:54 AM

Xtext 2.12 will be released on May 26th. As described in its release notes, a main novelty is an API for tracing generated code.

Why Tracing?

Whenever you transpile code from one language to another, you need some kind of mapping that instructs the involved tools how to navigate from a piece of source code to the respective target code and back. In a debugging session, for example, developers can get quite frustrated if they have to step through the generated code and then try to understand where the problem is in their original source. Allowing developers to debug directly in the source code saves time and frustration, so it’s definitely the way to go.

For transpiled JVM languages such as Xtend, the JSR 45 specification defines how to map byte code to the source. For languages that target JavaScript, such as TypeScript or CoffeeScript, source maps are generated by the compiler and then processed by the development tools of the web browser. The DSL framework Xtext offers an API to access tracings between source and target code for any language created with Xtext. Xbase languages such as Xtend make use of such tracings automatically, but for other languages the computation of tracing information had to be added manually to the code generator with Xtext 2.11 and earlier versions.

Tracing Xtend Templates

The examples shown in this post are available on GitHub. They are based on a simple Xtext language that describes classes with properties and operations.

The examples are implemented in Xtend, a language that is perfectly suited to writing code generators. Among other things, it features template expressions with smart whitespace handling and embedded conditionals and loops. Here’s an excerpt of the generator implementation of our example, where the target language is C:

The entry point of the new API is TracingSugar, which provides extension methods to generate traced text. The code above uses generateTracedFile to create a file and map its contents to model, the root of our AST. The generateHeader method is shown below. It defines another template, and the resulting text is mapped to the given ClassDeclaration using the @Traced active annotation.

The _name extension method in the code above is another part of the new API. Here it writes the name property of the ClassDeclaration into the output and maps it to the respective source location. This method is generated from the EMF model of the language using the @TracedAccessors annotation. Just pass the EMF model factory class as parameter to the annotation, and it creates a tracing method for each structural feature (i.e. property or reference) of your language.

The Generator Tree

The new tracing API creates output text in two phases: first it creates a tree of generator nodes from the Xtend templates, then it transforms that tree into a character sequence with corresponding tracing information. The base interface of generator nodes is IGeneratorNode. There are predefined nodes for text segments, line breaks, indentation of a subtree, tracing a subtree, and applying templates.

The generator tree can be constructed via templates, or directly through methods provided by TracingSugar, or with a mixture of both. The direct creation of subtrees is very useful for generating statements and expressions, where lots of small text segments need to be concatenated. The following excerpt of our example code generator transforms calls to class properties from our source DSL into C code:

The parts of the TracingSugar API used in this code snippet are

  • trace to create a subtree traced to a source AST element,
  • append to add text to the subtree, and
  • appendNewLine to add line breaks.

The resulting C code may look like this:

Employing the Traces

Trace information is written into _trace files next to the generator output. For example, if you generate a file persons.c, you’ll get a corresponding .persons.c._trace in the same output directory. Xtext ships a viewer for these files, which is very useful to check the result of your tracing computation. In the screenshot below, we can see that the property reference bag is translated to the C code Bag* __local_0 = &this->bag;

trace_viewer

The programmatic representation of such a trace file is the ITrace interface. An instance of ITrace points either in the source-to-target or the target-to-source direction, depending on how it was obtained. In order to get such a trace, inject ITraceForURIProvider and call getTraceToTarget (for a source-to-target trace) or getTraceToSource (for a target-to-source trace).

Xtext provides some generic UI for traced generated code: If you right-click some element of your source file and select “Open Generated File”, you’ll be directed to the exact location to which that element has been traced. In the same way, you can right-click somewhere in the generated code and select “Open Source File” to navigate to the respective source location. This behavior is shown in the animation below.

tracing_c_10fps

Enhancing Existing Code Generators

In many cases it is not necessary to rewrite a code generator from scratch in order to enhance it with tracing information. The new API is designed in a way that it can be weaved into existing Xtend code with comparatively little effort. The following hints might help you for such a task, summarizing what we have learned in the previous sections of this post.

  • Use generateTracedFile to create a traced text file. There are two overloaded variants of that method: one that accepts a template and traces it to a root AST element, and one that accepts a generator node. If you are already using Xtend templates, just pass them to this method.
  • Add the @Traced annotation to methods that transform a whole AST element into text. In some cases it might be useful to extract parts of a template into local methods so this annotation can be applied.
  • Use the @TracedAccessors annotation to generate extension methods for tracing single properties and references. For example, if you have an expression such as property.name in your template, you could replace that with property._name so that the expression is properly traced.
  • Use the TracingSugar methods to construct a generator subtree out of fine-grained source elements such as expressions. If you have previously used other string concatenation tools like StringBuilder or StringConcatenation, you can replace them with CompositeGeneratorNode (see e.g. generateExpression in our example code).

It’s Time to Trace!

With the new Xtext version 2.12, generating traced code has become a lot simpler. If such traces are relevant in any way for your languages, don’t hesitate to try the API described here! We also welcome any feedback, so please report problems on GitHub and meet us on Gitter to discuss things, or just to tell us how cool tracing is 🙂


by Miro Spönemann at May 24, 2017 08:54 AM

It’s time to organise Eclipse Oxygen DemoCamps

May 23, 2017 08:35 AM

What is an Eclipse DemoCamp and why should I organise one?

May 23, 2017 08:35 AM

It’s time to organise Eclipse Oxygen DemoCamps

by Antoine THOMAS at May 23, 2017 08:27 AM

The next major release of Eclipse, Oxygen, is coming up on June 28, and it marks the start of this year’s Eclipse DemoCamps season. If you or your colleagues are considering a DemoCamp for 2017, we would like to help!

What’s a DemoCamp?

You may be asking yourself what the heck a DemoCamp is and why you should care. Eclipse DemoCamps are typically one-day or even evening events organized by Eclipse community members all over the world. The organizers bring together a set of expert speakers and attendees from their local community. In other words, it’s a free event where you get to meet fellow Eclipsians and learn from each other through demos and talks about Eclipse technology.

How do I get started?

This is the best part: wherever you are, you can organize an Eclipse DemoCamp! You choose the place, set the time, organize the venue (maybe a local pub or company office), provide a screen and projector, and arrange for refreshments.

To tell us that you are planning an Eclipse DemoCamp:

  • Send us an email on democamps@eclipse.org to ask about support, speaker ideas or possible goodies
  • Add it to the DemoCamp 2017 wiki page

To add it, simply create a page with the program and venue information. And if you use another service like Meetup, just add a link to it from the Eclipse wiki. We will be pleased to list it on events.eclipse.org.

How does Eclipse Foundation help?

We, as the Eclipse Foundation, will contribute to the cost of food, beverages, and room rental, up to $300. We encourage organizers to find outside corporate sponsors to help organize their event. Sponsors usually contribute a certain amount of money, provide food, or provide the space. Please acknowledge your sponsors on the DemoCamp & Hackathon wiki page and at the event itself.

We will also help you promote it through the Eclipse Foundation’s social media network and website. To read more about organizing an event, visit the page "Organise an Eclipse DemoCamp or Hackathon".

Eclipse Foundation staff also tries to attend the DemoCamps. This is obviously not always possible, but who knows… we could be coming to yours!

In 2016, DemoCamps took place in 19 different cities across 10 different countries: Austria, Canada, China, Germany, Guatemala, Hungary, India, Norway, Poland, and Switzerland! We need you to help us reach new places in 2017 – and that place could be near you!

Looking forward to hearing from you 🙂


by Antoine THOMAS at May 23, 2017 08:27 AM

Presentation of the Vert.x-Swagger project

by phiz71 at May 22, 2017 12:00 AM

This post is an introduction to the Vert.x-Swagger project, and describes how to use the Swagger-Codegen plugin and the SwaggerRouter class.

Eclipse Vert.x & Swagger

Vert.x and Vert.x Web are very convenient for writing REST APIs, and the Router in particular is very useful for managing all the resources of an API.

But when I start a new API, I usually use the “design-first” approach, and Swagger is my best friend for defining what my API is supposed to do. Then comes the “boring” part of the job: converting the swagger file content into Java code. It’s always the same: resources, operations, models…

Fortunately, Swagger provides a codegen tool: Swagger-Codegen. With this tool, you can generate a server stub based on your swagger definition file. However, even though this generator supports many different languages and frameworks, Vert.x is missing.

This is where the Vert.x-Swagger project comes in.

The project

Vert.x-Swagger is a Maven project providing two modules.

vertx-swagger-codegen

It’s a Swagger-Codegen plugin which adds the capability of generating a Java Vert.x web server to the generator.

The generated server mainly contains:

  • POJOs for definitions
  • one Verticle per tag
  • one MainVerticle, which manages the other API verticles and starts an HttpServer

The MainVerticle uses vertx-swagger-router.

vertx-swagger-router

The main class of this module is SwaggerRouter. It’s more or less a factory (and maybe I should rename the class) that can create a Router, using the swagger definition file to configure all the routes. For each route, it extracts parameters from the request (query, path, header, body, form) and sends them on the event bus, using either the operationId as the address or a computed id (just a parameter in the constructor).
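As a rough sketch of what sits on the other end of the event bus, a service verticle could consume those messages directly. The address ("getBottles") and the JSON payload shape below are assumptions for illustration; in practice the generated verticles register these consumers for you:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;

public class BottlesServiceVerticle extends AbstractVerticle {
    @Override
    public void start() {
        // "getBottles" is a hypothetical operationId from the swagger file; SwaggerRouter
        // sends the parameters it extracted from the HTTP request to this address.
        vertx.eventBus().<JsonObject>consumer("getBottles", message -> {
            JsonObject params = message.body(); // query/path/header/body/form parameters
            JsonArray bottles = new JsonArray().add(new JsonObject().put("name", "Chablis"));
            // The reply is what the router uses to build the HTTP response (assumption).
            message.reply(bottles.encode());
        });
    }
}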

Let’s see how it works

For this post, I will use a simplified swagger file, but you can find a more complex example here, based on the petstore swagger file.

Generating the server

First, choose your swagger definition. Here’s a YAML file, but it could be a JSON file:

Then, download these libraries: swagger-codegen-cli-2.2.2.jar and vertx-swagger-codegen-1.0.0.jar.

Finally, run this command:

java -cp /path/to/swagger-codegen-cli-2.2.2.jar:/path/to/vertx-swagger-codegen-1.0.0.jar io.swagger.codegen.SwaggerCodegen generate \
  -l java-vertx \
  -o path/to/destination/folder \
  -i path/to/swagger/definition \
  --group-id your.group.id \
  --artifact-id your.artifact.id

For more information about how Swagger-Codegen works, you can read https://github.com/swagger-api/swagger-codegen#getting-started

You should have something like that in your console:

[main] INFO io.swagger.parser.Swagger20Parser - reading from ./wineCellarSwagger.yaml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/Bottle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/model/CellarInformation.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/BottlesApi.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/BottlesApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/InformationApi.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/verticle/InformationApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/swagger.json
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/java/io/swagger/server/api/MainApiVerticle.java
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/src/main/resources/vertx-default-jul-logging.properties
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/pom.xml
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/README.md
[main] INFO io.swagger.codegen.AbstractGenerator - writing file [path/to/destination/folder]/.swagger-codegen-ignore
And this in your destination folder:

Generated sources

What has been created?

As you can see in 1, the vertx-swagger-codegen plugin has created one POJO per definition in the swagger file.

Example: the Bottle definition

In 2a and 2b you can find:

  • an interface which contains one function per operation
  • a verticle which defines all the operationIds and creates the EventBus consumers

Example: the Bottles interface

Example: the Bottles verticle

… and now?

On line 23 of BottlesApiVerticle.java, you can see this:

BottlesApi service = new BottlesApiImpl();
This line will not compile until the BottlesApiImpl class is created.

In all the XXXAPIVerticles, you will find a variable called service. It is of type XXXAPI and it is instantiated with a XXXAPIImpl constructor. This class does not exist yet, since it is the business logic of your API.

And so you will have to create these implementations.
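As a purely illustrative sketch (the real method signatures come from the generated XXXAPI interfaces, so the stand-in interface below is an assumption), an implementation could be as simple as:

import io.vertx.core.AsyncResult;
import io.vertx.core.Future;
import io.vertx.core.Handler;
import java.util.Arrays;
import java.util.List;

// Stand-in for the generated interface; the real one is produced by vertx-swagger-codegen.
interface BottlesApi {
    void getBottles(Handler<AsyncResult<List<String>>> resultHandler);
}

// Your business logic lives in the *Impl class that the generated verticle instantiates.
public class BottlesApiImpl implements BottlesApi {
    @Override
    public void getBottles(Handler<AsyncResult<List<String>>> resultHandler) {
        resultHandler.handle(Future.succeededFuture(Arrays.asList("Chablis", "Pommard")));
    }
}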

Fine, but what if I don’t want to build my API like this?

Well, Vert.x is unopinionated, but the way vertx-swagger-codegen creates the server stub is not. So if you want to implement your API the way you want, while still enjoying dynamic routing based on a swagger file, the vertx-swagger-router library can be used standalone.

Just import this jar into your project:

You will be able to create your Router like this:

FileSystem vertxFileSystem = vertx.fileSystem();
vertxFileSystem.readFile(YOUR_SWAGGER_FILE, readFile -> {
    if (readFile.succeeded()) {
        Swagger swagger = new SwaggerParser().parse(readFile.result().toString(Charset.forName("utf-8")));
        Router swaggerRouter = SwaggerRouter.swaggerRouter(Router.router(vertx), swagger, vertx.eventBus(), new OperationIdServiceIdResolver());
        // […]
    } else {
        // […]
    }
});
You can omit the last parameter in SwaggerRouter.swaggerRouter(...); in that case, addresses will be computed instead of using the operationId from the swagger file. For instance, GET /bottles/{bottle_id} will become GET_bottles_bottle-id

Conclusion

Vert.x and Swagger are great tools to build and document an API, but using both in the same project can be painful. The Vert.x-Swagger project was made to save time, letting developers focus on business code. It can be seen as an API framework over Vert.x.

You can also use the SwaggerRouter in your own project without using Swagger-Codegen.

In future releases, more information from the swagger file will be used to configure the router, and other languages will certainly be supported.

Though Vert.x is polyglot, the Vert.x-Swagger project only supports Java. If you want to contribute to support more languages, you’re welcome :)

Thanks for reading.


by phiz71 at May 22, 2017 12:00 AM

N4JS Becomes an Eclipse Project

by Brian Smith (noreply@blogger.com) at May 19, 2017 03:34 PM

We’re proud to announce that N4JS has been accepted as an Eclipse Project and the final official steps are underway. Our team have been working very hard to wrap up the Initial Contribution and are excited to be part of Eclipse. The project will be hosted at https://eclipse.org/n4js, although this currently redirects to the project description while our pages are being created. In the meantime, N4JS is already open source - our GitHub project pages are located at http://numberfour.github.io/n4js/ which contains articles, documentation, the source for N4JS and more.

Some background information about us:
N4JS was developed by Enfore AG, founded in 2009 as NumberFour AG by Marco Boerries. Enfore’s goal is to build an open business platform for 200+ million small businesses and to provide those businesses with the tools and solutions they need to stay competitive in a connected world.

Initially, JavaScript was intended as the main language for third-party developers to contribute to our platform; it runs directly in the browser and it’s the language of the web! One major drawback is the absence of a static type system; this turned out to be an essential requirement for us. We wanted to ensure reliable development of our platform and our own applications, as well as making life easier for third-party contributors to the Enfore platform. That’s the reason why we developed N4JS, a general-purpose programming language based on ECMAScript 5 (commonly known as JavaScript). The language combines the dynamic aspects of JavaScript with the strengths of Java-like types to facilitate the development of flexible and reliable applications.

N4JS is constantly growing to support many new modern language features as they become available. Some of the features already supported are concepts introduced in ES6 including arrow functions, async/await, modules and much more. Our core team are always making steady improvements and our front end team make use of the language and IDE daily for their public-facing projects. For more information on how the N4JS language differs from other JavaScript variants introducing static typing, see our detailed FAQ.

Why Eclipse?
For us, software development is much more than simply writing code, which is why we believe in IDEs and Eclipse in particular. We were looking for developer tools which leverage features like live code validation, content assist (aka code completion), quick fixes, and a robust testing framework. Contributors to our platform can benefit from these resources for their own safe and intuitive application development.

We tried very hard to design N4JS so that Java developers feel at home when writing JavaScript without sacrificing JavaScript’s support for dynamic and functional features. Our vision is to provide an IDE for statically-typed JavaScript that feels just like JDT. This is why we strongly believe that N4JS could be quite interesting in particular for Eclipse (Java) developers. Aside from developers who are making use of N4JS, there are areas in the development of N4JS itself which would be of particular interest to committers versed in type theory, semantics, EMF, Xtext and those who generally enjoy solving the multitude of challenges involved in creating new programming languages.

What’s next?
While we are moving the project to Eclipse, there are plenty of important checks that must be done by the Eclipse Intellectual Property Team. The Initial Contribution is under review with approximately thirty Contribution Questionnaires created. This is a great milestone for us and reflects the huge effort involved in the project to date. We look forward to joining Eclipse, taking part in the ecosystem in an official capacity and seeing what the community can do with N4JS. While we complete these final requirements, we want to extend many thanks to all at Eclipse who are helping out with the process so far!


by Brian Smith (noreply@blogger.com) at May 19, 2017 03:34 PM

Open Testbeds, DB Case Study, and IoT Events

by Roxanne on IoT at May 19, 2017 01:02 PM

The Eclipse IoT community has been working hard on some pretty awesome things over the past few months! Here is a quick summary of what has been happening.

Open Testbeds

We recently announced the launch of Eclipse IoT Open Testbeds. Simply put, they are collaborations between vendors and open source communities that aim to demonstrate and test commercial and open source components needed to create specific industry solutions.

The Asset Tracking Management Testbed is the very first one! It is a collaboration between Azul Systems, Codenvy, Eurotech, Red Hat, and Samsung’s ARTIK team. It demonstrates how assets with various sensors can be tracked in real-time, in order to minimize the cost of lost or damaged parcels. You can learn more about the Eclipse IoT Open Testbeds here.

Watch Benjamin Cabé present the Asset Tracking testbed demo, recorded at the Red Hat Summit in Boston this month.


Case Study

We have been working with Deutsche Bahn (DB) and DB Systel to create a great case study that demonstrates how open source IoT technology is being used on their German railway system. They are currently using two Eclipse IoT projects, Eclipse Paho and Eclipse Mosquitto, among other technologies. In other words, if you’ve taken a DB train in Germany, you might have witnessed the “invisible” work of Eclipse IoT technology at the station or on board. How awesome is that?!

Case Study — Eclipse IoT and DB

Upcoming IoT Events

I am currently working on the organization of two upcoming Eclipse IoT Days that will take place in Europe this fall! 🍂 🍁 🍃 We are currently accepting talks for both events. Go on, submit your passion! I am excited to read your proposal :)

Eclipse IoT Day @ Thingmonk
September 11 | London, UK
📢 Email us your proposal iot at eclipse dot org

Eclipse IoT Day @ EclipseCon Europe
October 24 | Ludwigsburg, Germany
📢 Propose a talk

I look forward to meeting you in person at both events!

— Roxanne (Yes, I decided to sign this blog post.)


by Roxanne on IoT at May 19, 2017 01:02 PM

Installing Red Hat Developer Studio 10.2.0.GA through RPM

by jeffmaury at May 19, 2017 12:23 PM

With the release of Red Hat JBoss Developer Studio 10.2, it is now possible to install Red Hat JBoss Developer Studio as an RPM, currently available as a tech preview. This article describes the steps to follow in order to install it.

Red Hat Software Collections

The JBoss Developer Studio RPM relies on Red Hat Software Collections. You do not need to install Red Hat Software Collections itself, but you do need to enable its repositories before starting the installation of Red Hat JBoss Developer Studio.

Enabling the Red Hat Software Collections base repository

The identifier for the repository is rhel-server-rhscl-7-rpms on Red Hat Enterprise Linux Server and rhel-workstation-rhscl-7-rpms on Red Hat Enterprise Linux Workstation.

The command to enable the repository on Red Hat Enterprise Linux Server is:

sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms

The command to enable the repository on Red Hat Enterprise Linux Workstation is:

sudo subscription-manager repos --enable rhel-workstation-rhscl-7-rpms

For more information, please refer to the Red Hat Software Collections documentation.

JBoss Developer Studio repository

As this is a tech preview, you need to manually configure the JBoss Developer Studio repository.

Create a file /etc/yum.repos.d/rh-eclipse46-devstudio.repo with the following content:

[rh-eclipse46-devstudio-stable-10.x]
name=rh-eclipse46-devstudio-stable-10.x
baseurl=https://devstudio.redhat.com/static/10.0/stable/rpms/x86_64/
enabled=1
gpgkey=https://www.redhat.com/security/data/a5787476.txt
gpgcheck=1
upgrade_requirements_on_install=1
metadata_expire=24h

Install Red Hat JBoss Developer Studio

You’re now ready to install Red Hat JBoss Developer Studio through RPM.

Enter the following command:

sudo yum install rh-eclipse46-devstudio

Answer 'y' when the transaction summary is ready, to continue the installation.

Answer 'y' one more time when you see the request to import the GPG public key:

Public key for rh-eclipse46-devstudio .rpm is not installed
Retrieving key from https://www.redhat.com/security/data/a5787476.txt
Importing GPG key 0xA5787476:
 Userid     : "Red Hat, Inc. (development key) <security@redhat.com>"
 Fingerprint: 2d6d 2858 5549 e02f 2194 3840 08b8 71e6 a578 7476
 From       : https://www.redhat.com/security/data/a5787476.txt
Is this ok [y/N]:

After all required dependencies have been downloaded and installed, Red Hat JBoss Developer Studio is available on your system through the standard update channel!

You should see messages like the following:

rh eclipse46 devstudio.log

Launch Red Hat JBoss Developer Studio

From the system menu, mouse over the Programming menu, and the Red Hat Eclipse menu item will appear.

programming menu

Select this menu item and the Red Hat JBoss Developer Studio user interface will then appear:

devstudio

Enjoy!

Jeff Maury


by jeffmaury at May 19, 2017 12:23 PM

EcoreTools: user experience revamped thanks to Sirius 5.0

by Cédric Brun (cedric.brun@obeo.fr) at May 19, 2017 12:00 AM

Every year the Eclipse M7 milestone acts as a very strong deadline for the projects which are part of the release train: it’s then time for polishing and refining!

When your company is responsible for a number of inter-dependent projects, some of them core technologies like EMF Services and the GMF Runtime, others user-facing tools like Acceleo, Sirius or EcoreTools, plus packaging and integration oriented projects like Amalgam or the Eclipse Packaging Project, and all of these releases need to be coordinated, then May is a busy month.

I’m personally involved in EcoreTools, which puts me in the position of a consumer of the other technologies, and my plan for Oxygen was to make use of the Property Views support included in Sirius. This support allows me, as the maintainer of EcoreTools, to specify directly through the .odesign every tab displayed in the Properties view. Just like the rest of Sirius it is 100% dynamic, with no need for code generation or compilation, and completely flexible thanks to the ability to use queries in every part of the definition.

Before Oxygen EcoreTools already had property editors. Some of them were coded by hand and were developed more than 8 years ago. When I replaced the legacy modeler by using Sirius I made sure at that time to reuse those highly tuned property editors. Others I generated using the first generation of the EEF Framework so that I could cover every type of Ecore and benefit from the dialogs to edit properties using double click. The intent at that time was to make the modeler usable in fullscreen when no other view is visible.

Because of this requirement I had to wait for the Sirius team to make its magic: the properties views support was ready for production with Sirius 4.1, but this was not including any support for dialogs and wizards yet.

Then magic happened: the support for dialogs and wizards is now completely merged in Sirius, starting with M7. In EcoreTools, the code responsible for those property editors represents more than 70% of the total code base, which peaks at 28K lines.

Lines of Java code subject to deletion in EcoreTools

In gray are the plugins which are subject to removal once I use this new feature; as a developer, one can only rejoice at the idea of deleting so much code!

I went ahead and started working on this. The schedule was tight, but thanks to the ability to define reflective rules using Dynamic Mappings I could quickly cover everything in Ecore and get those new dialogs working.

New vs old dialogs

Just by using a dozen reflective rules and adding specific Pages or Widgets when needed.

The tooling definition in ecore.odesign

It went so fast I could add new tools for the Generation Settings through a specific tab.

Genmodel properties exposed through a specific tab

And even introduce a link to directly navigate to the Java code generated from the model:

Link opening the corresponding generated Java code.

Even support for EAnnotations could be implemented in a nice way:

Tab to add, edit or delete any EAnnotation

As a tool provider I could focus on streamlining the experience, providing tabs and actions so that end users don’t have to leave the modeler to adapt the generation settings or launch the code generation, and giving visual clues when something is invalid. I went through many variants of these UIs just to get the feel of it; since I get instant feedback, I only need minutes to rule out an option. I have a whole new dimension I can use to make my tool super effective.

This is what Sirius is about, empowering the tool provider to focus on the user experience of its users.

It is just one of the many changes we’ve been working on since last year to improve the user experience of modeling tools. Mélanie and Stéphane will present a talk on this very subject during EclipseCon France in Toulouse: “All about UX in Sirius”.

All of these changes are landing in Eclipse Oxygen starting with M7. As they are newly introduced, I have no doubt I’ll have some polishing and refining to do, and I’m counting on you to report anything suspicious.

EcoreTools: user experience revamped thanks to Sirius 5.0 was originally published by Cédric Brun at CTO @ Obeo on May 19, 2017.


by Cédric Brun (cedric.brun@obeo.fr) at May 19, 2017 12:00 AM

Case Study: Deploying Eclipse IoT on Germany's DB Railway System

May 18, 2017 08:55 AM

We worked with Deutsche Bahn (DB) to find out how they use Eclipse IoT technology on their railway system!

May 18, 2017 08:55 AM

New blog location

by Kim Moir (noreply@blogger.com) at May 17, 2017 09:12 PM

I moved my blog to WordPress.

New location is here https://kimmoir.blog/

by Kim Moir (noreply@blogger.com) at May 17, 2017 09:12 PM

What can Eclipse developers learn from Team Sky’s aggregation of marginal gains?

by Tracy M at May 17, 2017 01:36 PM

The concept of marginal gains, made famous by Team Sky, has revolutionized some sports. The principle is that if you make 1% improvements in a number of areas, in the long run the cumulative gains will be hugely significant. And in that vein, a 1% decline here-and-there will lead to significant problems further down the line.

So how could we apply that principle to the user experience (UX) of Eclipse C/C++ Development (CDT) tools? What would happen if we continuously improved lots of small things in Eclipse CDT? Such as the build console speed? Or a really annoying message in the debugger source window? It is still too soon to analyse the impact of these changes but we believe even the smallest positive change will be worth it. Plus it is a great way to get new folks involved with the project. Here’s a guest post from Pierre Sachot, a computer science student at IUT Blagnac who is currently doing open-source work experience with Kichwa Coders. Pierre has written an experience report on fixing his very first CDT UX issue.

Context

This week I worked with Yannick on fixing the CDT CSourceNotFoundEditor problem – the unwanted error message that Eclipse CDT shows when users are running the debugger and jump into a function which is in another project file. When Eclipse CDT users were running the debugger on a C project, a window was opening on screen. This window was both alarming in appearance and obtrusive. In addition, the message itself was unclear. For example, it could display “No source available for 0x02547”, which is irrelevant to the user because he/she does not have access to this memory address. Several users had complained about it and expressed a desire to disable the window (see: Stack Overflow: “Eclipse often opens editors for hex numbers (addresses?) then fails to load anything”). In this post I will show you how we replaced CSourceNotFoundEditor with a better user experience.

Problem description:

1- The problem we faced was that CSourceNotFoundEditor was displayed in several situations. For example:

  • When the source file was not found
  • When the memory address was known but not the function name
  • When the function name was known

2- We also wanted to tackle that red link! Red lettering is synonymous with big problems – yet the error message was merely informing the user that the source could not be found, so we felt a less alarmist style of text would be more appropriate.


CSourceNotFoundEditor Dialog:

Previous version New version

CSourceNotFoundEditor Preferences:

Previous version New version

How to resolve the problem?

CSourceNotFoundEditor:

CSourceNotFoundEditor is the class called by the openEditor() function; Yannick added a link to the debug preferences page inside it:

  • The first thing to do was to create the “Preferences…” button and a text to go with it. Yannick did this in the createButtons() function.
  • Next, we made it possible for the user to open the Preferences on the correct page – in our case, the Debug page – using this code:
PreferencesUtil.createPreferenceDialogOn(parent.getShell(), "org.eclipse.cdt.debug.ui.CDebugPreferencePage", null, null).open();

“org.eclipse.cdt.debug.ui.CDebugPreferencePage” is the ID of the preference page we want to open, in our case the CDT debug preferences page.
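To make this concrete, here is a minimal, self-contained sketch of what such a createButtons() addition could look like. The class name, label text and layout are illustrative assumptions rather than the actual CDT code; the only piece taken directly from the article is the PreferencesUtil.createPreferenceDialogOn() call with the debug preference page ID.

// Hypothetical sketch (not the actual CDT implementation): a "Preferences..."
// button that opens the CDT debug preference page from the source-not-found editor.
import org.eclipse.swt.SWT;
import org.eclipse.swt.events.SelectionAdapter;
import org.eclipse.swt.events.SelectionEvent;
import org.eclipse.swt.widgets.Button;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Label;
import org.eclipse.ui.dialogs.PreferencesUtil;

public class SourceNotFoundButtonsSketch {

    /** Creates the explanatory text and the "Preferences..." button. */
    void createButtons(Composite parent) {
        Label text = new Label(parent, SWT.WRAP);
        text.setText("The source-not-found behaviour can be configured in the debug preferences.");

        Button prefsButton = new Button(parent, SWT.PUSH);
        prefsButton.setText("Preferences...");
        prefsButton.addSelectionListener(new SelectionAdapter() {
            @Override
            public void widgetSelected(SelectionEvent e) {
                // Open the preference dialog directly on the CDT debug page.
                PreferencesUtil.createPreferenceDialogOn(parent.getShell(),
                        "org.eclipse.cdt.debug.ui.CDebugPreferencePage", null, null).open();
            }
        });
    }
}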

CDebugPreferencePage:

This class is the one which contains the debug preferences page. I set about modifying it so that the CSourceNotFound preferences could be reset and access to them enabled. This included modifying PreferenceMessages.properties, which contains the String values of the buttons, and PreferenceMessages.java to declare and use them. The last thing we did was to create a global value in CCorePreferenceConstants to get and set the display preferences. This we did in 4 stages:

  • First we created a group for the radio buttons. This is in the function createContents().
  • Second, we created the variable intended to store the preference value. This value is a String stored in the CCorePreferenceConstants class. To get a preference String value, you need to use:
DefaultScope.INSTANCE.getNode(CDebugCorePlugin.PLUGIN_ID).get(CCorePreferenceConstants.YOUR_PREFERENCE_NAME, null);

And to store it:

InstanceScope.INSTANCE.getNode(CCorePlugin.PLUGIN_ID).put(CCorePreferenceConstants.YOUR_PREFERENCE_NAME, "Your text");

Here we created a preference named SHOW_SOURCE_NOT_FOUND_EDITOR, which can take 3 values, defined at the beginning of the CDebugPreferencePage class:

/**
* Default value for the source not found editor preference
* @since 6.3
*/
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_DEFAULT = "all_time"; //$NON-NLS-1$

/**
* Use to display the source not found editor all the time
* @since 6.3
*/
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_ALL_THE_TIME = "all_time"; //$NON-NLS-1$

/**
* Use to display the source not found editor only in some cases
* @since 6.3
*/
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_SOMETIMES = "sometimes"; //$NON-NLS-1$

/**
* Use to never display the source not found editor
* @since 6.3
*/
public static final String SHOW_SOURCE_NOT_FOUND_EDITOR_NEVER = "never"; //$NON-NLS-1$
  • Third, we needed to find where to set the values and where to get them. To set the values on your components, use the setValues() function. To store a value, add your code in storeValues(); as its name suggests, it will store the value inside the global preferences variable.
  • The fourth and final stage is really important: you need to put the default value of the preference you want to add in setDefaultValues(), to allow access to the original value of the preferences. (A simplified sketch combining these stages follows this list.)
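To tie the four stages together, here is a simplified, hypothetical sketch of how the radio button group and the setValues()/storeValues()/setDefaultValues() methods could interact. The plug-in ID, preference key and field names are assumptions made for illustration; the real CDebugPreferencePage is considerably more involved.

// Simplified, hypothetical sketch of the four stages described above.
// The plug-in ID, preference key and field names are illustrative only.
import org.eclipse.core.runtime.preferences.DefaultScope;
import org.eclipse.core.runtime.preferences.InstanceScope;
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Button;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Group;

public class ShowEditorPreferenceSketch {

    private static final String PLUGIN_ID = "org.eclipse.cdt.debug.core";   // assumed plug-in ID
    private static final String PREF_KEY = "showSourceNotFoundEditor";      // assumed preference key

    private Button fAllTheTime;
    private Button fSometimes;
    private Button fNever;

    // Stage 1: create the radio button group (called from createContents()).
    void createShowEditorGroup(Composite parent) {
        Group group = new Group(parent, SWT.NONE);
        group.setLayout(new GridLayout(1, false));
        group.setText("Show the Source Not Found editor");
        fAllTheTime = new Button(group, SWT.RADIO);
        fAllTheTime.setText("All the time");
        fSometimes = new Button(group, SWT.RADIO);
        fSometimes.setText("Only when the source file cannot be found");
        fNever = new Button(group, SWT.RADIO);
        fNever.setText("Never");
    }

    // Stage 3 (get): initialize the widgets from the stored preference.
    void setValues() {
        String value = InstanceScope.INSTANCE.getNode(PLUGIN_ID).get(PREF_KEY, "all_time");
        fAllTheTime.setSelection("all_time".equals(value));
        fSometimes.setSelection("sometimes".equals(value));
        fNever.setSelection("never".equals(value));
    }

    // Stage 3 (set): persist the current selection.
    void storeValues() {
        String value = fAllTheTime.getSelection() ? "all_time"
                : fSometimes.getSelection() ? "sometimes" : "never";
        InstanceScope.INSTANCE.getNode(PLUGIN_ID).put(PREF_KEY, value);
    }

    // Stage 4: give the preference a default value in the default scope.
    void setDefaultValues() {
        DefaultScope.INSTANCE.getNode(PLUGIN_ID).put(PREF_KEY, "all_time");
    }
}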

DsfSourceDisplayAdapter:

This is the class which calls CSourceNotFoundEditor, so here, in the openEditor() function, we needed to check the preference options in order to know whether it was possible to display CSourceNotFoundEditor. These checks need to be carried out in the openEditor() function because this is the function which opens the CSourceNotFoundEditor. To do that, we handled two cases:

  • First case in which the user wants to display the Editor all the time
  • Second for when the user only wants to display it if the source file is not found
  • The last case ("never") needs no explicit check, because nothing is done in that case.

To do that, we did it like this:
how to display CSourceNotFoundEditor
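For readers who cannot see the screenshot, here is a small, hypothetical sketch of the kind of check described above. The class, plug-in ID and preference key are illustrative assumptions; only the three-way logic (always, only when the source file is missing, never) comes from the article.

// Hypothetical sketch of the decision added before opening CSourceNotFoundEditor.
// Names such as isSourceFileMissing are illustrative, not the actual CDT fields.
import org.eclipse.core.runtime.preferences.InstanceScope;

public class SourceNotFoundCheckSketch {

    private static final String PLUGIN_ID = "org.eclipse.cdt.debug.core";   // assumed plug-in ID
    private static final String PREF_KEY = "showSourceNotFoundEditor";      // assumed preference key

    /** Decides whether the "source not found" editor should be opened. */
    static boolean shouldOpenSourceNotFoundEditor(boolean isSourceFileMissing) {
        String pref = InstanceScope.INSTANCE.getNode(PLUGIN_ID).get(PREF_KEY, "all_time");
        if ("all_time".equals(pref)) {
            return true;                  // first case: always show the editor
        }
        if ("sometimes".equals(pref)) {
            return isSourceFileMissing;   // second case: only when the source file is missing
        }
        return false;                     // "never": suppress the editor entirely
    }
}

With a helper along these lines, openEditor() only needs one extra conditional before showing the editor, which keeps the change small and easy to review.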

Conclusion

Now users have the ability to disable the CSourceNotFoundEditor window altogether or to choose for themselves when to display it, saving time and improving the user experience of the Eclipse debugger. This is a great example of how working on an open source project can really benefit a whole community of users. But, a word of warning: the CDT project isn't the easiest program to develop or the easiest to master; you need to understand other people's code, and if you change it you need to retain its original logic and style. Fiddly perhaps, but well worth it! The user community will appreciate your efforts and the flow of future coding work will be smoother and more efficient. A better user experience for everyone.



by Tracy M at May 17, 2017 01:36 PM