Eclipse Neon.2: quick demo of three improvements

by howlger at January 19, 2017 02:30 PM

In December 2016, Neon.2 was released with only a few, but nonetheless very helpful, improvements. My video Eclipse Neon.2: quick demo of 3 improvements shows three of them:

  1. IDE – Compare Editor: Swap Left and Right View
  2. Java – Open Projects from File System: new Java and JDK detectors
  3. Arduino C++ Tools

Eclipse Neon.2: quick demo of 3 improvements

Opening a Java project that was not created with Eclipse becomes a no-brainer with the new Java detector used by File > Open Projects from File System. The Arduino Downloads Manager of the Arduino C++ Tools also shows how simple things can be: just choose your Arduino board (or compatible system) and the libraries you want to use. Everything required, e.g. a C++ compiler, will be downloaded and configured for you. Watch Doug's 11-minute video for more details.

There are also Eclipse IDE Git integration improvements, but EGit and JGit forgot to contribute their versions 4.5 (I like auto-stage selected files on Commit…) and 4.6 to Neon.2. To get the latest Git improvements, add the update site http://download.eclipse.org/egit/updates to Window > Preferences > Install/Update > Available Software Sites.

If you missed the last two releases, here are my quick demos of 10 Neon.1 and 22 Neon.0 improvements:

Eclipse Neon: 5:30-minute demo of 22 nice improvements

The next and last Neon update, Neon.3, will be released on March 23, before the next annual main release, Oxygen, on June 28.



by howlger at January 19, 2017 02:30 PM

Eclipse Newsletter | Exploring New Technologies

January 19, 2017 10:42 AM

Jumpstart an Angular project, develop microservices w/fabric8, build a blog app w/JHipster 4, and test Java microservices.

January 19, 2017 10:42 AM

2017 Board Elections | Nominations Open

January 18, 2017 03:40 PM

Nominations for the 2017 Eclipse Foundation Board Election are open for Committer & Sustaining Member representatives.

January 18, 2017 03:40 PM

How to improve your GUI design with usability tests

by Rainer Klute (rainer.klute@itemis.de) at January 18, 2017 09:45 AM

Developing and actually using a graphical user interface (GUI) are two sides of the same coin. As a software developer, have you ever wondered how the users of your application are getting along with the GUI you created for them?

Someone who knows an application and its domain inside out certainly has a different view than someone who is just taking their first steps with it. This article shows, by example, how to improve your GUI with the help of usability tests.

The challenge: designing a supportive user interface

If you are aware of how a suboptimally designed GUI might hamper the user: congratulations! But how can you find out which of several possible user interface variants supports the user experience best? Your own experience in the field will not necessarily yield good advice, because quite likely you are professionally blinkered – ironically, due to that very experience.

You need a method that offers a lot of insights and can help you make the right choices. Such a method is to sketch a few different GUI variants and have potential or actual users check them out. You don't need to implement these variants in your software. A wireframe tool to set up some mockups is sufficient, preferably with some scriptable interactivity.

A wizard page for Xtext

Our case deals with a wizard page for Xtext, a tool for creating domain-specific languages and the associated language infrastructure. Don't worry, you do not need to understand in detail what Xtext is and what it does to follow this use case and the basic principle behind usability tests. Suffice it to say that Xtext is Eclipse-based, has a wizard to create a new Xtext project, and that this wizard includes a particular page to configure some advanced options for the new project. Fig. 1 shows the implementation of this dialog in the current Xtext release 2.10.

Fig. 1: Original wizard page

There has been some discussion going on among Xtext core developers on whether this wizard page's design really satisfies users' needs or not. There wasn't a clear conclusion, so itemis' usability experts were asked to investigate further and run a usability test on the current GUI and some alternatives.

They used Balsamiq Mockups to draw wireframe models of the original wizard page and of two alternative versions. Balsamiq Mockups can mimic some dynamic behavior: you can configure it, e.g., to toggle a certain option from disabled to enabled when some checkbox is checked. This is really nice, because this way users can play around with the mockup interfaces and get a more realistic impression of how the real implementation would behave. Fig. 2 shows a wireframe version of the screenshot in fig. 1.

Fig. 2: Wireframe of the original user interface (variant 1)

Running the usability test

In the relaxed atmosphere of a usability dinner, five software developers were asked to perform a simple task with all three user interface variants: creating a new Xtext project with support for an Eclipse editor front-end. The participants' Xtext experience ranged from none to moderate, and their Eclipse experience from none to senior-expert level. The "New Project" wizard, or at least the "Advanced Xtext Configuration" wizard page, was new to all of them.

While performing the task, they were asked to think aloud and comment on what they saw, what they liked, what they disliked, what irritated them, and so on.

Hidden option dependencies

The most prominent test result: all users stumbled over an unexpected dependency between wizard options. The background: if you want to support a front-end in your Xtext project, you have to check "Generic IDE Support" first and then select the kind of front-end you want, i.e. Eclipse Plugin, IntelliJ IDEA Plugin, or Web Integration. By default, "Generic IDE Support" is preselected in the wizard. However, the user can get into a situation where the option is disabled, e.g. because they unchecked it inadvertently.

No user was able to spot this dependency by just looking at the wizard page. Everyone at first checked "Eclipse Plugin" – only to run into an error message shown at the bottom of the page (see fig. 1 or fig. 2). Not everyone noticed that message immediately, and not everyone was immediately able to tell what to do next. Sure, sooner or later everyone managed to find out that they had to enable "Generic IDE Support" first in order to activate Eclipse support. Still, this is a severe usability issue, because everyone was irritated by the unexpected dependency and by the "unavoidable" necessity of dealing with an error message.

Fig. 3: Automatically setting the "Generic IDE Support" option (variant 2)

A second wizard page variant (fig. 3) copes with this deficiency. It looks almost identical to the first version, but its dynamic behavior is different: if the user checks "Eclipse Plugin", the "Generic IDE Support" option is automatically checked, too, i.e. the user interface does by itself what the user would otherwise have to do. "Generic IDE Support" is also disabled so it cannot be unchecked as long as "Eclipse Plugin" or one of the other front-end options is checked. Users liked this behaviour very much and had no issues fulfilling their task. This holds even though the dependency was still not explicitly visible in the GUI.

Fig. 4: Making option dependencies visible (variant 3)

A third variant of the wizard page visualized the options in a dependency hierarchy (fig. 4). Users were now able to see that

  • "Generic IDE Support" is a requirement for "Eclipse Plugin", "IntelliJ IDEA Plugin", and "Web Integration", and
  • there is no dependency between IDE issues and the remaining options like e.g. testing support or source layout.

On the other hand, some users found it confusing that they could not select "Eclipse Plugin" right away but instead had to check "Generic IDE Support" first.

Overall, users felt that variant 2 supported them best, followed by variant 3. Nobody preferred the original variant 1.

Explain your options!

As an additional result, it turned out that users don't necessarily understand what the options offered by the wizard actually mean. Our testers had quite different ideas – or no idea at all – of what "Eclipse Plugin", "IntelliJ IDEA Plugin", "Web Integration", "Generic IDE Support", "Testing Support", "Preferred Build System" and "Source Layout" might mean. The "Source Layout" option really took the cake: not a single user explained it correctly without seeing the available options. As a consequence, the developers should add tooltips to the options. These tooltips could explain each option in some detail, link to the appropriate section of the documentation, or even include it.

Bottom line

Consider drafting some alternative GUI variants – or have usability experts do that for you – and run usability tests! Your users will benefit, and it might help your software gain adoption.

And please don't take for granted that everyone knows the terms you are familiar with! Explain them to your users!

by Rainer Klute (rainer.klute@itemis.de) at January 18, 2017 09:45 AM

EclipseCon France 2017 | Call for Papers

January 18, 2017 08:40 AM

Time to send us your proposals! Submissions close March 29. Submit by March 15 to be an early-bird pick.

January 18, 2017 08:40 AM

Eclipse Infrastructure Support for IP Due Diligence Types

by waynebeaton at January 16, 2017 10:01 PM

The Eclipse Foundation’s Intellectual Property (IP) Policy was recently updated and we’re in the process of updating our processes and support infrastructure to accommodate the changes. With the updated IP Policy, we introduced the notion of Type A (license certified) and Type B (license certified, provenance checked, and scanned) due diligence types for third-party dependencies that projects can opt to adopt.

With Type A, we assert only that third-party content is license compatible with a project. For Type B third-party content, the Eclipse Foundation’s IP Team invests considerable effort to also assert that the provenance is clear and that the code has been scanned to ensure that it is clear of all sorts of potential issues (e.g. copyright or license violations). The type of due diligence applies at the release level. That is, a project team can decide the level of scrutiny that they’d like to apply on a release-by-release basis.

For more background, please review License Certification Due Diligence and What’s Your (IP Due Diligence) Type?

All new projects at the Eclipse Foundation now start out configured to use Type A by default. We envision that many project teams will eventually employ a hybrid approach, with many Type A releases punctuated by periodic Type B releases.

The default due diligence type is recorded in the project’s metadata, stored in the Project Management Infrastructure (PMI). Any project committer or project lead can navigate to their project page, and click the “Edit” button to access project metadata.

project_edit.png

In the section titled “The Basics” (near the bottom), there’s a place where the project team can specify the default due diligence type for the project (it’s reported on the Governance page). If nothing is specified, Type B is assumed. Specifying the value at the project level is basically a way for the project team to make a statement that their releases tend to employ a certain type of due diligence for third-party content.

project_dd_type.png

Project teams can also specify the due diligence type for third-party content in the release record. Again, a project committer or project lead can navigate to a release record page, and click “Edit” to gain access to the release metadata.

release_dd_type.png

As with projects, the metadata for the IP due diligence type is found in the section titled “The Basics”. The field’s description is not exactly correct: if the type is not specified in the release record, our processes assume the value specified in the project metadata. We’ll fix this.

When the time comes to create a request to the Eclipse Foundation’s IP Team to review a contribution (a contribution questionnaire, or CQ), committers will see an extra question on the page for third-party content.

create_cq.png

As an aside, committers that have tried to use our legacy system for requesting IP reviews (the Eclipse Developer Portal) will have noticed that we’re now redirecting those requests to the PMI-based implementation. Project committers will find a direct link to this implementation under the Committer Tools block on their project’s PMI page.

We’ve added an extra field, Type, to the record that gets created in our IP tracking system (IPZilla); it will be set to Type_A or Type_B (in some cases, it may be empty, “–“). We’ve also added a new state, license_certified, which indicates that the license has been checked and that the content can be used by the project in any Type A release.

Any content that is approved can be assumed to also be license certified.

There are many other questions that need to be answered, especially with regard to IP Logs, mixing IP due diligence types, and branding downloads. I’ll try to address these topics and more in posts over the next few days.



by waynebeaton at January 16, 2017 10:01 PM

Using MQTT-SN over BLE with the BBC micro:bit

by Benjamin Cabé at January 16, 2017 11:11 AM

The micro:bit is one of the best IoT prototyping platforms I’ve come across in the past few months.

The main MCU is a Nordic nRF51822 with 16K RAM and 256K Flash. A Freescale KL26Z conveniently implements a USB interface as well as a mass storage driver, so deploying code onto the micro:bit is as simple as copying a .hex file over USB (if you’re familiar with the mbed ecosystem, this will sound familiar :-)).

The board is packed with all the typical sensors and actuators you need for prototyping an IoT solution: accelerometer, compass, push buttons, an LED matrix, … What’s really cool is the built-in BLE support which, combined with the battery connector, makes it really easy to have a tetherless, low-power [1] IoT testing device.

So how does one take the micro:bit and turn it into an IoT device? Since there is no Internet connectivity, you need to rely on some kind of gateway to bridge the constrained device that is the micro:bit to the Internet. You can of course implement your own protocol to do just that, but then you are basically reinventing the wheel. That’s the reason why I thought the micro:bit would be ideal for experimenting with MQTT-SN.

You can jump directly to the video tutorial at the end of the post, and come back later for more in-depth reading.

What is MQTT-SN and why you should care

If I were to oversimplify things, I would just say that MQTT-SN (which stands for “MQTT for Sensor Networks”, by the way) is an adaptation of the MQTT protocol to constrained devices, both from a footprint/complexity standpoint and to accommodate the fact that constrained devices may not have TCP/IP support.

MQTT-SN is designed to make the packets as small as possible. One example: an MQTT-SN client registers the topic(s) it wishes to use with the server, so that subsequent PUBLISH or SUBSCRIBE exchanges only have to carry a 2-byte ID, as opposed to a possibly very long UTF-8 string.

Like I said before, you really don’t want to reinvent your own protocol, and using MQTT-SN just makes a lot of sense since it bridges very naturally to good ol’ MQTT.

Setting up an MQTT-SN client on the micro:bit

The micro:bit supports the BLE UARTService from Nordic, which essentially mimics a classical UART by means of two BLE characteristics, one for RX and one for TX. This is what we’ll use as our communication channel.

The Eclipse Paho project provides an MQTT-SN embedded library that turns out to be really easy to use. It allows you to serialize and deserialize MQTT-SN packets; the only remaining thing to do is to effectively transmit them (send or receive) over your communication channel – BLE UART in our case.

In order to show you how simple the library is to use, here’s an example of how you would issue a CONNECT:

/* buf and buflen are assumed to be declared earlier, e.g.:
   unsigned char buf[128]; int buflen = sizeof(buf); */
MQTTSNPacket_connectData options = MQTTSNPacket_connectData_initializer;
options.clientID.cstring = microbit_friendly_name();
int len = MQTTSNSerialize_connect(buf, buflen, &options);
int rc = transport_sendPacketBuffer(buf, len);

/* wait for connack */
rc = MQTTSNPacket_read(buf, buflen, transport_getdata);
if (rc == MQTTSN_CONNACK)
{
    int connack_rc = -1;

    if (MQTTSNDeserialize_connack(&connack_rc, buf, buflen) != 1 || connack_rc != 0)
    {
        return -1;
    }
    else {
        // CONNECTION OK - continue
    }
} else {
    return -1;
}
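
Following the same pattern, registering a topic name and then publishing with the returned 2-byte ID could look roughly like the sketch below. This is not code from the post, but an illustration based on the Paho MQTT-SN embedded serializer; the topic name, buffer size, and packet IDs are made up, and error handling is omitted:

unsigned char pkt[64];
int pktlen = sizeof(pkt);
MQTTSNString topicstr;
topicstr.cstring = "microbit/accel";   /* hypothetical topic name */
unsigned short topicid = 0;

/* Register the topic name once... */
int len = MQTTSNSerialize_register(pkt, pktlen, 0, 1, &topicstr);
transport_sendPacketBuffer(pkt, len);

if (MQTTSNPacket_read(pkt, pktlen, transport_getdata) == MQTTSN_REGACK)
{
    unsigned short packetid;
    unsigned char returncode;
    if (MQTTSNDeserialize_regack(&topicid, &packetid, &returncode, pkt, pktlen) == 1
        && returncode == 0)
    {
        /* ...then every subsequent PUBLISH carries only the 2-byte topic ID,
           not the full UTF-8 topic string: */
        MQTTSN_topicid topic;
        topic.type = MQTTSN_TOPIC_TYPE_NORMAL;
        topic.data.id = topicid;
        unsigned char payload[] = "{\"x\":102}";
        len = MQTTSNSerialize_publish(pkt, pktlen, 0, 0, 0, 0, topic,
                                      payload, sizeof(payload) - 1);
        transport_sendPacketBuffer(pkt, len);
    }
}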

Now what’s behind the transport_sendPacketBuffer and transport_getdata functions? You’ve guessed correctly: this is where we either send or read a buffer to/from the BLE UART.
Using the micro:bit UART service API, the code for transport_getdata is indeed very straightforward:

int transport_getdata(unsigned char* buf, int count)
{
    int rc = uart->read(buf, count, ASYNC);
    return rc;
}
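
The sending side is symmetric. The following is not from the post, but a plausible counterpart assuming the same MicroBitUARTService instance (uart) and its send API:

int transport_sendPacketBuffer(unsigned char* buf, int len)
{
    // Write the serialized MQTT-SN packet to the BLE UART TX characteristic.
    int rc = uart->send(buf, len, ASYNC);
    return rc;
}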

You can find the complete code for publishing the micro:bit accelerometer data over BLE on my GitHub. Note that, for the sake of simplicity, I’ve disabled Bluetooth pairing so that connecting to a BLE/MQTT-SN gateway just works out of the box.

MQTT-SN gateway

There are a few MQTT-SN gateways available out there, and you should feel free to use whichever floats your boat. Some (most?) MQTT-SN gateways also behave as regular MQTT brokers, so you won’t necessarily have to bridge the MQTT-SN devices to an MQTT broker strictly speaking, but can rather use the gateway directly as your MQTT broker.
For my tests, I’ve been pretty happy with RSMB, an Eclipse Paho component, which you can get from GitHub.

The README of the project is pretty complete and you should be able to have your RSMB broker compiled in no time. The default configuration file for RSMB should be named broker.cfg (you can specify a different configuration file on the command line, of course).
Below is an example configuration that makes RSMB behave both as a good ol’ MQTT broker and as an MQTT-SN gateway, bridged to iot.eclipse.org’s MQTT sandbox broker. Note that in my example I only care about publishing messages, so the bridge is configured in out mode, meaning that messages only flow from my MQTT-SN devices to iot.eclipse.org and not the other way around. Your mileage may vary if you also want your MQTT-SN devices to be able to subscribe to messages, in which case the bridging mode should be set to both.

# will show you packets being sent and received
trace_output protocol

# MQTT listener
listener 1883 INADDR_ANY mqtt

# MQTT-S listener
listener 1884 INADDR_ANY mqtts

# QoS 2 MQTT-S bridge
connection mqtts
  protocol mqtt
  address 198.41.30.241:1883
  topic # out
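
Once compiled, starting the broker is then just a matter of pointing the binary at that file, e.g. (assuming the broker_mqtts binary that the RSMB build produces):

./broker_mqtts broker.cfg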

Bridging the BLE device(s) to the MQTT-SN gateway

Now there is still one missing piece, right? We need some piece of software to forward the messages coming from the BLE link to the MQTT-SN gateway.

I’ve adapted an existing Node.js application that does just that. For each BLE device that attaches to it, it creates a UDP socket to the MQTT-SN gateway, and transparently routes packets back and forth. When the micro:bit “publishes” an MQTT-SN packet, it is just as if it were directly talking to the MQTT-SN gateway.

The overall architecture is as follows:

Note that it would be more elegant (and would also avoid some nasty bugs [2]) to leverage MQTT-SN’s encapsulation mechanism, to make the bridge even more straightforward and avoid maintaining one UDP socket per BLE device. To quote the MQTT-SN specification:

The forwarder simply encapsulates the MQTT-SN frames it receives on the wireless side and forwards them unchanged to the GW; in the opposite direction, it decapsulates the frames it receives from the gateway and sends them to the clients, unchanged too.

Unfortunately RSMB does not support encapsulated packets at this point, but you can rely on this fork if you want to use encapsulation: https://github.com/MichalFoksa/rsmb.

Visualizing the data: mqtt-spy to the rescue!

Like in my previous article about Android Things, I used mqtt-spy to visualize the data coming from the sensors.

Note that publishing sensor data as JSON might not be the best idea in production: the MTU of a BLE packet is just 20 bytes, and those extra curly braces, commas, and double quotes are that many bytes you won’t be able to use for your MQTT payload. You may want to look at something like CBOR for creating small, yet typed, binary payloads.
However, JSON is of course pretty convenient, since there is a plethora of libraries out there that allow you to easily manipulate the data…
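
As a back-of-the-envelope illustration (my own byte counts for a hypothetical accelerometer sample, not figures from the post):

JSON: {"x":102,"y":-64,"z":988}                  → 25 bytes (already over the 20-byte MTU)
CBOR: a3 61 78 18 66 61 79 38 3f 61 7a 19 03 dc  → 14 bytes for the same data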

Using mqtt-spy, it’s very easy to visualize the values we’re collecting from the accelerometer of the micro:bit, either in “raw” form, or on a chart, using mqtt-spy’s ability to parse JSON payloads.

Video tutorial and wrap-up

I’ve wanted to give MQTT-SN a try for a long time, and I’m really happy I finally took the time to do so. All in all, I would summarize my findings as follows:

  • The Eclipse Paho MQTT-SN embedded client just works! Similarly to the MQTT embedded client, it is very easy to take it and port it to your embedded device, and no matter what actual transport layer you are using (Bluetooth, Zigbee, UDP, …), you essentially just have to provide an implementation of “transport_read” and “transport_write”.
  • You may want to be careful when doing things like “UART over BLE”. The main point of BLE is that it’s designed to be really low-power, so if you communicate too much or remain paired with the gateway all the time, you will likely kill your battery in no time!
  • The nRF5x series from Nordic is very widely available on the market, so it would be really interesting to run a similar MQTT-SN stack on devices other than the micro:bit, thereby demonstrating how it truly enables interoperability. If you build something like this, I really want to hear from you!
  • Although it’s true that there are not quite as many MQTT-SN libraries and gateways out there as there are for MQTT, the protocol is pretty straightforward, and that shouldn’t prevent you from giving it a try!

Notes:

  1. You should keep in mind that the micro:bit, like other similar boards, is meant to be a prototyping platform; for example, having the KL26Z core take care of the USB controller might not be ideal battery-wise if you only care about doing tetherless BLE communications.
  2. RSMB expects the first packet received on an incoming UDP connection to be a CONNECT packet. If the bridge forwards everything to the gateway transparently, that may not always be the case. If, instead, it takes care of encapsulating all MQTT-SN packets properly, you then need only one UDP socket from your BLE/UDP bridge to the gateway.

by Benjamin Cabé at January 16, 2017 11:11 AM

ECF 3.13.4 now available

by Scott Lewis (noreply@blogger.com) at January 15, 2017 06:13 PM

ECF 3.13.4 is now available. This is a maintenance release, with bug fixes for the Eclipse tooling for OSGi Remote Services and an update of the Apache HttpClient filetransfer provider contributed to Eclipse.

by Scott Lewis (noreply@blogger.com) at January 15, 2017 06:13 PM

What’s Your (IP Due Diligence) Type?

by waynebeaton at January 13, 2017 07:01 PM

Long-time Eclipse committer Ian Bull initiated an interesting short chat on Twitter yesterday about one big challenge in intellectual property (IP) management. Ian asked about the implications of somebody forking an open source project, changing the license in that fork, and then distributing the work under the new license.

We can only surmise why somebody might do this (at least in the hypothetical case), but my optimistic nature tends toward assuming that this sort of thing isn’t done maliciously. But, frankly, this sort of thing does happen and the implications are the same regardless of intent.

Even-longer-time Eclipse committer Doug Schaefer offered an answer.

The important takeaway is that changing a license on intellectual property that you don’t own is probably bad, and everybody who touches it will potentially be impacted (e.g. face litigation). I say “probably bad”, because some licenses actually permit relicensing.

Intellectual property management is hard.

The Eclipse Foundation has a dedicated team of intellectual property analysts who do the hard work on behalf of our open source project teams. The IP Team performs analysis on the project code that will be maintained by the project and on third-party libraries that are maintained elsewhere. It’s worth noting that there is no such thing as zero risk; the Eclipse IP Team’s work is concerned with minimising, understanding, and documenting risk. When they reject a contribution or a third-party library use request, they do so to the benefit of the project team, adopters of the project code, and everybody downstream.

In yesterday’s post, I introduced the notions of Type A (license certified) and Type B (license certified, provenance checked, and scanned) due diligence. The scanned part of Type B due diligence includes—among many other things—the detection of the sort of relicensing that Ian asked about.

Since we don’t engage in the same sort of deep dive into the code, we wouldn’t detect this sort of thing with the license certification process that goes with Type A. That is, of course, not to say that it’s okay to use inappropriately relicensed third-party code in a Type A release; we just wouldn’t detect it via Type A license certification due diligence. So there is a heightened risk, associated with Type A relative to Type B, to consider.

Type B due diligence is more resource intensive and so potentially takes a long time to complete. One of the great benefits of Type A is that the analysis is generally faster, enabling a project team to get releases out quickly. For this reason, I envision a combined approach (some Type A releases mixed with less frequent Type B releases) being appealing to many project teams.

So project teams need to decide, for themselves and for their downstream consumers, what sort of due diligence they require. I’ve already been part of a handful of these discussions and am more than happy to participate in more. Project teams: you know how to find me.

It’s worth noting that the Eclipse Foundation’s IP Team still does more due diligence with Type A analysis than any other open source software foundation and many commercial organisations. If a committer suspects that shenanigans may be afoot, they can ask the IP Team to engage in a deeper review (a Type A project release can include Type B approved artifacts).

April wrapped up the Twitter conversation nicely.

Indeed. Kudos to the Eclipse Intellectual Property Team.

If you want to discuss the differences between the types of due diligence, our implementation of the Eclipse IP Policy changes, or anything else, I’ll be at Eclipse Converge and Devoxx US. Register today.

Eclipse Converge



by waynebeaton at January 13, 2017 07:01 PM

License Certification Due Diligence

by waynebeaton at January 12, 2017 08:02 PM

With the changes in the Eclipse Intellectual Property (IP) Policy made in 2016, the Eclipse Foundation now offers two types of IP Due Diligence for the third-party software used by a project. Our Type A Due Diligence involves a license certification only and our Type B Due Diligence provides our traditional license certification, provenance check, and code scan for various sorts of anomalies. I’m excited by this development at least in part because it will help new projects get up to speed more quickly than they could have in the past.

Prior to this change, project teams would have to wait until the full application of what we now call Type B Due Diligence was complete before issuing a release. Now, a project team can opt to push out a Type A release after having all of their third-party libraries license certified.

A project team can decide what level of IP Due Diligence they require for each release. Hypothetically, a project team could opt to make several Type A releases followed by a Type B release, and then switch back. I can foresee this being something that project teams that need to engage in short release cycles will do.

We’ve solicited a few existing projects to try out the new IP Due Diligence type and have already approved a handful of third-party libraries as Type A. The EMO has also started assuming that all new projects use Type A (license certification) by default. As we move forward, we expect that all new projects will employ Type A Due Diligence for all incubation releases and then decide whether or not to switch to Type B (license certification, provenance check, and code scan) for their graduation. There is, of course, no specific requirement to switch at graduation or ever, but we’re going to encourage project teams to defer the decision of whether or not to switch from Type A until that point.

After graduation, project teams can decide what they want to do. We foresee at least some project teams opting to issue multiple regular Type A releases along with an annual Type B release (at this point in time, there is no specific requirement to be Type A or Type B to participate in the simultaneous release).

We’ve started rolling out some changes to the infrastructure to support this update to the IP Due Diligence process. I’ll introduce those changes in my next post.

Update: Based on some Tweets, I changed my intended topic for the next post. Please see What’s Your (IP Due Diligence) Type?

Eclipse Converge



by waynebeaton at January 12, 2017 08:02 PM

Making @Service annotation even cleverer

by Tom Schindl at January 12, 2017 06:45 PM

As some of you might know, e(fx)clipse provides an Eclipse DI extension supporting more powerful features when dealing with OSGi services:

  • Support for dynamics (e.g. if a higher-ranked service comes along, you get it injected, …)
  • Support for service lists
  • ServiceFactory support, because the request is made from the correct bundle

Since tonight’s build, the @Service annotation has support to define:

  • A static compile time defined LDAP-Filter expression
    public class MySQLDIComponent {
      @Inject
      public void setDataSource(
        @Service(filterExpression="(dbtype=mysql)") 
        DataSource ds) {
         // ...
      }
    }
    
    public class H2DIComponent {
      @Inject
      public void setDataSource(
        @Service(filterExpression="(dbtype=h2)") 
        DataSource ds) {
         // ...
      }
    }
    
  • A dynamic LDAP-Filter expression which is calculated at runtime and can change at any time
    public class CurrentDatasource extends BaseValueObservable<String> implements OString {
      @Inject
      public CurrentDatasource(
        @Preference(key="database",defaultValue="h2") String database) {
        super("(dbtype="+database+")");
      }
    
      @Inject
      public void setDatabase(
        @Preference(key="database",defaultValue="h2") String database) {
        setValue("(dbtype="+database+")");
      }
    }
    
    public class DIComponent {
      @Inject
      public void setDataSource(
        @Service(dynamicFilterExpression=CurrentDatasource.class)
        DataSource ds) {
        // ...
      }
    }
    

    You’ll notice that the dynamic provider itself is fully integrated into the DI story 😉



by Tom Schindl at January 12, 2017 06:45 PM

JBoss Tools 4.4.3.AM1 for Eclipse Neon.2

by jeffmaury at January 11, 2017 09:51 PM

Happy to announce the 4.4.3.AM1 (Developer Milestone 1) build for Eclipse Neon.2.

Downloads available at JBoss Tools 4.4.3 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift 3

Although our main focus is bug fixes, we continue to work on providing a better experience for container-based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here; you can find more details on the What’s New page.

Scaling from pod resources

When an application is deployed to OpenShift, it has been possible to scale the pod resources from the service resource.

scale command from service

However, the service was not a very logical place for this, so the command is now also available at the pod level, leading to better usability.

scale command from pod

Enjoy!

Jeff Maury


by jeffmaury at January 11, 2017 09:51 PM

JSON Forms – Day 3 – Extending the UI Schema

by Maximilian Koegel and Jonas Helming at January 11, 2017 09:15 AM

JSON Forms is a framework to efficiently build form-based web UIs. These UIs are targeted at entering, modifying, and viewing data, and are usually embedded within an application. JSON Forms eliminates the need to write HTML templates and JavaScript for manual databinding when creating customizable forms, by leveraging the capabilities of JSON and JSON Schema and by providing a simple and declarative way of describing forms. Forms are then rendered within a UI framework – currently based on AngularJS. If you would like to know more about JSON Forms, the JSON Forms homepage is a good starting point.

In this blog series, we would like to introduce the framework based on a real-world example application, a task tracker called “Make It Happen”. On days 0 and 1 we defined our first form, and on day 2 we introduced the UI schema and adapted it for our sample application.

Day 2 resulted in a functional form looking like this:

jsonforms_blogseries_day2_form

If you would like to follow this blog series please follow us on twitter. We will announce every new blog post on JSON Forms there.

The goal of our third iteration on the “Make It Happen” example is to enhance the data schema with additional attributes and to update the UI schema accordingly.

So far, the JSON Schema for our data entity defined three attributes:

  • “Name” (String) – mandatory
  • “Description” (multi-line String)
  • “Done” (Boolean).

In our third iteration, we add two additional attributes to the Task entity. These additional attributes are:

  • “Due Date” (Date)
  • “Rating” (Integer)

As JSON Forms uses JSON Schema as the basis for all forms, we start by enhancing the data schema with the new attributes. The following listing shows the complete data schema; only due_date and rating had to be added, though:

{
    "type": "object",
    "properties": {
      "name": {
        "type": "string"
      },
      "description": {
        "type": "string"
      },
      "done": {
        "type": "boolean"
      },
      "due_date": {
        "type": "string",
        "format": "date"
      },
      "rating": {
        "type": "integer",
        "maximum": 5
      }
    },
    "required": ["name"]
}

Based on the extended data schema, we also need to extend the UI schema to add the new properties to our rendered form:

{
  "type": "VerticalLayout",
  "elements": [
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/name"
      }
    },
    {
      "type": "Control",
      "label": false,
      "scope": {
        "$ref": "#/properties/done"
      }
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/description"
      },
      "options": {
        "multi":true
      }
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/due_date"
      }
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/rating"
      }
    }
  ]
}

Based on those two schemas, the JSON Form renderer will now automatically produce this form:

jsonforms_blogseries_day3_form

Note that JSON Forms automatically creates the correct widgets for the new attributes: a date picker for “due date” and an input field for “rating”. For rating, it would be nice to have a more specialized control, though. This is possible with JSON Forms and will be described later in this series. Please also note that these controls are automatically bound to the underlying data and provide the default features such as validation.

Another interesting feature often required in forms is controlling the visibility of certain controls based on the current input data. This is supported in JSON Forms; we will describe this rule-based visibility next week.

If you are interested in trying out JSON Forms, please refer to the Getting-Started tutorial. It explains how to set up JSON Forms in your project and how you can try the first steps out yourself. If you would like to follow this blog series, please follow us on twitter. We will announce every new blog post on JSON Forms there.

We hope to see you soon for the next day!




by Maximilian Koegel and Jonas Helming at January 11, 2017 09:15 AM

First Papyrus IC Research/Academia webinar of 2017

by tevirselrahc at January 10, 2017 05:29 PM

If you’ve been following this blog, you already know that I have an Industry Consortium.

And if you looked at the Papyrus Industry Consortium’s (PIC) website, you also know that it has a Research and Academia Committee!

And that committee is known to hold very interesting webinars about various aspects of modeling, open source, and, of course, ME!

Well, the first webinar of the year will happen this Friday, January 13th, at 16:00 – 17:00 CET, 15:00 – 16:00 GMT, 10:00 – 11:00 EST.

Our first speaker of 2017 is none other than Jordi Cabot, ICREA Research Professor at IN3 (Open University of Catalonia), a well-known member of our community with many years of experience as a researcher in Model Driven Engineering and in open-source software and the driving force behind the MOdeling LAnguages blog.

Jordi will be talking about some of the key factors in the success of open-source software projects. His talk is titled:

Wanna see your OSS project succeed? Nurture the community

I hope you will join us for this very interesting talk.

You can find the connection information in the Papyrus IC wiki.



by tevirselrahc at January 10, 2017 05:29 PM

Use the Eclipse Java Development Tools in a Java SE application

January 09, 2017 11:00 PM

Stephan Herrmann has announced that some libraries of the Eclipse Neon.2 release are now available on maven central.

Some Eclipse jars are now available in the Central Repository.

It is now easy to reuse pieces of Eclipse outside any Eclipse-based application. Let me share a simple example with you: using the Java code formatter of Eclipse JDT in a simple Java main class.

Step 1: create a very simple Maven project. You will need org.eclipse.jdt.core as a dependency.

Listing 1. Example pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>example</groupId>
  <artifactId>java-formatter</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>org.eclipse.jdt</groupId>
      <artifactId>org.eclipse.jdt.core</artifactId>
      <version>3.12.2</version>
    </dependency>
  </dependencies>
</project>

Step 2: write a Java class with a main method.

Listing 2. Example main class
import java.util.Properties;

import org.eclipse.jdt.core.JavaCore;
import org.eclipse.jdt.core.ToolFactory;
import org.eclipse.jdt.core.formatter.CodeFormatter;
import org.eclipse.jdt.internal.compiler.impl.CompilerOptions;
import org.eclipse.jface.text.BadLocationException;
import org.eclipse.jface.text.Document;
import org.eclipse.jface.text.IDocument;
import org.eclipse.text.edits.TextEdit;

public class MainFormatter {

  public static void main(String[] args) {
    String result;

    String javaCode = "public class MyClass{ "
                        + "public static void main(String[] args) { "
                        + "System.out.println(\"Hello World\");"
                        + " }"
                        + " }";

    Properties prefs = new Properties();
    prefs.setProperty(JavaCore.COMPILER_SOURCE, CompilerOptions.VERSION_1_8);
    prefs.setProperty(JavaCore.COMPILER_COMPLIANCE, CompilerOptions.VERSION_1_8);
    prefs.setProperty(JavaCore.COMPILER_CODEGEN_TARGET_PLATFORM, CompilerOptions.VERSION_1_8);

    CodeFormatter codeFormatter = ToolFactory.createCodeFormatter(prefs);
    IDocument doc = new Document(javaCode);
    try {
      TextEdit edit = codeFormatter.format(CodeFormatter.K_COMPILATION_UNIT | CodeFormatter.F_INCLUDE_COMMENTS,
                                             javaCode, 0, javaCode.length(), 0, null);
      if (edit != null) {
        edit.apply(doc);
        result = doc.get();
      }
      else {
        result = javaCode;
      }
    }
    catch (BadLocationException e) {
      throw new RuntimeException(e);
    }

    System.out.println(result);
  }
}

Step 3: there is no step 3! You can just run your code in your IDE, or from the command line using Maven to compute your classpath.
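
For instance, a single command line for compiling and running the example might look like this (assuming the exec-maven-plugin, which is not part of the original post):

mvn compile exec:java -Dexec.mainClass=MainFormatter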

Console output

The code used in this example is a simplification of what you can find in another great open-source project: JBoss Forge Roaster.


January 09, 2017 11:00 PM

Eclipse Neon.2 is on Maven Central

by Stephan Herrmann at January 09, 2017 10:21 PM

It’s done, finally!

Bidding farewell to my pet peeve

In my job at GK Software I have the pleasure of developing technology based on Eclipse. But the colleagues consuming my technology work on software that has no direct connection to Eclipse or OSGi. Their build technology of choice is Maven (without Tycho, that is). So whenever their build touches my technology, we are facing a “challenge”. It doesn’t make a big difference whether they are just invoking a code generator built using Xtext etc. or whether some Eclipse technology should actually be included in their application runtime.

Among many troubles, I recall one situation that really opened my eyes: one particular build had been running successfully for some time, until one day it was fubar. One Eclipse artifact could no longer be resolved. There followed long nights of searching for why that artifact might have disappeared, but we reassured ourselves: nothing had disappeared. Quite the contrary, somewhere on the wide internet (Maven Central, to be precise) a new artifact had appeared. So what? Well, that artifact was the same one we also had on our internal servers. Well, if it’s the same, what’s the buzz? It turned out it had a one-character difference in its version: instead of 1.2.3.v20140815 its version was 1.2.3-v20140815. Yes, take a close look, there is a difference. Bottom line: with both almost-identical versions available, Maven couldn’t figure out what to do; maybe each was considered worse than the other, to the effect that Maven simply failed to use either. Go figure.

More stories like this and I realized that relying on Eclipse artifacts in Maven builds was always at the mercy of some volunteers – volunteers who typically don’t have a long-term relationship with Eclipse, and who filled a major gap by uploading individual Eclipse artifacts to Maven Central (thanks to you volunteers, and please don’t take it personally: I’m happy that your work is no longer needed). Anybody who has ever studied the differences between Maven and OSGi (wrt dependencies and building, that is) will immediately see that there are many possible ways to represent Eclipse artifacts (OSGi bundles) in a Maven pom. The resulting “diversity” was one of my pet peeves in my job.

At this point I decided to be the next volunteer – not another one to screw up other people’s builds, but one who would collaborate with the powers that be at Eclipse.org to produce the official uploads to Maven Central.

As of today, I can report that this dream has become reality: all relevant artifacts of Neon.2 that are produced by the Eclipse Project are now “officially” available from Maven Central.

Bridging between universes

I should like to report some details of how our artifacts are mapped into the Maven world:

The main tool in this endeavour is the CBI aggregator, a model-based tool for transforming p2 repositories in various ways. One of its capabilities is to create a Maven repository (a dual-use repo actually, but the p2 side of this is immaterial to this story). The tool does a great job of extracting metadata from the p2 repo in order to create “meaningful” pom files, the key feature being: it copies all dependency information, originally authored in MANIFEST.MF, into corresponding declarations in the pom file.

Still, a few things had to be settled, either by improving the tool, by fine-tuning its input, or by post-processing the resulting Maven repo.

  • Group IDs
    While OSGi artifacts only have a single qualified Bundle-SymbolicName, Maven requires a two-part name: groupId x artifactId. It was easy to agree on using the full symbolic name for the artifactId, but what should the groups be? We settled on these three groups for the Eclipse Project:

    • org.eclipse.platform
    • org.eclipse.jdt
    • org.eclipse.pde
  • Version numbers
    In Maven land, release versions have three segments; in OSGi we maintain a fourth segment (the qualifier) also for releases. To play by Maven rules, we decided to use three-part versions for our uploads to Maven Central. This emphasizes the strategy of publishing only releases, for which the first three parts of the version are required to be unique.
  • 3rd party dependencies
    All non-Eclipse artifacts that we depend on should be referenced by their proper coordinates in Maven land. By default, the CBI aggregator assigns all artifacts to the synthetic group p2.osgi.bundle, but if someone depends on p2.osgi.bundle:org.junit this doesn’t make much sense. In particular, it must be avoided that projects consuming Eclipse artifacts get the same third-party library under two different names (perhaps in different versions?). We identified 16 such libraries, and their proper coordinates.
  • Source artifacts
    Eclipse plug-ins have their source code in corresponding .source plug-ins. Maven has a similar convention, just using a “classifier” instead of appending to the artifact name. In Maven we conform to their convention, so that tools like m2e can correctly pick up the source code from any dependencies.
  • Other meta data
    Then followed a hunt for project URLs, SCM coordinates, artifact descriptions and related data. Much of this could be retrieved from our MANIFEST.MF files; some information is currently mapped using a static, manually maintained mapping. Other information, like licences and organization, is fully static during this process. In the end, all of it was approved by the validation on the OSSRH servers.

If you want to browse the resulting wealth, you may start at

Everything with fully qualified artifact names in these groups (and a date of 2017-01-07 or newer) should be from the new, “official” upload.

This is just the beginning

The bug on which all this has been booked is Bug 484004: Start publishing Eclipse platform artifacts to Maven central. See the word “Start”?

Two follow-up tasks are already on the board:

(1) Migrate all the various scripts, tools, and models to the proper git repo of our releng project. At the end of the day, this process of transformation and upload should become a routine operation to be invoked by our favourite build meisters.

(2) Fix any quirks in the generated pom files. E.g., we already know that the process did not handle fragments in an optimal way. As a result, consuming SWT from the new upload is not straightforward.

Both issues should be handled in or off bug 510072, in the hope that when we publish Neon.3, the new “official” Maven coordinates of Eclipse artifacts will be fit for all real-world use. So: please test, and report in the bug any problems you might find.

(3) I was careful to say “Eclipse Project”. We don’t yet have the magic wand to apply this to literally all artifacts produced in the Eclipse community. Perhaps someone will volunteer to apply the approach to everything from the Simultaneous Release? If we can publish 300+ artifacts, we can also publish 7000+, can’t we? 🙂

happy building!



by Stephan Herrmann at January 09, 2017 10:21 PM

EMF Forms 1.11.0 Feature: Grid Table and more

by Maximilian Koegel and Jonas Helming at January 02, 2017 01:28 PM

With Neon.1, we released EMF Forms 1.11.0. EMF Forms makes it really simple to create forms that edit your data based on an EMF model. To get started with EMF Forms, please refer to our tutorial. In this post, we wish to outline an improvement in the 1.11.0 release: an alternative table renderer based on the Nebula Grid table.

EMF Forms allows you to describe a form-based UI in a simple and technology-independent model, which in turn is translated by a rendering component into the actual UI. Besides controls for simple values and layouts, EMF Forms also supports tables. So, instead of manually implementing columns, databinding, validation, and all the other typical table features, you only need to specify which attributes of which elements shall be displayed in the table. Like all other controls, this is specified in the view model. The following screenshot shows a simple view with one table containing elements of type “Task”.

image13

Please note that the property “Detail Editing” is set to “WithPanel”. This is already a more advanced option of the table renderer: it will display a detail panel below the table when you click on an entry (see the following screenshot). The default, of course, is to directly edit the values in the table cells.

image16

Now imagine how long it would have taken you to implement the table above. In EMF Forms, you can literally do this within a minute. However, there is another scenario, in which the approach is even more powerful.

Imagine, you have manually developed a few tables for your UI using the default SWT table. Now, you have the option to enable “single cell selection”, meaning you can only select a single cell instead of complete rows. This is not possible with SWT Table. Therefore, you must switch to another table implementation, e.g. Nebula Grid or NatTable. In the case in which you manually implemented your tables, you must change all of your code to a new API. However, with EMF Forms, you simply need to provide a new renderer. This component is responsible for interpreting the view model information specifying the table into the running UI. The renderer is then used for all tables in your application, so you only need to do this work once. For the example of the table, EMF Forms already provides an alternative table renderer out of the box. As you can see in the following screenshot, it uses Nebula Grid to render the same table, but enables single cell selection. To use this, just include the new renderer feature (org.eclipse.emf.ecp.view.table.ui.nebula.grid.feature) into your application, and it is again done in less than a minute.

image15

As shown with the table example, enhancing the existing renderers enables all kinds of customizations. Please note that the framework already includes a variety of renderers, but it is also simple to write your own. If you miss any feature or way to adapt it, please provide feedback by submitting bugs or feature requests, or contact us if you are interested in enhancements or support.




by Maximilian Koegel and Jonas Helming at January 02, 2017 01:28 PM

New year resolution for using Eclipse – Hiding the toolbar

by Lars Vogel at January 01, 2017 01:12 PM

Happy 2017.

For this year I plan to use Eclipse without the toolbar. I think this will force me to use more shortcuts, e.g. for perspective switching, for starting the last run program, and the like. It also gives me more “real estate” in the IDE for the code.

If you want to do the same, select Window -> Appearance -> Hide Toolbar from the menu.


by Lars Vogel at January 01, 2017 01:12 PM

Looking Forward to 2017

by Doug Schaefer at December 31, 2016 11:10 PM

I know a lot of people didn’t like how 2016 turned out, especially Americans, but for me it was a year of reflection and renewal.

As the state of the art in user interface frameworks gels around hardware-accelerated graphics, I have been worrying about the future of Eclipse. Its age is really starting to show, and it’s getting harder to find tools developers who want to work with it. And with the murky future of JavaFX, and of Java as a desktop application technology, it’s time to start looking for the next thing in desktop IDE frameworks.

I also spent a lot of the year learning more about what embedded software engineers do. I’ve been building tools for them for many years but hadn’t had a chance to use the tools myself. With Arduino and Raspberry Pi becoming cheap yet powerful and accessible devices, I bought a few of them and am starting to see how fun it is to program these devices to interact with the real world.

There are a few areas where I will be focusing my time in 2017. Here are some quick highlights. One New Year’s resolution I definitely have is to write more, so I’ll be adding details as the year progresses.

Eclipse Two

Those who follow me on Twitter will have noticed me working on a new project that has grown from my fascination with Electron. Electron is the combination of Chromium and Node.js in a desktop application framework. It’s what Visual Studio Code is written with, along with many new and upcoming desktop applications, like Slack. It’s given me a fun opportunity to really learn HTML, CSS, and JavaScript and to think about how I’d build an IDE with them. I have a couple of things running, and you can follow along here on my GitHub account.

Of course, people have asked why not just extend one of the many text editor frameworks, like Visual Studio Code, for what I need. There is a big difference between text editors and IDEs. Text editors seem to focus maniacally on being just text editors. IDEs add different types of visualizations and graphical editors that abstract away some of the more complex aspects of systems development. Web developers may not appreciate it much (yet), but embedded software developers really need the help that these abstractions provide.

My hope is that this work on an Electron-based IDE bears fruit and attracts IDE developers who are excited about the same idea. I’ve called it Eclipse Two (and it’s not e2, BTW, it’s Two), since it’s my full intention that, if the Eclipse community is interested, we’ll bring it there. As it was in the days when Eclipse One was introduced in 2001 and CDT in 2002, we can’t build this by ourselves. It only succeeds with a strong community and with the strong architectural leadership that Eclipse is famous for.

CDT and the Language Server Protocol

The Language Server Protocol (LSP) is quickly becoming the accepted architecture for giving IDEs the language knowledge users expect, and it also allows us to experiment with new IDE front ends, like Eclipse Two. Since it’ll be a few years before a new desktop IDE enters prime time, we’ll need to keep Eclipse and the CDT alive and thriving.

One thing we’re starting to see, thanks to Dr. Peter Sommerlad and friends on the C++ language committee, is the C++ language continuing to evolve and modernize, with new language constructs introduced every three years. It’s going to be very difficult for the small CDT team to keep up.

We need to look for alternative language providers and work with other IDEs, possibly leveraging the LLVM project’s libclang or some other parser that we could hook up to the LSP. That will likely be a lot of work, since we rely on CDT’s parsers for many features that the LSP doesn’t currently support, but I think it’s a long-term direction we need to investigate, and a number of us CDT committers feel the same way.

Arduino and the Electronic Hobbyist

I am still fully committed to the Arduino plug-ins I’ve built for CDT and will continue to enhance them as the Arduino community and the mainstream Arduino IDE evolve. I am still hoping that members of the community will help with code along with their fantastic bug reports. The feedback has been nice to see, and I’m glad the plug-ins have been useful.

The more I look at the work that embedded software engineers do and the incredible complexity of the systems they are working with, the more I am reassured that these developers do indeed need the help a good IDE can give them. Of course, it has to be a good IDE, and I continue to work to understand what that means and to help make it happen.

BTW, I had started on some plug-ins I was using to program the ESP8266 in my demos in 2016. Since then I’ve been in conversation with the ESP32 community, and it’s been great to see that they are already adopting Eclipse and the CDT. Instructions are here if you’re interested. The good news for me is that it’ll give me a chance to stop working on my own plug-ins, which will give me more time to focus on the other things in this list :).

Use an RTOS for your Real-Time System

Programming the ESP8266 gave me some experience with FreeRTOS. In the demo, I have an ultrasonic sensor that I use to trigger different colors in the NeoPixels I also have attached to the chip. All of this is very real-time sensitive: I need to measure the time between two interrupts to calculate the distance from the sensor, and the NeoPixel communications depend on sending a serial stream of data at a very sensitive clock rate. Real time matters.
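The arithmetic itself is simple; the hard part is capturing the interrupt timestamps accurately. Just to spell it out, here is a minimal sketch in plain Java (the speed-of-sound constant and the sample pulse width are illustrative; the real code runs as C interrupt handlers on the chip):

    public class UltrasonicMath {
        // Speed of sound at roughly 20 °C, in centimeters per microsecond.
        private static final double SPEED_OF_SOUND_CM_PER_US = 0.0343;

        // Distance to the obstacle, given the time between the rising and
        // falling edge of the sensor's echo pin (i.e. between two interrupts).
        // The echo pulse covers the round trip, hence the division by two.
        static double distanceCm(long echoPulseMicros) {
            return echoPulseMicros * SPEED_OF_SOUND_CM_PER_US / 2.0;
        }

        public static void main(String[] args) {
            System.out.println(distanceCm(580)); // ~10 cm
        }
    }

A 580 µs pulse is only about 10 cm of range, which is why microsecond-level jitter in the interrupt handling translates directly into measurement error.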

As part of the demo, I was showing CMake and the Launch Bar and how easy it was to switch from building and launching for one system to another. I took the real-time code for the ESP8266 and pretty much ran it as is on my BeagleBone running the QNX Neutrino RTOS, including the interrupt handlers and the NeoPixel code. I can’t imagine doing that on Linux. I know I work for the company, but it really helped me appreciate the Neutrino microkernel architecture and how easy it is to build an embedded system with the tools and APIs we provide.

The problem is, not enough people know about Neutrino and what a good RTOS can offer. Too many people are using Linux in real-time systems because it’s easier to get started with and because it’s what they know, not because it’s the right architecture. One thing I hope to do is to help the cause, spread the word, and make it easier for the community to try it out. What that means, we’ll have to see in the upcoming months.

Beyond the IDE

I’ve made my career as a tools developer, doing what I can to help other software developers build systems. But tools alone aren’t enough. Tools need to be combined with education through demos, tutorials, and other types of instruction. Now imagine combining the two: a tutorial you access on the web that drives your desktop IDE as you learn.

And with that we come full circle, as that’s one of the use cases I hope we can achieve with Eclipse Two! An IDE that not only helps you write and test code and build systems, but also teaches you how best to do that.

Happy New Year and all the best in 2017!

It’s going to be a great year for the Eclipse community and technology and I look forward to helping where I can.


by Doug Schaefer at December 31, 2016 11:10 PM

Internet of Things - Reactive and Asynchronous with Vert.x

by ppatierno at December 29, 2016 12:00 AM


This is a re-publication of the following blog post.

I have to admit … before joining Red Hat I didn’t know about the Eclipse Vert.x project, but it took me only a few days to fall in love with it!

For developers who don’t know what Vert.x is, the best definition is …

… a toolkit to build distributed and reactive systems on top of the JVM using an asynchronous non-blocking development model

The first big thing is that Vert.x lets you develop a reactive system, which means:

  • Responsive: the system responds in an acceptable time;
  • Elastic: the system can scale up and scale down;
  • Resilient: the system is designed to handle failures gracefully;
  • Asynchronous: the interaction with the system is achieved using asynchronous messages.

The other big thing is the asynchronous non-blocking development model. That doesn’t mean multi-threading: thanks to non-blocking I/O (e.g. for handling the network or the file system) and a callback system, it’s possible to handle a huge number of events per second using a single thread (the “event loop”).
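To make the event loop concrete, here is a minimal sketch using the Vert.x 3.x Java API (the class name and timer values are just for illustration):

    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Vertx;

    public class EventLoopVerticle extends AbstractVerticle {
        @Override
        public void start() {
            // Both callbacks run on the same event-loop thread;
            // handlers must never block it.
            vertx.setPeriodic(1000, id ->
                System.out.println("tick on " + Thread.currentThread().getName()));
            vertx.setTimer(5000, id ->
                System.out.println("one-shot callback, same thread"));
        }

        public static void main(String[] args) {
            Vertx.vertx().deployVerticle(new EventLoopVerticle());
        }
    }

As long as every handler returns quickly, that single thread can multiplex an enormous number of timers, sockets, and messages.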

You can find a lot of material on the official web site to better understand what Vert.x is and all its main features; it’s not my objective to explain it all in this very short article, which is mostly … you guessed it … messaging and IoT oriented :-)

In my opinion, all the above features make Vert.x a great toolkit for building Internet of Things applications, where being reactive and asynchronous is a must in order to handle millions of connections from devices and all the messages ingested from them.

Vert.x and the Internet of Things

Vert.x is a toolkit, so it is made of different components. Which of them are useful for IoT?

Starting with the Vert.x Core component, there is support for both versions of the HTTP protocol (1.1 and 2.0) for developing an HTTP server that can expose a RESTful API to devices. Today, a lot of web and mobile developers prefer this protocol for building their IoT solutions, leveraging the deep knowledge they already have of HTTP.
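As a hedged sketch of what that looks like with the Vert.x Core API (the port and the JSON payload are made up for illustration; a real API would route requests by path and method, e.g. with Vert.x Web):

    import io.vertx.core.Vertx;

    public class DeviceApiServer {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            vertx.createHttpServer()
                // Reply to every request with a fixed JSON body.
                .requestHandler(request -> request.response()
                    .putHeader("content-type", "application/json")
                    .end("{\"status\":\"ok\"}"))
                .listen(8080, ar -> {
                    if (ar.succeeded()) {
                        System.out.println("device API listening on port 8080");
                    }
                });
        }
    }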

Regarding more IoT-oriented protocols, there is the Vert.x MQTT server component. It doesn’t provide a full broker, but exposes an API that a developer can use to handle incoming connections and messages from remote MQTT clients and then build business logic on top of it, for example developing a real broker or doing protocol translation (i.e. to/from plain TCP, to/from the Vert.x Event Bus, to/from HTTP, to/from AMQP, and so on). The API raises events for each connection request from a remote MQTT client and for all subsequent incoming messages; at the same time, it provides a way to reply to the remote endpoint. The developer doesn’t need to know how MQTT works on the wire in terms of encoding/decoding messages.
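Here is a minimal sketch, assuming the MQTT server API as it appears in the 3.4.0 development builds (the handler bodies are illustrative):

    import io.vertx.core.Vertx;
    import io.vertx.mqtt.MqttServer;

    public class SimpleMqttServer {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            MqttServer mqttServer = MqttServer.create(vertx);

            mqttServer.endpointHandler(endpoint -> {
                // Raised for each connection request from a remote MQTT client.
                System.out.println("client connected: " + endpoint.clientIdentifier());

                // Raised for each subsequent PUBLISH from that client.
                endpoint.publishHandler(message ->
                    System.out.println("message on " + message.topicName()
                        + ": " + message.payload()));

                // Reply to the remote endpoint: accept the connection
                // (false = no stored session present).
                endpoint.accept(false);
            }).listen(ar -> {
                if (ar.succeeded()) {
                    System.out.println("MQTT server is listening");
                }
            });
        }
    }

Note that nothing here stores messages or matches subscriptions; that is exactly the business logic (broker, translator, …) you are free to build on top.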

For the AMQP 1.0 protocol there are the Vert.x Proton and AMQP bridge components. The former provides a thin wrapper around the Apache Qpid Proton engine and can be used to interact with AMQP-based messaging systems as a client (sender and receiver), or even to develop a server. The latter provides a bridge between the protocol and the Vert.x Event Bus, which is mostly used for communication between deployed Vert.x verticles. Thanks to this bridge, verticles can interact with AMQP components in a simple way.
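A minimal sender sketch with Vert.x Proton (the host, port, and the “telemetry” address are placeholders for a real AMQP 1.0 peer, such as a broker or router):

    import io.vertx.core.Vertx;
    import io.vertx.proton.ProtonClient;
    import io.vertx.proton.ProtonConnection;
    import io.vertx.proton.ProtonSender;
    import org.apache.qpid.proton.Proton;
    import org.apache.qpid.proton.amqp.messaging.AmqpValue;
    import org.apache.qpid.proton.message.Message;

    public class AmqpSenderExample {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            ProtonClient client = ProtonClient.create(vertx);

            client.connect("localhost", 5672, res -> {
                if (res.succeeded()) {
                    ProtonConnection connection = res.result().open();
                    ProtonSender sender = connection.createSender("telemetry").open();

                    // Build an AMQP message via the Qpid Proton engine.
                    Message message = Proton.message();
                    message.setBody(new AmqpValue("hello from Vert.x"));

                    // The handler fires when the remote peer settles the delivery.
                    sender.send(message, delivery ->
                        System.out.println("message settled by the remote peer"));
                }
            });
        }
    }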

Last but not least, there is the Vert.x Kafka client component, which provides access to Apache Kafka for sending messages to and consuming messages from topics and their partitions. A lot of IoT scenarios leverage Apache Kafka to build an ingestion system capable of handling millions of messages per second.
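A minimal consumer sketch with the Vert.x Kafka client (the broker address, group id, and the “telemetry” topic are placeholders):

    import io.vertx.core.Vertx;
    import io.vertx.kafka.client.consumer.KafkaConsumer;

    import java.util.HashMap;
    import java.util.Map;

    public class TelemetryConsumer {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();

            // Standard Kafka consumer properties.
            Map<String, String> config = new HashMap<>();
            config.put("bootstrap.servers", "localhost:9092");
            config.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            config.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            config.put("group.id", "telemetry-group");
            config.put("auto.offset.reset", "earliest");

            KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);

            // Records are delivered on the event loop, one handler call per record.
            consumer.handler(record ->
                System.out.println("partition=" + record.partition()
                    + " offset=" + record.offset()
                    + " value=" + record.value()));

            consumer.subscribe("telemetry");
        }
    }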

Conclusion

The Vert.x code base provides quite interesting components for developing IoT solutions. Some are already available in the current 3.3.3 release (see Vert.x Proton and the AMQP bridge), and others will arrive soon in the upcoming 3.4.0 release (see the MQTT server and the Kafka client). Of course, you don’t need to wait for their official release: even while still under development, you can already adopt these components and provide your feedback to the community.

This ecosystem will grow in the future, and Vert.x will be a leading actor in the world of IoT applications based on a microservices architecture!


by ppatierno at December 29, 2016 12:00 AM