Building a real-time web app with Angular/Ngrx and Vert.x

by benorama at April 26, 2017 12:00 AM

Nowadays, there are multiple tech stacks to build a real-time web app. What are the best choices to build real-time Angular client apps, connected to a JVM-based backend? This article describes an Angular+Vertx real-time architecture with a Proof of Concept demo app.

This is a re-publication of the original Medium post.

Intro

Welcome to the real-time web! It’s time to move on from traditional synchronous HTTP request/response architectures to reactive apps with connected clients (ouch… that’s a lot of buzzwords in just one sentence)!

Real-time app

Image source: https://www.voxxed.com

To build this kind of app, MeteorJS is the new cool kid on the block (v1.0 released in October 2014): a full-stack JavaScript platform to build connected-client reactive applications. It allows JS developers to build and deploy amazing modern web and mobile apps (iOS/Android) in no time, using unified backend+frontend code within a single app repo. That’s a pretty ambitious approach, but it requires a very opinionated and highly coupled JS tech stack, and it’s still a pretty niche framework.

Moreover, we are a Java shop on the backend. At AgoraPulse, we rely heavily on:

  • Angular and Ionic for the JS frontend (with a shared business/data architecture based on Ngrx),
  • Groovy and the Grails ecosystem for the JVM backend.

So my question is:

What are the best choices to build real-time Angular client apps, connected to a JVM-based backend these days?

Our requirements are pretty basic. We don’t need Meteor’s full end-to-end application model. We just want to be able to:

  1. build a reactive app with an event bus on the JVM, and
  2. extend the event bus down to the browser to be able to publish/subscribe to real-time events from an Angular app.
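Conceptually, the browser-side bridge we are after exposes a publish/subscribe surface like the one of the vertx-eventbus.js client (registerHandler, publish). Here is a minimal in-memory sketch of that surface in TypeScript; this is an illustration of the pattern, not the real Vert.x client:

```typescript
// Hypothetical in-memory event bus mirroring the publish/subscribe
// surface we want the browser bridge to expose (illustrative only).
type Handler = (message: { body: any }) => void;

class SimpleEventBus {
  private handlers = new Map<string, Handler[]>();

  // Subscribe a handler to an address.
  registerHandler(address: string, handler: Handler): void {
    const list = this.handlers.get(address) ?? [];
    list.push(handler);
    this.handlers.set(address, list);
  }

  // Deliver a message to every subscriber of an address.
  publish(address: string, body: any): void {
    for (const handler of this.handlers.get(address) ?? []) {
      handler({ body });
    }
  }
}
```

With the real bridge, the same calls travel over SockJS to the server-side event bus instead of an in-process map.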

Server side (JVM)

Reactive apps are a hot topic nowadays, and there are many great libs/platforms for building this type of event-driven architecture on the JVM.

Client side

ReactJS and Angular are the two most popular frameworks right now for building modern JS apps. Most platforms use SockJS to handle real-time connections:

  • Vertx-web provides a SockJS server implementation with an event bus bridge and a vertx-eventbus.js client library (very easy to use),
  • Spring provides WebSocket SockJS support through its Spring Messaging and WebSocket libs (see an example here)

Final choice: Vert.x + Angular

In the end, I’ve chosen to experiment with Vert.x for its excellent Groovy support, distributed event bus, scalability and ease of use.

I enjoyed it very much. Let me show you the result of my experimentation which is the root of our real-time features coming very soon in AgoraPulse v6.0!

Why Vert.x?

Like other reactive platforms, Vert.x is event-driven and non-blocking. It scales very well (even better than Node.js).

Unlike other reactive platforms, Vert.x is polyglot: you can use Vert.x with multiple languages including Java, JavaScript, Groovy, Ruby, Ceylon, Scala and Kotlin.

Unlike Node.js, Vert.x is a general-purpose toolkit and unopinionated. It’s a versatile platform suitable for many things: from simple network utilities to sophisticated modern web applications, HTTP/REST microservices, or a full-blown back-end message-bus application.

Like other reactive platforms, it looks scary in the beginning when you read the documentation… ;) But once you start playing with it, it turns out to be fun and simple to use, especially with Groovy! Vert.x really allows you to build substantial systems without getting tangled in complexity.

In my case, I was mainly interested in the distributed event bus it provides (a core feature of Vert.x).

To validate our approach, we built prototypes with the following goals:

  • share and synchronize a common (Ngrx-based) state between multiple connected clients, and
  • distribute real-time (Ngrx-based) actions across multiple connected clients, which impact local states/reducers.

Note: @ngrx/store is an RxJS-powered state management library for Angular apps, inspired by Redux. It’s currently the most popular way to structure complex business logic in Angular apps.

Redux

Source: https://www.smashingmagazine.com/2016/06/an-introduction-to-redux/

PROOF OF CONCEPT

Here is the repo of our initial proof of concept:

http://github.com/benorama/ngrx-realtime-app

The repo is divided into two separate projects:

  • Vert.x server app, based on Vert.x (version 3.3), managed by Gradle, with a main verticle developed in Groovy lang.
  • Angular client app, based on Angular (version 4.0.1), managed by Angular CLI with state, reducers and actions logic based on @ngrx/store (version 2.2.1)

For the demo, we are using the counter example code (actions and reducers) from @ngrx/store.

The counter client business logic is based on:

  • CounterState interface, counter state model,
  • counterReducer reducer, counter state management based on dispatched actions, and
  • Increment, Decrement, and Reset counter actions.
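A simplified TypeScript sketch of that reducer logic may help. The action type strings below match the demo; the rest is illustrative (in particular, the reset case takes the total from the action payload, which is how the demo initializes client state from the server):

```typescript
// Simplified sketch of the counter reducer (illustrative, not the
// exact @ngrx/store example code).
interface CounterAction { type: string; payload?: number; }

const INCREMENT = '[Counter] Increment';
const DECREMENT = '[Counter] Decrement';
const RESET = '[Counter] Reset';

// A reducer is a pure function: (state, action) -> new state.
function counterReducer(state: number = 0, action: CounterAction): number {
  switch (action.type) {
    case INCREMENT:
      return state + 1;
    case DECREMENT:
      return state - 1;
    case RESET:
      // The demo's reset action carries the server total as payload.
      return action.payload ?? 0;
    default:
      return state;
  }
}
```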

State is maintained server-side with a simple singleton CounterService.

class CounterService {
    // Action types match the @ngrx/store counter example on the client
    static INCREMENT = '[Counter] Increment'
    static DECREMENT = '[Counter] Decrement'
    static RESET = '[Counter] Reset'

    int total = 0

    // Apply an action received from the event bus to the server-side state
    void handleEvent(event) {
        switch (event.type) {
            case INCREMENT:
                total++
                break
            case DECREMENT:
                total--
                break
            case RESET:
                total = 0
                break
        }
    }
}

Client state initialization through Request/Response

The client state is initialized with a simple request/response (or send/reply) on the event bus. Once connected, the client sends a request to the event bus at the address counter::total. The server replies directly with the value of CounterService’s total, and the client locally dispatches a reset action with the total value from the reply.

Vertx Request Response

Source: https://www.slideshare.net/RedHatDevelopers/vertx-microservices-were-never-so-easy-clement-escoffier

Here is an extract of the corresponding code (from AppEventBusService):

initializeCounter() {
    this.eventBusService.send('counter::total', body, (error, message) => {
        // Handle reply
        if (message && message.body) {
            let localAction = new CounterActions.ResetAction();
            localAction.payload = message.body; // Total value
            this.store.dispatch(localAction);
        }
    });
}

Actions distribution through Publish/Subscribe

Action distribution/sync uses the publish/subscribe pattern.

Counter actions are published from the client to the event bus at the address counter::actions.

Any client that has subscribed to the counter::actions address will receive the actions and re-dispatch them locally to impact app states/reducers.

Vertx Publish Subscribe

Source: https://www.slideshare.net/RedHatDevelopers/vertx-microservices-were-never-so-easy-clement-escoffier

Here is an extract of the corresponding code (from AppEventBusService):

publishAction(action: RemoteAction) {
    if (action.publishedByUser) {
        console.error("This action has already been published");
        return;
    }
    action.publishedByUser = this.currentUser;
    this.eventBusService.publish(action.eventBusAddress, action);
}
subscribeToActions(eventBusAddress: string) {
    this.eventBusService.registerHandler(eventBusAddress, (error, message) => {
        // Handle message from subscription
        if (message.body.publishedByUser === this.currentUser) {
            // Ignore action sent by current manager
            return;
        }
        let localAction = message.body;
        this.store.dispatch(localAction);
    });
}

The event bus publishing logic is achieved through a simple Ngrx effect. Any action that extends the RemoteAction class will be published to the event bus.

@Injectable()
export class AppEventBusEffects {

    constructor(private actions$: Actions, private appEventBusService: AppEventBusService) {}
    // Listen to all actions and publish remote actions to account event bus
    @Effect({dispatch: false}) remoteAction$ = this.actions$
        .filter(action => action instanceof RemoteAction && action.publishedByUser == undefined)
        .do((action: RemoteAction) => {
            this.appEventBusService.publishAction(action);
        });

    @Effect({dispatch: false}) login$ = this.actions$
        .ofType(UserActionTypes.LOGIN)
        .do(() => {
            this.appEventBusService.connect();
        });
}

You can see all of this in action by locally launching the server and the client app in two separate browser windows.

Demo app screen

Bonus: the demo app also includes user status (offline/online), based on the event bus connection status.

The counter state is shared and synchronized between connected clients and each local action is distributed in real-time to other clients.

Mission accomplished!

Typescript version of Vertx EventBus Client
The app uses our own TypeScript version of the official JS Vert.x EventBus client. It can be found here; any feedback and improvement suggestions are welcome!


by benorama at April 26, 2017 12:00 AM

Results, results, results — IoT Developer Survey 2017

by Roxanne on IoT at April 25, 2017 09:50 AM

In February and March 2017, we conducted the third annual IoT Developer Survey, and 713 of you took the time to complete it! Thank you for contributing to this initiative. It might only be a small sample, but it gives the IoT community, companies, and individuals a glimpse into what is going on in the vast and continually changing world we call the Internet of Things.

If you thought of Carmen Sandiego when reading this question — you are awesome!

Here are some quick survey highlights for you to devour:

  • Download images here
  • Read our full analysis
  • View the slides

Make your own opinion and tell us how you interpret the survey results! Tweet: @roxannejoncas @EclipseIoT or leave a comment below!


by Roxanne on IoT at April 25, 2017 09:50 AM

Papyrus Architecture Framework

by tevirselrahc at April 24, 2017 11:00 AM

For the upcoming Oxygen release, I am getting a new, improved architecture framework that is aligned with ISO 42010.

Now, I’m not (yet) an expert in this, but my minions are! And they have created a nice YouTube video explaining what it does and what it provides to Toolsmiths.

If you are a toolsmith for Me, hope to become one or are just curious, you must go see it (and the other Me videos on YouTube)!


Filed under: DSML, Papyrus, Papyrus Core, Uncategorized Tagged: toolsmiths

by tevirselrahc at April 24, 2017 11:00 AM

Host your own eclipse signing server

by Christian Pontesegger (noreply@blogger.com) at April 24, 2017 10:03 AM

We handled signing plugins with Tycho some time ago already. When working in a larger company, you might want to keep your certificates and passphrases hidden from your developers. For such a scenario, a signing server can come in handy.

The Eclipse CBI project provides such a server, which just needs to be configured the right way. Mikael Barbero posted a short howto on the mailing list, which should contain all you need. For a working setup example, follow this tutorial.

To have a test vehicle for signing, we will reuse the Tycho 4 tutorial source files.

Step 1: Get the service

Download the latest service snapshot file and store it in a directory called signingService. Next, download the test server; we will use it to create a temporary certificate and keystore.

Finally, we need a template configuration file. Download it and store it as signingService/jar-signing-service.properties.

Step 2: A short test drive

Open a console and change into the signingService folder. There execute:
java -cp jar-signing-service-1.0.0-20170331.204711-10.jar:jar-signing-service-1.0.0-20170331.204711-10-tests.jar org.eclipse.cbi.webservice.signing.jar.TestServer
You should get some output giving you the local address of the signing service as well as the certificate store used:
Starting test signing server at http://localhost:3138/jarsigner
Dummy certificates, temporary files and logs are stored in folder: /tmp/TestServer-2590700922068591564
Jarsigner executable is: /opt/oracle-jdk-bin-1.8.0.121/bin/jarsigner
We are not yet ready to sign code, but at least we can test whether the server is running correctly. If you try to connect with a browser, you should get a message that HTTP method GET is not supported by this URL.

Step 3: Preparing the tycho project

We need to make some changes to our Tycho project so it can make use of the signing server. Get the sources of the Tycho 4 tutorial (checking out from git is fully sufficient) and add the following code to com.codeandme.tycho.releng/pom.xml:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <pluginRepositories>
    <pluginRepository>
      <id>cbi</id>
      <url>https://repo.eclipse.org/content/repositories/cbi-releases/</url>
    </pluginRepository>
  </pluginRepositories>

  <build>
    <plugins>
      <!-- enable jar signing -->
      <plugin>
        <groupId>org.eclipse.cbi.maven.plugins</groupId>
        <artifactId>eclipse-jarsigner-plugin</artifactId>
        <version>${eclipse.jarsigner.version}</version>
        <executions>
          <execution>
            <id>sign</id>
            <goals>
              <goal>sign</goal>
            </goals>
            <phase>verify</phase>
          </execution>
        </executions>
        <configuration>
          <signerUrl>http://localhost:3138/jarsigner</signerUrl>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
The code above shows purely additions to the pom.xml, no sections were removed or replaced.

You may already try to build your project with Maven. Because I had problems connecting to https://timestamp.geotrust.com/tsa, my local build failed, even though Maven reported SUCCESS.

Step 4: Configuring a productive instance

So let’s get productive. Setting up a keystore with your own certificates is not covered by this tutorial, so I will reuse the keystore created by the test instance. Copy the keystore.jks file from the temp folder to the signingService folder. Then create a text file keystore.pass:
echo keystorePassword >keystore.pass

Now we need to adapt the jar-signing-service.properties file to our needs:
### Example configuration file

server.service.pathspec=/jarsigner
server.service.pathspec.versioned=false

jarsigner.bin=/opt/oracle-jdk-bin-1.8.0.121/bin/jarsigner

jarsigner.keystore=/somewhere/signingService/keystore.jks
jarsigner.keystore.password=/somewhere/signingService/keystore.pass
jarsigner.keystore.alias=acme.org

jarsigner.tsa=http://timestamp.entrust.net/TSS/JavaHttpTS

  • By setting the versioned flag to false in line 4 we simplify the service web address (details can be found in the sample properties file).
  • Set the jarsigner executable path in line 6 according to your local environment.
  • Lines 8-10 contain details about the keystore and certificate to use; you will need to adapt them, but the above settings should result in a working build.
  • The change in line 12 was necessary at the time of writing this tutorial because of connection problems to https://timestamp.geotrust.com/tsa.
Run your service using
java -jar jar-signing-service-1.0.0-20170331.204711-10.jar
Remember that your productive instance now runs on port 8080, so adapt your pom.xml accordingly.

by Christian Pontesegger (noreply@blogger.com) at April 24, 2017 10:03 AM

Eclipse IoT @ Red Hat Summit

by Roxanne on IoT at April 21, 2017 03:31 PM

In less than two weeks, we will be at the Red Hat Summit in Boston, MA.

We’re really excited! We will be involved in many aspects of the conference including the Red Hat IoT Partner Showcase, where we will be demoing something very cool! Stay tuned for the details.

Benjamin Cabé will also be speaking on May 2 @ 10 am during the Lightning Talks.

Plan to join this year’s IoT CodeStarter starting on May 2 @ 6 pm to experience and use open source projects such as Eclipse Kura, Eclipse Kura Wires and Eclipse Kapua.

Stop by our demo station at the Red Hat IoT Partner Showcase to say hello!


by Roxanne on IoT at April 21, 2017 03:31 PM

JSON Forms – Day 6 – Custom Renderers

by Maximilian Koegel and Jonas Helming at April 21, 2017 09:28 AM

JSON Forms is a framework to efficiently build form-based web UIs. These UIs allow end users to enter, modify, and view data and are usually embedded within a business application. JSON Forms eliminates the need to write HTML templates and Javascript for databinding by hand. It supports the creation of customizable forms by leveraging the capabilities of JSON and JSON schema and providing a simple and declarative way of describing forms. Forms are then rendered with a UI framework, currently one that is based on AngularJS. If you would like to know more about JSON Forms, the JSON Forms homepage is a good starting point.
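To make the two schemata concrete, here is a small hypothetical pair for a task-like entity. The property names are illustrative, not the actual schemas of the example application:

```typescript
// Illustrative data schema / UI schema pair for a simple task form.
// Property names are made up for this sketch.
const dataSchema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    done: { type: 'boolean' },
    rating: { type: 'integer', maximum: 5 }
  },
  required: ['name']
};

// The UI schema only references data properties; layout and controls
// are described declaratively.
const uiSchema = {
  type: 'VerticalLayout',
  elements: [
    { type: 'Control', scope: { $ref: '#/properties/name' } },
    { type: 'Control', scope: { $ref: '#/properties/done' } },
    { type: 'Control', scope: { $ref: '#/properties/rating' } }
  ]
};
```

The rendering component turns such a pair into a working form, including data binding and validation.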

In this blog series, we wish to introduce the framework based on a real-world example application, a task tracker called “Make It happen”. On day 1 we described the overall requirements, from day 2 to 5, we have created a fully working form for the entity “Task”. If you would like to follow this blog series please follow us on twitter, where we will announce every new blog post regarding JSON Forms.

So far, on the previous days, we have created a fully functional form rendered in AngularJS by simply using two schemata: a JSON Schema to define the underlying data and a UI schema to specify the UI. JSON Forms provides a rendering component, which translates these two schemata into an AngularJS form, including data binding, validation, rule-based visibility, and so on. While this is very efficient, you may wonder what to do if the form rendered by JSON Forms does not look exactly like what you expected. Let us have a look at the form we have so far:

While the generic structure and the layout look pretty good, there are two controls that could use some aesthetic improvement.

First, the checkbox for “done” is very small; we would rather have something like this:

Second, the control for “rating” is just a plain number field, a rating would better be expressed by a control like this:

Both improvements can be addressed by customizing existing renderers or adding new custom renderers to JSON Forms. This use case is actually not special at all, it is anticipated and fully supported.

In JSON Forms a renderer is only responsible for displaying one particular UI element, like a control or a horizontal layout. JSON Forms ships with default renderers for all UI schema elements. The default renderers are meant as a starting point, and therefore, it is very likely that you will add new renderers and extend existing ones.

The good news is that you still do not have to implement the complete form manually. Rather, you just need to add some code for the customized part. That means you can iteratively extend the framework with custom renderers, while the complete form remains fully functional. Let us have a quick look at the architecture of the JSON Forms rendering component. In fact there is not only one renderer, there is at least one renderer per concept of the UI schema. Renderers are responsible for translating the information of the UI schema and the data schema into a running HTML UI.

All those renderers are registered at a renderer factory (see the following diagram). For every renderer, there is a “Tester” that decides whether a certain renderer should be responsible for rendering a certain UI element. This can depend on the type of the UI schema element (e.g. all controls), on the type of the referenced data property (e.g. a renderer for all String properties), or even on the name of the data property (e.g. only the attribute “rating”).

This architecture allows you to register renderers in an extremely flexible way. If there are no custom renderers, the default renderer will be used. Please note that JSON Forms supports renderers written in JS5, JS6, and Typescript. In the following, we will use Typescript.
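The factory/tester mechanism can be sketched in a few lines of TypeScript. The names below (RendererRegistry, register, find) are illustrative and not the actual JSON Forms API:

```typescript
// Illustrative sketch of a renderer registry with testers and priorities.
type Tester = (uiElement: { type: string; scope?: string }) => boolean;

interface Registration { rendererId: string; tester: Tester; priority: number; }

class RendererRegistry {
  private registrations: Registration[] = [];

  register(rendererId: string, tester: Tester, priority: number): void {
    this.registrations.push({ rendererId, tester, priority });
  }

  // Return the matching renderer with the highest priority,
  // falling back to a default renderer when nothing matches.
  find(uiElement: { type: string; scope?: string }): string {
    const matches = this.registrations
      .filter((r) => r.tester(uiElement))
      .sort((a, b) => b.priority - a.priority);
    return matches.length > 0 ? matches[0].rendererId : 'default-control';
  }
}
```

Registering a rating control with a tester that matches only the rating property, at a priority above the default, is exactly the idea behind the tester registration shown later in this post.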

So let’s customize the styling of the default renderer for the “done” attribute via CSS, and add a custom renderer for the rating attribute. We start with a customization of the CSS of the sample application. By adding the following style, you can change the size of the checkbox of the done attribute alone:

#properties_done {
  width:35px;
  height:35px;
}

Second we need a tester for the custom rating renderer, which should only be applied for the rating property:

.run(['RendererService', 'JSONFormsTesters', function(RendererService, Testers) {
        RendererService.register('rating-control', Testers.and(
            Testers.schemaPropertyName('rating')
        ), 10);
    }])

As you can see, the tester references a renderer, so the next step is to implement it:

.directive('ratingControl', function() {
    return {
        restrict: 'E',
        controller: ['BaseController', '$scope', function(BaseController, $scope) {
            var vm = this;
            BaseController.call(vm, $scope);
            vm.max = function() {
                if (vm.resolvedSchema.maximum !== undefined) {
                    return vm.resolvedSchema.maximum;
                } else {
                    return 5;
                }
            };
        }],
        controllerAs: 'vm',
        templateUrl: './renderer/rating.control.html'
    };
})
<jsonforms-control>
  <uib-rating
    id="{{vm.id}}"
    readonly="vm.uiSchema.readOnly"
    ng-model="vm.resolvedData[vm.fragment]"
    max="vm.max()"></uib-rating>
</jsonforms-control>

 

Please see this tutorial for more details about implementing your own custom renderer in either JS5, JS6, or Typescript. After adding our CSS customization and the custom renderer to our project, we can see the result embedded in our form:

Please note, that the data schema and the UI schema do not have to be adapted at all. JSON Forms facilitates a strict separation between the definition of a form and its rendering. That enables you to not only adapt the look and feel of your UI, but also render the same UI schema in different ways.

If you are interested in implementing your own renderer or if you miss any feature in JSON Forms, please contact us. If you are interested in trying out JSON Forms, please refer to the Getting-Started tutorial. This tutorial explains how to set up JSON Forms in your project as well as how you can try out the first steps on your own. If you would like to follow this blog series, please follow us on twitter. We will announce every new blog post on JSON Forms there.

 

List of all available days to date:



Leave a Comment. Tagged with Angular, emf, emf forms, JSON, json forms


by Maximilian Koegel and Jonas Helming at April 21, 2017 09:28 AM

Technical Debt: How Do You Unfork a Fork?

by Tracy M at April 20, 2017 01:13 PM

filled_cirle_point_style_graph

Everyone knows how to fork that interesting open source project, it’s simple and handy to do. What’s not so easy to do is to merge back a fork that has over the years taken on a life of its own and for many reasons has diverged drastically from the original repo.

This is a case study of an ongoing project we are doing with SWT XYGraph, a visualisation project that is now part of Eclipse Nebula. It is the story of a fork of SWT XYGraph maintained by Diamond Light Source, the UK’s national synchrotron. But mostly it is a story about the efforts to merge the fork, reduce technical debt, and work towards the goal of sharing software components for Science, a key goal of the Eclipse Science Working Group.

Know Your History

One of the first things in this project was to understand the fork’s history, spanning 8 years. We knew the Diamond fork was made before SWT XYGraph became part of Nebula and came under the Eclipse Foundation umbrella. The fork was made in order to quickly add a number of new features that required some fundamental architectural changes to the code base.

However, on looking through the history, we found there were more than just two forks involved. The original project had been developed as part of Control System Studio (CSS) at Oak Ridge National Laboratory. CSS had in turn been forked by Diamond and customised for the local facility. Even though SWT XYGraph had been contributed to the Eclipse Nebula project, the original repo and many, many forks were still out there: more than enough forks for a dinner party. I can’t explain it any further in words, so I will dump our illegible working diagram of it all here:

forks

Patches were pulled and merged across forks when it was straightforward to do so. But with so many forks, this was a case where git history really mattered. Anywhere the history was preserved, it was straightforward to track the origins of a specific feature; it was much harder where the history was lost. Git history is important and always worth some effort to preserve.

Choose Your Approach Carefully

Deciding whether it is worthwhile to merge a big fork takes some consideration. The biggest question to ask is: are the architectural changes fundamentally resolvable? (Unlike, say, Chromium’s fork of WebKit, Blink.) If yes, then it’s a case of trading off the long-term benefits against the short-term pain. In this case, Diamond knew it was something they wanted to do; it was more a matter of timing and picking the correct approach.

There seemed to be two main ways to tackle removing the fork, which was part of a mature product in constant use at the scientific facility.

Option 1: Create a branch and work in parallel to get the branch working with the upstream version, then merge the branch.

Option 2: Avoid a branch, but work to incrementally make the fork and upstream SWT XYGraph plug-ins identical, then make the switch over to the upstream version.

Option 1 had been tried before without success; there were too many moving parts, it created too much overhead, and, ironically, yet another fork to maintain. So it was clear that this time Option 2 would be the way forward.

Tools are Your Friend

The incremental merging of the two needed to be done in a deliberate, reproducible manner to make it easier to trace back any issues that came up. Here are the tools that were useful in doing this.

1. Git Diff

The first step was to get an idea of the scale of the divergence, both quantitatively and qualitatively.

For quantity, a rough and ready measure was obtained by using git diff:

$ git diff --shortstat <diamond> <nebula>
399 files changed, 15648 insertions(+), 15368 deletions(-)

$ git diff <diamond> <nebula> | wc -l
37874

2. Eclipse IDE’s JDT formatter

Next, we needed to remove diffs that were just down to formatting. For this we used the Eclipse IDE’s quick and easy formatting: select the “src” folder, then choose Source menu -> Format. All code is formatted to the Eclipse standard in one go.

format_src_folder

3. Merge Tools

Then it was time to dive into the differences and group them into features, separating quick fixes from changes that broke APIs. For this we used the free and open-source Meld on Linux.

4. EGit Goodness

Let’s say we found a line of code that differed in the fork. To work out where the feature had come from, we could use ‘git blame‘, but much nicer is the EGit support in the Eclipse IDE. Show Annotations was regularly used to work out where a feature had come from and which fork it had originally been created on, and then to see if we could find any extra information such as Bugzilla or JIRA tickets describing the feature. We were always grateful for code with good and helpful commit messages.

egit_annotations.png

5. Bug Tracking Tools

In this case we were using two different bug trackers: Bugzilla on the Eclipse Nebula side and JIRA on the Diamond side. As part of the merge, we were contributing lots and lots of distinct features to Nebula, so we had a parent issue, Bug 513865, to which we linked all the underlying fixes and features, aiming to keep each one distinct and standalone. At the time of writing, that meant 21 dependent bugs.

6. Gerrits & Pull Requests

Gerrit reviews were created for each Eclipse Nebula bug. Pull requests were created for each change going to Diamond’s DAWN (over 50 to date). Each was reviewed before being committed. In many cases we took the opportunity to tidy up the code or enhance it with things like standalone examples that demonstrate the feature.

7. GitHub Built-in Graphs

It was also good to use the built-in GitHub graphs (on any repository, click on the ‘Graphs’ tab), first to see other forks out in the wild (Members tab):

members

Then the ‘Network’ tab to keep track of the relationship with those forks compared to the main Diamond fork:

networkgraph

Much nicer than our hand-drawn effort from earlier, though in this case not all the code being dealt with was in Github.

Win/Win

The work is ongoing, and we are getting to the tricky parts: the key reasons the forks were created in the first place, namely fundamental changes to the architecture. This will require some conversations to understand the best way forward. Already, the work that has been done has brought mutual benefits: Diamond gets new features and bug fixes developed in the open source, and Eclipse Nebula gets new features and bug fixes developed at Diamond Light Source. The New & Noteworthy for Eclipse Nebula shows off screenshots of all the new features resulting from this merge.

Nebula_N&N_1.3_-_improved_mouse_cursors

Going forward, this paves the way for Diamond not only to get rid of duplicate maintenance of >30,000 lines of Java code (according to cloc), but also to contribute some significant features they have developed that integrate with SWT XYGraph. Doing so within the Eclipse Science Working Group makes a great environment to collaborate in open source and make advancements that benefit all involved.



by Tracy M at April 20, 2017 01:13 PM

Eclipse Newsletter - Mastering Eclipse CDT

April 20, 2017 11:10 AM

Learn all about Eclipse CDT, a fully functional C & C++ IDE for the Eclipse platform in this month's newsletter.

April 20, 2017 11:10 AM

Access OSGi Services via web interface

by Dirk Fauth at April 20, 2017 05:58 AM

In this blog post I want to share a simple approach to making OSGi services available via a web interface. The approach includes the following:

  • Embedding a Jetty web server in an OSGi application
  • Registering a Servlet via OSGi DS using the HTTP Whiteboard specification

I will only cover this simple scenario here and will not cover accessing OSGi services via a REST interface. If you are interested in that, you might want to look at the OSGi – JAX-RS Connector, which also looks very nice. Maybe I will cover it in another blog post. For now I will focus on embedding a Jetty server and deploying some resources.

I will skip the introduction to OSGi DS and extend the examples from my Getting Started with OSGi Declarative Services blog. It is easier to follow this post if you have done the other tutorial first, but that is not required if you adapt the contents here to your environment.

As a first step, create a new project org.fipro.inverter.http. In this project we will add the resources created in this tutorial. If you use PDE, create a new Plug-in Project; with Bndtools, create a new Bnd OSGi Project using the Component Development template.

PDE – Target Platform

In PDE it is best practice to create a Target Definition so the work is based on a specific set of bundles and we don’t need to install bundles in our IDE. Follow these steps to create a Target Definition for this tutorial:

  • Create a new target definition
    • Right click on project org.fipro.inverter.http → New → Other… → Plug-in Development → Target Definition
    • Set the filename to org.fipro.inverter.http.target
    • Initialize the target definition with: Nothing: Start with an empty target definition
  • Add a new Software Site in the opened Target Definition Editor by clicking Add… in the Locations section
    • Select Software Site
    • Software Site http://download.eclipse.org/releases/oxygen
    • Disable Group by Category
    • Select the following entries
      • Equinox Core SDK
      • Equinox Compendium SDK
      • Jetty Http Server Feature
    • Click Finish
  • Optional: Add a new Software Site to include JUnit in the Target Definition (only needed in case you followed all previous tutorials on OSGi DS or want to integrate JUnit tests for your services)
    • Software Site http://download.eclipse.org/tools/orbit/R-builds/R20170307180635/repository
    • Select JUnit Testing Framework
    • Click Finish
  • Save your work and activate the target platform by clicking Set as Target Platform in the upper right corner of the Target Definition Editor

Bndtools – Repository

Using Bndtools is different, as you already know if you followed my previous blog posts. So that you can also follow this blog post using Bndtools, I will describe the necessary steps here.

We will use Apache Felix in combination with Bndtools instead of Equinox. This way we don’t need to modify the predefined repository and can start without further actions. The needed Apache Felix bundles are already available.

PDE – Prepare project dependencies

We will prepare the project dependencies in advance so it is easier to copy and paste the code samples into the project. Within the Eclipse IDE, the Quick Fixes would of course also support adding the dependencies afterwards.

  • Open the MANIFEST.MF file of the org.fipro.inverter.http project and switch to the Dependencies tab
  • Add the following dependencies on the Imported Packages side:
    • javax.servlet (3.1.0)
    • javax.servlet.http (3.1.0)
    • org.fipro.inverter (1.0.0)
    • org.osgi.service.component.annotations (1.3.0)
  • Mark org.osgi.service.component.annotations as Optional via Properties…
  • Add the upper version boundaries to the Import-Package statements.

Bndtools – Prepare project dependencies

  • Open the bnd.bnd file of the org.fipro.inverter.http project and switch to the Build tab
  • Add the following bundles to the Build Path
    • org.apache.felix.http.jetty
    • org.apache.felix.http.servlet-api
    • org.fipro.inverter.api

Create a Servlet implementation

  • Create a new package org.fipro.inverter.http
  • Create a new class InverterServlet
@Component(
    service=Servlet.class,
    property= "osgi.http.whiteboard.servlet.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class InverterServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Reference
    private StringInverter inverter;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        String input = req.getParameter("value");
        if (input == null) {
            throw new IllegalArgumentException("input can not be null");
        }
        String output = inverter.invert(input);

        resp.setContentType("text/html");
        resp.getWriter().write(
            "<html><body>Result is " + output + "</body></html>");
    }

}

Let’s look at the implementation:

  1. It is a typical Servlet implementation that extends javax.servlet.http.HttpServlet
  2. It is also an OSGi Declarative Service that is registered as service of type javax.servlet.Servlet
  3. The service has PROTOTYPE scope
  4. A special property osgi.http.whiteboard.servlet.pattern is set. This configures the URL pattern under which the Servlet is accessible.
  5. It references the StringInverter OSGi service from the previous tutorial via field reference. And yes since Eclipse Oxygen this is also supported in Equinox (I wrote about this here).

PDE – Launch the example

Before explaining the details further, launch the example to see if our servlet is available via a standard web browser. For this we create a launch configuration so we can start directly from the IDE.

  • Select the menu entry Run -> Run Configurations…
  • In the tree view, right click on the OSGi Framework node and select New from the context menu
  • Specify a name, e.g. OSGi Inverter Http
  • Deselect All
  • Select the following bundles
    (note that we are using Eclipse Oxygen, in previous Eclipse versions org.apache.felix.scr and org.eclipse.osgi.util are not required)

    • Application bundles
      • org.fipro.inverter.api
      • org.fipro.inverter.http
      • org.fipro.inverter.provider
    • Console bundles
      • org.apache.felix.gogo.command
      • org.apache.felix.gogo.runtime
      • org.apache.felix.gogo.shell
      • org.eclipse.equinox.console
    • OSGi framework and DS bundles
      • org.apache.felix.scr
      • org.eclipse.equinox.ds
      • org.eclipse.osgi
      • org.eclipse.osgi.services
      • org.eclipse.osgi.util
    • Equinox Http Service and Http Whiteboard
      • org.eclipse.equinox.http.jetty
      • org.eclipse.equinox.http.servlet
    • Jetty
      • javax.servlet
      • org.eclipse.jetty.continuation
      • org.eclipse.jetty.http
      • org.eclipse.jetty.io
      • org.eclipse.jetty.security
      • org.eclipse.jetty.server
      • org.eclipse.jetty.servlet
      • org.eclipse.jetty.util
  • Ensure that Default Auto-Start is set to true
  • Switch to the Arguments tab
    • Add -Dorg.osgi.service.http.port=8080 to the VM arguments
  • Click Run

Note:
If you include the above bundles in an Eclipse RCP application, ensure that you auto-start the org.eclipse.equinox.http.jetty bundle to automatically start the Jetty server. This can be done on the Configuration tab of the Product Configuration Editor.

If you now open a browser and go to the URL http://localhost:8080/invert?value=Eclipse you should get a response with the inverted output.
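The inverted output comes from the StringInverter service of the previous tutorial. Its behavior (assumed here to be a simple character reversal, as in the Getting Started post; the real implementation may differ in detail) can be sketched as:

```java
// Minimal sketch of the StringInverter logic the servlet delegates to.
// Assumption: the service performs a simple character reversal.
public class StringInverterSketch {

    // Reverses the given string.
    public static String invert(String value) {
        return new StringBuilder(value).reverse().toString();
    }

    public static void main(String[] args) {
        // corresponds to http://localhost:8080/invert?value=Eclipse
        System.out.println(invert("Eclipse")); // prints "espilcE"
    }
}
```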

Bndtools – Launch the example

  • Open the launch.bndrun file in the org.fipro.inverter.http project
  • On the Run tab add the following bundles to the Run Requirements
    • org.fipro.inverter.http
    • org.fipro.inverter.provider
    • org.apache.felix.http.jetty
  • Click Resolve to ensure all required bundles are added to the Run Bundles via auto-resolve
  • Add -Dorg.osgi.service.http.port=8080 to the JVM Arguments
  • Click Run OSGi

Http Service & Http Whiteboard

Now why does this simply work? We only implemented a servlet and provided it as an OSGi DS, and it is "magically" available via a web interface. The answer to this is the OSGi Http Service Specification and the Http Whiteboard Specification. The OSGi Compendium Specification R6 contains the Http Service Specification Version 1.2 (Chapter 102 – Page 45) and the Http Whiteboard Specification Version 1.0 (Chapter 140 – Page 1067).

The purpose of the Http Service is to provide access to services on the internet or other networks, for example via a standard web browser. This can be done by registering servlets or resources with the Http Service. Without going too much into detail, the implementation is similar to an embedded web server, which is the reason why the default implementations in Equinox and Felix are based on Jetty.

To register servlets and resources with the Http Service, you need to know the Http Service API very well, retrieve the Http Service, and operate on it directly. As this is not very convenient, the Http Whiteboard Specification was introduced. It allows registering servlets and resources via the Whiteboard Pattern, without the need to know the Http Service API in detail. I always think about the whiteboard pattern as a "don't call us, we will call you" pattern. That means you don't register servlets with the Http Service directly; you provide them as services to the service registry, and the Http Whiteboard implementation takes them and registers them with the Http Service.

Via Http Whiteboard it is possible to register:

  • Servlets
  • Servlet Filters
  • Resources
  • Servlet Listeners

I will show some examples so you can play around with the Http Whiteboard service.

Register Servlets

An example on how to register a servlet via Http Whiteboard is shown above. The main points are:

  • The servlet needs to be registered as OSGi service of type javax.servlet.Servlet.
  • The component property osgi.http.whiteboard.servlet.pattern needs to be set to specify the request mappings.
  • The service scope should be PROTOTYPE.

For registering servlets the following component properties are supported (see OSGi Compendium Specification Release 6 – Table 140.4):

  • osgi.http.whiteboard.servlet.asyncSupported – Declares whether the servlet supports the asynchronous operation mode. Allowed values are true and false, independent of case. Defaults to false.
  • osgi.http.whiteboard.servlet.errorPage – Register the servlet as an error page for the error code and/or exception specified; the value may be a fully qualified exception type name or a three-digit HTTP status code in the range 400-599. The special values 4xx and 5xx can be used to match value ranges. Any value not being a three-digit number is assumed to be a fully qualified exception class name.
  • osgi.http.whiteboard.servlet.name – The name of the servlet. This name is used as the value of the javax.servlet.ServletConfig.getServletName() method and defaults to the fully qualified class name of the service object.
  • osgi.http.whiteboard.servlet.pattern – Registration pattern(s) for the servlet.
  • servlet.init.* – Properties starting with this prefix are provided as init parameters to the javax.servlet.Servlet.init(ServletConfig) method. The servlet.init. prefix is removed from the parameter name.
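The servlet.init.* handling described above can be illustrated with a small sketch: properties carrying the prefix become init parameters with the prefix stripped. Note that extractInitParams is a hypothetical helper for illustration, not part of the Http Whiteboard API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of how servlet.init.* component properties become servlet init
// parameters (hypothetical helper, for illustration only).
public class InitParamSketch {

    private static final String PREFIX = "servlet.init.";

    // Collects all properties starting with "servlet.init." and removes
    // the prefix from the parameter name.
    public static Map<String, String> extractInitParams(Map<String, Object> properties) {
        Map<String, String> initParams = new HashMap<>();
        for (Map.Entry<String, Object> entry : properties.entrySet()) {
            if (entry.getKey().startsWith(PREFIX)) {
                initParams.put(
                    entry.getKey().substring(PREFIX.length()),
                    String.valueOf(entry.getValue()));
            }
        }
        return initParams;
    }
}
```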

The Http Whiteboard service needs to call javax.servlet.Servlet.init(ServletConfig) to initialize the servlet before it starts to serve requests, and javax.servlet.Servlet.destroy() to shut down the servlet when it is not needed anymore. If more than one Http Whiteboard implementation is available in a runtime, the init() and destroy() calls would be executed multiple times, which violates the Servlet specification. It is therefore recommended to use the PROTOTYPE scope for servlets to ensure that every Http Whiteboard implementation gets its own service instance.

Note:
In a controlled runtime, like an RCP application that is delivered with one Http Whiteboard implementation and that does not support installing bundles at runtime, the usage of the PROTOTYPE scope is not required. Such a runtime actually ensures that the servlet is only instantiated and initialized once. But if possible, it is recommended to use the PROTOTYPE scope.

To register a servlet as an error page, the service property osgi.http.whiteboard.servlet.errorPage needs to be set. The value can be either a three-digit HTTP error code, the special codes 4xx or 5xx to specify a range of error codes, or a fully qualified exception class name. The service property osgi.http.whiteboard.servlet.pattern is not required for servlets that provide error pages.
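The interpretation of the errorPage value can be sketched as follows; classify is a hypothetical helper that mirrors the rule just described, not an actual whiteboard API:

```java
// Sketch of the errorPage value interpretation: the special values 4xx
// and 5xx denote status code ranges, any other three-digit number is a
// single status code, and everything else is treated as a fully
// qualified exception class name (hypothetical helper).
public class ErrorPageValueSketch {

    public static String classify(String value) {
        if ("4xx".equals(value) || "5xx".equals(value)) {
            return "status-range";
        }
        if (value.matches("\\d{3}")) {
            return "status-code";
        }
        return "exception-class";
    }
}
```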

The following snippet shows an error page servlet that deals with IllegalArgumentExceptions and the HTTP error code 500. It can be tested by calling the inverter servlet without a query parameter.

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.errorPage=java.lang.IllegalArgumentException",
        "osgi.http.whiteboard.servlet.errorPage=500"
    },
    scope=ServiceScope.PROTOTYPE)
public class ErrorServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write(
        "<html><body>You need to provide an input!</body></html>");
    }
}

Register Filters

Via servlet filters it is possible to intercept servlet invocations. They are used to modify the ServletRequest and ServletResponse to perform common tasks before and after the servlet invocation.

The example below shows a servlet filter that adds a simple header and footer on each request to the servlet with the /invert pattern:

@Component(
    property = "osgi.http.whiteboard.filter.pattern=/invert",
    scope=ServiceScope.PROTOTYPE)
public class SimpleServletFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig)
            throws ServletException { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        response.setContentType("text/html");
        response.getWriter().write("<b>Inverter Servlet</b><p>");
        chain.doFilter(request, response);
        response.getWriter().write("</p><i>Powered by fipro</i>");
    }

    @Override
    public void destroy() { }

}

To register a servlet filter the following criteria must be met:

  • It needs to be registered as OSGi service of type javax.servlet.Filter.
  • One of the given component properties needs to be set:
    • osgi.http.whiteboard.filter.pattern
    • osgi.http.whiteboard.filter.regex
    • osgi.http.whiteboard.filter.servlet
  • The service scope should be PROTOTYPE.

For registering servlet filters the following service properties are supported (see OSGi Compendium Specification Release 6 – Table 140.5):

  • osgi.http.whiteboard.filter.asyncSupported – Declares whether the servlet filter supports asynchronous operation mode. Allowed values are true and false, independent of case. Defaults to false.
  • osgi.http.whiteboard.filter.dispatcher – Select the dispatcher configuration for when the servlet filter should be called. Allowed string values are REQUEST, ASYNC, ERROR, INCLUDE, and FORWARD. The default for a filter is REQUEST.
  • osgi.http.whiteboard.filter.name – The name of a servlet filter. This name is used as the value of the FilterConfig.getFilterName() method and defaults to the fully qualified class name of the service object.
  • osgi.http.whiteboard.filter.pattern – Apply this servlet filter to the specified URL path patterns. The format of the patterns is specified in the servlet specification.
  • osgi.http.whiteboard.filter.regex – Apply this servlet filter to the specified URL paths. The paths are specified as regular expressions following the syntax defined in the java.util.regex.Pattern class.
  • osgi.http.whiteboard.filter.servlet – Apply this servlet filter to the referenced servlet(s) by name.
  • filter.init.* – Properties starting with this prefix are passed as init parameters to the Filter.init() method. The filter.init. prefix is removed from the parameter name.
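Since osgi.http.whiteboard.filter.regex values follow java.util.regex.Pattern syntax, whether a filter would apply to a request path can be checked with plain JDK means. The pattern value below is an illustrative example, not taken from the tutorial code:

```java
import java.util.regex.Pattern;

// Sketch: matching a request path against an
// osgi.http.whiteboard.filter.regex style value using
// java.util.regex.Pattern semantics.
public class FilterRegexSketch {

    public static boolean applies(String regexValue, String requestPath) {
        return Pattern.matches(regexValue, requestPath);
    }

    public static void main(String[] args) {
        System.out.println(applies("/invert.*", "/invert"));          // true
        System.out.println(applies("/invert.*", "/files/logo.png"));  // false
    }
}
```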

Register Resources

It is also possible to register a service that informs the Http Whiteboard service about static resources like HTML files, images, CSS or JavaScript files. For this, a simple service can be registered that only needs the following two mandatory service properties set:

  • osgi.http.whiteboard.resource.pattern – The pattern(s) to be used to serve resources, as defined by the Java Servlet 3.1 Specification in section 12.2, Specification of Mappings. This property marks the service as a resource service.
  • osgi.http.whiteboard.resource.prefix – The prefix used to map a requested resource to the bundle's entries. If the request's path info is not null, it is appended to this prefix. The resulting string is passed to the getResource(String) method of the associated Servlet Context Helper.

The service does not need to implement any specific interface or function. All required information is provided via the component properties.

To create a resource service follow these steps:

  • Create a folder resources in the project org.fipro.inverter.http
  • Add an image in that folder, e.g. eclipse_logo.png
  • PDE – Add the resources folder in the build.properties
  • Bndtools – Add the following line to the bnd.bnd file on the Source tab
    -includeresource: resources=resources
  • Create resource service
@Component(
    service = ResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/files/*",
        "osgi.http.whiteboard.resource.prefix=/resources"})
public class ResourceService { }

After starting the application the static resources located in the resources folder are available via the /files path in the URL, e.g. http://localhost:8080/files/eclipse_logo.png
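How a request like /files/eclipse_logo.png is mapped to a bundle entry follows the rule from the table above: the request's path info is appended to the configured prefix. This can be sketched as follows (toEntryPath is a hypothetical helper, not whiteboard API):

```java
// Sketch of the resource resolution: for the pattern /files/* with
// prefix /resources, the path info matched by the wildcard is appended
// to the prefix to form the bundle entry path (hypothetical helper).
public class ResourceMappingSketch {

    public static String toEntryPath(String prefix, String pathInfo) {
        return pathInfo == null ? prefix : prefix + pathInfo;
    }

    public static void main(String[] args) {
        // GET /files/eclipse_logo.png -> path info /eclipse_logo.png
        System.out.println(toEntryPath("/resources", "/eclipse_logo.png"));
        // prints /resources/eclipse_logo.png
    }
}
```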

Note:
While writing this blog post I came across a very nasty issue. Because I initially registered the servlet filter for the /* pattern, the simple header and footer were always added. This also set the content type, which of course didn't match the content type of the image, and so the static content was never shown correctly. So if you want to use servlet filters to add common headers and footers, you need to take care that the pattern does not apply the servlet filter to static resources.

Register Servlet Listeners

It is also possible to register different servlet listeners as whiteboard services. The following listeners are supported according to the servlet specification:

  • ServletContextListener – Receive notifications when Servlet Contexts are initialized and destroyed.
  • ServletContextAttributeListener – Receive notifications for Servlet Context attribute changes.
  • ServletRequestListener – Receive notifications for servlet requests coming in and being destroyed.
  • ServletRequestAttributeListener – Receive notifications when servlet Request attributes change.
  • HttpSessionListener – Receive notifications when Http Sessions are created or destroyed.
  • HttpSessionAttributeListener – Receive notifications when Http Session attributes change.
  • HttpSessionIdListener – Receive notifications when Http Session ID changes.

Only one component property needs to be set so that the Http Whiteboard implementation handles the listener.

  • osgi.http.whiteboard.listener – When set to true this listener service is handled by the Http Whiteboard implementation. When not set or set to false the service is ignored. Any other value is invalid.

The following example shows a simple ServletRequestListener that prints out the client address on the console for each request (borrowed from the OSGi Compendium Specification):

@Component(property = "osgi.http.whiteboard.listener=true")
public class SimpleServletRequestListener
    implements ServletRequestListener {

    public void requestInitialized(ServletRequestEvent sre) {
        System.out.println("Request initialized for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

    public void requestDestroyed(ServletRequestEvent sre) {
        System.out.println("Request destroyed for client: "
            + sre.getServletRequest().getRemoteAddr());
    }

}

Servlet Context and Common Whiteboard Properties

The ServletContext is specified in the servlet specification and provided to the servlets at runtime by the container. By default there is one ServletContext and without additional information the servlets are registered to that default ServletContext via the Http Whiteboard implementation. This could lead to scenarios where different bundles provide servlets for the same request mapping. In that case the service.ranking will be inspected to decide which servlet should be delivered. If the servlets belong to different applications, it is possible to specify different contexts. This can be done by registering a custom ServletContextHelper as whiteboard service and associate the servlets to the corresponding context. The ServletContextHelper can be used to customize the behavior of the ServletContext (e.g. handle security, provide resources, …) and to support multiple web-applications via different context paths.

A custom ServletContextHelper needs to be registered as a service of type ServletContextHelper and needs to have the following two service properties set:

  • osgi.http.whiteboard.context.name
  • osgi.http.whiteboard.context.path
  • osgi.http.whiteboard.context.name – Name of the Servlet Context Helper. This name can be referred to by Whiteboard services via the osgi.http.whiteboard.context.select property. The syntax of the name is the same as the syntax for a Bundle Symbolic Name. The default Servlet Context Helper is named default. To override the default, register a custom ServletContextHelper service with the name default. If multiple Servlet Context Helper services are registered with the same name, the one with the highest service ranking is used. In case of a tie, the service with the lowest service id wins. In other words, the normal OSGi service ranking applies.
  • osgi.http.whiteboard.context.path – Additional prefix to the context path for servlets. This property is mandatory. Valid characters are specified in IETF RFC 3986, section 3.3. The context path of the default Servlet Context Helper is /. A custom default Servlet Context Helper may use an alternative path.
  • context.init.* – Properties starting with this prefix are provided as init parameters through the ServletContext.getInitParameter() and ServletContext.getInitParameterNames() methods. The context.init. prefix is removed from the parameter name.
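The tie-breaking rule for equally named Servlet Context Helpers (highest service ranking wins, ties broken by the lowest service id) can be sketched as a comparator. The Registration value class below is hypothetical, for illustration only:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the selection order for Servlet Context Helpers registered
// with the same name: highest service ranking first, ties broken by the
// lowest service id (hypothetical value class, illustration only).
public class ContextHelperRankingSketch {

    public static class Registration {
        public final int ranking;
        public final long serviceId;

        public Registration(int ranking, long serviceId) {
            this.ranking = ranking;
            this.serviceId = serviceId;
        }
    }

    // Orders candidates so the winning registration comes first.
    public static final Comparator<Registration> SELECTION_ORDER =
        Comparator.<Registration>comparingInt(r -> -r.ranking)
                  .thenComparingLong(r -> r.serviceId);

    public static Registration select(Registration... candidates) {
        return Arrays.stream(candidates).min(SELECTION_ORDER).orElse(null);
    }
}
```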

The following example will register a ServletContextHelper for the context path /eclipse and will retrieve resources from http://www.eclipse.org. It is registered with BUNDLE service scope to ensure that every bundle gets its own instance, which is for example important to resolve resources from the correct bundle.

Note:
Create it in a new package org.fipro.inverter.http.eclipse within the org.fipro.inverter.http project, as we will need to create some additional resources to show how this example actually works.

@Component(
    service = ServletContextHelper.class,
    scope = ServiceScope.BUNDLE,
    property = {
        "osgi.http.whiteboard.context.name=eclipse",
        "osgi.http.whiteboard.context.path=/eclipse" })
public class EclipseServletContextHelper extends ServletContextHelper {

    public URL getResource(String name) {
        // remove the path from the name
        name = name.replace("/eclipse", "");
        try {
            return new URL("http://www.eclipse.org/" + name);
        } catch (MalformedURLException e) {
            return null;
        }
    }
}

Note:
With PDE remember to add org.osgi.service.http.context to the Imported Packages. With Bndtools remember to add the new package to the Private Packages in the bnd.bnd file on the Contents tab.

To associate servlets, servlet filters, resources and listeners with a ServletContextHelper, they share the following common service properties (see OSGi Compendium Specification Release 6 – Table 140.3) in addition to the service specific properties:

  • osgi.http.whiteboard.context.select – An LDAP-style filter to select the associated ServletContextHelper service to use. Any service property of the Servlet Context Helper can be filtered on. If this property is missing the default Servlet Context Helper is used. For example, to select a Servlet Context Helper with the name myCTX, provide the value (osgi.http.whiteboard.context.name=myCTX). To select all Servlet Context Helpers, provide the value (osgi.http.whiteboard.context.name=*).
  • osgi.http.whiteboard.target – The value of this service property is an LDAP-style filter expression to select the Http Whiteboard implementation(s) to handle this Whiteboard service. The LDAP filter is used to match HttpServiceRuntime services. Each Http Whiteboard implementation exposes exactly one HttpServiceRuntime service. This property is used to associate the Whiteboard service with the Http Whiteboard implementation that registered the HttpServiceRuntime service. If this property is not specified, all Http Whiteboard implementations can handle the service.

The following example will register a servlet only for the introduced /eclipse context:

@Component(
    service=Servlet.class,
    property= {
        "osgi.http.whiteboard.servlet.pattern=/image",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"
    },
    scope=ServiceScope.PROTOTYPE)
public class ImageServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {

        resp.setContentType("text/html");
        resp.getWriter().write("Show an image from www.eclipse.org");
        resp.getWriter().write(
            "<p><img src='img/nattable/images/FeatureScreenShot.png'/></p>");
    }

}

And to make this work in combination with the introduced ServletContextHelper, we need to additionally register the resources for the /img/* pattern, which is also only assigned to the /eclipse context:

@Component(
    service = EclipseImageResourceService.class,
    property = {
        "osgi.http.whiteboard.resource.pattern=/img/*",
        "osgi.http.whiteboard.resource.prefix=/eclipse",
        "osgi.http.whiteboard.context.select=(osgi.http.whiteboard.context.name=eclipse)"})
public class EclipseImageResourceService { }

If you start the application and browse to http://localhost:8080/eclipse/image you will see an output from the servlet together with an image that is loaded from http://www.eclipse.org.

Note:
The component properties and predefined values are available via org.osgi.service.http.whiteboard.HttpWhiteboardConstants. So you don’t need to remember them all and can also retrieve some additional information about the properties via the corresponding Javadoc.

The sources for this tutorial are hosted on GitHub in the already existing projects:

 



IoT Developer Trends - 2017 Edition

April 19, 2017 01:00 PM

The third annual IoT Developer Survey results are now available! What has changed in IoT this year?


IoT Developer Trends 2017 Edition

by Ian Skerrett at April 19, 2017 12:00 PM

For the last 3 years we have been tracking the trends of the IoT developer community through the IoT Developer Survey [2015] [2016]. Today, we released the third edition of the IoT Developer Survey 2017. As in previous years, the report provides some interesting insights into what IoT developers are thinking and using to build IoT solutions. Below are some of the key trends we identified in the results.

The survey is the result of a collaboration between the Eclipse IoT Working Group, IEEE, Agile-IoT EU and the IoT Council. Each partner promoted the survey to their respective communities. A total of 713 individuals participated in the survey. The complete report is available for everyone, and we also make the detailed data available [xls, odf].

As with any survey of this type, I always caution people to see these results as one data point that should be compared to other industry reports. All of these surveys have inherent biases so identifying trends that span surveys is important.

Key Trends from 2017 Survey

1. Expanding Industry Adoption of IoT

The 2017 survey participants appear to be involved in a more diverse set of industries. IoT Platform and Home Automation industries continue to lead, but industries such as Industrial Automation, Smart Cities and Energy Management experienced significant growth between 2016 and 2017.

industries

2. Security is the key concern but….

Security continues to be the main concern of IoT developers, with 46.7% of respondents indicating it was a concern. Interoperability (24.4%) and Connectivity (21.4%) are the next most mentioned concerns. Interoperability appears to be on a downward trend compared to 2015 (30.7%) and 2016 (29.4%), potentially indicating that the work on standards and IoT middleware is lessening this concern.

concerns2017

This year we asked what security-related technologies were being used for IoT solutions. The top two security technologies selected were existing software technologies, i.e. Communication Security (TLS, DTLS) (48.3%) and Data Encryption (43.2%). Hardware-oriented security solutions were less popular, e.g. Trusted Platform Modules (10%) and Hardware Security Modules (10.6%). Even Over-the-Air Update was only being used by 18.5% of the respondents. Security may be a key concern, but the adoption of security technology certainly seems to be lagging.

security

3. Top IoT Programming Language Depends…

Java and C are the primary IoT programming languages, along with significant usage of C++, Python and JavaScript. New in this year's survey, we asked about language usage by IoT category: Constrained Devices, IoT Gateway and IoT Cloud Platform. Broken down by these categories, it is apparent that language usage depends on the target destination of the developed software:

  • On constrained devices, C (56.4%) and C++ (38.3%) are the dominant languages. Java (21.2%) and Python (20.8%) see some usage, but JavaScript (10.3%) is minimal.
  • On IoT Gateways, the choice of language is more diverse: Java (40.8%), C (30.4%), Python (29.9%) and C++ (28.1%) are all being used. JavaScript and Node.js see some use as well.
  • On IoT Cloud Platforms, Java (46.3%) emerges as the dominant language. JavaScript (33.6%), Node.js (26.3%) and Python (26.2%) see some usage. Not surprisingly, C (7.3%) and C++ (11.6%) usage drops off significantly.

Overall, it is clear IoT solution development requires a diverse set of language programming skills. The specific language of choice really depends on the target destination.

4. Linux is key OS; Raspbian and Ubuntu top IoT Linux distros

Linux continues to be the main operating system for IoT. This year we asked respondents to identify the OS by category: Constrained Device and IoT Gateway. On Constrained Devices, Linux (44.1%) is the most popular OS, but the second most popular is No OS / Bare Metal (27.6%). On IoT Gateways, Linux (66.9%) becomes even more popular and Windows (20.5%) becomes the second choice.

The survey also asked which Linux distro is being used. Raspbian (45.5%) and Ubuntu (44%) are the two top distros for IoT.

linuxdistros

If Linux is the dominant operating system for IoT, how are the alternative IoT operating systems doing? In 2017, Windows definitely experienced a big jump from previous years. FreeRTOS and Contiki also seem to be experiencing growth in their usage.

5. Amazon, MS and Google Top IoT Cloud Platforms

Amazon (42.7%) continues to be the leading IoT Cloud Platform, followed by MS Azure (26.7%) and Google Cloud Platform (20.4%). A significant change this year is the drop in Private / On-premise cloud usage, from 34.9% in 2016 to 18.4% in 2017. This might be an indication that IoT Cloud Platforms are now more mature and developers are ready to embrace them.

cloud

6. Bluetooth, LPWAN protocols and 6LowPAN trending up; Thread sees little adoption

For the last 3 years we have asked which connectivity protocols developers use for IoT solutions. The main responses have been TCP/IP and Wi-Fi. However, a number of connectivity standards and technologies are being developed for IoT, so it has been interesting to track their adoption within the IoT developer community. Based on the 2017 data, Bluetooth/Bluetooth Smart (48.2%), LPWAN technologies (e.g. LoRa, Sigfox, LTE-M) (22.4%) and 6LoWPAN (21.4%) are being adopted by the IoT developer community. Thread (6.4%), however, still appears to be having limited success with developer adoption.

connectivity2017

Summary

Overall, the survey results show some common patterns for IoT developers. The report also looks at common IoT hardware architecture, IDE usage, perceptions of IoT consortiums, adoption of IoT standards, open source participation in IoT and lots more. I hope the report provides a useful source of information for the wider IoT industry.

Next week we will be doing a webinar to go through the details of the results. Please join us on April 26 at 10:30am ET / 16:30 CET.


Thank you to everyone who participated in the survey, the individual input is what makes these surveys useful. Also, thank you to our co-sponsors Eclipse IoT Working Group, IEEE, Agile IoT and the IoT Council. It is great to be able to collaborate with other successful IoT communities.

We plan to do another survey next year. Feel free to leave any comments or thoughts on how we can improve it.
by Ian Skerrett at April 19, 2017 12:00 PM

Why DSLs?

by Sebastian Zarnekow (noreply@blogger.com) at April 17, 2017 09:04 PM

A lot has been written about domain specific languages, their purpose and their application. According to the ever-changing wisdom of Wikipedia, a DSL “is a computer language specialized to a particular application domain. This is in contrast to a general-purpose language (GPL), which is broadly applicable across domains.” In other words, a DSL is supposed to help implement software systems, or parts of them, in a more efficient way. But this raises the question: why should engineers learn new syntaxes, new APIs and new tools rather than using their primary language and just getting things done?

Here is my take on this. And to answer that question, let’s move the discussion away from programming languages towards a more general understanding of language. And instead of talking in the abstract, I’ll use a very concrete example. In fact, one of the most discussed domains ever, and one that literally everyone has an opinion about: the weather.

We all know this situation: When watching the news, the forecaster will tell something about sunshine duration, wind speed and direction, or temperature. Not being a trained meteorologist, I can still find my way through most of the facts, though the probability of precipitation always gives me a slight headache. If we look at the vocabulary that is used in an average weather forecast, we can clearly call that a domain specific language, though it only scratches the surface of meteorology. But what happens when two meteorologists talk to each other about the weather? My take: they will use a very efficient vocabulary to discuss it unambiguously.
Now let’s move this gedankenexperiment forward. There are approximately 40 non-compound words in the Finnish language that describe snow. Now what happens when a Finnish forecaster and a German news anchor talk about snowy weather conditions and the anchorman takes English notes on that? I bet it is safe to assume that there will be a big loss of precision when it comes to the mutual agreement on the exact form of snowy weather. And even more so when this German guy later tries to explain to another Finn what the weather was like. The bottom line of this: common vocabulary and language is crucial to successful communication.

Back to programming. Let’s assume that the English language is a general purpose programming language, the German guy is a software developer and the Finnish forecaster is a domain expert for snowy weather. This may all sound a little farfetched, but in fact it is exactly how most software projects are run: A domain expert explains the requirements to a developer. The dev will start implementing the requirements. Other developers will be onboarded on the project. They try to wrap their head around the state of the codebase and surely read the subtleties of the implementation differently, no matter how fluent they are in English. Follow-up meetings will be scheduled to clarify questions with the domain experts. And the entire communication is prone to loss in precision. In the end all involved parties talk about similar yet slightly different things. Misunderstandings go potentially unnoticed and cause a lot of frustration on all sides.

This is where domain specific languages come into play! Instead of a tedious, multi-step translation from one specialized vocabulary to a general purpose language and vice versa, the logic is directly implemented using the domain specific terms and notation. The knowledge is captured with fewer manual transformation steps; the system is easier to write, understand and review. This may even work to the extent that the domain experts do write the code themselves. Or they pair up with the software engineers and form a team.

As usual, there is no such thing as a free lunch. As long as you are not omnilingual, you should probably not waste your time learning Finnish by heart, especially when you are working with Spanish people next week and the French team the week thereafter. But without any doubt, fluent Finnish will pay off as long as you are working with the Finns.

A development process based on domain specific languages, and thus on a level of abstraction close to the problem domain, can relieve everyone involved. There are fewer chances for misunderstandings and inaccurate translations. Speaking the same language and using the same vocabulary naturally feels like pulling together. And that’s what makes successful projects.

by Sebastian Zarnekow (noreply@blogger.com) at April 17, 2017 09:04 PM

Dynamic Routing in Serverless Microservice with Vert.x Event Bus

by bytekast at April 14, 2017 12:00 AM

this is a re-publication of the following blog post

SERVERLESS FRAMEWORK

The Serverless Framework has become the de facto toolkit for building and deploying Serverless functions or applications. Its community has done a great job advancing the tools around Serverless architecture.

However, in the Serverless community there is debate among developers on whether a single AWS Lambda function should only be responsible for a single API endpoint. My answer, based on my real-world production experience, is NO.

Imagine you are building a set of APIs with 10 endpoints and you need to deploy them to DEV, STAGE and PROD environments. Now you are looking at 30 different functions to version, deploy and manage, not to mention the copy-and-paste code and configuration that results from this type of setup. NO THANKS!!!

I believe a more pragmatic approach is 1 Lambda Function == 1 Microservice.

For example, if you were building a User Microservice with basic CRUD functionality, you should implement CREATE, READ, UPDATE and DELETE in a single Lambda function. In the code, you should resolve the desired action by inspecting the request or the context.

VERT.X TO THE RESCUE

There are many benefits to using Vert.x in any application. With Vert.x, you get a rock-solid and lightweight toolkit for building reactive, highly performant, event-driven and non-blocking applications. The toolkit even provides asynchronous APIs for accessing traditional blocking drivers such as JDBC.

However, for this example, we will mainly focus on the event bus. The event bus allows different parts of your application to communicate with each other via event messages. It supports publish/subscribe, point-to-point, and request-response messaging.

For the User Microservice example above, we could treat the combination of the HTTP METHOD and RESOURCE PATH as a unique event channel, and register the subscribers/handlers to respond appropriately.

Let’s dive right in.

GOAL:

Create a reactive, message-driven, asynchronous User Microservice with GET, POST, DELETE, PUT CRUD operations in a single AWS Lambda Function using the Serverless Framework

Serverless stack definition:
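The actual stack definition is embedded in the original post as a gist. As a hedged sketch of what a single-function, multi-route serverless.yml can look like (service name, handler class and paths are illustrative, not the original's):

```yaml
service: user-service

provider:
  name: aws
  runtime: java8

functions:
  userService:
    # One Lambda function serves every CRUD route of the microservice.
    handler: com.example.UserService::handle
    events:
      - http: {path: users, method: get}
      - http: {path: users, method: post}
      - http: {path: users, method: put}
      - http: {path: users, method: delete}
```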

SOLUTION:

Use Vert.x‘s Event Bus to handle dynamic routing to event handlers based on HTTP method and resource path from the API input.

Lambda Handler:
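The handler itself is likewise embedded as a gist in the original post (the line numbers reviewed below refer to it). As a rough, dependency-free sketch of the dispatch idea, with a plain map standing in for the Vert.x event bus and all names hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class UserServiceSketch {

    // Handlers keyed by "METHOD:path" -- this plays the role of the
    // event bus address in the real Vert.x-based implementation.
    private static final Map<String, Function<Map<String, Object>, String>> HANDLERS = new HashMap<>();

    static {
        HANDLERS.put("GET:/users", input -> "list users");
        HANDLERS.put("POST:/users", input -> "create user");
        HANDLERS.put("PUT:/users", input -> "update user");
        HANDLERS.put("DELETE:/users", input -> "delete user");
    }

    // Entry point invoked per request; routes on HTTP method + resource path.
    public static String handle(Map<String, Object> input) {
        String address = input.get("httpMethod") + ":" + input.get("resourcePath");
        Function<Map<String, Object>, String> handler = HANDLERS.get(address);
        return handler != null ? handler.apply(input) : "unsupported route: " + address;
    }
}
```

In the real code, registering an entry in the map corresponds to registering a consumer on the event bus, and the lookup corresponds to sending the Lambda input to the address built from the request.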

CODE REVIEW

Lines 14-19 initialize the Vert.x instance. AWS Lambda holds on to this instance for the life of the container/JVM, so it is reused in subsequent requests.

Line 17 registers the User Service handlers.

Line 22 defines the main handler method that is called when the Lambda function is invoked.

Line 27 sends the Lambda function input to the (dynamic) address where handlers are waiting to respond.

Lines 44-66 define the specific handlers and bind them to the appropriate channels (HTTP method + resource path).

SUMMARY

As you can see, Vert.x‘s Event Bus makes it very easy to dynamically support multiple routes in a single Serverless function. This reduces the number of functions you have to manage, deploy and maintain in AWS. In addition, you gain access to asynchronous, non-blocking APIs that come standard with Vert.x.

Serverless + Vert.x = BLISS


by bytekast at April 14, 2017 12:00 AM

Dynamic Menu Item to select Theme in Eclipse IDE

by psuzzi at April 11, 2017 05:23 PM

In Bug 514458, I added the “Theme” dynamic menu to the Eclipse IDE. This post explains how I did this.

Menu Eclipse 3.x style

First, edit the plugin.xml and add a menu contribution. With the locationURI menu:org.eclipse.ui.appearance?after=org.eclipse.ui.window.appearance.separator1, you’ll contribute a submenu to the Window > Appearance menu, just after the separator.

[Image: Window > Appearance menu structure]

Add a Theme menu under the menuContribution, and then add a child dynamic element.

<menuContribution
      locationURI="menu:org.eclipse.ui.appearance?after=org.eclipse.ui.window.appearance.separator1">
   <menu
         id="org.eclipse.ui.appearance.theme"
         label="Theme">
      <dynamic
            class="org.eclipse.ui.internal.actions.ThemeDynamicMenu"
            id="org.eclipse.ui.ide.dynamic1">
      </dynamic>
   </menu>
</menuContribution>

Next, create the Java class implementing the dynamic menu, and add mock code to verify that the menu works.

public class ThemeDynamicMenu extends ContributionItem {

	@Override
	public void fill(Menu menu, int index) {
		MenuItem menuItem = new MenuItem(menu, SWT.CHECK, index);
		menuItem.setText("Theme 1"); //$NON-NLS-1$
		menuItem.addSelectionListener(new SelectionAdapter() {
			@Override
			public void widgetSelected(SelectionEvent e) {
				// executed on menu select
				System.out.println("Selected"); //$NON-NLS-1$
			}
		});
	}

}

 

Verify the menu is displayed where you expect, and the submenu dynamic entries are working.

[Screenshot: the new Theme dynamic menu]

Now remove the line which adds the “Theme 1” item, and rewrite the body of widgetSelected().

The menu should have one menu item for each available theme, and each widgetSelected(){…} should activate the corresponding theme.

The theme selection code is inspired by the one in ViewsPreferencePage.

/**
 * Implements the Dynamic Menu to choose the Theme.
 */
public class ThemeDynamicMenu extends ContributionItem {

	private static String THEME_ID = "THEME_ID"; //$NON-NLS-1$

	private IThemeEngine engine;
	private boolean highContrastMode;

	public ThemeDynamicMenu() {
		IWorkbench workbench = PlatformUI.getWorkbench();
		MApplication application = workbench.getService(MApplication.class);
		IEclipseContext context = application.getContext();
		engine = context.get(IThemeEngine.class);
		highContrastMode = workbench.getDisplay().getHighContrast();
	}

	@Override
	public void fill(Menu menu, int index) {
		for (ITheme theme : engine.getThemes()) {
			if (!highContrastMode && !Util.isGtk() && theme.getId().equals(E4Application.HIGH_CONTRAST_THEME_ID)) {
				continue;
			}
			MenuItem menuItem = new MenuItem(menu, SWT.CHECK, index);
			menuItem.setText(theme.getLabel());
			menuItem.setData(THEME_ID, theme.getId());
			menuItem.addSelectionListener(new SelectionAdapter() {
				@Override
				public void widgetSelected(SelectionEvent e) {
					engine.setTheme(theme, !highContrastMode);
				}
			});
		}

		menu.addMenuListener(new MenuAdapter() {
			@Override
			public void menuShown(MenuEvent e) {
				for (MenuItem item : menu.getItems()) {
					boolean isActive = item.getData(THEME_ID).equals(engine.getActiveTheme().getId());
					item.setEnabled(!isActive);
					item.setSelection(isActive);
				}
			}
		});

	}

}

Finally, launch the Eclipse IDE to check the menu works as expected.

[Screenshot: the Theme menu in action]


by psuzzi at April 11, 2017 05:23 PM

Clean Sheet Service Update

by Frank Appel at April 11, 2017 11:31 AM

Written by Frank Appel

Early enough to pass as an Easter gift, we provide a Clean Sheet service update. The new version (0.5) is a simple bugfix release addressing some StyledText adapter problems on Windows.

The Clean Sheet Eclipse Design

In case you've missed out on the topic and you are wondering what I'm talking about, here is a screenshot of my real-world setup using the Clean Sheet theme. For more information please refer to the features landing page at http://fappel.github.io/xiliary/clean-sheet.html, read the introductory Clean Sheet feature description blog post, and check out the New & Noteworthy page.

 

Clean Sheet Service Update

Thanks to the participation of dedicated users, we were able to spot and resolve some nuisances. On Windows, the Mylyn task editor works again with Bugzilla. Also on Windows, line number updates in Java editors are propagated again on vertical scrolling via mouse drag. Please refer to issues #75 and #76 for more details.

Clean Sheet Installation

Drag the ‘Install’ link to your running Eclipse workspace (requires the Eclipse Marketplace Client).

or

Select Help > Install New Software.../Check for Updates.
P2 repository software site: http://fappel.github.io/xiliary/
Feature: Code Affine Theme

After feature installation and workbench restart select the ‘Clean Sheet’ theme:
Preferences: General > Appearance > Theme: Clean Sheet

 

On a Final Note, …

Of course, it’s interesting to hear suggestions or find out about potential issues that need to be resolved. Feel free to use the Xiliary Issue Tracker or the comment section below for reporting.

With this in mind, I’d like to thank all the Clean Sheet adopters for the support and wish everybody a happy Easter egg hunt 😉

Title Image: © Depositphotos.com/piccola

The post Clean Sheet Service Update appeared first on Code Affine.


by Frank Appel at April 11, 2017 11:31 AM

EPLv2: A New Version of the Eclipse Public License

April 10, 2017 03:00 PM

Participate in the community discussion on the Eclipse Foundation's revised Eclipse Public License (EPL).

April 10, 2017 03:00 PM

Moving on

by Sebastian Zarnekow (noreply@blogger.com) at April 10, 2017 11:46 AM

After an exciting journey of 15 months as the Director Engineering at SMACC, I decided to move on. It was not an easy decision to make, though it’s still one that I wanted to make. In the past year I made many new friends, met great people, and had the chance to work in a super nice team. It was a great time with plenty of challenges, important learnings and great fun. But I also realized that I was missing the time as a technical consultant. Language engineering always was and still is a strong passion of mine. So I figured it’s about time to move on and refocus. Xtext, Eclipse, Language oriented programming - exciting times ahead. Keeping you posted ...

by Sebastian Zarnekow (noreply@blogger.com) at April 10, 2017 11:46 AM

EPLv2: A New Version of the Eclipse Public License

by Mike Milinkovich at April 07, 2017 06:17 PM

The Eclipse Foundation is in the process of revising the Eclipse Public License (EPL). Refreshing a popular open source license is a big job, and one that we have been chipping away at for over a year.

The EPL and its predecessor the Common Public License have been around for about 16 years now. For a full presentation on the changes we are considering and their motivation, you can check out our presentation, or the video on YouTube.

Please get involved. Just as importantly, if you are a developer involved in the Eclipse community and ecosystem, encourage your colleagues in the legal department to get involved. The discussions are happening on the epl-discuss@eclipse.org mail list (subscription required). The most recent public drafts of the EPLv2 can be found here.


Filed under: Foundation, Open Source

by Mike Milinkovich at April 07, 2017 06:17 PM

JSON Forms – Day 5 – Layouts

by Maximilian Koegel and Jonas Helming at April 07, 2017 06:55 AM

JSON Forms is a framework for efficiently building form-based web UIs. These UIs, which are usually embedded in a business application, allow end users to enter, modify, and view data. JSON Forms eliminates the need to write HTML templates and Javascript for data binding by hand. It allows you to create customizable forms by leveraging the capabilities of JSON and JSON schema and providing a simple and declarative way of describing forms. Forms are then rendered with a UI framework, currently one that is based on AngularJS. If you would like to know more about JSON Forms, the JSON Forms homepage is a good starting point.

In this blog series, we would like to introduce the framework based on a real-world example application, a task tracker called “Make It happen”. On day 1 and day 2, we described the overall requirements as well as created a fully working form for the entity “Task”. On day 3 we showed how to extend the form with new attributes and controls. On day 4 we introduced rule-based visibility of controls, based on the data the user has entered.

If you would like to follow this blog series, please follow us on Twitter. This is where we will announce every new blog post regarding JSON Forms. After the first 4 days, the current form looks like this:

While the form is already fully functional, including data binding, validation, and even rule-based visibility, the layout is definitely not optimal. Basically, we just see a vertical list of all controls. Therefore, in this blog post, we want to refine the layout using the JSON Forms layout elements.

So far, we have embedded all controls into one root “VerticalLayout”. This type of layout always arranges its children vertically. However, JSON Forms also supports other layout types, e.g. “HorizontalLayout”. Of course, those layouts can be combined by embedding one layout element into another. For example, to achieve a two-column layout, you can use a “HorizontalLayout” element which contains two “VerticalLayout” elements.
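For illustration, a minimal two-column UI schema following this pattern could look like this (the two property names are placeholders, not attributes of the example app):

```json
{
  "type": "HorizontalLayout",
  "elements": [
    {
      "type": "VerticalLayout",
      "elements": [
        { "type": "Control", "scope": { "$ref": "#/properties/leftColumnProperty" } }
      ]
    },
    {
      "type": "VerticalLayout",
      "elements": [
        { "type": "Control", "scope": { "$ref": "#/properties/rightColumnProperty" } }
      ]
    }
  ]
}
```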

In our current form, we could save some vertical space by putting the “due_date” attribute into one shared “HorizontalLayout” with the “rating” property. The same applies to “recurrence” and “recurrence_interval”.

The resulting UI schema looks like this:

{
  "type": "VerticalLayout",
  "elements": [
    {
      "type": "Control",
      "label": false,
      "scope": {
        "$ref": "#/properties/done"
      }
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/name"
      }
    },
    {
      "type": "HorizontalLayout",
      "elements": [
        {
          "type": "Control",
          "scope": {
            "$ref": "#/properties/due_date"
          }
        },
        {
          "type": "Control",
          "scope": {
            "$ref": "#/properties/rating"
          }
        }
      ]
    },
    {
      "type": "Control",
      "scope": {
        "$ref": "#/properties/description"
      },
      "options": {
          "multi":true
      }
    },
    {
      "type": "HorizontalLayout",
      "elements": [
        {
          "type": "Control",
          "scope": {
            "$ref": "#/properties/recurrence"
          }
        },
        {
          "type": "Control",
          "scope": {
            "$ref": "#/properties/recurrence_interval"
          },
          "rule": {
              "effect": "HIDE",
              "condition": {
                  "scope": {
                      "$ref": "#/properties/recurrence"
                  },
                  "expectedValue": "Never"
              }
          }
        }
      ]
    }
  ]
}

So by simply moving things around a bit in the UI schema, we can produce the following layout:

As you can imagine, it is pretty easy to produce more advanced layouts by just combining layout elements.

Please note that JSON Forms supports more layout types than horizontal and vertical. As an example, you could use the element “Categorization” to separate controls into several subcategories. The following screenshot shows a rendered categorization element, where the attributes of a task are split into two tabs. Please note that the horizontal layout described before is now nested into the categorization.

The UI Schema for this layout is as follows:

{
  "type": "Categorization",
  "elements": [
    {
      "type": "Category",
      "label": "Main",
      "elements": [
        {
          "type": "Control",
          "label": false,
          "scope": {
            "$ref": "#/properties/done"
          }
        },
        {
          "type": "Control",
          "scope": {
            "$ref": "#/properties/name"
          }
        },
        {
          "type": "Control",
          "scope": {
            "$ref": "#/properties/description"
          },
          "options": {
              "multi":true
          }
        }
      ]
    },
    {
      "type": "Category",
      "label": "Additional",
      "elements": [
        {
          "type": "HorizontalLayout",
          "elements": [
            {
              "type": "Control",
              "scope": {
                "$ref": "#/properties/due_date"
              }
            },
            {
              "type": "Control",
              "scope": {
                "$ref": "#/properties/rating"
              }
            }
          ]
        },
        {
          "type": "HorizontalLayout",
          "elements": [
            {
              "type": "Control",
              "scope": {
                "$ref": "#/properties/recurrence"
              }
            },
            {
              "type": "Control",
              "scope": {
                "$ref": "#/properties/recurrence_interval"
              },
              "rule": {
                  "effect": "HIDE",
                  "condition": {
                      "scope": {
                          "$ref": "#/properties/recurrence"
                      },
                      "expectedValue": "Never"
                  }
              }
            }
          ]
        }
      ]
    }
  ]
}

Please refer to this page for an overview of the available layout elements in JSON Forms.

As you might have noticed, the layout types of JSON Forms are a little different from what you know from HTML or any widget toolkit. As an example, there is no element called “tabbed layout”; rather, it is called “Categorization”. The reason for this is that the UI schema of JSON Forms is focused on describing the structure of the layout rather than the actual rendering. The concrete visualization is the responsibility of the rendering component. This is analogous to the combination of HTML and CSS and allows for flexibility in rendering. As an example, the categorization element described above only specifies that a form is split into several categories. The default renderer translates this information into tabs. However, you could also implement an alternative renderer, which would display categories slightly differently, e.g. like this:

Talking about alternative renderers: This is a core feature of JSON Forms! So far, we have just used the default renderer for controls as well as for layouts. In the next post, we will describe how to implement and plugin custom renderers in order to adapt the way in which a form is rendered. So stay tuned!

If you are interested in trying out JSON Forms, please refer to the Getting-Started tutorial. This explains how to set up JSON Forms in your own project and how you can try out the first few steps yourself. If you would like to follow this blog series, please follow us on twitter. We will announce every new blog post on JSON Forms there. If you need support for JSON Forms or if you are interested in new features, please feel free to contact us.

We hope to see you soon for the next day!

 

List of all available days to date:




by Maximilian Koegel and Jonas Helming at April 07, 2017 06:55 AM

What is Eclipse?

by Ian Skerrett at April 05, 2017 05:19 PM

Last week we launched a survey to solicit opinions about open source foundations and the general Eclipse community. A key thing we hope to accomplish with this survey is to gauge how people perceive the Eclipse brand. The Eclipse community has substantially grown and changed over the last number of years, so we really want to know how people answer the question ‘What is Eclipse?’

The survey has 16 questions and should take less than 5 minutes to complete. It would be great to have as many Eclipse community members as possible answer ‘What is Eclipse?’.




by Ian Skerrett at April 05, 2017 05:19 PM