JBoss Tools 4.5.2.AM2 for Eclipse Oxygen.2

by jeffmaury at April 26, 2018 02:06 PM

Happy to announce the 4.5.2.AM2 (Developer Milestone 2) build for Eclipse Oxygen.2 (built with RC2).

Downloads available at JBoss Tools 4.5.2 AM2.

What is New?

Full info is at this page. Some highlights are below.

Fuse Tooling

Fuse 7 Karaf-based runtime Server adapter

Fuse 7 is in the works, and preliminary versions are already available in an early-access repository. Fuse Tooling is ready to leverage them so that you can try the upcoming major Fuse version.

Fuse 7 Server Adapter

The classical server adapter functionality is available: automatic redeploy, Java debug, and graphical Camel debug through a created JMX connection. Please note:

  • you cannot yet retrieve the Fuse 7 runtime directly from Fuse Tooling; you need to download it to your machine and point to it when creating the server adapter.

  • the provided templates require some modifications to work with Fuse 7, mainly adapting the BOM. Please see the work related to it in this JIRA task and its children.
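As a sketch of the BOM adaptation, the template’s pom.xml would import a Fuse 7 BOM in its dependencyManagement section. The coordinates and version below are assumptions for illustration; check the early-access repository and the JIRA task for the exact values:

```xml
<dependencyManagement>
  <dependencies>
    <!-- hypothetical Fuse 7 Karaf BOM coordinates; verify against the early-access repository -->
    <dependency>
      <groupId>org.jboss.redhat-fuse</groupId>
      <artifactId>fuse-karaf-bom</artifactId>
      <version>7.0.0.fuse-000001</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```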

Display routes defined inside "routeContext" in Camel Graphical Editor (Design tab)

"routeContext" tag is a special tag used in Camel to provide the ability to reuse routes and to split them across different files. This is very useful on large projects. See Camel documentation for more information. Since this version, the Design of the routes defined in "routeContext" tags are now displayed.

Usability improvement: Progress bar when "Changing the Camel version"

Since Fuse Tooling 10.1.0, it has been possible to change the Camel version. If the Camel version is not yet cached locally, or on a slow internet connection, this operation can take a while. There is now a progress bar showing the progress.

Switch Camel Version with Progress Bar

Enjoy!

Jeff Maury



JBoss Tools and Red Hat Developer Studio for Eclipse Oxygen.2

by jeffmaury at April 26, 2018 02:06 PM

JBoss Tools 4.5.2 and Red Hat JBoss Developer Studio 11.2 for Eclipse Oxygen.2 are here waiting for you. Check it out!

devstudio11

Installation

JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.7 (Oxygen), but we recommend using the latest Eclipse 4.7.2 Oxygen JEE Bundle, since you then get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/oxygen/stable/updates/

What is new?

Our main focus for this release was on Java 9 adoption, improvements for container-based development, and bug fixing. Eclipse Oxygen itself has a lot of cool new features, but let me highlight a few updates in both Eclipse Oxygen and the JBoss Tools plugins that I think are worth mentioning.

OpenShift 3

Spring Boot applications support in OpenShift server adapter

The OpenShift server adapter already allowed hot deploy and debugging for JEE and Node.js based applications. It now supports Spring Boot applications, with some limitations: the Spring Boot devtools module must be added to your application, as it monitors code changes, and because the application must be launched in exploded mode, you must use the upstream builder image (docker.io/fabric8/s2i-java) rather than the downstream builder image fis-java-openshift.
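For reference, adding the devtools module is a single Maven dependency in the application’s pom.xml (standard Spring Boot coordinates; the version is usually managed by the Spring Boot parent or BOM):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <!-- version managed by the Spring Boot parent/BOM -->
</dependency>
```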

As an example, we’ve provided an OpenShift template that creates an OpenShift application based on the upstream image and a Git repository that adds the Spring Boot devtools to the Fabric8 Spring Boot quickstart.

{
        "apiVersion": "v1",
        "kind": "Template",
        "metadata": {
          "annotations": {
            "description": "Spring-Boot and CXF JAXRS QuickStart. This example demonstrates how you can use Apache CXF JAXRS with Spring Boot on Openshift. The quickstart uses Spring Boot to configure a little application that includes a CXF JAXRS endpoint with Swagger enabled.",
            "tags": "quickstart,java,springboot,fis",
            "iconClass": "icon-jboss",
            "version": "2.0"
          },
          "name": "s2i-spring-boot-cxf-jaxrs"
        },
        "labels": {
          "template": "s2i-spring-boot-cxf-jaxrs"
        },
        "parameters": [
          {
            "name": "APP_NAME",
            "displayName": "Application Name",
            "required": true,
            "value": "s2i-spring-boot-cxf-jaxrs",
            "description": "The name assigned to the application."
          },
          {
            "name": "GIT_REPO",
            "displayName": "Git Repository URL",
            "required": true,
            "value": "https://github.com/jeffmaury/spring-boot-cxf-jaxrs.git",
            "description": "The URL of the repository with your application source code."
          },
          {
            "name": "GIT_REF",
            "displayName": "Git Reference",
            "value": "hotdeploy",
            "description": "Set this to a branch name, tag or other ref of your repository if you are not using the default branch."
          },
          {
            "name": "SERVICE_NAME",
            "displayName": "Service Name",
            "value": "cxf-jaxrs",
            "description": "Exposed service name."
          },
          {
            "name": "BUILDER_VERSION",
            "displayName": "Builder version",
            "value": "2.0",
            "description": "The version of the FIS S2I builder image to use."
          },
          {
            "name": "APP_VERSION",
            "displayName": "Application Version",
            "value": "1.0.0.redhat-000014",
            "description": "The application version."
          },
          {
            "name": "MAVEN_ARGS",
            "displayName": "Maven Arguments",
            "value": "package -DskipTests -Dfabric8.skip -e -B",
            "description": "Arguments passed to mvn in the build."
          },
          {
            "name": "MAVEN_ARGS_APPEND",
            "displayName": "Extra Maven Arguments",
            "description": "Extra arguments passed to mvn, e.g. for multi-module builds."
          },
          {
            "name": "ARTIFACT_DIR",
            "displayName": "Maven build directory",
            "description": "Directory of the artifact to be built, e.g. for multi-module builds."
          },
          {
            "name": "IMAGE_STREAM_NAMESPACE",
            "displayName": "Image Stream Namespace",
            "value": "openshift",
            "required": true,
            "description": "Namespace in which the Fuse ImageStreams are installed. These ImageStreams are normally installed in the openshift namespace. You should only need to modify this if you&aposve installed the ImageStreams in a different namespace/project."
          },
          {
            "name": "BUILD_SECRET",
            "displayName": "Git Build Secret",
            "generate": "expression",
            "description": "The secret needed to trigger a build.",
            "from": "[a-zA-Z0-9]{40}"
          },
          {
            "name": "CPU_REQUEST",
            "displayName": "CPU request",
            "value": "0.2",
            "required": true,
            "description": "The amount of CPU to requests."
          },
          {
            "name": "CPU_LIMIT",
            "displayName": "CPU limit",
            "value": "1.0",
            "required": true,
            "description": "The amount of CPU the container is limited to use."
          }
        ],
        "objects": [
          {
            "apiVersion": "v1",
            "kind": "Route",
            "metadata": {
              "labels": {
                "component": "${APP_NAME}",
                "provider": "s2i",
                "project": "${APP_NAME}",
                "version": "${APP_VERSION}",
                "group": "quickstarts"
              },
              "name": "${SERVICE_NAME}-route"
            },
            "spec": {
              "to": {
                "kind": "Service",
                "name": "${SERVICE_NAME}"
              }
            }
          },
          {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {
              "annotations": {
              },
              "labels": {
                "component": "${APP_NAME}",
                "provider": "s2i",
                "project": "${APP_NAME}",
                "version": "${APP_VERSION}",
                "group": "quickstarts"
              },
              "name": "${SERVICE_NAME}"
            },
            "spec": {
              "clusterIP": "None",
              "deprecatedPublicIPs": [],
              "ports": [
                {
                  "port": 9413,
                  "protocol": "TCP",
                  "targetPort": 8080
                }
              ],
              "selector": {
                "project": "${APP_NAME}",
                "component": "${APP_NAME}",
                "provider": "s2i",
                "group": "quickstarts"
              }
            }
          },
          {
            "kind": "ImageStream",
            "apiVersion": "v1",
            "metadata": {
              "name": "${APP_NAME}",
              "creationTimestamp": null,
              "labels": {
                "component": "${APP_NAME}",
                "group": "quickstarts",
                "project": "${APP_NAME}",
                "provider": "s2i",
                "version": "${APP_VERSION}"
              }
            },
            "spec": {},
            "status": {
              "dockerImageRepository": ""
            }
          },
          {
            "kind": "BuildConfig",
            "apiVersion": "v1",
            "metadata": {
              "name": "${APP_NAME}",
              "creationTimestamp": null,
              "labels": {
                "component": "${APP_NAME}",
                "group": "quickstarts",
                "project": "${APP_NAME}",
                "provider": "s2i",
                "version": "${APP_VERSION}"
              }
            },
            "spec": {
              "triggers": [
                {
                  "type": "GitHub",
                  "github": {
                    "secret": "${BUILD_SECRET}"
                  }
                },
                {
                  "type": "Generic",
                  "generic": {
                    "secret": "${BUILD_SECRET}"
                  }
                },
                {
                  "type": "ConfigChange"
                },
                {
                  "type": "ImageChange",
                  "imageChange": {}
                }
              ],
              "source": {
                "type": "Git",
                "git": {
                  "uri": "${GIT_REPO}",
                  "ref": "${GIT_REF}"
                }
              },
              "strategy": {
                "type": "Source",
                "sourceStrategy": {
                  "from": {
                    "kind": "DockerImage",
                    "name": "fabric8/s2i-java:${BUILDER_VERSION}"
                  },
                  "forcePull": true,
                  "incremental": true,
                  "env": [
                    {
                      "name": "BUILD_LOGLEVEL",
                      "value": "5"
                    },
                    {
                      "name": "ARTIFACT_DIR",
                      "value": "${ARTIFACT_DIR}"
                    },
                    {
                      "name": "MAVEN_ARGS",
                      "value": "${MAVEN_ARGS}"
                    },
                    {
                      "name": "MAVEN_ARGS_APPEND",
                      "value": "${MAVEN_ARGS_APPEND}"
                    }
                  ]
                }
              },
              "output": {
                "to": {
                  "kind": "ImageStreamTag",
                  "name": "${APP_NAME}:latest"
                }
              },
              "resources": {}
            },
            "status": {
              "lastVersion": 0
            }
          },
          {
            "kind": "DeploymentConfig",
            "apiVersion": "v1",
            "metadata": {
              "name": "${APP_NAME}",
              "creationTimestamp": null,
              "labels": {
                "component": "${APP_NAME}",
                "group": "quickstarts",
                "project": "${APP_NAME}",
                "provider": "s2i",
                "version": "${APP_VERSION}"
              }
            },
            "spec": {
              "strategy": {
                "resources": {}
              },
              "triggers": [
                {
                  "type": "ConfigChange"
                },
                {
                  "type": "ImageChange",
                  "imageChangeParams": {
                    "automatic": true,
                    "containerNames": [
                      "${APP_NAME}"
                    ],
                    "from": {
                      "kind": "ImageStreamTag",
                      "name": "${APP_NAME}:latest"
                    }
                  }
                }
              ],
              "replicas": 1,
              "selector": {
                "component": "${APP_NAME}",
                "deploymentconfig": "${APP_NAME}",
                "group": "quickstarts",
                "project": "${APP_NAME}",
                "provider": "s2i",
                "version": "${APP_VERSION}"
              },
              "template": {
                "metadata": {
                  "creationTimestamp": null,
                  "labels": {
                    "component": "${APP_NAME}",
                    "deploymentconfig": "${APP_NAME}",
                    "group": "quickstarts",
                    "project": "${APP_NAME}",
                    "provider": "s2i",
                    "version": "${APP_VERSION}"
                  }
                },
                "spec": {
                  "containers": [
                    {
                      "name": "${APP_NAME}",
                      "image": "library/${APP_NAME}:latest",
                      "readinessProbe" : {
                        "httpGet" : {
                          "path" : "/health",
                          "port" : 8081
                        },
                        "initialDelaySeconds" : 10
                      },
                      "livenessProbe" : {
                        "httpGet" : {
                          "path" : "/health",
                          "port" : 8081
                        },
                        "initialDelaySeconds" : 180
                      },
                      "ports": [
                        {
                          "containerPort": 8778,
                          "name": "jolokia"
                        }
                      ],
                      "env" : [ {
                        "name" : "KUBERNETES_NAMESPACE",
                        "valueFrom" : {
                          "fieldRef" : {
                            "fieldPath" : "metadata.namespace"
                          }
                        }
                      } ],
                      "resources": {
                        "requests": {
                          "cpu": "${CPU_REQUEST}"
                        },
                        "limits": {
                          "cpu": "${CPU_LIMIT}"
                        }
                      }
                    }
                  ]
                }
              }
            },
            "status": {}
          }
        ]
      }

You can see a demo of the OpenShift server adapter for Spring Boot applications here:

Support for route timeouts and liveness probe for OpenShift Server Adapter debugging configurations

While debugging your OpenShift deployment, you may face two different issues:

  • if you launch your test through a web browser, it is likely that you will access your OpenShift deployment through an OpenShift route. The problem is that, by default, OpenShift routes have a 30-second timeout for each request. So if you are stepping through one of your breakpoints, you will get a timeout error message in the browser window even though you can still debug your OpenShift deployment, and you are now stuck with the navigation of your OpenShift application.

  • if your OpenShift deployment has a liveness probe configured, then depending on your virtual machine capabilities or how your debugger is configured, the liveness probe may fail while you are stepping through one of your breakpoints; OpenShift will then restart your container and your debugging session will be destroyed.

So, from now on, when the OpenShift server adapter is started in debug mode, the following actions are performed:

  • if an OpenShift route is found that is linked to the OpenShift deployment you want to debug, the route timeout will be set or increased to 1 hour. The original or default value will be restored when the OpenShift server adapter is restarted in run mode.

  • if your OpenShift deployment has a liveness probe configured, the initialDelaySeconds field will be increased to 1 hour if its defined value is lower than that. If the value is greater than 1 hour, it is left intact. The original value will be restored when the OpenShift server adapter is restarted in run mode.
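To illustrate, the adapter’s adjustments are roughly equivalent to applying the following changes by hand; the route name is a placeholder, the timeout annotation is the standard HAProxy router annotation, and the probe fragment mirrors the template shown earlier:

```yaml
# Route: raise the per-request timeout to 1 hour while debugging
apiVersion: v1
kind: Route
metadata:
  name: cxf-jaxrs-route        # placeholder name
  annotations:
    haproxy.router.openshift.io/timeout: 1h
---
# Container fragment: raise the liveness probe's initial delay to 1 hour
livenessProbe:
  httpGet:
    path: /health
    port: 8081
  initialDelaySeconds: 3600
```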

Enhanced command to delete resource(s)

When it comes to deleting OpenShift resources, you previously had two choices:

  • individually delete each resource, but since some resources are hidden by the OpenShift explorer, this can become troublesome

  • delete the containing OpenShift project, but you are then deleting more resources than required

There is now a new, enhanced command to delete resources. It is available at the OpenShift project level: it first lists all the available OpenShift resources for the selected OpenShift project. You can then select the ones you want to delete, and you can also filter the list using a filter that is applied to the labels of each retrieved OpenShift resource.

So if you have two different deployments in a single OpenShift project (if you are using OpenShift Online Starter, for example), or if you have different kinds of resources in a single deployment, you can now distinguish between them.

Let’s see this in action:

In this example, I have deployed an EAP6.4 based application and an EAP7.0 based one. Here is what you can see from the OpenShift explorer:

new delete resources explorer

Now, let’s invoke the new delete command on the eap OpenShift project: right-click the OpenShift project and select Delete Resources…:

new delete resources dialog

Let’s suppose that we want to delete the EAP 6.4 deployment. Enter eap=6.4 in the filter field:

new delete resources dialog1

Push the Select All button:

new delete resources dialog2

Close this dialog by pushing the OK button. The resources will be deleted and the OpenShift explorer will be updated accordingly:

new delete resources explorer1

Server tools

EAP 7.1 Server Adapter

A server adapter based on WildFly 11 has been added to work with EAP 7.1 and WildFly 11. This new server adapter includes support for incremental management deployment, like its upstream WildFly 11 counterpart.

Fuse Tooling

Fuse 7 Karaf-based runtime Server adapter

Fuse 7 is in the works, and preliminary versions are already available in an early-access repository. Fuse Tooling is ready to leverage them so that you can try the upcoming major Fuse version.

Fuse 7 Server Adapter

The classical server adapter functionality is available: automatic redeploy, Java debug, and graphical Camel debug through a created JMX connection. Please note:

  • you cannot yet retrieve the Fuse 7 runtime directly from Fuse Tooling; you need to download it to your machine and point to it when creating the server adapter.

  • the provided templates require some modifications to work with Fuse 7, mainly adapting the BOM. Please see the work related to it in this JIRA task and its children.

Display routes defined inside "routeContext" in Camel Graphical Editor (Design tab)

"routeContext" tag is a special tag used in Camel to provide the ability to reuse routes and to split them across different files. This is very useful on large projects. See Camel documentation for more information. Since this version, the Design of the routes defined in "routeContext" tags are now displayed.

Usability improvement: Progress bar when "Changing the Camel version"

Since Fuse Tooling 10.1.0, it has been possible to change the Camel version. If the Camel version is not yet cached locally, or on a slow internet connection, this operation can take a while. There is now a progress bar showing the progress.

Switch Camel Version with Progress Bar

Support for creating Fuse Ignite Technical Extensions

We are happy to announce the addition of support for creating Technical Extension projects for Fuse Ignite*. That includes the creation of the project using the "New Fuse Ignite Extension Project" wizard as well as support for building the deployable artifact directly from inside the Eclipse environment.

*Fuse Ignite is a JBoss Fuse feature that provides a web interface for integrating applications. Without writing code, a business expert can use Ignite to connect to applications and optionally operate on data between connections to different applications. In Ignite, a data operation is referred to as a step in an integration. Ignite provides steps for operations such as filtering and mapping data. To operate on data in ways that are not provided by Ignite built-in steps, you can develop an Ignite extension to define one or more custom steps. Fuse Ignite comes as part of Fuse and Fuse Online. Please refer to the online documentation for more information on how to create and configure technical extensions for Fuse Ignite.

Fuse Ignite Technical Extension Wizard

The provided project template allows you to define an Apache Camel route as the base flow of your new technical extension.

Fuse Ignite Technical Extension Route

To configure your new technical extension you can use the JSON file created with the new project.
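As a rough, hypothetical sketch (all field names and values below are assumptions based on the Syndesis-style extension descriptor format and may differ in your version), the generated JSON descriptor looks something like this:

```json
{
  "schemaVersion": "v1",
  "extensionId": "com.example.custom-step",
  "name": "Custom Step Extension",
  "description": "A custom step that operates on data between connections",
  "version": "1.0.0",
  "extensionType": "Steps"
}
```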

Fuse Ignite Technical Extension Configuration

Forge Tools

Forge Runtime updated to 3.8.1.Final

The included Forge runtime is now 3.8.1.Final. Read the official announcement here.

And more…

You can find more noteworthy updates on this page.

What is next?

Having JBoss Tools 4.5.2 and Developer Studio 11.2 out, we are already working on the next maintenance release for Eclipse Oxygen.

Enjoy!

Jeff Maury



Fluent-Log API landed in e(fx)clipse

by Tom Schindl at April 25, 2018 09:21 PM

Last week I came across Google’s Flogger API and I really liked it.

Back in e(fx)clipse land I started to miss it, but because introducing a dependency on other log frameworks is not possible, I implemented our own fluent log API inspired by Flogger.

So how do you use it:

// if you have a logger
Logger logger = LoggerCreator.createLogger( Sample.class );
FluentLogger flogger = FluentLogger.of( logger );

// if you use @Log
@Inject
@Log
FluentLogger flogger;

// Log something
FluentLogContext debug = flogger.atDebug();
debug.log( "Hello World" );
debug.log( "Hello World with format %s", 10 );
debug.log( () -> "Lazy Hello World" );
debug.log( t -> "Lazy Hello World with Context " + t, o ); // 'o' is a context object passed to the lambda

// Log with exception
try {
   // ...
} catch( Throwable t ) {
  flogger.atInfo().withException( t ).log( "Hello World" );
}

// Throttle: only log every 100th log statement
flogger.atInfo().throttleByCount(100)
  .log( "Log every 100th time" );

// Throttle: Only log every minute
flogger.atInfo().throttleByTime(1, TimeUnit.MINUTES)
  .log( "Log every minute" );

// Build your own conditional fluent addition
flogger.atInfo().with( Throttle::new ).every( 100 )
  .log( "Log every 100th time" );


Interview: Cloud scale IoT messaging

by Anonymous at April 25, 2018 04:26 PM

Eclipse Hono is a cloud-based IoT connectivity platform. In this interview, Jens Reimann and Dejan Bosanac give us insights into the project. You can learn more at their talk, Cloud scale IoT messaging.

"..IoT connectivity is one of the main challenges in building IoT cloud platforms, as having a single broker is not enough anymore. Hono solves scalable messaging problem with adding more specifics to IoT use cases. This means it’s interesting to other companies that want to build their own IoT cloud platforms..."



EclipseCon France 2018: Register Early!

April 25, 2018 01:30 PM

Prices go up after April 30, so register now.


Xtend 2.14 – Unnecessary modifiers validation

by Tamas Miklossy (miklossy@itemis.de) at April 25, 2018 12:24 PM

In the Xtend programming language, visibility modifiers are unnecessary when they match the defaults.

The public modifier is default on:

  • Classes
  • Interfaces
  • Enums
  • Annotations
  • Constructors
  • Methods


The private modifier is default on:

  • Fields

Additionally, the final modifier is redundant in combination with the val keyword on field declarations, and the def keyword is redundant in combination with the override keyword on method declarations.
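As an illustrative (untested) Xtend sketch, every modifier below matches a default or is implied, so each would be flagged by the new validation:

```xtend
public class Greeter {              // 'public' is the default for classes
	private String name             // 'private' is the default for fields
	final val greeting = "Hello"    // 'final' is implied by 'val'

	public def greet() {            // 'public' is the default for methods
		greeting + " " + name
	}

	override def toString() {       // 'def' is redundant with 'override'
		greet()
	}
}
```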

Xtend 2.14 adds validation rules to detect the unnecessary modifiers and issues corresponding warnings.


1_Unnecessary_Modifier_Warnings


The Xtend IDE also provides Quick Fixes to assist the user on fixing such issues: Select all Unnecessary modifier warnings on the Problems view and invoke the Quick Fix dialog either via the context menu or the keyboard shortcut Ctrl + 1.

2_Quickfix_Dialog


After clicking on the Finish button all Unnecessary modifier warnings will be fixed at once with a single action. Comparing the Xtend code before and after the Quick Fix execution confirms that all unnecessary modifiers have been successfully removed.

3_Compare_Dialog


For ongoing Xtend projects, it could be noisy to suddenly have tons of new warnings after updating to a new Xtend version. The Unnecessary modifier severity can be configured on the Xtend preference page and (when desired) can even be completely ignored.

4_Xtend_Preferences

The latest Xtend version can be installed from its Update Site. Give it a try! The Xtext team is always happy about your early feedback!



Survey of 1800+ developers now released on new Jakarta EE website

April 24, 2018 02:00 PM

Survey of 1,800+ Java developers reveals "cloud native" top requirement in platform's evolution.


JBoss Tools and Red Hat Developer Studio for Eclipse Oxygen.3a

by jeffmaury at April 24, 2018 12:56 PM

JBoss Tools 4.5.3 and Red Hat JBoss Developer Studio 11.3 for Eclipse Oxygen.3a are here waiting for you. Check it out!

devstudio11

Installation

JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.7 (Oxygen), but we recommend using the latest Eclipse 4.7.3a Oxygen JEE Bundle, since you then get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

http://download.jboss.org/jbosstools/oxygen/stable/updates/

What is new?

Our main focus for this release was on Java 10 adoption, improvements for container-based development, and bug fixing. Eclipse Oxygen itself has a lot of cool new features, but let me highlight a few updates in both Eclipse Oxygen and the JBoss Tools plugins that I think are worth mentioning.

OpenShift 3

CDK and Minishift Server Adapter better developer experience

MINISHIFT_HOME setting

When working with both the CDK and upstream Minishift, it is recommended to distinguish the environments through the MINISHIFT_HOME variable. It was possible to use this parameter before, but it required a two-step process:

  • first create the server adapter (through the wizard)

  • then change the MINISHIFT_HOME in the server adapter editor

It is now possible to set this parameter from the server adapter wizard, so everything is correctly set up when you create the server adapter.

Let’s see an example with the CDK server adapter.

From the Servers view, select the new Server menu item and enter cdk in the filter:

cdk server adapter wizard

Select Red Hat Container Development Kit 3.2+

cdk server adapter wizard1

Click the Next button:

cdk server adapter wizard2

The MINISHIFT_HOME parameter can be set here, and a default value is provided.

CDK and Minishift Server Adapter runtime download

When working with both the CDK and upstream Minishift, you previously needed to have downloaded the CDK or Minishift binary yourself. It is now possible to download the runtime to a specific folder when you create the server adapter.

Let’s see an example with the CDK server adapter.

From the Servers view, select the new Server menu item and enter cdk in the filter:

cdk server adapter wizard

Select Red Hat Container Development Kit 3.2+

cdk server adapter wizard1

Click the Next button:

cdk server adapter wizard3

In order to download the runtime, click the Download and install runtime… link:

cdk server adapter wizard4

Select the version of the runtime you want to download

cdk server adapter wizard5

Click the Next button:

cdk server adapter wizard6

You need an account to download the CDK. If you have already configured credentials, select the ones you want to use. If you have not, click the Add button to add your credentials.

cdk server adapter wizard7

Click the Next button. Your credentials will be validated, and upon success, you must accept the license agreement:

cdk server adapter wizard8

Accept the license agreement and click the Next button:

cdk server adapter wizard9

You can choose the folder where you want the runtime to be installed. Once you’ve set it, click the Finish button:

The download of the runtime will be started and you should see the progression on the server adapter wizard:

cdk server adapter wizard10

Once the download is completed, you will notice that the Minishift Binary and Username fields have been filled:

cdk server adapter wizard11

Click the Finish button to create the server adapter.

Please note that if this is the first time you are installing the CDK, you must perform an initialization. In the Servers view, right-click the server and select the Setup CDK menu item:

cdk server adapter wizard12
cdk server adapter wizard13

Please note that the setup-cdk command will also be run automatically (after user approval) when you start the CDK server adapter, if the MINISHIFT_HOME environment is detected as uninitialized.

Minishift Server Adapter

A new server adapter has been added to support upstream Minishift. While the server adapter itself has limited functionality, it is able to start and stop the Minishift virtual machine via its minishift binary. From the Servers view, click New and then type minishift; this will bring up a command to set up and/or launch the Minishift server adapter.

minishift server adapter

All you have to do is set the location of the minishift binary file, the type of virtualization hypervisor, and an optional Minishift profile name.

minishift server adapter1

Once you’re finished, a new Minishift Server adapter will then be created and visible in the Servers view.

minishift server adapter2

Once the server is started, Docker and OpenShift connections should appear in their respective views, allowing the user to quickly create a new OpenShift application and begin developing their AwesomeApp in a highly replicable environment.

minishift server adapter3
minishift server adapter4

The credentials framework still supports the JBoss.org credentials in case other services / components require or use this credentials domain.

Fuse Tooling

New shortcuts in Fuse Integration perspective

Shortcuts for the Java, Launch, and Debug perspectives and basic navigation operations are now provided within the Fuse Integration perspective.

The result is a set of buttons in the Toolbar:

New Toolbar action

All of the associated keyboard shortcuts are also available, such as Ctrl+Shift+T to open a Java Type.

Performance improvement: Loading Advanced tab for Camel Endpoints

The loading time of the "Advanced" tab in the Properties view for Camel Endpoints is greatly improved.

Advanced Tab in Properties view

Previously, in the case of Camel Components that have a lot of parameters, it took several seconds to load the Advanced tab. For example, for the File component, it would take ~3.5s. It now takes ~350ms. The load time has been reduced by a factor of 10. (See this interesting article on response time)

If you notice other places showing slow performance, you can file a report by using the Fuse Tooling issue tracker. The Fuse Tooling team really appreciates your help. Your feedback contributes to our development priorities and improves the Fuse Tooling user experience.

Display Fuse version corresponding to Camel version proposed

When you create a new project, you select the Camel version from a list. Now, the list of Camel versions includes the Fuse version to help you choose the version that corresponds to your production version.

Fuse Version also displayed in drop-down list close to Camel version

Update validation for similar IDs between a component and its definition

Starting with Camel 2.20, you can use the same ID for the component name and its definition unless the specific property "registerEndpointIdsFromRoute" is provided. The validation process now checks the Camel version and the value of the "registerEndpointIdsFromRoute" property.

For example:

<from id="timer" uri="timer:timerName"/>

Improved guidance in method selection for factory methods on Global Bean

When selecting a factory method for a global bean, many possibilities used to be proposed in the user interface. The list of factory methods for a global bean is now limited to only those methods that match the constraints of the bean’s global definition type (bean or bean factory).

Customize EIP labels in the diagram

The Fuse Tooling preferences page for the Editor view includes a new "Preferred Labels" option.

Fuse Tooling editor preference page

Use this option to define the label of EIP components (except endpoints) shown in the Editor’s Design view.

Dialog for defining the display text for an EIP

Fuse Ignite Technical Extension templates

The existing template for "Custom step using a Camel Route" has been updated to work with Fuse 7 Tech Preview 4.

Two new templates have been added: - Custom step using Java Bean - Custom connector

New Fuse Ignite wizard with 3 options

Improvements of the wizard to create a Fuse Integration project

The creation wizard provides better guidance for the targeted deployment environment:

New Fuse Integration Project wizard page to select environment

More space is available to choose the templates, and they are now filtered based on the targeted environment:

New Fuse Integration Project wizard page to select templates

It also points to other places where advanced users can find different examples (see the link at the bottom of the previous screenshot).

Camel Rest DSL editor (Technical preview)

Camel provides a Rest DSL to help with integration through REST endpoints. Fuse Tooling now provides a new read-only tab to visualize the REST endpoints that are defined.

Rest DSL editor tab in read-only mode

It is currently in Tech Preview and needs to be activated in Window → Preferences → Fuse Tooling → Editor → Enable Read Only Tech preview REST DSL tab.

Work is still ongoing and feedback is very welcome on this new feature; you can comment on this JIRA epic.

Dozer upgrade and migration

When upgrading from Camel < 2.20 to Camel > 2.20, note that the Dozer dependency has been upgraded to a version that is not backward-compatible. If you open a Data Transformation based on Dozer in Fuse Tooling, it will propose to migrate the file used for the transformation (technically, by changing the namespace). This allows you to continue using the Data Transformation editor and, in most cases, keeps the Data Transformation working at runtime with Camel > 2.20.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

New Hibernate 5.3 Runtime Provider

With beta releases available in the Hibernate 5.3 stream, the time was right to make available a corresponding Hibernate 5.3 runtime provider. This runtime provider incorporates Hibernate Core version 5.3.0.Beta2 and Hibernate Tools version 5.3.0.Beta1.

hibernate 5 3
Figure 1. Hibernate 5.3 is available
Other Runtime Provider Updates

The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.12.Final and Hibernate Tools version 5.0.6.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.12.Final and Hibernate Tools version 5.1.7.Final.

The Hibernate 5.2 runtime provider now incorporates Hibernate Core version 5.2.15.Final and Hibernate Tools version 5.2.10.Final.

Java Development Tools (JDT)

Support for Java™ 10

The biggest part of this support is local-variable type inference (JEP 286).

Add Java 10 JRE

Java 10 is now recognized as a JRE for launching:

j10

The compiler compliance option 10 is also available:

j10.compliance
JEP 286 var - compilation

Support for compilation of var as shown below

var.compile

Flagging of the compiler errors as expected, shown below

var.nocompile

Completion at places where var is allowed

var.complete

Completion not offered at places where var is not allowed

var.nocomplete

Hover to reveal the javadoc

var.hover

Convert from var to the appropriate type using quick assist

var.vartotype

Convert from type to var using quick assist

var.typetovar
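To make the feature itself concrete, here is a minimal, self-contained sketch of where `var` is accepted under JEP 286 (plain Java 10+; the class name `VarSketch` is mine, for illustration only). The commented-out declarations are the kinds that the compiler, and hence JDT, flags as errors:

```java
// Minimal sketch of JEP 286 local-variable type inference (Java 10+).
public class VarSketch {
    static String joined() {
        var message = new StringBuilder();   // inferred as StringBuilder
        var numbers = new int[] {1, 2, 3};   // inferred as int[]
        for (var n : numbers) {              // allowed in enhanced for loops
            message.append(n);
        }
        return message.toString();
        // Not allowed -- cases flagged as compiler errors:
        // var x;            // no initializer to infer from
        // var f = () -> 1;  // a lambda needs an explicit target type
    }

    public static void main(String[] args) {
        System.out.println(joined());
    }
}
```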

General

Credentials Framework

Sunsetting jboss.org credentials

Download Runtimes and the CDK Server Adapter used the credentials framework to manage credentials. However, JBoss.org credentials can no longer be used, as the underlying service used by these components does not support them.

Aerogear

Aerogear component deprecation

The Aerogear component has been marked deprecated as there is no more maintenance on the source code. It is still available in Red Hat Central and may be removed in the future.

Arquillian

Arquillian component removal

The Arquillian component has been removed from Red Hat Central as it has been deprecated since July 2017.

The last available update site release is here:

BrowserSim

BrowserSim component deprecation

The BrowserSim component has been marked deprecated as there is no more maintenance on the source code. It is still available in Red Hat Central and may be removed in the future.

Freemarker

Freemarker component removal

The Freemarker component has been removed from Red Hat Central as it has been deprecated since July 2017.

The last available update site release is here:

LiveReload

LiveReload component deprecation

The LiveReload component has been marked deprecated as there is no more maintenance on the source code. It is still available in Red Hat Central and may be removed in the future.

And more…​

You can find more noteworthy updates on this page.

What is next?

With JBoss Tools 4.5.3 and Developer Studio 11.3 out, we are already working on the next release for Eclipse Photon.

Enjoy!

Jeff Maury


by jeffmaury at April 24, 2018 12:56 PM

ECF Photon adds Gogo Commands

by Scott Lewis (noreply@blogger.com) at April 24, 2018 01:34 AM

A third major enhancement for ECF's implementation of OSGi Remote Services is the addition of Apache Gogo console commands for examining the existing state of remote services, and the ability to export a service and import an endpoint from the OSGi console.

See this wiki page describing the new commands and their usage.



by Scott Lewis (noreply@blogger.com) at April 24, 2018 01:34 AM

Interview: Making EMF Intelligent with AI

by Anonymous at April 23, 2018 08:55 AM

Niranjan Babu's talk Making EMF Intelligent with AI was chosen as an early bird selection. Read this brief Q&A with Niranjan to find out more about the I-EMF project.

Q: How did you begin with the idea of combining AI with EMF?

A: EMF is used extensively as a modeling framework in the automotive world and the entire world is moving towards model driven development. These modeling frameworks are thus critical in determining the efficiency of software development. I thought the best way to improve efficiency is to make these models intelligent. That is when machine learning came into picture and I decided to combine machine learning with EMF.


by Anonymous at April 23, 2018 08:55 AM

Eclipse Photon Nears Release

by Kesha Williams at April 23, 2018 05:30 AM

Eclipse Photon, the seventeenth annual release of the Eclipse Project, will be released in June, but we’re keeping an eye on all the new and noteworthy features in each pre-release milestone. Milestone 6 (M6) offers noteworthy features for the Eclipse Platform, Java Development Tools (JDT), Plug-in Development Environment (PDE), Equinox sub-project, and for JDT and Eclipse Platform developers.

By Kesha Williams

by Kesha Williams at April 23, 2018 05:30 AM

EC by Example: FlatCollect

by Donald Raab at April 23, 2018 05:08 AM

Learn how to flatten a collection of collections into a single collection using the flatCollect method in Eclipse Collections.

Organize a collection of collections into a single collection

What is FlatCollect?

The method flatCollect is a special form of collect, where the output of the Function provided to the method must always be some Iterable type. The purpose of flatCollect is to provide a transformation that flattens a collection of collections. This method is similar in function to flatMap in Java Streams. The primary difference is that the Function for flatCollect must return an Iterable, while the Function for flatMap must return a Stream.
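To make the comparison with Java Streams concrete, here is a plain-JDK sketch of the same kind of flattening written with flatMap (the class and helper names are mine, for illustration; this is not Eclipse Collections code):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class FlatMapComparison {
    // Stream equivalent of list.flatCollect(Interval::oneTo):
    // each element n expands into the range 1..n, and the ranges are flattened.
    static List<Integer> oneToFlattened(List<Integer> source) {
        return source.stream()
                .flatMap(n -> IntStream.rangeClosed(1, n).boxed())
                .collect(Collectors.toList());
    }
}
```

Note the key API difference described above: with flatCollect the Function returns an Iterable directly, whereas here the flatMap Function must return a Stream.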

Creating Intervals from Integers and flattening them to a List

FlatCollecting a List (Java 8)

@Test
public void flatCollectingAListJava8()
{
    MutableList<Integer> list = mList(5, 4, 3, 2, 1);
    MutableList<Integer> result = list.flatCollect(Interval::oneTo);

    MutableList<Integer> expected = mList(
            1, 2, 3, 4, 5,
            1, 2, 3, 4,
            1, 2, 3,
            1, 2,
            1);
    Assert.assertEquals(expected, result);
}

Collection Pipelines

Martin Fowler describes the Collection Pipeline pattern here. Here is an example of flatCollect used in a collection pipeline to find all of the methods that contain “flat” in their name for a List of classes. Here I used an overloaded form of flatCollect which takes a target collection as an argument.

@Test
public void flatCollectingAListOfMethodsToASetJava8()
{
    MutableList<Class<?>> list = mList(
            ListIterable.class,
            MutableList.class,
            ImmutableList.class);
    MutableSet<String> result = list
            .collect(Class::getMethods)
            .flatCollect(Lists.fixedSize::with, mSet())
            .collect(Method::getName)
            .select(each -> each.toLowerCase().contains("flat"));

    MutableSet<String> expected = mSet("flatCollect");
    Assert.assertEquals(expected, result);
}

The method getMethods on class returns an array, so in the Function I pass to flatCollect, I convert the array to a List. If getMethods had returned a List or some other Iterable type, I could have simply used flatCollect passing Class::getMethods.

Here’s the same example using Java 10 with Local-variable Type Inference.

@Test
public void flatCollectingAListOfMethodsToASetJava10()
{
    var list = mList(
            ListIterable.class,
            MutableList.class,
            ImmutableList.class);
    var resultSet = list
            .collect(Class::getMethods)
            .flatCollect(Lists.fixedSize::with, mSet())
            .collect(Method::getName)
            .select(each -> each.toLowerCase().contains("flat"));

    var expected = mSet("flatCollect");
    Assert.assertEquals(expected, resultSet);
}

Symmetric Sympathy Strikes Again

While there exists a method named collectWith, which is a form of collect that takes a Function2, there is currently no method named flatCollectWith that also takes a Function2. I discovered the lack of flatCollectWith (again) this week. I have submitted an issue for this feature and began working on it over the weekend. I expect to have the flatCollectWith implementation tested and completed over the next week or two.
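For readers curious what shape such a method might take, here is a hedged plain-JDK sketch; this is my own illustration of the idea (names and signature are mine), not the actual Eclipse Collections implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class FlatCollectWithSketch {
    // Like flatCollect, but the two-argument function also receives a fixed parameter.
    static <T, P, V> List<V> flatCollectWith(
            List<T> source,
            BiFunction<? super T, ? super P, ? extends Iterable<V>> function,
            P parameter) {
        List<V> result = new ArrayList<>();
        for (T each : source) {
            // Flatten each Iterable produced by the function into the result list.
            function.apply(each, parameter).forEach(result::add);
        }
        return result;
    }

    // Example two-argument function: the closed range from..to as a List.
    static List<Integer> rangeTo(Integer from, Integer to) {
        return IntStream.rangeClosed(from, to).boxed().collect(Collectors.toList());
    }
}
```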

APIs covered in the examples

  1. flatCollect — transforms elements of a source collection into a new collection by flattening collections in the source collection into a single collection
  2. mList — creates a MutableList
  3. mSet — creates a MutableSet
  4. Interval.oneTo(int) — creates an Interval starting from 1 up to the specified value
  5. var — local-variable type inference included in Java 10 (JEP 286)

Refer to my previous blogs in the EC by Example series for examples of collect and select.

Check out this presentation to learn more about the origins, design and evolution of the Eclipse Collections API. There is also a video covering the slides that was recorded at an Eclipse Community Virtual Meet-up.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


by Donald Raab at April 23, 2018 05:08 AM

Eclipse Vert.x RabbitMQ client gets a new consumer API!

by Sammers21 at April 23, 2018 12:00 AM

In Eclipse Vert.x 3.6.0, the RabbitMQ client gets a new consumer API. In this post we are going to show the improvements over the previous API and how easy it is to use now.

Before digging into the new API, let’s find out the limitations of the current one:

  1. The API uses the event bus in a way that limits the consumer’s control over the RabbitMQ queue.
  2. The message API is based on JsonObject, which does not provide a typed API.

The new API at a glance

Here is what simple queue consumption looks like with the new API:

RabbitMQClient client = RabbitMQClient.create(vertx, new RabbitMQOptions());

client.basicConsumer("my.queue", res -> {
  if (res.succeeded()) {
    System.out.println("RabbitMQ consumer created !");
    RabbitMQConsumer mqConsumer = res.result();
    mqConsumer.handler((RabbitMQMessage message) -> {
        System.out.println("Got message: " + message.body().toString());
    });
  } else {
    // Oops, something went wrong
    res.cause().printStackTrace();
  }
});

Now, to consume from a queue, you simply call the basicConsumer method and asynchronously obtain a RabbitMQConsumer.

Then you need to provide a handler, called for each message consumed, via RabbitMQConsumer#handler, which is the idiomatic way to consume a stream in Vert.x.

You may also note that when a message arrives, it has the type RabbitMQMessage; this is a typed message representation.

Since RabbitMQConsumer is a stream, you are also allowed to pause and resume the stream, subscribe to the end event, and get notified when an exception occurs.

In addition, you can cancel the subscription by calling the RabbitMQConsumer#cancel method.

Backpressure

Sometimes you can have more incoming messages than you can handle.

The new consumer API allows you to control this and lets you store arriving messages in an internal queue before they are delivered to the application. Indeed, you can configure the size of this queue.

Here is how you can limit the internal queue size:

// Limit to max 300 messages
QueueOptions options = new QueueOptions()
  .setMaxInternalQueueSize(300);

RabbitMQClient client = RabbitMQClient.create(vertx, new RabbitMQOptions());

client.basicConsumer("my.queue", options, res -> {
  if (res.succeeded()) {
    System.out.println("RabbitMQ consumer created !");
    RabbitMQConsumer mqConsumer = res.result();
    mqConsumer.handler((RabbitMQMessage message) -> {
      System.out.println("Got message: " + message.body().toString());
    });
  } else {
    res.cause().printStackTrace();
  }
});

When the internal queue capacity is exceeded, new messages will simply be dropped.

An alternative option is to drop the oldest message in the queue.

To achieve this, specify the behavior by calling the QueueOptions#setKeepMostRecent method.
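Sketched as a configuration fragment (reusing the same options object shown above; values are illustrative):

```java
// Keep the newest messages: when the internal queue is full,
// drop the oldest message instead of the arriving one.
QueueOptions options = new QueueOptions()
  .setMaxInternalQueueSize(300)
  .setKeepMostRecent(true);
```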

Finally

The new Vert.x RabbitMQ client consumer API is a far more idiomatic and modern way to consume messages from a queue.

This API will be provided in the 3.6.0 release, while the old one will be deprecated.

I hope you enjoyed reading this article. See you soon on our Gitter channel!


by Sammers21 at April 23, 2018 12:00 AM

India Java User Group Tour 2018

by Nikhil Nanivadekar at April 20, 2018 12:13 AM

After 27+ hours of travel I just reached Pune, my home town. I am excited for my India Java User Group Tour 2018. I will be presenting on Java 10, Java 9, Eclipse Collections, Spark and more!

It is a quick pit stop in Pune before I head over to Chennai for my first JUG meet-up in #INDJUG tour. Abstracts for all my talks are available at the end of this blog. Join me at one of these cities:

Chennai JUG: Saturday, April 21

Delhi-NCR JUG: Sunday, April 22

Bengaluru JUG: Wednesday, April 25

Hyderabad JUG: Sunday, April 29

Kerala JUG (Thiruvananthapuram): Saturday, May 5

Thank you to @MadrasJUG, @DelhiJUG, @bangalorejug, @JUGHYD, and @KeralaJUG for hosting me in each city.

I’ll be tweeting using the #INDJUG so if you can’t make it, you can still follow my adventures through India.

Hope to see you at one of the stops!

Abstracts

How to make your project Java-10 compatible:

Java 10 was recently released and was the first release with the new Java release cadence. However, one can’t simply upgrade a project from Java 8 to Java 10. It should first be upgraded to Java 9, due to the numerous changes that might potentially break existing applications. This session is a case study of making a third-party Java collections library (Eclipse Collections) first Java 9 compatible and then, with relative ease, Java 10 compatible. The audience will see an overview of all the steps taken and the evolution of the final product, which is Java 10 compatible. Attending this session will put you in the driver’s seat when you are required to upgrade your application to use JDK 10.

Getting Started with Spark:

Data analytics and machine learning have become mainstream in recent years. With the amount of data available, distributed computing has become a necessity. Apache Spark is one of the forerunners in distributed computing domain. In this hands-on session, the audience will learn about the background and basic concepts of Apache Spark. The speaker will build a reference implementation live and introduce new concepts along the way.

Collections.compare(JDK, Apache, Eclipse, Guava…):

Collections are a staple in any programming language: the need to collect, sort, or iterate over values is needed by nearly all developers. The Java language introduced the Collections framework long ago. It has plenty to offer, but many find it lacking: the number of collection libraries as active open source projects demonstrates the need for something else. This session does a holistic comparison of the most-common collections (pun intended!) frameworks, what they have to offer, and what you should consider for your next project. It also shows common programmer use cases; how each library handles them; and the impact on memory, processing power, and ease of use/coding. Come and let us help you choose the right bag for your tricks!

API Deep Dive: Designing Eclipse Collections

When designing an API, its authors have to consider many aspects: style, naming, scope, and implementation details are among them. They have a direct impact on the resulting code, and the implementation can go in many different directions. How do you choose the best route to take? How do you maintain symmetry? How do you guarantee consistency and performance across the framework? Last but not least, what is the complexity associated with adding a new API? Come take a look behind the curtains of a widely used API that has many years of development and that you can contribute to.


by Nikhil Nanivadekar at April 20, 2018 12:13 AM

ECF Photon supports Bndtools

by Scott Lewis (noreply@blogger.com) at April 19, 2018 09:39 PM

A second major enhancement for ECF Photon is adding support for using Bndtools to develop and test OSGi Remote Services.   Bndtools is increasingly popular for developing OSGi-based applications and frameworks, and we've added support for Bndtools Workspace, Project, and Run Descriptor templates for developing and testing remote services.

Initial documentation is available at Bndtools Support for Remote Services Development.

Note that these templates and the RSA impl may change slightly before ECF Photon, and new/additional templates will be added to (e.g.) support other distribution and discovery providers.



by Scott Lewis (noreply@blogger.com) at April 19, 2018 09:39 PM

What’s Coming in Sirius 6.0?

by Pierre-Charles David (pierre-charles.david@obeo.fr) at April 19, 2018 12:59 PM

The next major version of Sirius , version 6.0, will be released on June 27, 2018 as part of the Eclipse Photon Simultaneous Release, with a corresponding version of Obeo Designer Community Edition soon after. Many of the new features for this release are already available for testing in milestone versions, and now is the right time to test them and give us feedback: there’s still time to make adj...

by Pierre-Charles David (pierre-charles.david@obeo.fr) at April 19, 2018 12:59 PM

Who, what, when, where at EclipseCon France?

by Anonymous at April 19, 2018 06:19 AM

The schedule is now published, so you can work on your plan for the week! Thank you again for all the great submissions that make up this fantastic program.

There are even more choices this year, since Wednesday has five concurrent talks all day with a special focus on Jakarta EE and Eclipse MicroProfile.

Stay tuned for more about the Unconference on June 12. We'll see you soon in Toulouse!


by Anonymous at April 19, 2018 06:19 AM

EclipseCon France 2018 – Sessions

by tevirselrahc at April 18, 2018 03:05 PM

The sessions lineup for EclipseCon France 2018 has been published! If you plan on attending, I have a couple of suggestions for you!

First, a session about me (of course) and more love for my toolsmiths:

Papyrus as a Platform

by Philip Langer (EclipseSource Services GmbH).

Model-based engineering tools are most successful if they are as domain-specific as possible, reflecting the specific needs of the domain and its users. Thus, not only a domain-specific modeling language but also a specialized modeling environment is required, one that takes the domain users’ backgrounds, their roles, and the currently used infrastructure into account. Often, though, domain-specific modeling languages have a considerable overlap with UML.

The second session is not directly focused on me, but it is very relevant to using me as a platform for domain-specific tools (and again, more love for my toolsmiths):

Comparison and merge use-cases from practice with EMF Compare

by Laurent Delaigue (Obeo) and Philip Langer (EclipseSource Services GmbH)

Have you ever needed to compare and merge heterogeneous domain-specific models (with both textual and graphical syntaxes)? Or maybe you needed to review changes on graphical models? We did.

 

Did I miss a presentation? If so, let me know!


by tevirselrahc at April 18, 2018 03:05 PM

IoT Developer Survey 2018 | Results are in!

April 17, 2018 02:00 PM

Results from the IoT Developer Survey are in! Read about the key findings about IoT cloud platforms, databases, security, and more.

April 17, 2018 02:00 PM

Key Trends from the IoT Developer Survey 2018

by Benjamin Cabé at April 17, 2018 12:44 PM

Executive Summary

The IoT Developer Survey 2018 collected feedback from 502 individuals between January and March 2018.

The key findings in this year’s edition of the survey include the following:

  • Amazon AWS and Microsoft Azure are the top 2 cloud services for IoT. Google Cloud Platform is failing to get traction.
  • MQTT remains the standard of choice for IoT messaging, while AMQP is becoming more and more popular as companies scale their IoT deployments and backend systems.
  • 93% of the databases and data stores used for IoT are open source software. Data collected and used in IoT applications is incredibly diverse, from time series sensor data to device information to logs.

Introduction

For the past four years, the IoT Developer Survey has been a great way to look at the IoT landscape, from understanding the key challenges for people building IoT solutions, to identifying relevant open source technology or standards.

Just like in previous years (see results from 2017, 2016 and 2015 survey), the Eclipse IoT Working Group has collaborated with a number of organizations to promote the survey to different IoT developer communities: Agile-IoT H2020 Project, IEEE, and the Open Mobile Alliance (now OMA SpecWorks).

We had a total of 502 individual responses. You will find a link to the complete report at the end of this blog post, as well as pointers to download the raw survey data.

Here are the key trends that we identified this year:

Amazon and Azure get traction, Google slips behind

For the past few years, we’ve asked people what cloud platform they use or plan on using for building their IoT solution.

IoT Developer Survey 2018: IoT Cloud Platforms Adoption – Amazon vs. Microsoft vs. Google

Since 2016, Amazon AWS has always come up as the platform of choice for the respondents, followed by Microsoft Azure and Google Cloud Platform.

📎 The use of AWS for building IoT solutions increased by 21% since 2017. 

Looking at this year’s results, there is a clear upward trend in terms of adoption for Amazon AWS (51.8%, a 21% increase from last year) and Microsoft Azure (31.21%, a 17% increase from 2017). In the meantime, Google Cloud Platform is struggling to get adoption from IoT developers (18.8%, an 8% year-to-year decrease).

📎 Google Cloud Platform struggles, with an 8% decrease in market share for IoT deployments since 2017. 

Seeing AWS ahead of the pack is no surprise. It seems to be the public cloud platform of choice for developers, according to the recent Stack Overflow Developer Survey, and one of the most loved platforms for development in general. And looking at the same survey, it seems Google is not really doing great with their Cloud Platform (it is used by 8.0% of the respondents vs. 24.1% for AWS).

IoT Developer Survey 2018: IoT Cloud Platforms Adoption – Trends

It will be interesting to see how, and if, Google catches up in the IoT cloud race, and whether we will see more acquisitions similar to Xively’s in February to help beef up their IoT offering in 2018. Since Microsoft is planning to invest $5 billion in IoT over the next four years, the IoT cloud competition will definitely be interesting to follow…

IoT Data is finally getting attention

While IoT has been around for a while now, it looks like developers are starting to realize that beyond the “cool” factor of building connected devices, the real motivation and business opportunity for IoT is in collecting data and making sense out of it.

📎 Collecting and analyzing data becomes #2 concern for #IoT developers. 

This year, 18% of the respondents identified Data Collection & Analytics as one of their top concerns for developing IoT solutions. This is a 50% increase from last year, and puts this topic as the #2 concern—Security remains #1, and Connectivity shares third place with Integration with Hardware.

IoT Developer Survey 2018: Key IoT Concerns

Unsurprisingly, industries such as Industrial Automation or Smart Cities tend to care about IoT data collection and analytics even more—23% of the respondents working in those industries consider data collection & analytics to be a key concern.

IoT Developer Survey 2018: Key IoT Concerns - Trends

On a side note, it is great to get confirmation of a trend we identified last year, with interoperability clearly becoming less of a concern for IoT developers. It had been ranking #2 since we started doing the survey in 2015, and is now relegated to 5th place.

As someone working with IoT open source communities on a day-to-day basis, I can’t help but think about the crucial role open standards and open source IoT platforms have had in making IoT interoperability a reality.

Consolidation in IoT messaging protocols

📎 MQTT is used in 62% of IoT solutions and remains the IoT messaging protocol of choice. 

An area I particularly like to observe year-over-year is the evolution of IoT messaging protocols. For many years now, MQTT has established itself as a protocol of choice for IoT, and this year’s survey is just confirming this: MQTT is used by over 62% of our respondents, followed by HTTP (54.1%).

Six years after IBM and Eurotech open sourced their implementations of the MQTT protocol (see the Eclipse Paho project), and with the ever-increasing popularity of the Eclipse Mosquitto project (and many other open MQTT-based projects out there of course), this is once again a demonstration that open wins. With MQTT 5 around the corner and several of the identified “limitations” of the protocol gone, MQTT will possibly become even more clearly THE IoT messaging standard in the future.

IoT Developer Survey 2018: Consolidation in IoT Messaging Protocols

It would appear that the use of HTTP is declining (54.1%), perhaps to the benefit of the more lightweight and versatile HTTP/2 (24.9% vs. 16.8% last year). XMPP (4.3%) is one of the protocols that seems to be losing the protocol consolidation battle, with a continued decline since 2016.

📎 Adoption of AMQP increased by over 30% since 2017 as people scale their IoT deployments. 

Since more and more people start scaling their IoT deployments, it is likely a reason for the significant increase in AMQP’s adoption (18.2%, from 13.9% last year), which is a core element of many IoT backends.

The use of proprietary vendor protocols and in-house protocols is steadily decreasing, confirming that the industry at large tends to favor open standards over closed solutions.

It will be interesting to watch how the adoption of DDS (4.9%) evolves over time. It already seems to be getting some traction in domains such as Automotive, where 10% of the respondents said they are using it.

IoT Developer Survey 2018: IoT Messaging Protocols – Trends

Focus on security increases

It is always interesting to watch how developers approach security in the context of IoT, and it has always been mentioned as the #1 concern for IoT developers since we started doing the survey in 2015.

However, it is no secret that security is hard, and there is unfortunately still only a limited set of security-related practices on the front burner for IoT developers. Communication-layer security (e.g. the use of TLS or DTLS) and data encryption remain the two most popular practices, used by 57.3% and 45.1% of the respondents, respectively.

IoT Developer Survey 2018: IoT Security Technologies

For the first time in the history of this survey, we explicitly asked respondents if they were using blockchain or distributed ledger technology (DLT) in their IoT solutions. I was frankly surprised to see that it would appear to be the case for 11% of the respondents. As the technology matures, and as some of the barriers making it sometimes impractical for constrained/embedded devices slowly disappear, I am expecting blockchain & DLT to be used more and more for securing IoT solutions (and probably in combination with data monetization use cases).

📎 Adoption of over-the-air updates to keep IoT applications up-to-date and secure increased by almost 50% since last year. 

To end on a positive note, it is pretty clear that developers are starting to bake security into their IoT products, as an increasing number of developers indicated they implement security techniques compared to 2017. Over-the-air updates appear to be used more and more (27.3%, a 47% increase from 2017). Open device management standards such as LWM2M, together with open source implementations such as Eclipse Wakaama and Eclipse Leshan, are certainly making it easier for developers to implement OTA in their solutions.

IoT Data is multifaceted and open source databases dominate the market

This year we added a few questions to the survey aimed at understanding better the kind of IoT data being collected, and how it is being stored.

It is interesting to see that across all industries, IoT data is equally multifaceted, and a wide variety of data is being collected by today’s IoT applications. 61.9% of the data collected is time series data (e.g. sensor data), but almost equally important are device information (60.4%) and log data (54.1%). This is not really surprising, as collecting sensor data is only half of the IoT operational equation: one also needs to be able to track and manage a fleet of devices.

IoT Developer Survey 2018: Types of IoT Data

Keeping that in mind, it is interesting to look at the landscape of databases and data stores used for IoT applications. While time series data is the most common form of data that IoT applications collect, traditional relational databases (namely MySQL, with a clear leading position at 44.6%) are still widely used. This likely reflects the importance of storing all kinds of device metadata or legacy enterprise data in addition to sensor data.

IoT Developer Survey 2018: IoT Databases

With regards to NoSQL and time series databases, MongoDB (29.8%) and InfluxDB (15.7%) seem to be the two platforms of choice for storing non-relational IoT data (e.g. time series data).
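To make the time-series case concrete, the small sketch below formats a sensor reading using InfluxDB's plain-text line protocol (`measurement,tags fields timestamp`); the measurement and tag names are invented for illustration, and a real client library would normally handle this encoding (plus escaping of special characters) for you.

```python
def to_line_protocol(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    """Format one data point in InfluxDB line protocol:
    measurement,tag1=v1 field1=v1 timestamp-in-nanoseconds"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

point = to_line_protocol(
    "temperature",                # measurement (illustrative name)
    {"device": "sensor-42"},      # tags identify the device
    {"value": 21.5},              # numeric field(s)
    1523955000000000000,          # timestamp in nanoseconds
)
# point == "temperature,device=sensor-42 value=21.5 1523955000000000000"
```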

📎 93% of databases used in IoT are open source. 

It is worth highlighting that an astounding majority (93%) of the databases used for IoT are open source, with Amazon DynamoDB (6.9%) being the only notable exception. With something as critical and sensitive as IoT data, it seems that solution developers tend to favor technology that is not only easy and free to access, but more importantly that allows them to really “own” their data.

Linux remains the undisputed IoT operating system

Once again, Linux (71.8%) remains the leading operating system across IoT devices, gateways, and cloud backends.

IoT Developer Survey 2018: Top IoT Operating Systems & Distros

Amazon’s acquisition of FreeRTOS occurred just a few months before the survey opened, which might partially explain the significant increase in its reported adoption. Going from 13% in 2016 to 20% this year, it becomes the leading embedded IoT operating system, followed by Arm Mbed (9%) and Contiki (7%).

📎 FreeRTOS becomes the leading embedded #IoT operating system, followed by Arm Mbed and Contiki OS. 

In terms of Linux distributions, and as the Raspberry Pi remains a very popular platform for IoT prototyping, Raspbian (43.3%) is still the top Linux distribution, followed by Ubuntu (40.2%).

IoT Developer Survey 2018: IoT Linux Distributions – Trends


You can find the complete report on Slideshare.

Should you want to play with the raw data yourself, we made it available as a Google Spreadsheet here – feel free to export it as whatever format suits you best.

Mike Milinkovich and I will be doing a webinar on Thursday, April 19, to go through the results and discuss our findings. Don’t forget to RSVP!

Thanks to everyone who took the time to fill out this survey, and thanks again to IEEE, OMA SpecWorks and the Agile-IoT project for their help with the promotion.

I am very interested in hearing your thoughts and feedback about this year’s findings in the comments of this post. And, of course, we are always open to suggestions on how to improve the survey in the future!

The article Key Trends from the IoT Developer Survey 2018 first appeared on Benjamin Cabé.


by Benjamin Cabé at April 17, 2018 12:44 PM