Orion 14.0 New and Noteworthy

by Mike Rennie at March 21, 2017 07:46 PM

Another three months and another awesome release! It’s that time again when I share all of the cool new features, enhancements and fixes with you. As usual with every release, there were lots of changes, so let’s jump right in.

Accessibility

The work that began in 13.0 to make Orion completely accessible to every developer continued at a furious pace in 14. This time around, our work was focussed on having the correct colour contrast.

We tightened up our colours in the light theme so that all colours pass the WCAG 2.0 AA guideline for colour contrast. The changes are subtle, but they do make text easier to read, as seen in this before-and-after snapshot of selected code in the editor.

Selected text comparison

Comparing selected text in 14.0 vs. 13.0

Language Tools

Automatic Project Configuration

The JavaScript tooling can now read and understand complex project configurations and automatically configure Tern for the best development experience. For example, the tools can better read and understand package.json files and automatically load available plugins into Tern (rather than the user having to tailor their configuration settings).

Projects Anywhere

Using the new support from the platform for finding project contexts, the JavaScript tools can now support a “project” at any level in the navigator, where a project means any folder that contains JavaScript project-like files – package.json, .tern-project, etc.

Smarter Defaults

The default configuration for the JavaScript tools has been retooled to provide more support right out of the box. In Orion 13.0 (and before), we started the tools in a very bare-bones fashion and alerted you about potential configuration changes (with quick fixes). Now we automatically start with ECMA, node and browser support, and configure your project as you code.

Disable Linting In-File

Tired of being nagged about a particular code pattern used in certain places (but would still like to be warned elsewhere)? You can now use the new quickfix to ignore the problem in the current file.

Disable rule in-file

JavaScript Type Icons

In an effort to make the overload of information (while coding in JavaScript) a bit more understandable, we have added icons to help users immediately understand the type of something. For example, F stands for functions, O for objects, C for classes, etc.

Type icons

Improved ESLint configuration file support

We have improved how the JavaScript tools handle the various forms of ESLint configuration files. We now properly support all entries of the files (except for extends).

SVG Support

The CSS and HTML parsers have been updated to properly support SVG attributes and properties. The HTML and CSS validation has also been updated to properly process the new attributes and properties.

Platform Improvements

Syntax Styling

Syntax styling grammars can now define a firstLineMatch attribute. This enables multiple grammars to be defined for a content type; the grammar that gets applied is chosen based on the first line of content.

Tasks

The node server now stores its task metadata in MongoDB when running as multi-tenant. As a result, requests querying long-running tasks can now be handled by different server instances that have access to the shared MongoDB.

Automatic Syntax Checking

Previously, syntax checking took place when a file was saved – if you have autosave turned on in Orion, this is not a problem, as problem markers are updated as you make changes. If, however, you had autosave turned off, problem markers would quickly become stale, causing confusion. Now, in Orion 14, syntax checking takes place at a regular interval even if autosave is turned off, to avoid stale problem markers piling up.

New File Client API

The Orion file client has been updated with the ability to find a project given a particular resource path. The new API can be invoked as:

fileClient.getProject(resourcePath, options)

Information Annotations

A new type of annotation has been added to Orion – the info annotation.

The “info” annotation

Annotation Visibility

Always wanted to only show annotations in certain parts of the IDE? Well, now you can.

Simply navigate to the editor settings preference page, and look for the Annotations, Overview Annotations and Text Annotations sections to configure annotation visibility as you’d like.

Annotation visibilities

Don’t forget, you can also use the handy star buttons to have the preference(s) show up in the quick preference menu.

IDE Themes

Finally, after all this time, we have the ability to change the theme of not just the editor, but the entire IDE from the preferences!

Not happy with the default theme in Orion? Head over to the IDE Theme preferences page to change to another theme (currently there are only two of them) or create your own (by modifying an existing theme and saving it as your own).

IDE Theme preferences


by Mike Rennie at March 21, 2017 07:46 PM

Open IoT Challenge 3.0 — Winners

by Roxanne on IoT at March 20, 2017 10:07 AM

In case you missed the Eclipse IoT announcement last week, the Open IoT Challenge 3.0 winners were announced!

And the winners are…

Open IoT Challenge 3.0 Winners

Congratulations to the InMoodForLife team for coming in first. Their solution analyzes and monitors the sleep patterns of individuals affected by bipolar disorder, with the goal of improving the therapeutic approach and allowing treatment to be adapted faster. The team has already shared part of their future plans with us and it sounds very promising! We hope they will go on to help many individuals suffering from bipolar disorder.

The krishi IoT and RHDS teams also worked very hard and delivered great solutions. Read the full announcement to find out more.

Open IoT Challenge 3.0 Sponsors

by Roxanne on IoT at March 20, 2017 10:07 AM

Scala is here

by codepitbull at March 20, 2017 12:00 AM

TL;DR

  • Scala support for Vert.x is here!
  • It is based on Scala 2.12, no support for 2.11 planned
  • All Vert.x modules are available in a Scala flavor
  • It’s awesome
  • Get started here

Intro

The rise of Scala as one of the most important languages on the JVM caught many (me included) by surprise. This hybrid of functional and imperative paradigms struck a chord with many developers. Thanks to Scala, a lot of people who would never have touched a language like Haskell got exposed to functional programming. This exposure was one of the driving forces behind getting streams and lambdas into the JVM.

With the release of Vert.x 3.4.0 we finally introduced Scala to the family of supported languages: vertx-lang-scala.

In this post I will introduce the new stack and how the power of Scala can be used in your favorite reactive toolkit.

Basics

vertx-lang-scala is based on Scala 2.12. There are no plans to support 2.11.

All modules available for Vert.x are supported (you can check here).

Modules use the following naming scheme: io.vertx:<module-name>-scala_2.12:<version>. The Scala version of io.vertx:vertx-web:3.4.0 would be io.vertx:vertx-web-scala_2.12:3.4.0.

There is an sbt-based quickstart-project available that will be updated for each Vert.x-release.

Please note: Although sbt is used in this quickstart it is by no means required. There are no special plugins involved so vertx-lang-scala can easily be used with Gradle or Maven.

I use sbt as it is the default build system used for Scala projects.

Quickstart

Let’s get started by cloning the quickstart:

git clone git@github.com:vert-x3/vertx-sbt-starter.git

You just got the following things:

  • An sbt project containing dependencies to Vert.x-core and Vert.x-web
  • The ability to create a fat-jar via sbt assembly
  • The ability to create a docker container via sbt docker
  • A few example verticles
  • Unit test examples
  • A pre-configured Scala-shell inside sbt

We will now run the application to get some quick satisfaction. Use sbt assembly to produce the fat-jar followed by java -jar target/scala-2.12/vertx-scala-sbt-assembly-0.1-SNAPSHOT.jar. Now point your browser to http://localhost:8666/hello for a classic welcome message.

The details

Open your IDE so we can take a look at what’s going on under the hood. We start with the HttpVerticle.

package io.vertx.scala.sbt

import io.vertx.lang.scala.ScalaVerticle
import io.vertx.scala.ext.web.Router

import scala.concurrent.Future

class HttpVerticle extends ScalaVerticle { // <1>


  override def startFuture(): Future[Unit] = { // <2>
    val router = Router.router(vertx) // <3>
    val route = router
      .get("/hello")
        .handler(_.response().end("world"))

    vertx //<4>
      .createHttpServer()
      .requestHandler(router.accept)
      .listenFuture(8666, "0.0.0.0")  // <5>
        .map(_ => ()) // <6>
  }
}
  1. ScalaVerticle is the base class for all Scala-Verticles. It provides all required methods to integrate with the Vert.x-runtime.
  2. There are two ways to start a Verticle. Overriding startFuture, like in this example, tells Vert.x to only consider the Verticle fully started after the returned Future[Unit] has been completed successfully. Alternatively, one can override start and thereby signal to Vert.x that the Verticle is available instantly.
  3. This block creates a Router for incoming HTTP-requests. It registers a handler to answer with “world” if a request to the URL “/hello” arrives. The class is coming from the Vert.x-web-module.
  4. Every Verticle has access to the Vert.x-instance. Here we use it to create a webserver and register our router to handle incoming requests.
  5. We finally reached the reason why I use startFuture in the first place. All operations in Vert.x are asynchronous, so starting the webserver most definitely means it takes some more time until it is bound to the given port (8666 in this case). That’s why listenFuture is used, which returns a Future that in turn contains the actual instance of the webserver that just got started. So our Verticle will be ready to receive requests after the returned Future has been completed.
  6. In most cases we can return the Future directly. In this case the Future returned by listenFuture has the wrong type. We get a Future[HttpServer] but we need a Future[Unit] as you can see in the signature of startFuture. This call takes care of mapping the given Future[HttpServer] to the required return type.

Testing

I use ScalaTest for all my testing needs. It comes with stellar support for asynchronous operations and is a perfect fit for testing Vert.x-applications.

The following HttpVerticleSpec shows how to test an HTTP-API using only Vert.x-classes. Personally I prefer REST-assured with its rich DSL. For this post I wanted to stick with Vert.x-API, so here we go.

package io.vertx.scala.sbt

import org.scalatest.Matchers

import scala.concurrent.Promise

class HttpVerticleSpec extends VerticleTesting[HttpVerticle] with Matchers { // <1>

  "HttpVerticle" should "bind to 8666 and answer with 'world'" in { // <2>
    val promise = Promise[String] // <3>

    vertx.createHttpClient()  // <4>
      .getNow(8666, "127.0.0.1", "/hello",
        r => {
          r.exceptionHandler(promise.failure)
          r.bodyHandler(b => promise.success(b.toString))
        })

    promise.future.map(res => res should equal("world")) // <5>
  }

}
  1. VerticleTesting is a base class for your tests included with the quickstart-project. It’s a small helper that takes care of deploying/un-deploying the Verticle to be tested and manages a Vert.x-instance. It additionally extends AsyncFlatSpec so we can use Futures as test-return-types.
  2. Isn’t it nice and readable?
  3. The promise is required as the whole test runs asynchronously
  4. We use the vertx-instance provided by VerticleTesting to create a Netty-based HttpClient. We instruct the client to call the specified URL and to succeed the Promise with the returned body.
  5. This creates the actual assertion. After getting the Future from the Promise an assertion is created: The Result should be equal to the String “world”. ScalaTest takes care of evaluating the returned Future.

That’s all you need to get started!

Futures in vertx-lang-scala

Now for a more in-depth topic I think is worth mentioning: vertx-lang-scala treats async operations the Scala way, which is a little different from what you might be used to from Vert.x. For async operations like subscribing to the eventbus or deploying a Verticle you would call a method like this:

vertx.deployVerticle("com.foo.OtherVerticle", res -> {
  if (res.succeeded()) {
    startFuture.complete();
  } else {
    startFuture.fail(res.cause());
  }
});

The deployVerticle method takes the Verticle name and a Handler[AsyncResult] as its arguments. The Handler[AsyncResult] is called after Vert.x has tried deploying the Verticle. This style can also be used in Scala (which might ease the transition when coming from the Java world), but there is a far more Scala-ish way of doing this.

For every method taking a Handler[AsyncResult] as its argument I create an alternative method using Scala-Futures.

vertx.deployVerticleFuture("com.foo.OtherVerticle") // <1>
  .onComplete{  // <2>
    case Success(s) => println(s"Verticle id is: $s") // <3>
    case Failure(t) => t.printStackTrace()
  }
  1. A method providing a Future based alternative gets Future appended to its name and returns a Future instead of taking a Handler as its argument.
  2. We are now free to use Future the way we want. In this case onComplete is used to react on the completion.
  3. Pattern matching on the result

I strongly recommend using this approach over using Handlers as you won’t run into Callback-hell and you get all the goodies Scala provides for async operations.

Future and Promise both need an ExecutionContext
The VertxExecutionContext is made implicitly available inside the ScalaVerticle. It makes sure all operations are executed on the correct event loop. If you are using Vert.x without Verticles you have to provide one on your own.

Using the console

A great feature of sbt is the embedded, configurable Scala-console. The console available in the quickstart-project is pre-configured to provide a fresh Vert.x-instance and all required imports so you can start playing around with Vert.x in an instant.

Execute the following commands in the project-folder to deploy the HttpVerticle:

sbt
> console
scala> vertx.deployVerticle(nameForVerticle[HttpVerticle])
scala> vertx.deploymentIDs

After executing this sequence you can point your browser to http://localhost:8666/hello to see our message. The last command issued shows the IDs under which the Verticles have been deployed.

To get rid of the deployment you can now type vertx.undeploy(vertx.deploymentIDs.head).

That’s it!

This was a very quick introduction to our new Scala stack. I hope to have given you a little taste of the Scala goodness now available with Vert.x. I recommend digging a little more through the quickstart to get a feeling for what’s there. In my next blog post I will explain some of the decisions I made and the obstacles I faced due to the differences between Java and Scala (hint: they are way bigger than I was aware of).

Enjoy!


by codepitbull at March 20, 2017 12:00 AM

Eclipse Demo Schedule at Devoxx US

March 19, 2017 10:40 AM

Visit the Eclipse Foundation at booth #318 on March 21-23 in San Jose, CA for some exciting demos.

March 19, 2017 10:40 AM

Announcing Orion 14

by Mike Rennie at March 17, 2017 06:56 PM

We are pleased to announce the fourteenth release of Orion, “Your IDE in the Cloud”. You can run it now on OrionHub or download the server to run your own instance. Once again, thank you to all committers and contributors for your hard work this release.  There were 150 bugs and enhancements fixed, across more than 380 commits from 14 authors!

What’s new in Orion 14? This release was focussed on quality and ease of use – Orion 14 is more accessible (better colour contrast and accessibility support), easier to start coding in (the tools now automatically understand complex project configurations, so you don’t have to), and just more awesome in general.

We continued to improve the Node.js server (which is used on orion.eclipse.org or locally) as well as our Electron app. Lastly, we began work in 14.0 to provide collaborative development support and debugging support directly in Orion! Stay tuned for Orion 15, when these features will officially land.

Enjoy!


by Mike Rennie at March 17, 2017 06:56 PM

Debugging Xtext grammars – what to do when your language is ambiguous

by Holger Schill (schill@itemis.com) at March 17, 2017 03:00 PM

Xtext uses ANTLR to generate a lexer and parser out of your grammar. Technically, an LL(*) parser gets generated. This means it cannot deal with left recursion and has an infinite lookahead. You might know what that means, but to make it easier you could think of LL(*) parsers like this: a parser gets an ordered list of things (called tokens) to collect in a labyrinth. When it’s not clear which way to go, it stands still and looks ahead in all directions until their ends. As soon as it is obvious where to go, it continues walking and collecting. There is no way back – so decisions should be correct. Sometimes the parser cannot make a clear decision, and in this situation it gets tricky to understand where the problem is and how to resolve it. Most of the time the reported errors and warnings are not that meaningful.

debugging-xtext-grammar.jpg


What do you normally do when the Xtext workflow reports warnings or errors while the parser gets generated? Obviously errors can’t be ignored, since the parser will not get generated and the workflow fails – but what about warnings? Do you try to solve them by staring at the grammar and trying to think like a parser? Or do you ignore them because it seems to work? Really?

In projects we have seen people dealing with such problems in various ways. Ignoring warnings is not a good idea, since ANTLR switches off alternatives and you do not know which ones. We have seen people consistently ignoring such warnings because they could not figure out the real cause and things got complex. However, ignoring those warnings should not be an option.

I have seen that a large group of Xtext users do not know ANTLRWorks, or that it can help here. So let’s walk through two trivial examples to see how to use the tool with Xtext.

Warnings

Let’s make a trivial example where it is really obvious what the problem is:

example-grammar-debugging-xtext-grammars.png

We have two parser rules (Element1 and Element2) that look identical except that different parts are optional. When the workflow runs, it reports the following warnings:

warning(200): ../com.itemis.blog.antlrworks.dsl/src-gen/com/itemis/blog/antlrworks/parser/antlr/internal/InternalDsl.g:114:2: Decision can match input such as "'element' 'id' RULE_ID 'int' RULE_INT" using multiple alternatives: 1, 2

As a result, alternative(s) 2 were disabled for that input

warning(200): ../com.itemis.blog.antlrworks.dsl.ide/src-gen/com/itemis/blog/antlrworks/ide/contentassist/antlr/internal/InternalDsl.g:156:1: Decision can match input such as "'element' 'id' RULE_ID 'int' RULE_INT" using multiple alternatives: 1, 2 

In this case the parser gets generated, but it tells us that there were different alternatives for the same input and that ANTLR decided to disable some of them. It does not tell us which ones. Let’s find out what the cause is.

ANTLRWorks comes as an executable JAR – running it should not be a problem as long as Java is installed. If you want to open a grammar it expects a *.g file. Xtext should have generated one in the src-gen folder. In our example there is a .g file located here: com.itemis.blog.antlrworks.dsl/src-gen/com/itemis/blog/antlrworks/parser/antlr/internal/InternalDsl.g

The grammar looks a bit strange and there is a lot of Java stuff in the grammar. ANTLRWorks will fail to compile the grammar… damn.

Error-debugging-xtext-grammars.png

Ok, the generated ANTLR grammar cannot be used directly in ANTLRWorks, since it is modified to the needs of Xtext. To generate a so-called debuggable grammar you need to modify the workflow a bit, like this:

workflow-debugging-xtext-grammars.png

Now you’ll find a .g file in src-gen that carries the name DebugInternal*.g. This file can be easily used with ANTLRWorks.

Path-debugging-xtext-grammars.png

After you have started ANTLRWorks click on File->Open and select the DebugInternal*.g file.

File-debugging-xtext-grammars.png

ANTLRWorks will open the grammar and you’ll see the different rules. So far no warnings are shown. To let the tool do its job, click on debug – the button looks like a bug. After doing that, the ruleElement is marked red. By clicking on the rule you’ll see the problem that caused the warnings and the different alternatives. To really see what the disabled alternatives are, you can enable them as shown in the next picture. The red arrows show the disabled alternatives.

ANTLRWorks-debugging-xtext-grammars.png

As already said, this is a very trivial example and the tool just points out what we already know. In typical projects we have far more complex scenarios, where it is nearly impossible to find the cause of a warning without ANTLRWorks. Especially when a lot of parts are optional it gets tricky. To really understand what the parser does, there is the possibility to debug the grammar with a given input and see how the parse tree is constructed. Clicking on debug once more brings up a window to define the input.

Input-debugging-xtext-grammars.png

Debugging in ANTLRWorks works similarly to debugging in Eclipse, and you can step forward and backward. At the end the parse tree will show that the parser went into the rule “Element1” instead of “Element2”. From a grammar point of view both rules would be valid, but ANTLR switched off the alternative – otherwise no clear decision could be made.

debugging-xtext-grammars.png

If we add a second line as input for the debugger and leave out the intValue, the parse tree looks like this:

Input-debugging-xtext-grammars.png

debugging-xtext-grammars.png

As the intValue is mandatory in ruleElement1, the parser will go into ruleElement2 for the second entry. This is the only case where “Element2” is picked. In this trivial example it’s not that hard to guess what the parser will do. In more complex examples this debugging feature will bring a big benefit in resolving your ambiguities.

Errors

What about errors? Do you know what to do when the workflow reports that a rule has a non-LL(*) decision? What the hell are left-factoring and syntactic predicates, and why should I use backtracking – should I really? We’ll handle that in another blog post, but for now let’s have a look at a simple language that has some expressions.

grammar-debugging-xtext-grammars.png 

When we try to run the Xtext workflow the generator will report the following error:

 error(211): ../com.itemis.blog.antlrworks.dsl/src-gen/com/itemis/blog/antlrworks/parser/antlr/internal/InternalDsl.g:114:2: [fatal] rule ruleExpression has non-LL(*) decision due to recursive rule invocations reachable from alts 1,3. Resolve by left-factoring or using syntactic predicates or using backtrack=true option. 

Various exceptions are shown below the error, but Xtext will generate the .g file anyway – so there is a chance to find out what the problem is. You might already know what’s wrong, but let’s try to use ANTLRWorks. The compiler in ANTLRWorks will show the very same error, but after ignoring it, the rule element and its different alternatives are marked red.

ANTLRWorks-Error-debugging-xtext-grammars.png
In this case left-recursion is not our problem. ANTLRWorks shows us that ruleBlockExpression and ruleListLiteral are the cause.

rule1.png

rule-debugging-xtext-grammars.png

After having a closer look it is obvious that the syntax is equal if there is only one expression inside – that makes our grammar ambiguous. Do we really want a ListLiteral to exist on the same level as a BlockExpression? Do we really want a BlockExpression to contain various other BlockExpressions?

After considering these questions a refactored grammar looks like this:

refactored-grammar-debugging-xtext-grammars.png

After doing that, the workflow runs through and we can have a second look in ANTLRWorks to see the parse tree for a simple expression:

input-debugging-xtext-grammars.png

debug-Expression-debugging-xtext-grammars.png

These examples are very trivial to make it obvious where the problem is. The intention was to show you how to use ANTLRWorks with Xtext. Don't ignore warnings anymore – you might not know the implications, and the parse tree might look different than you thought.

Stay tuned for another post about syntactic predicates, left-recursion/left-factoring and why backtracking is not an option. And if you've got any questions regarding Xtext – don't hesitate to contact us!

Contact the itemis Xtext team


by Holger Schill (schill@itemis.com) at March 17, 2017 03:00 PM

EclipseCon France: Last Chance to Submit!

March 17, 2017 02:25 PM

March 29 is the final submission deadline for EclipseCon France. Visit the CFP page for info. See you in Toulouse this June!

March 17, 2017 02:25 PM

Vert.x and IoT in Rome: what a meetup!

by ppatierno at March 17, 2017 10:45 AM

Yesterday I had a great day in Rome at a meetup hosted by Meet{cast} (powered by the dotnetpodcast community) and Codemotion, speaking about Vert.x and how we can use it for developing “end to end” Internet of Things solutions.


I started with a high-level introduction to Vert.x and how it works, its internals and its main use cases, then I moved on to dig into some specific components useful for developing IoT applications, like the MQTT server, AMQP Proton and the Kafka client.


It was interesting to learn that in Italy, too, a lot of developers and companies are moving to Vert.x for developing microservices-based solutions. A lot of interesting questions came out… people seem to like it!

Finally, in order to show Vert.x usage in enterprise applications, I presented two real use cases that work today thanks to the above components: Eclipse Hono and EnMasse. I had little time to explain in detail how EnMasse works, the Qpid Dispatch Router component in particular, and for this reason I hope to have a future meetup on that – the AMQP router concept is quite new today! In any case, knowing that such a scalable platform is based (even) on Vert.x was great news for the attendees.


If you are interested in knowing more, you can take a look at the slides and the demo. In the coming days the video of the meetup will be available online, but it will be in Italian (my apologies for my English-only friends :-)). Hope you’ll enjoy the content!

Of course, I had some networking with attendees after the meetup and … with some beer 🙂




by ppatierno at March 17, 2017 10:45 AM

Congratulations to the Open IoT Challenge 3.0 Winners!

March 16, 2017 03:15 PM

Eclipse IoT is pleased to announce the winners of the third annual Open IoT Challenge.

March 16, 2017 03:15 PM

JBoss Tools 4.4.4.AM1 for Eclipse Neon.2

by jeffmaury at March 15, 2017 09:50 PM

Happy to announce 4.4.4.AM1 (Developer Milestone 1) build for Eclipse Neon.2.

Downloads available at JBoss Tools 4.4.4 AM1.

What is New?

Full info is at this page. Some highlights are below.

OpenShift 3

Although our main focus is bug fixes, we continue to work on providing a better experience for container-based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here; you can find more details on the What’s New page.

OpenShift Server Adapter enhanced flexibility

The OpenShift server adapter is a great tool that allows developers to synchronize local changes in the Eclipse workspace with running pods in the OpenShift cluster. It also allows you to remote debug those pods when the server adapter is launched in Debug mode. The supported stacks are Java and NodeJS.

As pods are ephemeral OpenShift resources, the server adapter definition was based on an OpenShift service resource and the pods are then dynamically computed from the service selector.

This had a major drawback: the feature could only be used for pods that are part of a service. That may be logical for web-based applications, as a route (and thus a service) is required in order to access the application.

So, it is now possible to create a server adapter from the following OpenShift resources:

  • service (as before)

  • deployment config

  • replication controller

  • pod

If a server adapter is created from a pod, it will be created from the associated OpenShift resource, in the preferred order:

  • service

  • deployment config

  • replication controller

As the OpenShift explorer used to display only OpenShift resources that were linked to a service, it has been enhanced as well: it now also displays resources linked to a deployment config or replication controller. Here is an example of a deployment with no service, i.e. a deployment config:

server adapter enhanced

So, as an OpenShift server adapter can be created from different kinds of resources, the kind of associated resource is displayed when creating the OpenShift server adapter:

server adapter enhanced1

Once created, the kind of OpenShift resource adapter is also displayed in the Servers view:

server adapter enhanced2

This information is also available from the server editor:

server adapter enhanced3

Server Tools

API Change in JMX UI’s New Connection Wizard

While hardly something most users will care about, extenders may need to be aware that the API for adding connection types to the 'New JMX Connection' wizard in the 'JMX Navigator' has changed. Specifically, the org.jboss.tools.jmx.ui.providerUI extension point has changed: while it previously had a child element called 'wizardPage', it now requires a 'wizardFragment'.

A 'wizardFragment' is part of the 'TaskWizard' framework first used in WTP’s ServerTools, which has been used throughout JBossTools for many years. This framework allows wizard workflows where the set of pages to be displayed can change based on the selections made on previous pages.

This change was made as a direct result of a bug caused by the addition of the Jolokia connection type in which some standard workflows could no longer be completed.

This change only affects adopters and extenders, and should bring no noticeable change for users, other than that the below bug has been fixed.

Forge Tools

Forge Runtime updated to 3.6.0.Final

The included Forge runtime is now 3.6.0.Final. Read the official announcement here.

startup

Enjoy!

Jeff Maury


by jeffmaury at March 15, 2017 09:50 PM

Papyrus-IC Research/Academia webinar

by eposse at March 15, 2017 05:19 PM

On Friday, March 17th at 16:00 CET (15:00 GMT, 11:00 EDT), the Papyrus Industry Consortium’s (a.k.a. Papyrus-IC or, as I prefer, Me-IC) Research and Academia committee will host its third webinar of the year. The topic is an industry perspective on software product lines, with speakers from Saab and Pure-Systems. See this link for the connection information.



by eposse at March 15, 2017 05:19 PM

EMF Support for Che – Day 4: Building Che

by Maximilian Koegel and Jonas Helming at March 15, 2017 01:53 PM

In this blog series, we share our experiences extending Eclipse Che to add EMF support. The first post covers our goals. In previous posts, we described how to add support for code generation and how to create a custom stack that provides the framework for code generation out of the box.

So far, we have not written any code to extend Che. We have used the concept of Che workspaces, which are Docker containers, to deploy additional tools (in our case the code generator). As the browser IDE allows executing any command on the workspace runtime, we did not have to implement anything to extend it so far.

However, there are requirements for which enhancing the workspace is not sufficient. This is typically the case when you enhance the UI of the browser IDE with new features or add new APIs to the agents running within workspaces.

In this post, we will describe how to make a minor enhancement to the browser IDE with a simple “hello world” example. It is a good first step to introduce the general process for extending Che. We will do more complex extensions in future blog posts.

We at EclipseSource have many years of experience in building extensions for the Eclipse IDE and in building Eclipse-based applications. So before we go into detail, let us summarize our experience in developing extensions for Che from the viewpoint of a desktop Eclipse IDE developer. We focus on the most obvious similarities and differences.

The similarity: Eclipse Che has an inherently extensible and flexible architecture. This is mainly due to a central pattern: service orientation. There are several ways this pattern is used for extensibility:

  • Che provides services to build up almost anything in the UI. As an example, there is a service to register the actions which are shown in the various menus. As another example, there is a service to register file extensions (defining icons and default editors). These services can be layered into complex UI objectives such as creating a perspective with different menus, panels, and layouts.
  • Che provides services to access resources (e.g. the source files) and the workspace runtime (e.g. to trigger commands). By using these existing features, it is fairly simple to add new features on top.
  • The Che server is mainly a collection of RESTful services. By adding custom REST services to the server, you can easily enhance it. These custom services can then be consumed by custom extensions in the browser IDE.
  • Finally, Che defines service interfaces, which can be implemented by an extension to provide new things. As an example, you can implement a service, which implements the behavior to create a custom project type.

The difference: There is currently no runtime plugin mechanism for the Che browser IDE comparable to what you are used to from the classic Eclipse IDE (a.k.a. update sites/p2 repositories). This means that to extend Che, you need to build a custom version of it that contains your extensions and then deploy the full assembly somewhere. Technically, “plugins” in Che are Maven modules that you add to the global build at compile time. There is no runtime extensibility of the browser IDE. So, isn’t this a step backwards compared to the classic Eclipse IDE? There are different answers to this question, depending on the point of view:

  • No, because Che supports extensibility at runtime based on its workspace concept. If you miss any tool or runtime component, you can simply deploy it into your workspace and easily share it with co-workers. As an example, we enabled the EMF code generation only by extending the workspace at runtime (see this post). In the classic Eclipse IDE, everybody had to install the same things again (at least when not using Oomph).
  • Partially, as you can extend neither the browser IDE nor the Che server at runtime, but only the workspace. However, Che has a different deployment scenario. As it is a client-server application, the idea is to do the set-up once, in a central way, and then share it with a group of developers who require the same set of tools. This enables a very fast setup for developers on a project (pretty much like Oomph for classic Eclipse). It even makes it possible to use different IDEs that access the same workspace. However, it means more work for the author and maintainer of extensions, but a simpler life/set-up for the developers.
  • Yes, we miss OSGi, extension points and p2 repositories! Although OSGi and p2 have been criticized in the past, they are a very powerful combination for building modular and extensible applications. This, in combination with the great tooling provided by Eclipse, made it possible to efficiently develop, install, update and deploy extensions to the IDE. This was probably one of the core factors in creating such a huge ecosystem of tools.

So in the end, it really depends on the scenario and your design objectives. It is worth mentioning that there is a general trend (also followed by Che) to move UI-related parts of the IDE into server-side abstractions, so that they become independent of the client. The language server protocol is a good example of this. In this scenario, an IDE only has to be able to interpret the abstractions, e.g. the LSP. Therefore, client-side extensibility may become less important.

Anyway, let us get started with building Che locally.

 

First, you need to install all the prerequisites to build and run Che. We already mentioned that you will need Docker, but there are a number of additional tools and environment settings needed for building Che.

There are two ways to get started. We recommend starting headless first and following the guide on how to clone and build Che. As a second step, in case you want to use an IDE, we recommend the Che workspace setup guide for an extensive overview.

We recommend taking the time to read the provided documentation carefully, as it contains useful information and lots of hints on how to develop for Che. You should pay special attention to the “Super Dev Mode”, which allows hot code replacement for GWT applications and therefore drastically reduces the turnaround time when working on the browser IDE.

There is also an option to build Che within a preconfigured Docker environment, which will spare you the trouble of setting it up yourself. Finally, building Che will take some time to complete even if you have a fast machine. Therefore, we recommend taking a close look at the options, which are described in detail here.

The build process will result in a number of artifacts, also called assemblies in Che terminology. You can find them within the “assembly” subdirectory. You can match the assemblies to the logical components of the Che architecture; just compare the directory with the architecture schema and the modules overview.

After the build has completed you can start Che on your local machine. The simplest way to do that is to navigate to the assembly/assembly-main/target/[che-version]/[che-version] directory and execute “bin/che.sh start”. A number of log messages should confirm the successful start of the services and point you to open localhost:8080 in your browser. If everything went well you will be greeted by the Che dashboard.

So now that we can build Che locally, let us make a very simple change to verify the build process. An example of such a simple change would be to add a project template to the browser IDE. Project templates can be instantiated by any user of the IDE. As we are working on EMF support, it would be useful to have an example EMF project as a template. We have already manually imported such a template from a Git repository in the second part of this blog series, and we will now add it as a fixed template to our custom assembly.

Project templates are basically pointers to existing Git repositories. This makes it easy to maintain the templates without re-distributing the IDE itself. The sample templates are maintained in the following file:

ide/che-core-ide-templates/src/main/resources/samples.json

Let us add the following snippet to this file:

  {
   "name": "emfforms-makeithappen-blank",
   "displayName": "emfforms-makeithappen-blank",
   "path": "/",
   "description": "EMFForms, make it happen!",
   "projectType": "java",
   "mixins": [],
   "attributes": {
     "language": [
       "java"
     ]
   },
   "modules": [],
   "problems": [],
   "source": {
     "type": "git",
     "location": "https://github.com/eclipsesource/emfforms-makeithappen-blank",
     "parameters": {}
   },
   "commands": [],
   "links": [],
   "category": "Samples",
   "tags": [
     "maven",
     "java"
   ]
 }

Afterwards we need to stop the currently running Che instance by executing “bin/che.sh stop”, rebuild Che, and then start it again using “bin/che.sh start”.

As a result, the new project template will be available among the existing ones:

Of course, this was a very simple change; it did not even involve any coding. However, we are now prepared to make more complex changes and start coding. Please note that in our example we changed a configuration file of Che to add our custom project template. These kinds of extensions are usually done by invoking services which allow extending the base configuration of Che (in our case, adding a new project template). We will get back to this cleaner solution later in this series.

When you start to code extensions for Che, those are typically placed in separate Maven modules (i.e. plugins). This is conceptually pretty much like developing plugins for the classic Eclipse IDE. That means the custom code will be separated from the core of Che. We will describe this in more detail in the next blog post of this series. As an example, we will create a plugin which registers a custom file type for “.ecore”, including a custom icon.

So stay tuned!

Please note that, due to Eclipse Converge and Devoxx US, the next post will be published in three weeks. We will give a talk at Eclipse Converge about our experience with extending Che, so in case you haven’t already, please register soon.
If you are interested in learning more about the prototype for EMF support, if you want to contribute or sponsor its further development, or if you want support for creating your own extension for Che, please feel free to contact us.

Co-Author: Mat Hansen


by Maximilian Koegel and Jonas Helming at March 15, 2017 01:53 PM

Fancy tooltips

by Christian Pontesegger (noreply@blogger.com) at March 15, 2017 12:02 PM

I always liked the tooltips available in eclipse editors. Having a browser widget that may capture focus is nice to display more complex help topics in the UI. Unfortunately the eclipse implementation is heavily bound to editors and cannot be used for other parts.

Well, up to now. For EASE I wanted to reuse these tooltips to display API documentation in a tree viewer. The result looks quite satisfactory:


I built some API to add these tooltips to any kind of SWT control. While it may not be perfect, it seems rather simple to use:
final HoverManager hoverManager = new HoverManager(parent);
hoverManager.addHover(fModulesComposite.getTreeViewer(), new IHoverContentProvider() {

    @Override
    public void populateToolbar(BrowserInformationControl control, ToolBarManager toolBarManager) {
        // nothing to do
    }

    @Override
    public String getContent(Object origin, Object detail) {
        return "<p>This is HTML content</p>";
    }
});
To see these tooltips in action, get a nightly build of EASE and open the Modules Explorer view.

Now I am wondering if there is any interest in making this API available to other eclipse projects.
When extracting the functionality I had to access some internal classes from org.eclipse.jface.text and JDT – mostly because of package-private methods. Porting these changes back would be possible, but I am wondering if org.eclipse.jface.text would be the right place for them. Why should a generic view depend on jface.text just to get nice tooltip support?

So let's see if there is interest in adopting this feature and where to put it.

by Christian Pontesegger (noreply@blogger.com) at March 15, 2017 12:02 PM

OSGi Declarative Services news in Eclipse Oxygen

by Dirk Fauth at March 15, 2017 07:57 AM

With this blog post I want to share my excitement about the OSGi DS related news coming with Eclipse Oxygen, introduce the new features, and point out the changes you will face with them. With Oxygen M6 you can already have a look at those features and provide feedback if you find any issues.

Note:
You don’t have to be a committer or contribute code to be part of an open source community. Testing new features and providing feedback is also a very welcome contribution to a project. So feel free to participate in making the Eclipse Oxygen release even better than the previous releases!

DS 1.3 with Felix SCR

The Equinox team decided to drop Equinox DS (stuck at DS 1.2) and replace it with Felix SCR (Bug 501950). This brings DS 1.3 to Eclipse, which was the last missing piece of the OSGi R6 compendium in Equinox.

It was already possible to replace Equinox DS with Felix SCR in Neon, but now you don’t need to do the replacement yourself; it is directly part of Equinox. There are some important things to note though, which I will list here:

Felix SCR bundle from Orbit

The Felix SCR bundle included in Equinox/Eclipse is not equal to the Felix SCR bundle from Apache. The Apache bundle imports and exports the org.osgi packages it requires, e.g. the component-related interfaces like ComponentContext, ComponentFactory or ComponentServiceObjects. It also contains the Promises API used by Felix SCR, which was not available in Equinox before. The Felix SCR bundle in Orbit does not contain these packages. They are provided by other Equinox bundles, which are now required to use DS with Equinox.

Note:
If you are interested in some more information about the reasons for the changes to the Orbit Felix SCR bundle, have a look at Bug 496559 where Thomas Watson explained the reasons very nicely.

The bundles needed for DS in Equinox are now as follows:

  • org.apache.felix.scr
    The Declarative Services implementation.
  • org.eclipse.osgi.services
    Contains the required OSGi service interfaces.
  • org.eclipse.osgi.util
    Contains the Promises API and implementation required by Felix SCR.
  • org.eclipse.equinox.ds (optional)
    Wrapper bundle to start Felix SCR and provide backwards compatibility.

Adding the Promises API (see OSGi R6 Compendium Specification chapter 705) to Equinox is also very nice, but worth its own blog post, so I will not go into more detail here. The more interesting thing is that org.eclipse.equinox.ds is still available and in some scenarios required. It does not contain a DS implementation anymore; it is used as a wrapper bundle to start Felix SCR and provide backwards compatibility. The main reasons are:

  1. Auto-starting DS
    The Equinox startup policy is to start bundles only if a class is accessed from them, or if they are configured for auto-starting. As the SCR needs to be started automatically, but actually no one ever accesses a class from it, every Eclipse application that makes use of Declarative Services configured the auto-start of org.eclipse.equinox.ds in the Product Configuration. If that bundle were simply replaced, every Eclipse-based product would need to modify its Product Configuration.
  2. Behavioral Compatibility
    Equinox DS and Felix SCR behave differently in some cases. For example, Felix SCR deactivates and destroys a component once the last consumer that references the component instance is done with it. Equinox DS on the other hand keeps the instance (I explained that in my Control OSGi DS Component Instances blog post). As p2 and probably also other implementations rely on the Equinox behavior that components are not deactivated and destroyed immediately, the property

    ds.delayed.keepInstances=true

    is set automatically by org.eclipse.equinox.ds.

Considering these changes it is also possible to remove org.eclipse.equinox.ds from an Eclipse Product Configuration and rely solely on org.apache.felix.scr. You just need to ensure that org.apache.felix.scr is started automatically and that ds.delayed.keepInstances is set to true (the latter is e.g. required when using p2, as described in Bug 510673).

DS Console Commands

If you want to inspect services via the console, you need to know the new commands, as the old ones are not available anymore:

Equinox DS                     Felix SCR                 Description
list/ls [bundle-id]            scr:list [bundle-id]      List all components.
component|comp <comp-id>       scr:info <comp-id>        Print all component information.
enable|en <comp-id>            scr:enable <comp-name>    Enable a component.
disable|dis <comp-id>          scr:disable <comp-name>   Disable a component.
enableAll|enAll [bundle-id]    -                         Enable all components.
disableAll|disAll [bundle-id]  -                         Disable all components.

Apart from the different command names and the fact that the short versions are not supported, you should notice the following:

  • The scope prefix (scr:) is probably not needed in Equinox because by default there are no multiple commands with the same name, so only the command names after the colon can be used.
  • There are no equivalent commands to enable or disable all components at once.
  • To enable or disable a component you need to specify the name of the component, not the id that is shown when calling list.

DS 1.3 Annotations Support in PDE

With Eclipse Neon the DS Annotations support was added to PDE. Now Peter Nehrer (Twitter: @pnehrer) has contributed support for the DS 1.3 annotations. In the preferences you will notice that you can specify which DS specification version you want to use; by default it is set to 1.3. The main idea is to make it possible to configure that only DS 1.2 annotations should be used, in case you still need to develop on that specification level (e.g. for applications that run on Eclipse Neon).

PDE_DS_annotations_1.3

The preferences page also has another new setting, “Add DS Annotations to classpath”, which is enabled by default. That setting automatically adds the necessary library to the classpath. While this is nice if you only implement a plain OSGi application, it will cause issues for Eclipse RCP applications that are built using Tycho: the JAR that is added to the classpath is located in the IDE, so the headless Tycho build is not aware of it! For Eclipse RCP development I therefore suggest disabling that setting and adding org.osgi.service.component.annotations as an optional dependency to the Import-Package header, as described in my Getting Started tutorial – at least if the bundles should be built with Tycho.

As a quick overview, with DS 1.3 the following modifications to the annotations are available:

  • Life cycle methods accept Component Property Types as parameter
  • Introduction of the Field Strategy which means @Reference can be used for field injection
  • Event methods can get the ComponentServiceObjects parameter type for PROTOTYPE scoped references, and there are multiple parameter type options for these methods
  • @Component#configurationPid
    multiple configuration PID values can be set and the value “$” can be used as placeholder for the name of the component
  • @Component#servicefactory
    deprecated and replaced by scope
  • @Component#reference
    specify Lookup Strategy references
  • @Component#scope
    specify the service scope of the component
  • @Reference#bind
    specify the name of the bind event method of a reference
  • @Reference#field
    name of the field, typically not specified manually
  • @Reference#fieldOption
    specify how field values should be managed
  • @Reference#scope
    specify the reference scope

Note:
For further information have a look at my previous blog posts where I explained these options in comparison to DS 1.2.
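
To give a first impression of these options, the following minimal sketch shows a DS 1.3 component using the new field strategy. The component and field names are made up for illustration, and StringInverter is the example service interface from my earlier tutorials (not defined here).

import java.util.List;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

// DS 1.3: references are injected directly into fields,
// no bind/unbind event methods are needed anymore
@Component
public class InverterUser {

    // unary, static, mandatory reference via the field strategy
    @Reference
    private StringInverter inverter;

    // dynamic reference with MULTIPLE cardinality;
    // dynamic reference fields have to be declared volatile
    @Reference(cardinality = ReferenceCardinality.MULTIPLE,
               policy = ReferencePolicy.DYNAMIC)
    private volatile List<StringInverter> allInverters;

    @Activate
    void activate() {
        System.out.println("Available inverters: " + allInverters.size());
    }
}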

Although already in very good shape, the DS 1.3 annotations support is not yet 100% finished. I already uncovered the following missing pieces:

  • Missing Require-Capability header in MANIFEST.MF (Bug 513216)
  • Missing Provide-Capability header in MANIFEST.MF (Bug 490063)
  • False error when using bind/updated/unbind parameter on field references (Bug 513462)

IMHO it would also be nice if the necessary p2.inf files were automatically created/updated to support p2 Capability Advice configurations, which is necessary because p2 still does not support OSGi capabilities.

As stated at the beginning, you can help with the implementation by testing and giving feedback. It would be very helpful to have more people testing this, in order to have a stable implementation in the Oxygen release.

Thanks to Peter for adding that long-awaited feature to PDE!

@Service Annotation for Eclipse RCP

There is also news for RCP development with regard to OSGi services: the @Service annotation, created by Tom Schindl for the e(fx)clipse project, has been ported to the Eclipse Platform (introduced here).

When using the default Eclipse 4 injection mechanisms, the injection of OSGi services is limited to a unary cardinality. Given an OSGi service of type StringInverter (see my previous tutorials), the injection of a MANDATORY reference can be done like this:

public class SamplePart {

    @Inject
    StringInverter inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

or, for an OPTIONAL cardinality, like this:

public class SamplePart {

    @Inject
    @Optional
    StringInverter inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

This means:

  • Only a single service instance can get injected.
  • If the cardinality is MANDATORY (no @Optional), a service instance needs to be available, otherwise the injection fails with an exception.
  • If the cardinality is OPTIONAL (@Inject AND @Optional) and no service is available at creation time, a new service will get injected when it becomes available.

This behavior is similar to the DYNAMIC GREEDY policy for OSGi DS service references. But the default injection mechanism for OSGi services has several issues that are reported in Bug 413287.

  • If a service is injected and a new service becomes available, the new service will be injected regardless of its service ranking. So even if the new service has a lower ranking it will be injected. Compared with the OSGi service specification this is incorrect, as the service with the highest ranking should be used or, if the ranking is equal, the service that was registered first.
  • If a service is injected and it becomes unavailable, there is no injection of a service with a lower service ranking. Instead null will be injected, even if a valid service is still available.
  • If a service implements multiple service interfaces, only the first service key is reset.
  • If a service instance should be created per bundle or per requestor by using either a service factory or scope, there will be only one instance for all requests, because the service is always requested via the BundleContext of one of the platform bundles.

Note:
I was able to provide a fix for the first three points. The last issue in the list, regarding scoped services, cannot be solved for the default injection mechanism.

The @Service annotation was introduced to solve all these issues and to additionally support the MULTIPLE cardinality (only MULTIPLE, not AT_LEAST_ONE).

To use it, simply add @Service in addition to @Inject:

public class SamplePart {

    @Inject
    @Service
    StringInverter inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

The above snippet is similar to the field strategy in OSGi DS. To get something similar to the event strategy, you would use method injection like in the following snippet:

public class SamplePart {

    StringInverter inverter;

    @Inject
    public void setInverter(@Service StringInverter inverter) {
        this.inverter = inverter;
    }
    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

By using the @Service annotation on a unary reference, you get a behavior similar to the DYNAMIC GREEDY policy for OSGi DS service references, which is actually the same behavior the default injection mechanism shows after my fix is applied. Additionally, the usage of a service factory or scoped services is supported when using the @Service annotation, as the BundleContext of the requestor is used to retrieve the service.
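
As an illustration, a service that should be instantiated per consumer would be declared with PROTOTYPE scope on the DS side. Here is a minimal sketch; the implementing class and the invert method are assumptions based on the StringInverter example from my earlier tutorials.

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ServiceScope;

// DS 1.3: with PROTOTYPE scope every consumer can get its own
// service instance; injection via @Inject @Service honors this,
// because the service is retrieved via the requestor's BundleContext
@Component(scope = ServiceScope.PROTOTYPE)
public class PrototypeStringInverter implements StringInverter {

    @Override
    public String invert(String input) {
        return new StringBuilder(input).reverse().toString();
    }
}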

Note:
While writing this blog post there is an issue with the OPTIONAL cardinality in case no service is available at creation time: if a service becomes available later, it is not injected automatically. I created Bug 513563 for this and provided a fix for both the Eclipse Platform and e(fx)clipse.

One interesting feature of the @Service annotation is the support of the MULTIPLE cardinality. This way it is possible to get all OSGi services of a specific type injected, in the same order as in the OSGi service registry. For this, simply use the injection on a List of the desired service type.

public class SamplePart {

    @Inject
    @Service
    List<StringInverter> inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}

Another nice feature (and also pretty new for e(fx)clipse) is the filter support. Tom introduced this here. e(fx)clipse supports static as well as dynamic filters that can change at runtime; because of dependency issues, only the support for static filters was ported to the Eclipse Platform. Via the filterExpression type element it is possible to specify an LDAP filter to constrain the set of services that should be injected. This is similar to the target type element of OSGi DS service references.

public class SamplePart {

    // only get services injected that have specified the
    // value "online" for the component property "connection"
    @Inject
    @Service(filterExpression="(connection=online)")
    List<StringInverter> inverter;

    @PostConstruct
    public void postConstruct(Composite parent) {
        ...
    }
}
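
For illustration, a service that would match this filter needs to register the corresponding component property. The following DS sketch is an assumption for demonstration purposes only; in particular, the invert method is a guess at the StringInverter interface:

// hypothetical DS component that the filter above would select,
// because it registers the component property connection=online
@Component(property = "connection=online")
public class OnlineStringInverter implements StringInverter {

    @Override
    public String invert(String input) {
        // assumed StringInverter method for this sketch
        return new StringBuilder(input).reverse().toString();
    }
}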

With the @Service annotation, the Eclipse injection for OSGi services aligns better with OSGi DS. And with the introduction of DS 1.3 to Equinox, the usage of OSGi services in Eclipse RCP applications should become an even more common pattern than it was before with the Equinox-only Extension Points.

For me, the news on OSGi DS in the Eclipse Platform is the most interesting part of the Oxygen release. But of course it is not the only one. So I encourage everyone to try out the newest Oxygen milestone releases to get the best out of it for everyone!


by Dirk Fauth at March 15, 2017 07:57 AM

A Thought on a Componentized Future of IDEs.

by Doug Schaefer at March 13, 2017 03:56 PM

If you haven’t heard of the Language Server Protocol and the language servers it inspires, take a Google around. There’s something very interesting happening here. The direction the LSP sets out (and we’ve had discussions around a Debugger Server Protocol as well) opens the door to the componentization of IDEs.

And this is quite different from the plug-in model we have with Eclipse. Instead of creating a UI platform and having plug-ins add menu items, preference pages, editors, views, etc., build your IDE from the other direction. Take a collection of components that don’t have UI, that implement what are (IMHO) the hard parts of IDEs (language parsers, build systems, debugger frameworks), and wrap them with your own custom-purpose IDE “shell”.

Eclipse suffers to a significant extent from the “Tragedy of the Commons”. There is a large amount of inconsistency between plug-ins that do similar things but do them in different ways. Why is there a Perspective for every language? Because each language plug-in developer has different ideas on how the “Code” perspective should be laid out. And maybe, for their users, they’re right.

In an alternative reality, language plug-in providers would offer APIs that allow IDE builders to provide their own user experiences. Yes, that would be a lot more work and probably not practical. But as the doors open to a new generation of IDEs, language plug-in providers need to think about how they’d plug into many of them. It’s not clear which one will be the winner, or even if there will be a winner.

It’s a brave new world. And we have a way to go before we figure it all out. But it’s a great time to think outside the box and see what sticks to the walls, or in my case, what I don’t erase from my whiteboard ;).


by Doug Schaefer at March 13, 2017 03:56 PM

Data-driven Apps made easy with Vert.x 3.4.0 and headless CMS Gentics Mesh

by jotschi at March 13, 2017 12:00 AM

In this article, I would like to share why Vert.x is not only a robust foundation for the headless Content Management System Gentics Mesh but also how the recent release 3.4.0 can be used to build a template-based web server with Gentics Mesh and handlebars.

A headless CMS focuses on delivering your content through an API and allows editors to create and manage that data through a web-based interface. Unlike a traditional CMS, it does not provide a specifically rendered output. The frontend part (the head) is literally cut off, allowing developers to create websites, apps, or any other data-driven projects with their favourite technologies.

Vert.x 3.4.0 has just been released, and it comes with a bunch of new features and bug fixes. I am especially excited about a small enhancement that changes the way the handlebars template engine handles its context data. Previously it was not possible to resolve Vert.x’s JsonObjects within the render context. With my enhancement #509, released in Vert.x 3.4.0, it is now possible to access nested data from these objects within your templates. Previously this would have required flattening out each object and resolving it individually, which would have been very cumbersome.

I’m going to demonstrate this enhancement by showing how to build a product catalogue using Vert.x together with handlebars templates to render and serve the web pages. The product data is managed, stored and delivered by the Gentics Mesh server, which acts as the source of the JSON data.

Clone, Import, Download, Start - Set up your product catalogue website quickly

Let’s quickly set up everything you need to run the website before I walk you through the code.

1. Clone - Get the full Vert.x with Gentics Mesh example from Github

Fire up your terminal and clone the example application to the directory of your choice.

git clone git@github.com:gentics/mesh-vertx-example.git

2. Import - The maven project in your favourite IDE

The application is set up as a maven project and can be imported into the Eclipse IDE via File → Import → Existing Maven Project.

3. Download - Get the headless CMS Gentics Mesh

Download the latest version of Gentics Mesh and start the CMS with this one-liner:

java -jar mesh-demo-0.6.xx.jar

For the current example we are going to use the read-only user credentials (webclient:webclient). If you want to play around with the demo data, you can point your browser to http://localhost:8080/mesh-ui/ to reach the Gentics Mesh user interface and use one of the available demo credentials to log in.

4. Start - The application and browse the product catalogue

You can start the Vert.x web server by running Server.java.

That’s it - now you can access the product catalogue website in your browser: http://localhost:3000

Why Vert.x is a good fit for Gentics Mesh

Before digging into the example, let me share a few thoughts on Vert.x and Gentics Mesh in combination. In this example Vert.x is part of the frontend stack in delivering the product catalogue website. But it might also be of interest to you that Vert.x is also used at the very heart of Gentics Mesh itself. The Gentics Mesh REST API endpoints are built on top of Vert.x as a core component.

The great thing about Vert.x is that there are a lot of default implementations for various tasks such as authentication, database integration, monitoring and clustering. It is possible to use one or more of these features and omit the rest, so your application remains lightweight.

Curious about the code?

Source: https://github.com/gentics/mesh-vertx-example

Now that everything is up and running let’s have a detailed look at the code.

A typical deployment unit for Vert.x is a verticle. In our case we use the verticle to bundle our code and run the web server within it. Once deployed, Vert.x will run the verticle and start the HTTP server code.
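
As a rough sketch (not the example’s exact source; the port is taken from the URL above), such a verticle looks like this:

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

public class Server extends AbstractVerticle {

    @Override
    public void start() {
        // the router is populated with the two routes described below
        Router router = Router.router(vertx);

        // start the HTTP server on the port used by the example
        vertx.createHttpServer()
             .requestHandler(router::accept)
             .listen(3000);
    }
}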

The Gentics Mesh REST client is used to communicate with the Gentics Mesh server. The Vert.x web library is used to set up our HTTP router. As with other routing frameworks like Silex and Express, the router can be used to create routes for inbound HTTP requests. In our case we only need two routes. The main route, which accepts the request, will utilize the Gentics Mesh Webroot API endpoint, which is able to resolve content by a provided path. It will examine the response and add fields to the routing context.

The other route is chained and will take the previously prepared routing context and render the desired template using the handlebars template handler.
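
In code, the chaining looks roughly like the following sketch (illustrative names, assuming the vertx-web handlebars template engine; not the example’s exact source):

// inside the verticle's start() method
HandlebarsTemplateEngine engine = HandlebarsTemplateEngine.create();

// first route: resolve the requested path via the Mesh Webroot API
// and put the loaded fields into the routing context
router.route().handler(rc -> {
    // (Webroot API call elided)
    rc.put("breadcrumb", new JsonObject()); // illustrative context data
    rc.next(); // hand over to the next matching route
});

// second route: render the matching template with the prepared context
router.route().handler(TemplateHandler.create(engine));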

First, we handle various special request paths, such as "/" for the welcome page or the typical favicon.ico request. Other requests are passed to the Webroot API handler method.

Once the path has been resolved to a WebRootResponse we can examine that data and determine whether it is a binary response or a JSON response. Binary responses may occur if the requested resource represents an image or any other binary data. Resolved binary contents are directly passed through to the client and the handlebars route is not invoked.
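
A sketch of that branching logic follows; the accessor names on the response object are assumptions for illustration, not the actual Mesh REST client API:

if (response.isBinary()) { // assumed way to detect binary content
    // pass binary contents (e.g. images) straight through to the client
    rc.response().end(response.getBuffer()); // assumed accessor
} else {
    // JSON response: stash the node data for the handlebars route
    rc.put("node", response.getNodeResponse()); // assumed accessor
    rc.next();
}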

Examples

JSON responses, on the other hand, are examined to determine the type of node that was located. A typical node response contains information about the schema used by the node. This effectively determines the type of the located content (e.g. category, vehicle).

The demo application serves different pages which correspond to the identified type. Take a look at the template sources within src/main/resources/templates/ if you are interested in the handlebars syntax. The templates in the example should cover most common cases.

The Mesh REST Client library internally makes use of the Vert.x core HTTP client.

RxJava is used to handle these async requests. This way we can combine all asynchronously requested Gentics Mesh resources (breadcrumb, list of products) and add the loaded data to the routing context.
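
The pattern looks roughly like this sketch (the loader methods are hypothetical helpers, and RxJava 1.x is assumed):

// combine two async Mesh requests and put the results into the context
Single<JsonObject> breadcrumb = loadBreadcrumb(); // hypothetical helper
Single<JsonObject> products = loadProducts();     // hypothetical helper

Single.zip(breadcrumb, products, (crumb, prods) -> {
    rc.put("breadcrumb", crumb);
    rc.put("products", prods);
    return rc;
}).subscribe(ctx -> ctx.next(), rc::fail);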

The Vert.x example server loads JSON content from the Gentics Mesh server. The JsonObject is placed in the handlebars render context and the template can access all nested fields within.

This way it is possible to resolve any field within the handlebars template.
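
For example (hypothetical data, not the demo content), a nested JsonObject placed into the context can then be addressed with a dotted path in the template:

// hypothetical nested data placed into the routing context
JsonObject product = new JsonObject()
    .put("name", "Tractor")
    .put("vendor", new JsonObject().put("city", "Vienna"));
rc.put("product", product);

// in the handlebars template, nested fields now resolve directly:
// {{product.vendor.city}}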

That’s it! Finally, we can invoke mvn clean package in order to package our webserver. The maven-shade-plugin will bundle everything and create an executable jar.

What’s next?

Future releases of Gentics Mesh will refine the Mesh REST Client API and provide a GraphQL API, which will reduce the JSON overhead. Using GraphQL will also reduce the number of requests that need to be issued.

Thanks for reading. If you have any further questions or feedback, don’t hesitate to send me a tweet at @Jotschi or @genticsmesh.


by jotschi at March 13, 2017 12:00 AM

Run Eclipse IDE on One Version of Java, but Target Another

by waynebeaton at March 10, 2017 07:05 PM

The Eclipse IDE for Java™ Developers (and the other Java developer variants) is itself a Java application that’s used to build Java applications. That relationship can be a bit weird to wrap your brain around.

Written almost entirely in Java, the Eclipse IDE requires a Java Runtime Environment (JRE) to run. A JRE provides just the runtime platform: it doesn’t include the source code and Javadoc for any of the base Java libraries, or any of the development tools that are included in the Java Development Kit (JDK). An Eclipse IDE runs just fine on a JRE.

If you’re building Java applications, however, you really need to have access to a JDK. By default, an Eclipse IDE will configure itself to build applications against the JRE that it was launched on. If that JRE is part of a JDK, then you’ll get access to all the goodies that you need: useful content assist, documentation, debugging support, etc. If the runtime platform is just a JRE, then a lot of that valuable goodness will be missing (but compiling still works, because the Java development tools include the Eclipse Compiler for Java).

Here’s where it gets a bit weird. You can run an Eclipse IDE on a JRE from one version of Java and build applications that target one or more different versions of Java. You can, for example, run your Eclipse IDE on Java 8, but use it to build applications based on basically any earlier version of Java. You can select the default Java version for your workspace in the preferences (on the Java > Compiler page), or individually in the properties for each Java Project (preferred).

JDK Compliance settings in the Preferences (Java > Compiler)

In order to actually build applications on a different version of Java, you need to connect your Eclipse IDE with the corresponding JDK. To do this, first install the JDK, and then tell the Eclipse IDE where to find it via the Java > Installed JREs page in the workspace preferences. With additional JDKs installed, you can configure individual projects to use specific versions of the compiler and runtime.
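
As an illustration of where those per-project choices end up on disk (this excerpt assumes the usual JDT conventions; the post itself doesn’t show it), the project’s .settings/org.eclipse.jdt.core.prefs file records the chosen compliance level:

# .settings/org.eclipse.jdt.core.prefs (illustrative excerpt)
org.eclipse.jdt.core.compiler.compliance=1.7
org.eclipse.jdt.core.compiler.source=1.7
org.eclipse.jdt.core.compiler.codegen.targetPlatform=1.7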

All of this is a long way of saying that you can configure your Eclipse IDE, Oxygen Edition milestone build to run on a Java 9 JRE (download a JDK from the JDK 9 Early Access site), but use it to build applications that target earlier versions of Java (i.e. keep doing your day job). Even doing just this and providing feedback will be very helpful to the Eclipse projects involved in the Eclipse IDE. If you actually want to build Java 9 applications, you’ll need to install the Java 9 Support (BETA) for Oxygen from the Eclipse Marketplace and provide feedback to the team.

Note that the JDT product in the Eclipse Foundation’s Bugzilla instance is specifically for reporting bugs that are directly related to the Java development tools. Use the EGit product to report issues with the Git integration, the m2e product to report issues with the Maven integration, the Buildship product to report issues with the Gradle integration, the Platform product for issues with regard to the basic IDE framework, the Web Tools product for issues with enterprise Java and web development, or the EPP (Eclipse Packaging Project) product if you’re not sure.

Many committers from the various Eclipse projects that contribute to the Eclipse IDE, including at least a couple of committers from the Java development tools project in particular, will be at Devoxx US. If you want to learn more about Java 9 support in the Eclipse IDE, you’ll be able to find them at the Eclipse Foundation’s booth (or we’ll be able to help you find them).

I’ll also be doing a demonstration of Test First Java Development using the Eclipse IDE at the booth. Come by and see if I can keep myself within the twenty minute limit…

If you want to learn more about the great features available in the Eclipse IDE, follow @EclipseJavaIDE on Twitter (follow me while you’re at it).




by waynebeaton at March 10, 2017 07:05 PM

Papyrus-RT Beta?!

by tevirselrahc at March 09, 2017 06:35 PM

I just heard, through the minion grapevine, that there will be an official, managed beta for Papyrus for Real Time! Just like the ones for the big-boy commercial tools! I’m so proud of Me-RT!

The beta will be based on Papyrus-RT v0.9, which is planned for release on March 23 (aligned with Eclipse Neon.3), as reported in Papyrus-RT Roadmap: follow the train.

This is exciting! Users will be able to play with me, to give their opinions, to suggest improvements, and, basically, to help me get better!

I will get my product management/development minions to tell me more…and I will, of course, let you know!

Things are getting more and more interesting…



by tevirselrahc at March 09, 2017 06:35 PM

Visualizing Docker Containers in Eclipse Che with Weave Scope

by Tracy M at March 09, 2017 12:20 PM


Last Thursday, I was at the London PaaS User Group (LOPUG) meetup and heard the best type of technical talk: the kind that immediately inspires you to try out the technology you’ve just learnt about. Stev Witzel showed a demo of a mash-up of Cloud Foundry tools (BOSH) with Weave Scope, an open source tool for visualizing, managing and monitoring containers. In the demo, Weave Scope provided a very slick visualization of the individual components and apps running on the Cloud Foundry platform and instantly gave me a deeper comprehension of the complex system.

I was keen to try out Weave Scope, so I asked myself: “Can I quickly and easily use Weave Scope to visualise the Docker containers in Eclipse Che?”

The answer was a resounding yes! First of all, the Weave Works documentation is terrific (note to self: guides with Katacoda are my new gold standard for documentation). I found this getting started article, which I could adapt for my use-case. Given I already had Che installed and running on my Windows 10 laptop, installing and running Weave Scope was straightforward.

# ssh into the docker-machine VM that hosts the Che containers
docker-machine ssh default

# download the Weave Scope launcher and make it executable
sudo wget -O /usr/local/bin/scope \
   https://github.com/weaveworks/scope/releases/download/latest_release/scope
sudo chmod a+x /usr/local/bin/scope

# launch Weave Scope
sudo scope launch


To my pure joy and delight, it JUST WORKED!!

In one browser I had Eclipse Che up and running, and I created a few different workspaces.


In a separate browser I had Weave Scope running. As I created workspaces in Che, I would see them automagically appear in the scope tool.


Incidentally, when I first started working with Che, it took me a while to understand that each Eclipse Che workspace is a separate Docker container, but with this visualisation the understanding is immediate. Weave Scope is a great tool for those new to these technologies to quickly grasp some key concepts.

I love, love, love visual representations of systems; it’s a more natural way to quickly gain insights into a system, not to mention commit things to memory in the imagery part of my brain. Weave Scope lets you monitor CPU usage.


You can also monitor memory usage as workspaces load up.


Plus it has a nice way to inspect containers, and even attach to or exec a shell on them.


In this case the system is relatively small and everything is running locally. However, I can easily see how it would be really useful to use Weave Scope when you want to troubleshoot Eclipse Che running in a production environment in the cloud by comparing it with a system running on a local development machine.

Weave Scope is open source, licensed under Apache-2.0.

Key Takeaways

Weave Scope is a great, slick, open source tool for visualizing and monitoring containers, and it is really simple to get started with and use.

Eclipse Che, based on ubiquitous Docker containers, makes it possible to leverage a whole world of awesome container-related technology, including the Weave Scope visualisation tool.



by Tracy M at March 09, 2017 12:20 PM