Xtext for IntelliJ IDEA - Preview

by Sven Efftinge (noreply@blogger.com) at October 02, 2015 06:44 PM

Today we've released a preview version of the upcoming IntelliJ IDEA support for Xtext. With this it is now possible to develop Xtext languages entirely in IDEA and, thanks to the cool Gradle support, in any other environment, too.

The preview plugins for IntelliJ IDEA can be installed from the following repository location:


Note that Xtext requires the latest IDEA 15 Preview build.

I've recorded a small screencast that shows what Xtext in IntelliJ IDEA looks like:

by Sven Efftinge (noreply@blogger.com) at October 02, 2015 06:44 PM

Handly 0.3.1 released

by Vladimir Piskarev at October 02, 2015 03:00 PM

I am pleased to announce the availability of the Handly 0.3.1 release. This service release addresses issues found in the 0.3 version.

by Vladimir Piskarev at October 02, 2015 03:00 PM

Download Eclipse Mars.1

October 02, 2015 03:00 PM

Eclipse Mars.1 has just been released and is available for download.

October 02, 2015 03:00 PM

WTP 3.7.1 Released!

October 02, 2015 10:00 AM

The Web Tools Platform's 3.7.1 Release is now available! Installation and update can be performed using the Mars Update Site at http://download.eclipse.org/releases/mars/. Release 3.7.1 fixes issues that occur in prior releases or have been reported since 3.7's release. WTP 3.7.1 is featured in the Mars.1 Eclipse IDE for Java EE Developers, with selected portions also included in other packages. Adopters can download the build itself directly.

More news

October 02, 2015 10:00 AM

Overriding a method in Eclipse IDE and (non-Javadoc) comment lines

October 02, 2015 07:44 AM

Almost everything is configurable in Eclipse. If the default values do not work for your project, you should invest time to tune your IDE. Yesterday a colleague told me about a configuration to change something I found annoying every day: By default, when you override a method in a child class in Eclipse you get something like this:

public class ViewDetailsButton extends AbstractExtensibleButton {
  /* (non-Javadoc)
   * @see org.eclipse.scout.rt.client.ui.form.fields.button.AbstractButton#execClickAction()
   */
  protected void execClickAction() throws ProcessingException {
    // TODO Auto-generated method stub
  }
}

I never understood why the "(non-Javadoc)" comment lines were generated and I always removed them. This is not really a big deal (moving the cursor, pressing CTRL+D, going back to the method body) and I could live with it. Now that I know that the generation of those lines can be switched off, I ask myself why I did not do it sooner.

In the preferences, open the "Code Templates" page (under Java > Code Style). Select "Comments > Overriding methods" in the tree and click the "Edit…" button. In the dialog that opens you can edit the pattern.
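
If you are unsure what to change: from memory, the default "Overriding methods" pattern looks roughly like the snippet below, where the ${see_to_overridden} template variable expands to the @see line. Replacing the pattern with a plain Javadoc stub, or clearing it entirely, stops the "(non-Javadoc)" lines from being generated.

/* (non-Javadoc)
 * ${see_to_overridden}
 */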

October 02, 2015 07:44 AM

Eclipse Performance revisited

October 01, 2015 07:30 PM

With the release of Mars.1 and Neon M2 today, I thought it would be good to see what effect (if any) the optimisations that I’ve been working on have had. So I took Neon and Mars for a spin, and compared the outputs.

I’ve been timing the startup of an application with the org.eclipse.osgi/debug traces; specifically, the loader (which displays whenever a class is being loaded), and bundleTime (which measures the execution time of the Bundle-Activator). By running the application, then hitting the red stop button when the main window is available, it’s possible to get a measure of how fast the application starts up. I don’t do ‘quit’ from the application (because that would cause more classes to be loaded) so I just terminate the JVM at the right point.

However, to automate it, and to allow others to experiment as well, I invested some time in creating various launch configurations and pushed them to https://github.com/alblue/EclipsePerformanceTests.git so that others could replicate my setup. Essentially, there’s an E4 application with no content that starts up and then shuts down shortly afterwards. There are some external tools that can process these lists to give numbers that are hidden in the application.

That’s the good news. The bad news is that the optimisations so far haven’t had much of an effect; in fact, start-up of Neon is slightly slower than Mars at the moment. Starting an Eclipse SDK instance (the original SDK from http://download.eclipse.org/eclipse/downloads/ as opposed to the EPP, so I can get measurements without automated reporting) led to a start-up time of Mars of around 6.1s and Neon of 6.3s.


In fact, the start-up of the empty project has remained around the same, at 1.8s (after files are in the cache). Strangely enough, if you look at the list of classes loaded there are a few more classes loaded than before (such as o.e.core.internal.content.BasicDescription and o.e.core.internal.content.ContentType). On the plus side, the total byte size has dropped slightly (about 4k) and we’re now down to 21 activators, from 30 before. This was counterbalanced by the removal of those activators as well as a number of other inner classes; for example, the migration of inner classes to lambdas in Lars’ commit reduced the number of separate classes loaded. (Lars, if you want to take another one then JobManager would be a good one, as would E4Application and WBWRenderer … but never mind that now.)

Now the additional .content. changes are suspicious, if only because I have pushed a few changes to that area recently. I originally thought that the removal of static references was at fault, but it turns out that the move to declarative services caused the problem.

How can that happen, I hear you ask? Well, it’s a damn good question because it took me a while to work that out as well. And the other question – what to do about it – is also another interesting one as well :)

As a side-note, measuring performance of Eclipse at start-up is a little challenging. Unlike correctness testing (where you can run tests in the IDE), for performance testing there’s a variation depending on whether you are running the code from a JAR or from a project in the workspace (different class loader resolutions are used; there are different paths which load the content depending on whether it’s a File or an InputStream, different mechanisms for accessing resources and the like). You can test some deltas before and after, but to test it for real, installing it into a host workspace and restarting is the minimum requirement.

There are also differences between the builds published by the Eclipse infra and the ones your own build produces; the published builds have been signed, and have often gone through pack200 processing/unprocessing. So the code that ultimately gets delivered is not quite the same as what you can test locally. Other minor differences include the version of the Java runtime and compiler, as well as a whole host of other potential issues. It’s less an exact science than an exercise in minimising the variations between runs.

Anyway; back to DS. The changes to the ContentTypeManager included changing the ExtensionRegistry listener to an instance method, and to use DS to assign it (instead of the prior Activator). Why does this single change cause additional classes to be resolved?

It turns out this is a side-effect of the way DS works. When a related service is set, DS reads the method name from the XML file and then uses getDeclaredMethod() to look it up. In this case, it runs something equivalent to ContentTypeManager.class.getDeclaredMethod("setExtensionRegistry"). This is not far off what it used to do before in the Activator. So why does this do anything different now?

Well, the main reason is the way that Java reflection is implemented. Although the code calls the single variant getDeclaredMethod("name"), internally this expands to getDeclaredMethods() and then filters the result afterwards. As a result, you’re not just getting your method; you’re getting all methods. This means that all classes referenced as exceptions, parameters or return types in that class will subsequently be loaded even though they are completely unnecessary. Although they aren’t actually initialized (their static blocks aren’t run), the class objects need to be defined so that they have placeholder types for methods that we don’t even need. This will then recurse to super-interfaces and super-classes (but not their contents), which results in the additional classes being loaded.
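
To see the effect in isolation, here is a small, hypothetical Java sketch (the class names are made up for illustration); run it with -verbose:class and the trace shows the signature-only type being loaded even though a single no-arg method was requested:

import java.lang.reflect.Method;

// Hypothetical type that only appears in the signature of a method we never ask for.
class Unused {
}

class Target {
    public void setExtensionRegistry() {
        // the only method we actually look up
    }

    public Unused irrelevant() {
        return null; // Unused appears here as a return type
    }
}

public class ReflectionLoadingDemo {
    public static void main(String[] args) throws Exception {
        // Run with: java -verbose:class ReflectionLoadingDemo
        // Although we only ask for the no-arg setExtensionRegistry method, the JDK
        // internally builds the full getDeclaredMethods() table first, so Unused is
        // loaded (though not initialized) purely because it appears in another signature.
        Method m = Target.class.getDeclaredMethod("setExtensionRegistry");
        System.out.println("Resolved: " + m);
    }
}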

So we traded off the loading of the single Activator class of 5k for four classes which are 54k in size. Oops. Not a sensible trade-off.

The advantage that DS gives us is that it’s not acquired until it’s first used. This should be a boon because it means that we can defer the cost of loading these more expensive classes until we really need it. And do we need a content type manager for an empty window?

Aargh. It’s another Activator. This time, it’s PlatformActivator calling InternalPlatform which calls contentTracker.open. Unfortunately, a ServiceTracker calling open() will then trigger the initialization of the very service that we’re trying to be lazy in instantiating. Sigh.

As a side note, this is why we need a Suppliers factory. Instead of having all these buggy references to eagerly activated ServiceTrackers, we should be delegating to a single implementation that would Do The Right Thing, including deferring opening the tracker until it’s been accessed for the first time. (It would also help Tom, who would in future be able to replace this implementation with other non-OSGi implementations, such as ServiceLoader or whatever might come out of Jigsaw.)
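
As a rough illustration of the idea (a sketch only, not an actual Eclipse API; the class name is made up), such a factory could hand out a plain java.util.function.Supplier that defers opening the tracker until the first access:

import java.util.function.Supplier;

import org.osgi.framework.BundleContext;
import org.osgi.util.tracker.ServiceTracker;

// Hypothetical lazy supplier: the ServiceTracker is only created and opened
// the first time somebody actually asks for the service.
public final class LazyServiceSupplier<T> implements Supplier<T> {
    private final BundleContext context;
    private final Class<T> serviceType;
    private volatile ServiceTracker<T, T> tracker;

    public LazyServiceSupplier(BundleContext context, Class<T> serviceType) {
        this.context = context;
        this.serviceType = serviceType;
    }

    @Override
    public T get() {
        if (tracker == null) {
            synchronized (this) {
                if (tracker == null) {
                    // Deferred: open() only happens on first access, so the backing
                    // service is not dragged in when the bundle starts.
                    ServiceTracker<T, T> t = new ServiceTracker<>(context, serviceType, null);
                    t.open();
                    tracker = t;
                }
            }
        }
        return tracker.getService();
    }
}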

The alternative of course is to replace it with a bunch of DS components; either of these solutions would work. If we can defer the accessing of the IContentTypeManager service, then none of the classes would be loaded.

Unfortunately, there’s no way of fixing the JDK. The new DS specification permits installing services in fields (though I’m not sure if this is exposed in PDE’s DS implementation yet). This wouldn’t help in this particular instance because the setting of the extension registry needs side-effects, which don’t happen if only a field is set. In addition, the getField() call performs the same kind of resolution; and there are likely to be more fields declared with implementation classes than methods (which should generally be declared with interfaces).
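
For reference, DS 1.3-style field injection looks roughly like the hedged sketch below (using the standard org.osgi.service.component.annotations types; the component class is made up). As noted above, it would not help here, because no side-effects run when the field is assigned:

import org.eclipse.core.runtime.IExtensionRegistry;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

// Hypothetical component: SCR assigns the field directly, no setter involved,
// so any logic that used to live in a bind method simply never runs.
@Component
public class ContentTypeComponent {
    @Reference
    private IExtensionRegistry extensionRegistry;
}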

We could split the implementation to reduce the number of declared methods; for example, having a ContentTypeManagerDS subclass that exposes the DS required methods may reduce the number of classes that need to be resolved. Another alternative is to have a delegate which implements the interface and forwards the implementation methods; but in this case, the IContentTypeManager is such a large interface (with several super-interfaces and nested types) that this doesn’t buy much. Or we could just revert the commits in this particular case.

The good news is that this doesn’t particularly affect the SDK; the content type manager is used there, and these classes are loaded. So in the real test – how long it takes to spool up the SDK – either of these implementations is likely to be loaded in any case. It’s only in the startup of a simple E4 application that you’re likely to notice the difference; and this has a potential solution in addressing the PlatformActivator and friends.

October 01, 2015 07:30 PM

OSGi enRoute 1.0!

by Peter Kriens (noreply@blogger.com) at October 01, 2015 07:00 PM

On September 29, 2015, we finally released OSGi enRoute 1.0 ... The road has been longer than expected but we expanded the scope with IoT and a lot happened in the past year. So what is OSGi enRoute 1.0? If this blog is too long to read (they tell me millennials have a reading problem :-), then you could start with the quick start tutorial. OSGi enRoute is an open source project that tries to

by Peter Kriens (noreply@blogger.com) at October 01, 2015 07:00 PM

IoT is Everywhere at EclipseCon Europe

by Ian Skerrett at October 01, 2015 03:23 PM

The best thing about the Eclipse IoT community is that the participants are real IoT practitioners and IoT experts. Our face-2-face meetings bring together senior technical leaders that are working on the core technology that is powering the IoT solutions of the present and future. This is why I am looking forward to the upcoming Eclipse IoT meetings at EclipseCon Europe on Nov. 2-5. It is going to be 3 days of IoT learning and discovery for anyone interested in building IoT solutions.

Day 1 – IoT Unconference

Monday, November 2 we will have the IoT Unconference. The unconference is split into two parts: 1) updates from each of the IoT projects and learning about potential areas of collaboration, and 2) open discussions and guest speakers. We encourage presentations and topics from outside the Eclipse IoT community as well as topics relevant to the existing community. For instance, I am sure there will be lots of discussion about an IoT Server Platform. This is a great opportunity for exploration and discussion on key IoT technical issues.

Day 2 – IoT Day

Tuesday, November 3 we will be hosting the IoT Day. This is a 1-day event for anyone who wants to immerse themselves in understanding how to build IoT solutions. There will be sessions on IoT security, IoT data processing and analytics, open approaches to smart home, IoT hardware, and others. There will be speakers from Deutsche Telekom, Bosch, Eurotech, Relayr, Red Hat and others. The nice thing about the IoT Day is that you can register just for 1 day so you don’t need to dedicate an entire week. Of course, EclipseCon Europe attendees can attend any of the IoT Day sessions as well.

Day 3 – IoT Playground

Wednesday, November 4 will be the IoT Playground. This is where you can see real IoT practitioners show off their work.

More IoT at ECE

IoT will be present throughout the entire EclipseCon Europe schedule. Two of the keynotes, from Bosch and BMW executives, will spotlight these companies' strategies for IoT. In addition to the IoT Day, there will be sessions on the oneM2M standard from Orange, building smart grids with Eclipse IoT, a session on Eclipse Concierge (a super small OSGi runtime for IoT), software update for IoT, embedded Java for IoT, and what seems to be a great talk, Demystifying the Smartness.


If you are interested in learning about IoT and getting in-depth with the Eclipse IoT experts, then EclipseCon Europe is the place for you. Register today.

by Ian Skerrett at October 01, 2015 03:23 PM

Debugging PHP with Eclipse PDT: A WordPress Example

by support-octavio at October 01, 2015 03:01 PM

Introduction: If you code PHP and are tired of fighting l […]

The post Debugging PHP with Eclipse PDT: A WordPress Example appeared first on Genuitec.

by support-octavio at October 01, 2015 03:01 PM

Tycho 12: Build source features

by Christian Pontesegger (noreply@blogger.com) at October 01, 2015 01:48 PM

Providing update sites containing source code for developers is considered good style. Used in a target platform, it allows developers to see your implementation code. This makes debugging far easier as users do not need to check out your source code from repositories they have to find first.

Tycho allows you to package such repositories very easily.

Tycho Tutorials

For a list of all Tycho-related tutorials see the Tycho Tutorials Overview

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Create a source update site project

Create a new project of type Plug-in Development/Update Site Project. Name it com.codeandme.tycho.releng.p2.source and leave all the other settings to their defaults. You will end up in the Site Manifest Editor of your site.xml file. Instead of editing this file by hand we will immediately delete site.xml and copy over the category.xml file from com.codeandme.tycho.releng.p2.

Mavenize the project the same way as we did in tutorial 5: set Packaging to eclipse-repository and add the project to com.codeandme.tycho.releng/pom.xml.

Step 2: Modify category.xml

Source plug-ins and features will be created by tycho on the fly, so we have no real projects in the workspace we could add with the Site Manifest Editor. Therefore we need to open category.xml with the Text Editor. Tycho does not care about the url property, so remove it. Feature ids need to be changed to <original.id>.source.

If you like you can move all source features to a dedicated category:
<?xml version="1.0" encoding="UTF-8"?>
<site>
   <feature id="com.codeandme.tycho.plugin.feature.source" version="1.0.0.qualifier">
      <category name="source_components"/>
   </feature>
   <category-def name="source_components" label="Developer Resources"/>
</site>
Step 3: Configure tycho source builds

To enable source builds we need to extend com.codeandme.tycho.releng/pom.xml a bit. The source below contains only the additions to our pom file, so merge them accordingly (full version on github).

<!-- enable source feature generation -->
<plugin>
  <groupId>org.eclipse.tycho.extras</groupId>
  <artifactId>tycho-source-feature-plugin</artifactId>
  <version>${tycho.version}</version>
  <!-- plus an <executions> entry binding the source-feature goal; see the full pom on github -->
  <configuration>
    <excludes>
      <!-- provide plug-ins not containing any source code -->
      <plugin id="com.codeandme.tycho.product" />
    </excludes>
  </configuration>
</plugin>

When building source plug-ins, Tycho expects every plug-in project to actually contain source code. If projects do not contain source, we need to exclude them, as we do for com.codeandme.tycho.product in the excludes section above.

After building the project we will end up with a p2 site containing binary builds and source builds of each feature/plug-in.

by Christian Pontesegger (noreply@blogger.com) at October 01, 2015 01:48 PM

Tycho 11: Install root level features

by Christian Pontesegger (noreply@blogger.com) at October 01, 2015 12:55 PM


Do you know about root level features?

Components installed in Eclipse are called installable units (IUs). These are either features or products. Now IUs might be containers for other features, creating a tree-like dependency structure. Let's take a short look at the Installation Details (menu Help / About Eclipse) of our sample product from tycho tutorial 8:

We can see that there exists one root level feature, Tycho Built Product, which contains all the features we defined for our product. What is interesting is that the Update... and Uninstall... buttons at the bottom are disabled when we select child features.

So in an RCP application we may only update/uninstall root level features. This means that if we want to update a sub-component, we need to create a new version of our main product. For a modular application this might not be the desired behavior.

The situation changes when a user installs additional components into a running RCP application. Such features will be handled as root level features and can therefore be updated separately. So our target will be to create a base product and install our features in an additional step.

The great news is that Tycho 0.20.0 allows us to do this very easily.

Tycho Tutorials

For a list of all Tycho-related tutorials see the Tycho Tutorials Overview

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Identify independent features

Tycho will do all the required steps for us; we only need to identify the features to be installed at root level. So open your product file using either the Text Editor or the XML Editor. Locate the section with the feature definitions. Now add an installMode="root" attribute to any feature to be installed at root level.
<feature id="org.eclipse.e4.rcp"/>
<feature id="org.eclipse.platform"/>
<feature id="com.codeandme.tycho.plugin.feature" installMode="root"/>
<feature id="com.codeandme.tycho.product.feature"/>
<feature id="org.eclipse.help" installMode="root"/>
<feature id="org.eclipse.emf.ecore"/>
<feature id="org.eclipse.equinox.p2.core.feature"/>
<feature id="org.eclipse.emf.common"/>
<feature id="org.eclipse.equinox.p2.rcp.feature"/>
<feature id="org.eclipse.equinox.p2.user.ui"/>
<feature id="org.eclipse.rcp"/>
<feature id="org.eclipse.equinox.p2.extras.feature"/>

Make sure to update the Tycho version in use to 0.20.0 or above.

Nothing more to do: build your product and enjoy root level features in action.

by Christian Pontesegger (noreply@blogger.com) at October 01, 2015 12:55 PM

How to run your web server and MQTT WebSockets broker on the same port

by Benjamin Cabé at October 01, 2015 11:48 AM

I was just asked how one can deploy a similar setup as the iot.eclipse.org MQTT sandbox, where MQTT over WebSockets is available on port 80, just like the rest of the website.

There are actually two ways of achieving this.

Mosquitto as the main frontend

It’s a little-known fact, but together with built-in WebSockets support (added in version 1.4), Mosquitto can also act as a basic HTTP server and directly serve a bunch of static resources for you. The config option you’re looking for is “http_dir“, which will allow you to serve the content of a directory over HTTP.

Provided you are running a version of Mosquitto that has WebSockets support, here is how your mosquitto.conf file should look to enable WebSockets *and* regular HTTP connections:

listener 80
protocol websockets
http_dir /home/johndoe/htdocs

Of course, you will need to make sure that you do not have any other daemons (like Apache, nginx, …) already running and using port 80 :-)

Once Mosquitto is set up this way, you can use any MQTT client that supports WebSockets to connect to the ws://yourhost URI.

ws://yourhost/ws, or ws://yourhost:80/foobar would work just fine too – Mosquitto doesn’t care about the path at all!
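
As a quick smoke test, here is a hedged sketch using the Eclipse Paho Java client (this assumes a Paho release with WebSocket URI support; in a browser the Paho JavaScript client is the more usual choice):

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class WsSmokeTest {
    public static void main(String[] args) throws Exception {
        // Mosquitto ignores the path, so /mqtt here is arbitrary.
        MqttClient client = new MqttClient("ws://yourhost:80/mqtt", "ws-smoke-test",
                new MemoryPersistence());
        client.connect();
        client.publish("test/topic", new MqttMessage("hello over websockets".getBytes()));
        client.disconnect();
    }
}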

Apache front-end + mod_websocket_mosquitto

Since it’s likely you actually want a “real” HTTP server to serve your website (for security reasons, for being able to run PHP, etc.), another approach is to use Apache as the main HTTP front-end, as you would normally do, and configure it to tunnel WebSockets connections made on a given URI to your Mosquitto broker.

You can download an Apache module that does exactly that at https://github.com/willem4ever/mod_websocket_mosquitto. The instructions to compile and install it are pretty straightforward and you will end up with something like the following in your Apache configuration:

<IfModule mod_websocket.c>
  Loadmodule mod_websocket_mosquitto /usr/lib/apache2/modules/mod_websocket_mosquitto.so
  <Location /mosquitto>
    MosBroker localhost
    MosPort 1883
    SetHandler websocket-handler
    WebSocketHandler /usr/lib/apache2/modules/mod_websocket_mosquitto.so mosquitto_init
  </Location>
</IfModule>

by Benjamin Cabé at October 01, 2015 11:48 AM

Get involved! The OSGi IoT Contest 2015 Has Begun

by Mike Francis (noreply@blogger.com) at October 01, 2015 08:29 AM

So it's Oct 1, 2015 and we are pleased to announce that the SDK for the OSGi IoT Contest 2015 is available as promised. The Contest is open to all and you don't have to attend the OSGi Community Event to participate, although we would certainly love to see you there. The OSGi Community Event is co-located with EclipseCon Europe and will be taking place from Nov 3 to 5, 2015 in Ludwigsburg,

by Mike Francis (noreply@blogger.com) at October 01, 2015 08:29 AM

Presentation: The Making of XRobots

by Jan Koehnlein at October 01, 2015 01:15 AM

Jan Koehnlein presents the making of the XRobots game combining Lego Mindstorms with LeJOS, image recognition with OpenCV, augmented reality, Xtend, Xtext with Xbase, Eclipse, Orion, Jetty, JavaFX.

By Jan Koehnlein

by Jan Koehnlein at October 01, 2015 01:15 AM

Eclipse Night London

by Tracy M at September 30, 2015 01:24 PM

Eclipse Night London was an evening for bringing together various folks of the Eclipse ecosystem (new and old) to talk tech and share a bite & a beverage (or two). The ultramodern offices of Stackoverflow Careers in London provided a great setting for the event. The relentless rain didn’t put off the attendees, some of whom were coming from as far afield as Cambridge and Oxfordshire.

Emanuel of Genuitec & Tracy of KichwaCoders ready to kick things off

First up was Ian Mayo, who demoed Debrief, a maritime analysis workbench based on Eclipse RCP that is used by the Royal Navy. Deftly going from slick demo to slick demo, it was great to learn from and watch. The best bit was saved for last: watching the visualisation of the manoeuvering of two submarines onscreen.

Matt Gerring talked to us about how Eclipse is used at Diamond Light Source, the synchrotron in Oxfordshire dubbed the UK’s biggest experiment. The experimental facility at Diamond handles tremendous amounts of data daily and the DAWNSci project is the workbench that helps the scientists make sense of it. Despite some tech gremlins interfering, Matt was able to talk us through it and demo some of the powerful capabilities of DAWNSci, which builds on lots of existing Eclipse projects and is part of the Eclipse Science Working Group.

Genuitec were the main sponsors of the evening and my co-host Emanuel Darlea spoke about the Eclipse based projects they have to offer, including MyEclipse and Secure Delivery Centre. That led nicely into the break and time for more refreshments and chatting.

Mmm sushi

Mike Milinkovich, Executive Director of the Eclipse Foundation, gave us an awesome overview of how Eclipse has evolved over the years, and how it continues to do so, now including Cloud and IoT platforms under its wide umbrella. It was really interesting hearing about the ‘survival of the fittest’ approach to open source and how this means the Foundation has no idea what comes next – it is whatever technology evolves best. Also, Mike talked about how the biggest challenge to Eclipse is not another IDE or technology or foundation but simply complacency, by its members and users.

As if on cue, Alex Blewitt took the stage and inspired us all with his tongue-in-cheek presentation ‘How to write bad eclipse plugins‘. It was a terrific talk, full of energy, humour and insights into the bad bad practices we may sometimes slip into (but my plug-ins are more important than all the others..). It rounded off the evening in grand style; the presentation is worth checking out here, plus for a little taste of the talk on the night, watch this.


By the end the room was buzzing, conversations flowed, more drinks were had, and eventually everyone relocated to the pub downstairs. The Stackoverflow offices were great, especially thanks to Natalie and her team who made us feel very welcome and ensured we had everything we needed on the evening. Many thanks to the folks who braved the rain to make it such a great event, and thanks also to the folks behind the scenes who made it happen: Tim & Sara from Genuitec and Jelena from the Eclipse Foundation. It was a great evening for learning, sharing and enjoying good company. We’ll definitely be doing it again; join the Eclipse London User Group and we’ll let you know when.

by Tracy M at September 30, 2015 01:24 PM

JSON Schema Validation with Play

by Maximilian Koegel and Jonas Helming at September 30, 2015 01:14 PM

In this post, we introduce a way of validating JSON HTTP requests based on a given JSON Schema instead of manually implementing the validation. We were recently approached to implement validation of JSON HTTP requests based on Play’s Validation API and a JSON schema. Play already provides a great API for performing JSON validation via its Reads/Writes combinators, which are also used to convert JSON to other data types. Let’s say, for example, that we model a blog application and we have a Post case class:

case class Post(id: Option[Long], title: String, body: String = "")

To ensure that the title of a Post is at least three characters long, we’d create this Reads instance:

val titleIsAtLeast3CharsLong: Reads[String] =
  (JsPath \ "title").read[String](Reads.minLength[String](3))
val json = Json.obj("title" -> "Hello there")
json.validate(titleIsAtLeast3CharsLong) // valid

Alternatively, there’s also the Unified Validation library, which is not part of Play itself, but aims to unify the validation concepts across different domains, such as JSON and forms, by writing so-called Rules. The above example rewritten with the Unified Validation library would look as follows:

val titleIsAtLeast3CharsLong = 
  (Path \ "title").from[JsValue](Rules.minLength(3))
val json = Json.obj("title" -> "Oh")
titleIsAtLeast3CharsLong.validate(json) // invalid

Unfortunately, none of these libraries allowed us to consume a JSON schema. And we didn’t want to rewrite the schema via Reads/Writes or Rules either, since the JSON schema may regularly change and one would need to update all validation rules accordingly.

Therefore we decided to roll our own JSON Schema Validator based on the Unified Validation library and on Play’s existing validation mechanism.

The basic idea is simple: instead of wiring up the validation rules using Reads or Rules in our application logic, we rather provide a Schema Validator that takes a JSON schema and a JSON instance as input and validates the JSON instance against the schema. In our case, Reads therefore only acts as a means to convert JSON instances into domain types, but does not contain any validation logic. This goes hand in hand with JSON macro inception, which allows us to automatically generate Reads/Writes based on a given case class.

In order to illustrate things, here’s an example.

val postReads = Json.reads[Post]
val schema = Json.fromJson[SchemaType](Json.parse(
  """{
    |  "properties": {
    |    "id":    { "type": "integer" },
    |    "title": { "type": "string", "minLength": 3 },
    |    "body":  { "type": "string" }
    |  },
    |  "required": ["title"]
    |}""".stripMargin)).get

The schema requires the title property to be set on an instance with a minimum length of three characters.

In order to demonstrate the typical usage of the Validator, what follows is a snippet from a Play Controller that only allows valid Posts to be submitted, i.e. ones where the title is at least three characters long.

def save = Action(parse.json) { implicit request =>
  val json: JsValue = request.body
  val result: VA[Post] = SchemaValidator.validate(schema, json, postReads)
  // fold applies `invalid` if the result is a Failure or `valid` if it is a Success
  result.fold(
    invalid = { errors: Seq[(Path, Seq[ValidationError])] =>
      BadRequest(errors.toString) // report the validation errors
    },
    valid = { post =>
      Ok(s"Saved post '${post.title}'") // persist the post here
    }
  )
}
As you can see we only need to call the SchemaValidator’s validate method. validate actually returns a JsValue, but if we provide it a Reads[A] instance (postReads of type Reads[Post] in this example), it will use that instance to convert the JsValue into an A type, which is Post in this example.
VA is part of the Unified Validation library and represents the possible outcomes: either a Success or a Failure. In the latter case it will hold the error message why validation has failed.

Of course this little example may seem a bit artificial, but it already illustrates some benefits:

  • We can consume an existing JSON schema instead of rewriting it within the libraries provided in Play
  • It’s more maintainable: especially complex validation rules tend to get messy and unreadable, whereas a JSON schema is simple to read and maintain
  • We can update easily: this is especially true if the JSON schema is consumed from an external source (e.g., another web service), since we do not need to re-compile and hence could change the validation semantics at runtime
  • If you decide to inline the schema, like we did in the example, the library also comes with Writes that you can utilize to generate a valid JSON schema which then may be consumed by other parts of your tool chain

If you would like to try this out yourself head over to this github repo. We would be very happy about any feedback, so get in touch with us!


Guest Blog Post
Guest Author: Edgar Müller



by Maximilian Koegel and Jonas Helming at September 30, 2015 01:14 PM

Proposal: Funding Eclipse Platform Development

by Mike Milinkovich at September 29, 2015 12:00 PM

Last month I announced that the Eclipse Foundation is going to begin using personal and corporate donations to fund Eclipse platform development. Of course, the devil is in the details, and as an open source community we need to define an open and transparent process for how work is prioritized, and funds are allocated. Today, we are publicizing a draft document that lays out such a process.

One thing that we know is that the process can seem sort of heavyweight when you first read it. Be assured that we will be putting together some open-ended work packages to ensure that it remains as lightweight and agile as possible.

If you have any comments or feedback, please post them on the ide-dev@eclipse.org list (subscribe here).

We are looking forward to your feedback!

Filed under: Foundation

by Mike Milinkovich at September 29, 2015 12:00 PM

Spring Framework: @RestController vs @Controller

by Srivatsan Sundararajan at September 28, 2015 03:11 PM

Spring MVC Framework and REST: Spring's annotation based […]

The post Spring Framework: @RestController vs @Controller appeared first on Genuitec.

by Srivatsan Sundararajan at September 28, 2015 03:11 PM

Welcome Tony McCrary as a new eclipse.platform.ui Committer

by Lars Vogel at September 28, 2015 12:07 PM

I would like to welcome Tony McCrary as platform.ui committer.

Tony is the amazing guy behind the new Eclipse icons, which look way better on a dark background. He drew these SVG icons and created a Maven renderer to convert them into PNG files. I have not counted the icons recently, but the last time I looked he had created more than 1500 of them. And Tony stayed with it for several years, even though he got little thanks from the projects he contributed the icons to. Certain projects which received hundreds of icons complained about errors in 1-2 icons instead of saying thank you.

Thanks Tony, for having such a thick skin.

Tony is also a kick-ass developer and maintains his closed-source SWT port based on OpenGL. Scary stuff if you ask me.

Welcome Tony!

by Lars Vogel at September 28, 2015 12:07 PM

ECF 3.11.0 - Custom Remote Services Distribution Providers

by Scott Lewis (noreply@blogger.com) at September 27, 2015 08:05 PM

ECF 3.11.0 is released.

What's new: a new distribution provider API to simplify the creation of Remote Services distribution providers. We've recently used this new API to create distribution providers based upon the JAX-RS standard (with CXF, Jersey, and RestEasy implementations) and Hazelcast.

by Scott Lewis (noreply@blogger.com) at September 27, 2015 08:05 PM