Running SWTBot tests in Travis

by Lorenzo Bettini at July 28, 2015 08:11 AM

The problem I was having when running SWTBot tests in Travis CI was that I could not use the new container-based infrastructure of Travis, which allows caching things like the local Maven repository. This was not possible because running SWTBot tests requires a window manager (on Linux, you can use metacity), which had to be installed during the Travis build; this requires sudo, and using sudo prevents the use of the container-based infrastructure. Not using the cache means that each build would download all the Maven artifacts from scratch.

Now things have changed :)

When running on the container-based infrastructure, you’re still allowed to use Travis’ APT sources and packages extensions, as long as the package you need is in their whitelist. Metacity was not there, but I opened a request for it, and now metacity is available :)

Now you can use the container-based infrastructure and install metacity at the same time (note that you won’t be able to cache installed APT packages, so metacity will have to be reinstalled on each build, but installing metacity is much faster than downloading all the Maven/Tycho artifacts).

The steps to run SWTBot tests in Travis can be summarized as follows:

sudo: false

language: java

jdk: oraclejdk7

cache:
  directories:
  - $HOME/.m2

env: DISPLAY=:99.0

install: true

addons:
  apt:
    packages:
    - metacity

#before_install:
# - sudo apt-get update
# - sudo apt-get install metacity

before_script:
 - sh -e /etc/init.d/xvfb start
 - metacity --sm-disable --replace 2> metacity.err &

I left the old “before_install” steps commented out, just for comparison.

  • “sudo: false” enables the container-based infrastructure
  • “cache:” ensures that the local Maven repository is cached
  • “env:” sets the DISPLAY variable so the tests can use the graphical display
  • “addons:apt:packages” uses the extension that allows you to install whitelisted APT packages (metacity in our case).
  • “before_script:” starts the virtual framebuffer and then metacity.

Then, you can specify the Maven command that runs your build.
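
For example, a Tycho build is typically launched with a script entry like the following (a sketch – the exact goals and options depend on your project):

script: mvn clean verify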

Happy SWTBot testing! :)

 

 


by Lorenzo Bettini at July 28, 2015 08:11 AM

How to update Eclipse to bleeding edge (Rawhide)

by Leo Ufimtsev at July 27, 2015 08:30 PM


Sometimes you want the latest and greatest Eclipse on Fedora, as it includes the latest bug fixes.

To do so, you will have to install the rawhide repository (1) and then update the Eclipse packages to those of Rawhide (2)

1) Enable Rawhide Repository

The command below downloads the repository file into your /etc/yum.repos.d/ (it works for both yum and dnf):

dnf install fedora-repos-rawhide

2) Update Eclipse to Rawhide

This command will update all “Eclipse” packages on your system:

sudo dnf --enablerepo=rawhide update $(sudo dnf list installed | grep eclipse | cut -f1 -d " ")

Explanation:
– This lists all installed Eclipse packages: sudo dnf list installed | grep eclipse
– This gets the first column, delimited by spaces: cut -f1 -d " "
– This enables Rawhide only for this update command: --enablerepo=rawhide

How to downgrade back

I haven’t actually done this myself yet, so downgrade only at your own risk :-).

But you can downgrade by removing the new packages and then re-installing them from the regular repos. Use the same command as above, but instead of ‘update’ use ‘remove’.
If you have success with downgrading, please feel free to post a comment to let me know and I’ll update this post.
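
For reference, a sketch of those two steps (untested – which packages you re-install afterwards depends on what you had before):

sudo dnf remove $(sudo dnf list installed | grep eclipse | cut -f1 -d " ")
# then re-install the Eclipse packages you need from the regular repos, e.g.:
sudo dnf install eclipse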



by Leo Ufimtsev at July 27, 2015 08:30 PM

#OSCON 2015 and the Rise of Open Source Offices

by Chris Aniszczyk at July 27, 2015 04:18 PM

I had a fantastic time at OSCON last week. It was a crazy busy week for Twitter announcing that we are helping form the Cloud Native Computing Foundation and unifying some of the work that has been going on in the Kubernetes and Mesos ecosystems:

It’s rare that you see two communities and the large companies behind them put their egos aside and do what is better for everyone in the long term in the infrastructure space. We also formally joined the Open Container Initiative and plan on donating an AppC C++ implementation in the future:

Thank you to everyone who came to our ping pong tournament party and learned a bit more about the sport of table tennis:

We also had a great @TODOGroup panel at OSCON discussing how different companies are running and establishing open source offices… along with what works and some lessons learned:

Finally, thank you to everyone who came to my talk about lessons learned from Twitter creating its open source office on Thursday:

It’s always amazing to see how many companies are starting to form open source offices. In my talk I tried to highlight some of the better known ones from larger companies and even startups (along with their mission statements):

I really expect this trend to continue in the future; for example, Box is looking to hire their first Head of Open Source, and Guy Martin was just hired to create and run an open source office at Autodesk… Autodesk!

At the end of the day, as more businesses become software companies to some degree, they will naturally depend on a plethora of open source software. Businesses will look for ways to build better relationships with the open source communities their software depends on to maximize value for their business; it’s in their best interest.


by Chris Aniszczyk at July 27, 2015 04:18 PM

Developing a source code editor in JavaFX (on the Eclipse 4 Application Platform)

by Tom Schindl at July 27, 2015 11:26 AM

You’ve chosen e4 and JavaFX as the technologies to implement your cool rich client application. Congrats!

In the first iteration you’ve implemented all the form UIs you need in your application which you made flashy with the help of CSS, animations for perspective switches, lightweight dialogs as introduced in e(fx)clipse 2.0, … .

In the 2nd iteration your task now might be to implement some scripting support, but for that you require an editor that at least supports lexical syntax highlighting. So you have multiple choices:

  • Use a WebView and use an editor written in JavaScript like Orion
  • Use the StyledTextArea shipped with e(fx)clipse 2.0 and implement all the hard stuff like partitioning, tokenizing, … yourself
  • Get e(fx)clipse 2.1 and let the IDE generate the editor for you

If you decided to go with the last option, the following explains how to get going by developing an e4 JavaFX application like the one shown in this video.

Get e(fx)clipse 2.1.0

As of this writing e(fx)clipse 2.1 has not been released, so you need to grab the nightly builds, e.g. by simply downloading our all-in-one build.

Set up a target platform

We have a self-contained target platform feature (org.eclipse.fx.code.target.feature) available from our runtime-p2 repository to get you started super easily.


Warning: Make sure you uncheck “Include required software” because the target won’t resolve if you have that checked!


Get e(fx)clipse 2.1

You need the latest builds from the e(fx)clipse nightly jobs. The easiest way to get started is to download our nightly all-in-one build from http://downloads.efxclipse.bestsolution.at/downloads/nightly/sdk/.

Setup the project

The project setup is done like you are used to for all e4 on JavaFX applications.


The wizard should have created:

  • at.bestsolution.sample.code.app: The main application module
  • at.bestsolution.sample.code.app.feature: The feature making up the main application module
  • at.bestsolution.sample.code.app.product: The product definition required for exporting
  • at.bestsolution.sample.code.app.releng: The release engineering project driving the build

Now we need to add some dependencies to the MANIFEST.MF (a sketch of the resulting entries follows after the list):

  • org.eclipse.fx.core: Some Core APIs
  • org.eclipse.fx.code.editor: Core (=UI Toolkit independent) APIs for code editors
  • org.eclipse.fx.code.editor.fx: JavaFX dependent APIs for code editors
  • org.eclipse.text: Core APIs for text parsing, …
  • org.eclipse.fx.text: Core APIs for text parsing, highlighting, …
  • org.eclipse.fx.text.ui: JavaFX APIs for text parsing, highlighting, …
  • org.eclipse.fx.ui.controls: Additional controls for eg a File-System-Viewer
  • org.eclipse.osgi.services: OSGi service APIs we make use of when generating DS services
  • org.eclipse.fx.core.di: Dependency Inject addons
  • org.eclipse.fx.code.editor.e4: code editor integration to e4
  • org.eclipse.fx.code.editor.fx.e4: JavaFX code editor integration to e4
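
In MANIFEST.MF these dependencies become Require-Bundle entries roughly like the following sketch (version constraints omitted – add them as your build requires):

Require-Bundle: org.eclipse.fx.core,
 org.eclipse.fx.code.editor,
 org.eclipse.fx.code.editor.fx,
 org.eclipse.text,
 org.eclipse.fx.text,
 org.eclipse.fx.text.ui,
 org.eclipse.fx.ui.controls,
 org.eclipse.osgi.services,
 org.eclipse.fx.core.di,
 org.eclipse.fx.code.editor.e4,
 org.eclipse.fx.code.editor.fx.e4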

For export reasons also add all those bundles to the feature.xml in at.bestsolution.sample.code.app.feature.

Generate editor infrastructure

Having everything configured now appropriately we start developing:

  • Create package at.bestsolution.sample.code.app.editor
  • Create a file named dart.ldef and copy the following content into it
    package at.bestsolution.sample.code
    
    dart {
    	partitioning {
    		partition __dftl_partition_content_type
    		partition __dart_singlelinedoc_comment
    		partition __dart_multilinedoc_comment
    		partition __dart_singleline_comment
    		partition __dart_multiline_comment
    		partition __dart_string
    		rule {
    			single_line __dart_string "'" => "'"
    			single_line __dart_string '"' => '"'
    			single_line __dart_singlelinedoc_comment '///' => ''
          		single_line __dart_singleline_comment '//' => ''
          		multi_line __dart_multilinedoc_comment '/**' => '*/'
          		multi_line  __dart_multiline_comment '/*' => '*/'
    		}
    	}
    	lexical_highlighting {
    		rule __dftl_partition_content_type whitespace javawhitespace {
    			default dart_default
    			dart_operator {
    				character [ ';', '.', '=', '/', '\\', '+', '-', '*', '<', '>', ':', '?', '!', ',', '|', '&', '^', '%', '~' ]
    			}
    			dart_bracket {
    				character [ '(', ')', '{', '}', '[', ']' ]
    			}
    			dart_keyword {
    				keywords [ 	  "break", "case", "catch", "class", "const", "continue", "default"
    							, "do", "else", "enum", "extends", "false", "final", "finally", "for"
    							,  "if", "in", "is", "new", "null", "rethrow", "return", "super"
    							, "switch", "this", "throw", "true", "try", "var", "void", "while"
    							, "with"  ]
    			}
    			dart_keyword_1 {
    				keywords [ 	  "abstract", "as", "assert", "deferred"
    							, "dynamic", "export", "external", "factory", "get"
    							, "implements", "import", "library", "operator", "part", "set", "static"
    							, "typedef" ]
    			}
    			dart_keyword_2 {
    				keywords [ "async", "async*", "await", "sync*", "yield", "yield*" ]
    			}
    			dart_builtin_types {
    				keywords [ "num", "String", "bool", "int", "double", "List", "Map" ]
    			}
    		}
    		rule __dart_singlelinedoc_comment {
    			default dart_doc
    			dart_doc_reference {
    				single_line "[" => "]"
    			}
    		}
    		rule __dart_multilinedoc_comment {
    			default dart_doc
    			dart_doc_reference {
    				single_line "[" => "]"
    			}
    		}
    		rule __dart_singleline_comment {
    			default dart_single_line_comment
    		}
    		rule __dart_multiline_comment {
    			default dart_multi_line_comment
    		}
    		rule __dart_string {
    			default dart_string
    			dart_string_inter {
    				single_line "${" => "}"
    				//TODO We need a $ => IDENTIFIER_CHAR rule
    			}
    		}
    	}
    	integration {
    		javafx {
    			java "at.bestsolution.sample.code.app.editor.generated"
    			e4 "at.bestsolution.sample.code.app.editor.generated"
    		}
    	}
    
    }
    
  • Xtext will prompt you to add the Xtext nature to your project
  • The sources are generated into the src-gen folder; you should add that folder to your build path
  • It’s important to note that besides the files generated by the ldef language, there are 2 files generated into your OSGi-INF folder by the DS tooling from ca.ecliptical.pde.ds

I won’t explain the details of the dart.ldef file because there’s already a blog post with a detailed description of the file.

The only part that is new is the integration section:

integration {
	javafx {
		java "at.bestsolution.sample.code.app.editor.generated"
		e4 "at.bestsolution.sample.code.app.editor.generated"
	}
}

which configures the code generator to:

  • Generate Java code for the partitioning and tokenizing
  • Generate e4 registration information in terms of OSGi services

In contrast to the last blog post, where we ran our code in a non-OSGi/non-e4 world and had to wire everything up ourselves, this is not needed this time because the Eclipse DI container will take care of it!

Define a Filesystem-Viewer-Part

To browse the filesystem we need a viewer which might look like this:

package at.bestsolution.sample.code.app;

import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.inject.Named;

import org.eclipse.e4.core.di.annotations.Optional;
import org.eclipse.e4.ui.di.PersistState;
import org.eclipse.fx.code.editor.services.TextEditorOpener;
import org.eclipse.fx.core.Memento;
import org.eclipse.fx.ui.controls.filesystem.FileItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceEvent;
import org.eclipse.fx.ui.controls.filesystem.ResourceItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceTreeView;

import javafx.collections.FXCollections;
import javafx.scene.layout.BorderPane;

public class ResourceViewerPart {
	@Inject
	TextEditorOpener opener;

	private Path rootDirectory;

	private ResourceTreeView viewer;

	@PostConstruct
	void init(BorderPane parent, Memento memento) {
		viewer = new ResourceTreeView();

		if( rootDirectory == null ) {
			String dir = memento.get("root-dir", null);
			if( dir != null ) {
				rootDirectory = Paths.get(URI.create(dir));
			}
		}

		if( rootDirectory != null ) {
			viewer.setRootDirectories(FXCollections.observableArrayList(ResourceItem.createObservedPath(rootDirectory)));
		}
		viewer.addEventHandler(ResourceEvent.openResourceEvent(), this::handleOpenResource);
		parent.setCenter(viewer);
	}

	@Inject
	@Optional
	public void setRootDirectory(@Named("rootDirectory") Path rootDirectory) {
		this.rootDirectory = rootDirectory;
		if( viewer != null ) {
			viewer.setRootDirectories(FXCollections.observableArrayList(ResourceItem.createObservedPath(rootDirectory)));
		}
	}

	private void handleOpenResource(ResourceEvent<ResourceItem> e) {
		e.getResourceItems()
			.stream()
			.filter( r -> r instanceof FileItem)
			.map( r -> (FileItem)r)
			.filter( r -> r.getName().endsWith(".dart"))
			.forEach(this::handle);
	}

	private void handle(FileItem item) {
		opener.openEditor(item.getUri());
	}

	@PersistState
	public void rememberState(Memento memento) {
		if( rootDirectory != null ) {
			memento.put("root-dir", rootDirectory.toFile().toURI().toString());
		}
	}
}

Define the application

As you already know, e4 applications are not defined in code but with the help of the e4 application model, which is stored by default in e4xmi files. The final model looks like this:


The important parts are:

  • DirtyStateTrackingAddon: A special addon that tracks the dirty state; it is added to the application’s Addons section
  • Handler: We use the framework handler org.eclipse.fx.code.editor.e4.handlers.SaveFile
  • Window-Variables: We have 2 special variables defined at the window level (activeInput, rootDirectory)
  • PartStack-Tags: We tagged the part stack that hosts the editors with editorContainer
  • Resource Viewer Part: We register the resource viewer implementation from above in the part definition
  • Root Directory Handler: To set the root directory we have a handler that looks like this:
    package at.bestsolution.sample.code.app.handler;
    
    import java.io.File;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    
    import org.eclipse.e4.core.di.annotations.Execute;
    import org.eclipse.fx.core.di.ContextValue;
    
    import javafx.beans.property.Property;
    import javafx.stage.DirectoryChooser;
    import javafx.stage.Stage;
    
    public class SetRootDirectory {
    
    	@Execute
    	public void setRootDirectory(@ContextValue("rootDirectory") Property<Path> rootDirectory, Stage stage) {
    		DirectoryChooser chooser = new DirectoryChooser();
    		File directory = chooser.showDialog(stage);
    		if( directory != null ) {
    			rootDirectory.setValue(Paths.get(directory.getAbsolutePath()));
    		}
    	}
    }
    


by Tom Schindl at July 27, 2015 11:26 AM

Some Rest with Vert.x

by cescoffier at July 27, 2015 12:00 AM

Previously in this blog series

This post is part of the Introduction to Vert.x series. So, let’s have a quick look at the content of the previous posts. In the first post, we developed a very simple Vert.x 3 application and saw how this application can be tested, packaged and executed. In the last post, we saw how this application became configurable and how we can use a random port in tests.

Well, nothing fancy… Let’s go a bit further this time and develop a CRUD-ish application: an application exposing an HTML page that interacts with the backend using a REST API. The level of RESTfulness of the API is not the topic of this post; I let you decide, as it’s a very slippery topic.

So, in other words we are going to see:

  • Vert.x Web - a framework that lets you create web applications easily using Vert.x
  • How to expose static resources
  • How to develop a REST API

The code developed in this post is available on the post-3 branch of this Github project. We are going to start from the post-2 codebase.

So, let’s start.

Vert.x Web

As you may have noticed in the previous posts, dealing with complex HTTP applications using only Vert.x Core would be kind of cumbersome. That’s the main reason behind Vert.x Web. It makes the development of Vert.x-based web applications really easy, without changing the philosophy.

To use Vert.x Web, you need to update the pom.xml file to add the following dependency:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web</artifactId>
  <version>3.0.0</version>
</dependency>

That’s the only thing you need to use Vert.x Web. Sweet, no ?

Let’s now use it. Remember, in the previous post, when we requested http://localhost:8080, we replied with a nice Hello World message. Let’s do the same with Vert.x Web. Open the io.vertx.blog.first.MyFirstVerticle class and change the start method to be:

@Override
public void start(Future<Void> fut) {
  // Create a router object.
  Router router = Router.router(vertx);

  // Bind "/" to our hello message - so we are still compatible.
  router.route("/").handler(routingContext -> {
    HttpServerResponse response = routingContext.response();
    response
        .putHeader("content-type", "text/html")
        .end("<h1>Hello from my first Vert.x 3 application</h1>");
  });

  // Create the HTTP server and pass the "accept" method to the request handler.
  vertx
      .createHttpServer()
      .requestHandler(router::accept)
      .listen(
          // Retrieve the port from the configuration,
          // default to 8080.
          config().getInteger("http.port", 8080),
          result -> {
            if (result.succeeded()) {
              fut.complete();
            } else {
              fut.fail(result.cause());
            }
          }
      );
}

You may be surprised by the length of this snippet (in comparison to the previous code). But as we are going to see, it will put our app on steroids, just be patient.

As you can see, we start by creating a Router object. The router is the cornerstone of Vert.x Web. This object is responsible for dispatching the HTTP requests to the right handler. Two other concepts are very important in Vert.x Web:

  • Routes - which let you define how requests are dispatched
  • Handlers - which are the actual action processing the requests and writing the result. Handlers can be chained.

If you understand these 3 concepts, you have understood everything in Vert.x Web.

Let’s focus on this code first:

router.route("/").handler(routingContext -> {
  HttpServerResponse response = routingContext.response();
  response
      .putHeader("content-type", "text/html")
      .end("

Hello from my first Vert.x 3 application

"
); });

It routes requests arriving on “/“ to the given handler. Handlers receive a RoutingContext object. This handler is quite similar to the code we had before, and that’s quite normal, as it manipulates the same type of object: HttpServerResponse.

Let’s now have a look at the rest of the code:

vertx
    .createHttpServer()
    .requestHandler(router::accept)
    .listen(
        // Retrieve the port from the configuration,
        // default to 8080.
        config().getInteger("http.port", 8080),
        result -> {
          if (result.succeeded()) {
            fut.complete();
          } else {
            fut.fail(result.cause());
          }
        }
    );
}

It’s basically the same code as before, except that we change the request handler. We pass router::accept to the handler. You may not be familiar with this notation. It’s a reference to a method (here the method accept of the router object). In other words, it instructs Vert.x to call the accept method of the router when it receives a request.
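
If method references are new to you, that line is equivalent to passing a lambda that simply forwards the request to the router:

vertx
    .createHttpServer()
    .requestHandler(request -> router.accept(request))   // same as .requestHandler(router::accept)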

Let’s see if this works:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

By opening http://localhost:8080 in your browser you should see the Hello message. As we didn’t change the behavior of the application, our tests are still valid.

Exposing static resources

Ok, so we have a first application using Vert.x Web. Let’s see some of the benefits. Let’s start with serving static resources, such as an index.html page. Before we go further, I should start with a disclaimer: “the HTML page we are going to see here is ugly like hell: I’m not a UI guy”. I should also add that there are probably plenty of better ways to implement this and a myriad of frameworks I should try, but that’s not the point. I tried to keep things simple and rely only on jQuery and Twitter Bootstrap, so if you know a bit of JavaScript you can understand and edit the page.

Let’s create the HTML page that will be the entry point of our application. Create an index.html page in src/main/resources/assets with the content from here. As it’s just an HTML page with a bit of JavaScript, we won’t detail the file here. If you have questions, just post comments.

Basically, the page is a simple CRUD UI to manage my collection of not-yet-finished bottles of Whisky. It was made in a generic way, so you can transpose it to your own collection. The list of products is displayed in the main table. You can create a new product, edit one or delete one. These actions rely on a REST API (that we are going to implement) through AJAX calls. That’s all.

Once this page is created, edit the io.vertx.blog.first.MyFirstVerticle class and change the start method to be:

@Override
public void start(Future<Void> fut) {
  Router router = Router.router(vertx);
  router.route("/").handler(routingContext -> {
    HttpServerResponse response = routingContext.response();
    response
        .putHeader("content-type", "text/html")
        .end("<h1>Hello from my first Vert.x 3 application</h1>");
  });

  // Serve static resources from the /assets directory
  router.route("/assets/*").handler(StaticHandler.create("assets"));

  vertx
      .createHttpServer()
      .requestHandler(router::accept)
      .listen(
          // Retrieve the port from the configuration,
          // default to 8080.
          config().getInteger("http.port", 8080),
          result -> {
            if (result.succeeded()) {
              fut.complete();
            } else {
              fut.fail(result.cause());
            }
          }
      );
}

The only difference with the previous code is the router.route("/assets/*").handler(StaticHandler.create("assets")); line. So, what does this line mean? It’s actually quite simple. It routes requests on “/assets/*” to resources stored in the “assets” directory. So our index.html page is going to be served using http://localhost:8080/assets/index.html.

Before testing this, let’s take a few seconds to look at the handler creation. All processing actions in Vert.x Web are implemented as handlers. To create a handler you always call the create method.

So, I’m sure you are impatient to see our beautiful HTML page. Let’s build and run the application:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

Now, open your browser to http://localhost:8080/assets/index.html. Here it is… Ugly right? I told you.

As you may notice too… the table is empty, this is because we didn’t implement the REST API yet. Let’s do that now.

REST API with Vert.x Web

Vert.x Web makes the implementation of a REST API really easy, as it basically routes your URLs to the right handlers. The API is very simple, and will be structured as follows:

  • GET /api/whiskies => get all bottles (getAll)
  • GET /api/whiskies/:id => get the bottle with the corresponding id (getOne)
  • POST /api/whiskies => add a new bottle (addOne)
  • PUT /api/whiskies/:id => update a bottle (updateOne)
  • DELETE /api/whiskies/:id => delete a bottle (deleteOne)

We need some data…

But before going further, let’s create our data object. Create the src/main/java/io/vertx/blog/first/Whisky.java with the following content:

package io.vertx.blog.first;

import java.util.concurrent.atomic.AtomicInteger;

public class Whisky {

  private static final AtomicInteger COUNTER = new AtomicInteger();

  private final int id;

  private String name;

  private String origin;

  public Whisky(String name, String origin) {
    this.id = COUNTER.getAndIncrement();
    this.name = name;
    this.origin = origin;
  }

  public Whisky() {
    this.id = COUNTER.getAndIncrement();
  }

  public String getName() {
    return name;
  }

  public String getOrigin() {
    return origin;
  }

  public int getId() {
    return id;
  }

  public void setName(String name) {
    this.name = name;
  }

  public void setOrigin(String origin) {
    this.origin = origin;
  }
}

It’s a very simple bean class (so with getters and setters). We chose this format because Vert.x relies on Jackson to handle the JSON format. Jackson automates the serialization and deserialization of bean classes, making our code much simpler.

Now, let’s create a couple of bottles. In the MyFirstVerticle class, add the following code:

// Store our product
private Map<Integer, Whisky> products = new LinkedHashMap<>();
// Create some product
private void createSomeData() {
  Whisky bowmore = new Whisky("Bowmore 15 Years Laimrig", "Scotland, Islay");
  products.put(bowmore.getId(), bowmore);
  Whisky talisker = new Whisky("Talisker 57° North", "Scotland, Island");
  products.put(talisker.getId(), talisker);
}

Then, in the start method, call the createSomeData method:

@Override
public void start(Future<Void> fut) {

  createSomeData();

  // Create a router object.
  Router router = Router.router(vertx);

  // Rest of the method
}

As you have noticed, we don’t really have a backend here; it’s just an (in-memory) map. Adding a backend will be covered in another post.

Get our products

Enough decoration, let’s implement the REST API. We are going to start with GET /api/whiskies. It returns the list of bottles in a JSON Array.

In the start method, add this line just below the static handler line:

router.get("/api/whiskies").handler(this::getAll);

This line instructs the router to handle the GET requests on “/api/whiskies” by calling the getAll method. We could have inlined the handler code, but for clarity reasons let’s create another method:

private void getAll(RoutingContext routingContext) {
  routingContext.response()
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(products.values()));
}

Like every handler, our method receives a RoutingContext. It populates the response by setting the content-type and the actual content. Because our content may contain weird characters, we force the charset to UTF-8. To create the actual content, there is no need to compute the JSON string ourselves. Vert.x lets us use the Json API. So Json.encodePrettily(products.values()) computes the JSON string representing the set of bottles.

We could have used Json.encodePrettily(products), but to make the JavaScript code simpler, we just return the set of bottles and not an object containing ID => Bottle entries.

With this in place, we should be able to retrieve the set of bottles from our HTML page. Let’s try it:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

Then open the HTML page in your browser (http://localhost:8080/assets/index.html), and you should see:


I’m sure you are curious, and want to actually see what is returned by our REST API. Let’s open a browser to http://localhost:8080/api/whiskies. You should get:

[ {
  "id" : 0,
  "name" : "Bowmore 15 Years Laimrig",
  "origin" : "Scotland, Islay"
}, {
  "id" : 1,
  "name" : "Talisker 57° North",
  "origin" : "Scotland, Island"
} ]

Create a product

Now that we can retrieve the set of bottles, let’s create a new one. Unlike the previous REST API endpoint, this one needs to read the request’s body. For performance reasons, body reading has to be explicitly enabled. Don’t be scared… it’s just a handler.

In the start method, add these lines just below the line ending by getAll:

router.route("/api/whiskies*").handler(BodyHandler.create());
router.post("/api/whiskies").handler(this::addOne);

The first line enables the reading of the request body for all routes under “/api/whiskies”. We could have enabled it globally with router.route().handler(BodyHandler.create()).

The second line maps POST requests on /api/whiskies to the addOne method. Let’s create this method:

private void addOne(RoutingContext routingContext) {
  final Whisky whisky = Json.decodeValue(routingContext.getBodyAsString(),
      Whisky.class);
  products.put(whisky.getId(), whisky);
  routingContext.response()
      .setStatusCode(201)
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(whisky));
}

The method starts by retrieving the Whisky object from the request body. It just reads the body into a String and passes it to the Json.decodeValue method. Once the object is created, it adds it to the backend map and returns the created bottle as JSON.

Let’s try this. Rebuild and restart the application with:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

Then, refresh the HTML page and click on the Add a new bottle button. Enter data such as “Jameson” as the name and “Ireland” as the origin (purists will have noticed that this is actually a Whiskey and not a Whisky). The bottle should be added to the table.

Status 201 ?
As you can see, we have set the response status to 201. It means CREATED and is generally used in REST APIs that create an entity. By default Vert.x Web sets the status to 200, meaning OK.

Finishing a bottle

Well, bottles do not last forever, so we should be able to delete a bottle. In the start method, add this line:

router.delete("/api/whiskies/:id").handler(this::deleteOne);

In the URL, we define a path parameter :id. So, when handling a matching request, Vert.x extracts the path segment corresponding to the parameter and lets us access it in the handler method. For instance, /api/whiskies/0 maps id to 0.

Let’s see how the parameter can be used in the handler method. Create the deleteOne method as follows:

private void deleteOne(RoutingContext routingContext) {
  String id = routingContext.request().getParam("id");
  if (id == null) {
    routingContext.response().setStatusCode(400).end();
  } else {
    Integer idAsInteger = Integer.valueOf(id);
    products.remove(idAsInteger);
    // Send the 204 only on this path; the 400 branch above has already ended the response.
    routingContext.response().setStatusCode(204).end();
  }
}

The path parameter is retrieved using routingContext.request().getParam("id"). It checks whether it’s null (not set), and in this case returns a Bad Request response (status code 400). Otherwise, it removes it from the backend map.

Status 204 ?
As you can see, we have set the response status to 204 - NO CONTENT. Responses to the HTTP DELETE verb generally have no content.

The other methods

We won’t detail getOne and updateOne as the implementations are straightforward and very similar. Their implementations are available on Github.
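
For illustration only, a minimal getOne in the same style could look like the following (a sketch, not necessarily identical to the version on GitHub):

private void getOne(RoutingContext routingContext) {
  String id = routingContext.request().getParam("id");
  if (id == null) {
    routingContext.response().setStatusCode(400).end();
  } else {
    Whisky whisky = products.get(Integer.valueOf(id));
    if (whisky == null) {
      // No bottle with that id -> 404 Not Found
      routingContext.response().setStatusCode(404).end();
    } else {
      routingContext.response()
          .putHeader("content-type", "application/json; charset=utf-8")
          .end(Json.encodePrettily(whisky));
    }
  }
}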

Cheers !

It’s time to conclude this post. We have seen how Vert.x Web lets you implement a REST API easily and how it can serve static resources. A bit more fancy than before, but still pretty easy.

Next time, we are going to improve our tests to cover the REST API.

Stay Tuned & Happy Coding!


by cescoffier at July 27, 2015 12:00 AM

AUTOSAR: OCL, Xtend, oAW for validation

by Andreas Graf at July 25, 2015 05:13 PM

In a recent post, I had written about Model-to-Model-transformation with Xtend. In addition to M2M-transformation, Xtend and the new Sphinx Check framework are a good pair for model validation. There are other frameworks, such as OCL, which are also candidates. Xpand (formerly known as oAW) is used in COMASSO. This blog post sketches some questions / issues to consider when choosing a framework for model validation.

Support for unsettable attributes

EMF supports attributes that can have the status “unset” (i.e. they have never been explicitly set), as well as default values. When accessing such an attribute with the standard getter method, you cannot distinguish whether the attribute has been explicitly set to the same value as the default or has never been touched.

If this kind of check is relevant, the validation technology should support access to the EMF methods that provide the explicit predicate of whether a value has been set.
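
As a small Java sketch of that difference (the Frame element, its length attribute and the AutosarFactory/AutosarPackage classes are made-up names standing in for any generated EMF model):

// Hypothetical generated model element with a default value for "length"
Frame frame = AutosarFactory.eINSTANCE.createFrame();

// The plain getter cannot distinguish "explicitly set to the default value"
// from "never touched" - it returns the default value in both cases:
int length = frame.getLength();

// The reflective EMF API exposes the explicit "has been set" predicate:
boolean explicitlySet = frame.eIsSet(AutosarPackage.eINSTANCE.getFrame_Length());

// For attributes modelled as "unsettable", the generator additionally
// emits isSetLength()/unsetLength() methods on the Frame interface.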

Inverse References

With AUTOSAR, a large number of checks will involve some logic to see if a given element is referenced by other elements (e.g. checks like “find all signals that are not referenced from a PDU”). Usually, these references are uni-directional and traversal of the model is required to find referencing elements. In these cases, performance is heavily influenced by the underlying framework support. A direct solution would be to traverse the entire model or use the utility functions of EMF. However, if the technology allows access to frameworks like IncQuery, a large number of queries / checks can be sped up significantly.

Error Reporting

Error reporting is central for the usability of the generated error messages. This involves a few aspects that can be explained with a simple example: consider that we want to check that each PDU has a unique id.

Context

In Xpand (and similar in OCL), a check could look like:

context PDU ERROR "Duplicate ID in "+this.shortName:
this.parent.pdu.exists(e|e.id==this.id && e != this)

This results in quadratic runtime, since the list of PDUs is fully traversed for each PDU that is checked. This can be improved in several ways:

  1. Keep the context on the PDU level, but allow some efficient caching so that the code is not executed so often. However, that involves some additional effort in making sure that the caches are created / torn down at the right time (e.g. after model modification)
  2. Move the context up to package or model level and have a framework that allows to generate errors/warning not only for the elements in the context, but on any other (sub-) elements. The Sphinx Check framework supports this.

Calculation intensive error messages

Sometimes the calculation of a check is quite complex and, in addition, generating a meaningful error message might need some results from that calculation. Consider e.g. this example from the ocl documentation:

invariant SufficientCopies('There are '
+ library.loans->select((book = self))->size().toString()
+ ' loans for the ' + copies.toString() + ' copies of \'' + name + '\''):
library.loans->select((book = self))->size() <= copies;

The fragment

library.loans->select((book = self))->size()

is used in both the error message as well as the calculation of the predicate. In this case, this is no big deal, but when the calculation gets more complex, this can be annoying. Approaches are

  1. Factor the code into a helper function and call it twice. Under the assumption that the error message is only evaluated when a check fails, that should not incur much overhead. However, it moves the actual predicate away from the invariant statement.
  2. In Xpand, any information can be attached to any model elements. So in the check body, the result of the complex calculation can be attached to the model element and the information is retrieved in the calculation of the error message.
  3. In the Sphinx Check framework, error messages can be calculated from within the check body.

User documentation for checks

Most validation frameworks support the definition of at least an error code (error id) and a descriptive message. However, more detailed explanations of the checks are often required for the users to be able to work with and fix check results. For the development process, it is beneficial if that kind of description is stored close to the actual tests. This could be achieved by analysing comments near the validations, tools like Javadoc etc. The Sphinx framework describes ids, messages, severities and user documentation in an EMF model. At runtime of an Eclipse RCP, it is possible to use the dynamic help functionality of the Eclipse Help to generate documentation for all registered checks on the fly.

 

 

Here are some additional features of the Xtend language that come in handy when writing validations:

  • Comfort: Xtend has a number of features that make writing checks very concise and comfortable. The most important is the concise syntax to navigate over models. This helps to avoid loops that would be required when implementing in Java:

    val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
        shortName == "DemMemoryDestinationRef"]

  • Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as Yourkit, JProfiler) to find bottlenecks in your transformations.
  • Long-Term-Support: Xtend compiles to plain Java. You can just keep the compiled Java code for safety and be totally independent of the Xtend project itself.
  • Test-Support: Xtend compiles to plain Java. You can just use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation that are documented in nice reports generated with standard Java tooling.
  • Code Coverage: Xtend compiles to plain Java. You can just use any code coverage tools (such as Jacoco).
  • Debugging: Debugger integration is fully supported to step through your code.
  • Extensibility: Xtend is fully integrated with Java. It does not matter if you write your code in Java or Xtend.
  • Documentation: You can use standard Javadocs in your Xtend transformations and use the standard tooling to get reports.
  • Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.
  • Active Annotations: Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements.
  • Full EMF support: The Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
  • IDE Integration: The Xtend editors support essential operations such as "Find References", "Go To Declaration" etc.

 


by Andreas Graf at July 25, 2015 05:13 PM

Eclipse Hackathon Hamburg – 2015 Q3

by eselmeister at July 25, 2015 10:11 AM

Yesterday, we had our third Eclipse Hackathon (2015) in Hamburg, Germany. It was a great meeting :-).
https://wiki.eclipse.org/Hackathon_Hamburg_2015_Q3

Eclipse Hackathon Hamburg Q3 2015
* Foto (C) by Tobias Baumann

Stay tuned, the next Hackathon will be in approx. three months.
https://wiki.eclipse.org/Hackathon_Hamburg_2015_Q4


by eselmeister at July 25, 2015 10:11 AM

DemoCamp Mars in Stuttgart: Great People, Talks, and Food

by Niko Stotz at July 24, 2015 01:11 PM

We had a nice DemoCamp in Stuttgart for the Eclipse Mars release train. About 50 people had a great time alongside great food.

The full agenda, including links to all slides, can be found in the Eclipse Wiki.

The first talk by Matthias Zimmermann showed the Business Application Framework Scout, especially the new features of Mars and the upcoming next release.

Afterwards, Martin Schreiber presented their experience with Tycho and some practical solutions.

Jinying Yu gave an overview of the CloudScale project. It analyzes, estimates and simulates the scalability of a software system, especially for moving it to a cloud service.

After some refreshments and discussions during the break, Marco Eilers impressed with an Xtend interpreter, including tracing between the input model of a transformation and the resulting model or text – in both directions!

The next talk by Miro Spönemann showed the current state of Xtext on the Web and in IntelliJ. As Miro is the lead developer of the Web variant, he could easily answer all questions in-depth.

Finally, Harald Mackamul gave an overview of the APP4MC project. It extends the findings of the Amalthea project to multi- and many-core systems.

We finished the DemoCamp with more discussions, food, and beer.

I’d like to thank all the great speakers and the attendees for making this DemoCamp fun as ever.


by Niko Stotz at July 24, 2015 01:11 PM

Honored to join the Eclipse Architecture Council

by Lars Vogel at July 24, 2015 11:52 AM

I have recently been elected to join the Eclipse Architecture Council. The Eclipse Architecture Council serves the community by identifying and tackling any issues that hinder Eclipse’s continued technological success and innovation, widespread adoption, and future growth.

Looking forward to helping here.


by Lars Vogel at July 24, 2015 11:52 AM

Copyright Headers from the Git History

by Eike Stepper (noreply@blogger.com) at July 24, 2015 09:47 AM

The other day Vincent Zurczak blogged about Updating Copyright Mentions with Eclipse, and his way of maintaining legal headers in software artifacts is very similar to what we've always done in CDO. We had the exact same header in all artifacts and we used search and replace once per year to update them all. For us that had several disadvantages:
  1. Most importantly that means to modify files that have no other (real) changes in that year.
  2. It was hard to find the files with missing legal headers.
  3. It was hard (well, mostly because I wasn't smart enough) to have different copyright owners.
What I always envisioned was a tool that identifies files that could or should have legal headers, consults the Git history for these files and assembles a copyright line as follows:

Copyright (c) 2008, 2009, 2011-2013 Owner and others.
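
The year list in such a line is just the set of commit years compressed into ranges. As an illustration (my own sketch, not the actual CDO code), that compression could be implemented like this:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class CopyrightYears {

	// Compresses a sorted set of commit years into "2008, 2009, 2011-2013" style text:
	// runs of three or more consecutive years become a range, shorter runs are listed.
	static String formatYears(SortedSet<Integer> years) {
		List<String> parts = new ArrayList<>();
		Integer start = null, previous = null;
		for (int year : years) {
			if (start != null && year == previous + 1) {
				previous = year;
				continue;
			}
			if (start != null)
				parts.add(rangeText(start, previous));
			start = previous = year;
		}
		if (start != null)
			parts.add(rangeText(start, previous));
		return String.join(", ", parts);
	}

	private static String rangeText(int start, int end) {
		if (start == end)
			return Integer.toString(start);
		if (end - start == 1)
			return start + ", " + end;
		return start + "-" + end;
	}

	public static void main(String[] args) {
		SortedSet<Integer> years = new TreeSet<>(Arrays.asList(2008, 2009, 2011, 2012, 2013));
		System.out.println("Copyright (c) " + formatYears(years) + " Owner and others.");
	}
}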

Yesterday I've finally finished this tool:



A simple Check Copyrights run for missing copyrights ends with:

Copyrights missing: 0
Copyrights rewritten: 0
Files visited: 22722
Time needed: 5.73 seconds

If there are copyrights missing the tool produces a list of the paths and can optionally open them in editors. The Update Copyrights action takes approx. 35 minutes on the same working tree and results in files with beautiful legal headers that are totally in line with the Git history.

If you are interested in the code have a look at UpdateCopyrightsAction.java. There are just a few places that are CDO-specific and I would be happy to review your patches to make the tool more flexible.

by Eike Stepper (noreply@blogger.com) at July 24, 2015 09:47 AM

Developing a source code editor in JavaFX (without e4 and OSGi)

by Tom Schindl at July 24, 2015 09:36 AM

In my last blog post I introduced the DSL we’ll ship with e(fx)clipse 2.1 in August 2015.

Our main deployment platform is of course e4 on JavaFX, but because we have a clean architecture based on IoC and services, our components don’t know about OSGi and hence can be used in any Java(FX) application, no matter whether you run on OSGi or not.

To show you that those are not idle words, this first blog post showing the code editor components in action uses plain Java and Maven as the build tool – for those who want to use them in OSGi, a blog post will follow soon.


The following video demonstrates the final application in action

Step 1: Install e(fx)clipse 2.1 or later

At the time of this writing e(fx)clipse 2.1 has not been released, so the best option to get started is to download our all-in-one nightly build.

Step 2: Create a new maven project


Step 3: Modify the pom.xml

First we need to modify the source and target version for the Java compiler by adding:

<!-- ... -->
<build>
	<plugins>
		<plugin>
			<artifactId>maven-compiler-plugin</artifactId>
			<configuration>
				<source>1.8</source>
				<target>1.8</target>
			</configuration>
		</plugin>
	</plugins>
</build>
<!-- ... -->

Because the dependencies have not yet been released we need to add the Sonatype snapshot repository with:

<!-- ... -->
<repositories>
	<repository>
		<id>sonatype-snapshots</id>
		<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
	</repository>
</repositories>
<!-- ... -->

And finally we add the JavaFX-Code editor component with:

<!-- ... -->
<dependencies>
	<dependency>
		<groupId>at.bestsolution.eclipse</groupId>
		<artifactId>org.eclipse.fx.code.editor.fx</artifactId>
		<version>2.1.0-SNAPSHOT</version>
	</dependency>
</dependencies>
<!-- ... -->

At the end your pom.xml should look like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>at.bestsolution.sample</groupId>
	<artifactId>at.bestsolution.sample.code</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<repositories>
		<repository>
			<id>sonatype-snapshots</id>
			<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
		</repository>
	</repositories>

	<dependencies>
		<dependency>
			<groupId>at.bestsolution.eclipse</groupId>
			<artifactId>org.eclipse.fx.code.editor.fx</artifactId>
			<version>2.1.0-SNAPSHOT</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<artifactId>maven-compiler-plugin</artifactId>
				<configuration>
					<source>1.8</source>
					<target>1.8</target>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

Step 4: Language definition

Now that we have configured our project appropriately we can start defining our application. We start with creating a package like “at.bestsolution.sample.code” and there we add a file named “dart.ldef”.


After the file is created, Eclipse will prompt you to add the Xtext nature to your project, because the DSL is of course implemented with the help of Xtext.


The last step before we start defining our language is to make some modifications to the project. Open the project properties, navigate to LDef/Compiler, check “Enable project specific settings” and change the Output Folder/Directory value to src/main/java.


Now paste the following content to the dart.ldef file:

package at.bestsolution.sample.code

dart {
	partitioning {
		partition __dftl_partition_content_type
		partition __dart_singlelinedoc_comment
		partition __dart_multilinedoc_comment
		partition __dart_singleline_comment
		partition __dart_multiline_comment
		partition __dart_string
		rule {
			single_line __dart_string "'" => "'"
			single_line __dart_string '"' => '"'
			single_line __dart_singlelinedoc_comment '///' => ''
      		single_line __dart_singleline_comment '//' => ''
      		multi_line __dart_multilinedoc_comment '/**' => '*/'
      		multi_line  __dart_multiline_comment '/*' => '*/'
		}
	}
	lexical_highlighting {
		rule __dftl_partition_content_type whitespace javawhitespace {
			default dart_default
			dart_operator {
				character [ ';', '.', '=', '/', '\\', '+', '-', '*', '<', '>', ':', '?', '!', ',', '|', '&', '^', '%', '~' ]
			}
			dart_bracket {
				character [ '(', ')', '{', '}', '[', ']' ]
			}
			dart_keyword {
				keywords [ 	  "break", "case", "catch", "class", "const", "continue", "default"
							, "do", "else", "enum", "extends", "false", "final", "finally", "for"
							,  "if", "in", "is", "new", "null", "rethrow", "return", "super"
							, "switch", "this", "throw", "true", "try", "var", "void", "while"
							, "with"  ]
			}
			dart_keyword_1 {
				keywords [ 	  "abstract", "as", "assert", "deferred"
							, "dynamic", "export", "external", "factory", "get"
							, "implements", "import", "library", "operator", "part", "set", "static"
							, "typedef" ]
			}
			dart_keyword_2 {
				keywords [ "async", "async*", "await", "sync*", "yield", "yield*" ]
			}
			dart_builtin_types {
				keywords [ "num", "String", "bool", "int", "double", "List", "Map" ]
			}
		}
		rule __dart_singlelinedoc_comment {
			default dart_doc
			dart_doc_reference {
				single_line "[" => "]"
			}
		}
		rule __dart_multilinedoc_comment {
			default dart_doc
			dart_doc_reference {
				single_line "[" => "]"
			}
		}
		rule __dart_singleline_comment {
			default dart_single_line_comment
		}
		rule __dart_multiline_comment {
			default dart_multi_line_comment
		}
		rule __dart_string {
			default dart_string
			dart_string_inter {
				single_line "${" => "}"
				//TODO We need a $ => IDENTIFIER_CHAR rule
			}
		}
	}
	integration {
		javafx {
			java "at.bestsolution.sample.code.generated"
		}
	}

}

I won’t explain most of the file’s content because I’ve done that already in the last blog post “Defining source editors with a DSL”, but in brief it holds the partitioning and tokenizing rules required to provide lexical highlighting for Google Dart.

The only new section is:

package org.eclipse.fx.code.dart

dart {
....
	integration {
		javafx {
			java "at.bestsolution.sample.code.generated"
		}
	}

}

This section holds the configuration for the code generator. In our case we instruct it to generate some Java code for us, and your project explorer should show something similar to this.


For us the 2 most important classes are:

  • DartPartitioner: This one is responsible for partitioning your source file into e.g. comment sections, code sections, string sections, …
  • DartPresentationReconciler: This one is responsible for tokenizing the different partitions, e.g. to create keyword tokens, …

Step 5: Setup an editor control

Now that the parsing infrastructure is in place we can create our setup for the editor control. For that we create a new class “DartEditor” in “at.bestsolution.sample.code” and make it extend “org.eclipse.fx.code.editor.fx.TextEditor“.

package at.bestsolution.sample.code;

import org.eclipse.fx.code.editor.StringInput;

public class DartEditor extends TextEditor {
	public DartEditor(StringInput input) {
		setInput(input);
		setDocument(new InputDocument(input));
		setPartitioner(new DartPartitioner());
		setSourceViewerConfiguration(
			new DefaultSourceViewerConfiguration(input, new DartPresentationReconciler(), null, null, null)
		);
	}
}

The above is the minimal configuration you need when creating an editor:

  • org.eclipse.fx.code.editor.Input: Is the abstraction for the retrieving and storing the content eg on the filesystem, …
  • org.eclipse.jface.text.Document: Is the text buffer behind the text editor
  • org.eclipse.jface.text.IDocumentPartitioner: The component responsible to partition the source file
  • org.eclipse.jface.text.source.SourceViewerConfiguration: The component responsible for syntax highlighting, error displaying and auto-completion

The final application

Now that we have an editor component we can build our final application which needs to have a filesystem browser on the left and a tab folder with editors on the right.

package at.bestsolution.sample.code;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.eclipse.fx.code.editor.SourceFileInput;
import org.eclipse.fx.ui.controls.filesystem.FileItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceEvent;
import org.eclipse.fx.ui.controls.filesystem.ResourceItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceTreeView;

import javafx.application.Application;
import javafx.beans.binding.Bindings;
import javafx.beans.binding.StringExpression;
import javafx.beans.property.ReadOnlyBooleanProperty;
import javafx.collections.FXCollections;
import javafx.event.ActionEvent;
import javafx.scene.Scene;
import javafx.scene.control.Menu;
import javafx.scene.control.MenuBar;
import javafx.scene.control.MenuItem;
import javafx.scene.control.Tab;
import javafx.scene.control.TabPane;
import javafx.scene.input.KeyCode;
import javafx.scene.input.KeyCodeCombination;
import javafx.scene.input.KeyCombination;
import javafx.scene.layout.BorderPane;
import javafx.stage.DirectoryChooser;
import javafx.stage.Stage;

public class DartEditorSample extends Application {

	private TabPane tabFolder;
	private ResourceTreeView viewer;

	static class EditorData {
		final Path path;
		final DartEditor editor;

		public EditorData(Path path, DartEditor editor) {
			this.path = path;
			this.editor = editor;
		}
	}

	@Override
	public void start(Stage primaryStage) throws Exception {
		BorderPane p = new BorderPane();
		p.setTop(createMenuBar());

		viewer = new ResourceTreeView();
		viewer.addEventHandler(ResourceEvent.openResourceEvent(), this::handleOpenResource);
		p.setLeft(viewer);

		tabFolder = new TabPane();
		p.setCenter(tabFolder);

		Scene s = new Scene(p, 800, 600);
		s.getStylesheets().add(getClass().getResource("default.css").toExternalForm());

		primaryStage.setScene(s);
		primaryStage.show();
	}

	private MenuBar createMenuBar() {
		MenuBar bar = new MenuBar();

		Menu fileMenu = new Menu("File");

		MenuItem rootDirectory = new MenuItem("Select root folder ...");
		rootDirectory.setOnAction(this::handleSelectRootFolder);

		MenuItem saveFile = new MenuItem("Save");
		saveFile.setAccelerator(new KeyCodeCombination(KeyCode.S,KeyCombination.META_DOWN));
		saveFile.setOnAction(this::handleSave);


		fileMenu.getItems().addAll(rootDirectory, saveFile);

		bar.getMenus().add(fileMenu);

		return bar;
	}

	private void handleSelectRootFolder(ActionEvent e) {
		DirectoryChooser chooser = new DirectoryChooser();
		File directory = chooser.showDialog(viewer.getScene().getWindow());
		if( directory != null ) {
			viewer.setRootDirectories(
					FXCollections.observableArrayList(ResourceItem.createObservedPath(Paths.get(directory.getAbsolutePath()))));
		}
	}

	private void handleSave(ActionEvent e) {
		Tab t = tabFolder.getSelectionModel().getSelectedItem();
		if( t != null ) {
			((EditorData)t.getUserData()).editor.save();
		}
	}

	private void handleOpenResource(ResourceEvent<ResourceItem> e) {
		e.getResourceItems()
			.stream()
			.filter( r -> r instanceof FileItem)
			.map( r -> (FileItem)r)
			.filter( r -> r.getName().endsWith(".dart"))
			.forEach(this::handle);
	}

	private void handle(FileItem item) {
		Path path = (Path) item.getNativeResourceObject();

		Tab tab = tabFolder.getTabs().stream().filter( t -> ((EditorData)t.getUserData()).path.equals(path) ).findFirst().orElseGet(() -> {
			return createAndAttachTab(path, item);
		});
		tabFolder.getSelectionModel().select(tab);
	}

	private Tab createAndAttachTab(Path path, FileItem item) {
		BorderPane p = new BorderPane();
		DartEditor editor = new DartEditor(new SourceFileInput(path, StandardCharsets.UTF_8));
		editor.initUI(p);

		ReadOnlyBooleanProperty modifiedProperty = editor.modifiedProperty();
		StringExpression titleText = Bindings.createStringBinding(() -> {
			return modifiedProperty.get() ? "*" : "";
		}, modifiedProperty).concat(item.getName());

		Tab t = new Tab();
		t.textProperty().bind(titleText);
		t.setContent(p);
		t.setUserData(new EditorData(path, editor));
		tabFolder.getTabs().add(t);
		return t;
	}

	public static void main(String[] args) {
		Application.launch(args);
	}
}

The last thing that needs to be done is to fill the “at/bestsolution/sample/code/default.css” with life:

.styled-text-area .list-view {
	-fx-background-color: white;
}

.styled-text-area .dart.dart_default {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_operator {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_bracket {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_keyword {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_keyword_1 {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_keyword_2 {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_single_line_comment {
	-styled-text-color: rgb(63, 127, 95);
}

.styled-text-area .dart.dart_multi_line_comment {
	-styled-text-color: rgb(63, 127, 95);
}

.styled-text-area .dart.dart_string {
	-styled-text-color: rgb(42, 0, 255);
}

.styled-text-area .dart.dart_string_inter {
	-styled-text-color: rgb(42, 0, 255);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_builtin_types {
	-styled-text-color: #74a567;
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_doc {
	-styled-text-color: rgb(63, 95, 191);
}

.styled-text-area .dart.dart_doc_reference {
	-styled-text-color: rgb(63, 95, 191);
	-fx-font-weight: bold;
}


by Tom Schindl at July 24, 2015 09:36 AM

A custom search provider

by Christian Pontesegger (noreply@blogger.com) at July 24, 2015 06:22 AM

In this tutorial we will implement a custom search for your RCP. To do so, we will first create the search logic and then add the UI components in a second step.
I found only a few resources on search functionality; some useful pointers can be found in the Eclipse FAQs.
To have something to search for we will look for files on the local file system. So this is our target for today:
 

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Search Results


First create a new Plug-in Project and add a class FileSearchResult that implements ISearchResult. For this you need a dependency on org.eclipse.search.
package com.codeandme.searchprovider;

import java.io.File;
import java.util.Collection;
import java.util.HashSet;

import org.eclipse.core.runtime.ListenerList;
import org.eclipse.jface.resource.ImageDescriptor;
import org.eclipse.search.ui.ISearchQuery;
import org.eclipse.search.ui.ISearchResult;
import org.eclipse.search.ui.ISearchResultListener;
import org.eclipse.search.ui.SearchResultEvent;

public class FileSearchResult implements ISearchResult {

	private final ISearchQuery fQuery;
	private final ListenerList fListeners = new ListenerList();

	private final Collection<File> fResult = new HashSet<File>();

	public FileSearchResult(ISearchQuery query) {
		fQuery = query;
	}

	@Override
	public String getLabel() {
		return fResult.size() + " file(s) found";
	}

	@Override
	public String getTooltip() {
		return "Found files in the filesystem";
	}

	@Override
	public ImageDescriptor getImageDescriptor() {
		return null;
	}

	@Override
	public ISearchQuery getQuery() {
		return fQuery;
	}

	@Override
	public void addListener(ISearchResultListener l) {
		fListeners.add(l);
	}

	@Override
	public void removeListener(ISearchResultListener l) {
		fListeners.remove(l);
	}

	private void notifyListeners(File file) {
		SearchResultEvent event = new FileSearchResultEvent(this, file);

		for (Object listener : fListeners.getListeners())
			((ISearchResultListener) listener).searchResultChanged(event);
	}

	public void addFile(File file) {
		fResult.add(file);
		notifyListeners(file);
	}
}
Nothing special here; we will use addFile() later in this tutorial to add more and more files to our result set. Whenever the result changes, we need to inform interested listeners. So let's see what FileSearchResultEvent looks like:
package com.codeandme.searchprovider;

import java.io.File;

import org.eclipse.search.ui.ISearchResult;
import org.eclipse.search.ui.SearchResultEvent;

public class FileSearchResultEvent extends SearchResultEvent {

	private final File fAddedFile;

	protected FileSearchResultEvent(ISearchResult searchResult, File addedFile) {
		super(searchResult);
		fAddedFile = addedFile;
	}

	public File getAddedFile() {
		return fAddedFile;
	}
}
Even simpler! To update results incrementally we hold a reference to the added file.

Step 2: The search query

The main logic for a search is implemented as an ISearchQuery. It has 2 jobs: do the actual search and populate the search result.
package com.codeandme.searchprovider;

import java.io.File;
import java.io.FileFilter;
import java.util.Collection;
import java.util.HashSet;

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.OperationCanceledException;
import org.eclipse.core.runtime.Status;
import org.eclipse.search.ui.ISearchQuery;
import org.eclipse.search.ui.ISearchResult;

public class FileSearchQuery implements ISearchQuery {

	private final File fRoot;
	private final String fFilter;
	private final boolean fRecursive;

	private final FileSearchResult fSearchResult;

	public FileSearchQuery(String root, String filter, boolean recursive) {
		fRoot = (root.isEmpty()) ? File.listRoots()[0] : new File(root);
		fFilter = filter;
		fRecursive = recursive;
		fSearchResult = new FileSearchResult(this);
	}

	@Override
	public IStatus run(IProgressMonitor monitor) throws OperationCanceledException {
		Collection<File> entries = new HashSet<File>();
		entries.add(fRoot);

		do {
			File entry = entries.iterator().next();
			entries.remove(entry);

			entry.listFiles(new FileFilter() {

				@Override
				public boolean accept(File pathname) {
					if ((pathname.isFile()) && (pathname.getName().contains(fFilter))) {
						// accept file
						fSearchResult.addFile(pathname);

						return true;
					}

					if ((pathname.isDirectory()) && (fRecursive))
						entries.add(pathname);

					return false;
				}
			});

		} while (!entries.isEmpty());

		return Status.OK_STATUS;
	}

	@Override
	public String getLabel() {
		return "Filesystem search";
	}

	@Override
	public boolean canRerun() {
		return true;
	}

	@Override
	public boolean canRunInBackground() {
		return true;
	}

	@Override
	public ISearchResult getSearchResult() {
		return fSearchResult;
	}
}
Our query needs 3 parameters:
  • a root object
  • a file name filter
  • a recursive flag to scan subfolders
In the constructor we create the FileSearchResult and link it to our query. Then we traverse folders and add file by file to our search result. We do not check our input parameters very thoroughly for the sake of simplicity.

The logic is implemented, so now let's add some nice UI elements.

Step 3: Search Dialog

To add a new search page to the search dialog (main menu/Search/Search...) we use the extension point org.eclipse.search.searchPages.
 
Just provide id, label and a class reference to:
package com.codeandme.searchprovider.ui;

import org.eclipse.jface.dialogs.DialogPage;
import org.eclipse.search.ui.ISearchPage;
import org.eclipse.search.ui.ISearchPageContainer;
import org.eclipse.search.ui.NewSearchUI;
import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Composite;

import com.codeandme.searchprovider.FileSearchQuery;

public class FileSearchPage extends DialogPage implements ISearchPage {

	private ISearchPageContainer fContainer;

	@Override
	public boolean performAction() {
		FileSearchQuery searchQuery = new FileSearchQuery("/home", "txt", true);
		NewSearchUI.runQueryInForeground(fContainer.getRunnableContext(), searchQuery);

		return true;
	}

	@Override
	public void setContainer(ISearchPageContainer container) {
		fContainer = container;
	}

	@Override
	public void createControl(Composite parent) {
		Composite root = new Composite(parent, SWT.NULL);

		[...]

		// need to set the root element
		setControl(root);
	}
}
I have removed the UI code from createControl() as it is not relevant here; if you are interested you can find it on github. The only important thing is to set the root control using setControl().
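Just to give an idea of what the elided part could contain, a minimal createControl() might look like the following sketch (my own illustration with a hypothetical file name filter field, using org.eclipse.swt.layout.GridLayout/GridData; this is not the code from the github repository):

@Override
public void createControl(Composite parent) {
	Composite root = new Composite(parent, SWT.NULL);
	root.setLayout(new GridLayout(2, false));

	// hypothetical input field for the file name filter
	Label label = new Label(root, SWT.NONE);
	label.setText("File name contains:");

	Text filterText = new Text(root, SWT.BORDER);
	filterText.setLayoutData(new GridData(SWT.FILL, SWT.CENTER, true, false));

	// need to set the root element
	setControl(root);
}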

performAction() starts the actual search. To do so, we create a query and run it either as a foreground or as a background job. By returning true we indicate that the search dialog shall be closed.
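As an alternative, since our query returns true from canRunInBackground(), it could also be scheduled as a background job instead of blocking the dialog (a small variation on the snippet above, not taken from the original post):

FileSearchQuery searchQuery = new FileSearchQuery("/home", "txt", true);
// schedules the query as a background job; progress is reported in the Search view
NewSearchUI.runQueryInBackground(searchQuery);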

Step 4: Display search results

To add a new page to the Search View we need another extension point: org.eclipse.search.searchResultViewPages.

Add a new viewPage with an id and label. To link this page with search results of a certain kind we provide the class of search results this page can display. Set it to our FileSearchResult type. Finally we provide a class implementation for the page:
package com.codeandme.searchprovider.ui;

import org.eclipse.search.ui.ISearchResult;
import org.eclipse.search.ui.ISearchResultListener;
import org.eclipse.search.ui.ISearchResultPage;
import org.eclipse.search.ui.ISearchResultViewPart;
import org.eclipse.search.ui.SearchResultEvent;
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Control;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Text;
import org.eclipse.ui.IActionBars;
import org.eclipse.ui.IMemento;
import org.eclipse.ui.PartInitException;
import org.eclipse.ui.part.IPageSite;

import com.codeandme.searchprovider.FileSearchResultEvent;

public class SearchResultPage implements ISearchResultPage, ISearchResultListener {

	private String fId;
	private Composite fRootControl;
	private IPageSite fSite;
	private Text ftext;

	@Override
	public Object getUIState() {
		return null;
	}

	@Override
	public void setInput(ISearchResult search, Object uiState) {
		search.addListener(this);
	}

	@Override
	public void setViewPart(ISearchResultViewPart part) {
	}

	@Override
	public void setID(String id) {
		fId = id;
	}

	@Override
	public String getID() {
		return fId;
	}

	@Override
	public String getLabel() {
		return "Filesystem Search Results";
	}

	@Override
	public IPageSite getSite() {
		return fSite;
	}

	@Override
	public void init(IPageSite site) throws PartInitException {
		fSite = site;
	}

	@Override
	public void createControl(Composite parent) {
		fRootControl = new Composite(parent, SWT.NULL);
		fRootControl.setLayout(new FillLayout(SWT.HORIZONTAL));

		ftext = new Text(fRootControl, SWT.BORDER | SWT.READ_ONLY | SWT.H_SCROLL | SWT.V_SCROLL | SWT.CANCEL | SWT.MULTI);
	}

	@Override
	public void dispose() {
		// nothing to do
	}

	@Override
	public Control getControl() {
		return fRootControl;
	}

	@Override
	public void setActionBars(IActionBars actionBars) {
	}

	@Override
	public void setFocus() {
		fRootControl.setFocus();
	}

	@Override
	public void restoreState(IMemento memento) {
		// nothing to do
	}

	@Override
	public void saveState(IMemento memento) {
		// nothing to do
	}

	@Override
	public void searchResultChanged(SearchResultEvent event) {
		if (event instanceof FileSearchResultEvent) {
			Display.getDefault().asyncExec(new Runnable() {
				@Override
				public void run() {
					String newText = ftext.getText() + "\n" + ((FileSearchResultEvent) event).getAddedFile().getAbsolutePath();
					ftext.setText(newText);
				}
			});
		}
	}
}
Lots of getters and setters, but nothing complicated. For this example we use a simple text control to display the names of found files. setInput() will be called by the framework to link this view with a search result. There we attach a listener to get notified of changes to the result.

I found one caveat while implementing: when your query code (from where we trigger searchResultChanged()) raises an exception, you do not get any output on the console as you might expect. This means that your result page simply remains empty without any notification that something went wrong. Use the debugger to single-step through your code in that case.
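One defensive option (my own sketch, not part of the original example) is to catch unexpected exceptions inside run() and turn them into an error status, so the failure at least becomes visible; the plug-in id used here is assumed to match the package name:

@Override
public IStatus run(IProgressMonitor monitor) throws OperationCanceledException {
	try {
		// ... traverse the file system and populate the search result ...
		return Status.OK_STATUS;

	} catch (RuntimeException e) {
		// surface the problem instead of silently leaving the result page empty
		return new Status(IStatus.ERROR, "com.codeandme.searchprovider", "File search failed", e);
	}
}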

by Christian Pontesegger (noreply@blogger.com) at July 24, 2015 06:22 AM

Presentation: Java EE 7 Using Eclipse

by Arun Gupta at July 24, 2015 01:56 AM

Arun Gupta explains how to do Java EE 7 development with Eclipse, leveraging the new APIs - WebSocket, Batch, JSON Processing, and Concurrency Utilities.

By Arun Gupta

by Arun Gupta at July 24, 2015 01:56 AM

A Good Thread Pool

by Eike Stepper (noreply@blogger.com) at July 23, 2015 06:38 PM

The java.util.concurrent package comes with a whole bunch of classes that can be extremely useful in concurrent Java applications. This article is about the ThreadPoolExecutor class, how it behaved unexpectedly for me and what I did to make it do what I want.

As the name suggests a thread pool is an Executor in Java. It's even an ExecutorService but that's irrelevant for understanding the fundamental behavior. The only important operational method of a thread pool is void execute(Runnable task). You pass in your task and the pool will eventually execute it on one of its worker threads. A thread pool is made up of the following components: a work queue, an internal pool of worker threads, a thread factory and a rejection handler.


When you create a thread pool you must pass in a BlockingQueue instance that will become the work queue of the thread pool. You can optionally pass in a thread factory and a rejection handler. You cannot control the implementation class of the internal worker pool but you can influence its behavior with the following important parameters:

  1. The corePoolSize defines kind of a minimum number of worker threads to keep in the internal worker pool. The reason it's not called minPoolSize is probably that directly after the creation of the thread pool the internal worker pool starts with zero worker threads. Initial workers are then created as needed but they're only ever removed from the worker pool if there are more of them than corePoolSize.
  2. The maxPoolSize defines a strict upper bound for the number of work threads in the internal worker pool.
  3. The keepAliveTime defines the time that an idle worker thread may stay in the internal worker pool if there are more than corePoolSize workers in the pool.

The Javadoc of the ThreadPoolExecutor class recommends to use the Executors.newCachedThreadPool() factory method to create a thread pool. The result is an unbounded thread pool with automatic thread reclamation. A look at the code of the factory method reveals that corePoolSize=0 and maxPoolSize=Integer.MAX_VALUE. The work queue is a SynchronousQueue, which has no internal capacity; it basically functions as a direct pipe to the next idle or newly created worker thread.
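For reference, the factory method is essentially equivalent to the following constructor call (the keep-alive of 60 seconds is what current JDKs use):

// what Executors.newCachedThreadPool() boils down to
ExecutorService pool = new ThreadPoolExecutor(
		0, Integer.MAX_VALUE,      // corePoolSize, maxPoolSize
		60L, TimeUnit.SECONDS,     // keepAliveTime for idle workers
		new SynchronousQueue<Runnable>());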

When you hammer this thread pool with lots of tasks the work queue will never grow; tasks will never be rejected because the worker pool is unbounded. Your JVM will soon become unresponsive because the pool will create thousands of worker threads!

What I really wanted is a thread pool with, let's say, maxPoolSize=100 and a work queue that temporarily keeps all the tasks that are scheduled while all of the 100 threads are busy. So I instantiated a ThreadPoolExecutor directly (without the recommended factory method), passed in corePoolSize=10, maxPoolSize=100, and a LinkedBlockingQueue to be used as the work queue. And here comes the big surprise: This thread pool never creates more than corePoolSize worker threads! Instead the work queue will grow and grow and grow. The tasks in it will always compete for the 10 core workers. Why is that?
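In code, that configuration looks roughly like this (a reconstruction of the described setup; the keep-alive value is made up):

// 10 core workers, up to 100 workers, unbounded work queue
ThreadPoolExecutor pool = new ThreadPoolExecutor(
		10, 100, 60L, TimeUnit.SECONDS,
		new LinkedBlockingQueue<Runnable>());
// surprise: this pool never grows beyond the 10 core workers,
// because the unbounded queue happily accepts every offered task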

To understand you need to know how the execute() method works. Of course it's all Javadoc'ed, but that doesn't mean it's expectation-compliant (well, I know that expectations can be subjective). The following flow diagram illustrates what the execute() method does:


There are only three different outcomes, the task can be enqueued in the work queue, a new worker thread can be created, or the task can be rejected. Three conditions are checked to determine the outcome at a specific point in time. The first condition is only relevant in the warm-up phase of the pool, but then it becomes interesting:

enqueue is always preferred over newWorker!

That means that, with an unbounded work queue, no more than corePoolSize workers will ever be created; maxPoolSize becomes completely irrelevant. Now we have seen one pool configuration that only ever creates new workers (the default) and one that only ever enqueues tasks. Between these two evils is probably a thread pool with both a bounded worker pool and a bounded work queue, but obviously such a thread pool will reject tasks when hammered enough.
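Such a doubly bounded pool would look something like this (the capacities and keep-alive are made-up numbers):

// grows to 10 core workers, then queues up to 1000 tasks,
// then grows to 100 workers, and finally rejects further tasks
ThreadPoolExecutor pool = new ThreadPoolExecutor(
		10, 100, 60L, TimeUnit.SECONDS,
		new ArrayBlockingQueue<Runnable>(1000));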

That's all not what I wanted, but wait:

I control the work queue implementation!

Peeking again at the code of the execute() method shows that the only interaction between the thread pool and the work queue here is the call workQueue.offer(task) and per contract this method returns whether it accepted the offer or not. So, the simple solution to my problem is a BlockingQueue implementation with an offer() method overridden to accept the offered task only if the worker pool contains less than maxPoolSize threads.

Subclassing LinkedBlockingQueue would do that trick but there's a small problem remaining now: The three conditions (see above) are checked in the execute() method of the thread pool without any synchronization. That means, if my work queue does not accept a task because there are still less than maxPoolSize workers allocated, the third condition is not necessarily true a nanosecond later. The task would be completely rejected from the pool rather than being enqueued. The solution to this problem is a custom rejection handler that takes the rejected task and puts it back at the beginning of the work queue. And now it becomes clear why subclassing LinkedBlockingDeque is a better alternative: It provides the needed addFirst(Runnable task) method.
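Put together, a minimal sketch of this idea could look as follows. The class names (ScalingQueue, ForceQueuePolicy) are made up for illustration and this is not the actual implementation; see the linked source code below for the real thing:

import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GoodThreadPoolSketch {

	// Work queue that refuses an offered task while the pool may still grow,
	// so that execute() creates a new worker instead of enqueueing.
	static class ScalingQueue extends LinkedBlockingDeque<Runnable> {
		private ThreadPoolExecutor executor;

		void setExecutor(ThreadPoolExecutor executor) {
			this.executor = executor;
		}

		@Override
		public boolean offer(Runnable task) {
			// accept the task only if all allowed workers are already allocated
			return executor.getPoolSize() >= executor.getMaximumPoolSize() && super.offer(task);
		}
	}

	// If execute() could not create a worker after the queue refused the task
	// (the race described above), put the task back at the head of the queue.
	static class ForceQueuePolicy implements RejectedExecutionHandler {
		@Override
		public void rejectedExecution(Runnable task, ThreadPoolExecutor executor) {
			((ScalingQueue) executor.getQueue()).addFirst(task);
		}
	}

	public static ThreadPoolExecutor create(int corePoolSize, int maxPoolSize, long keepAliveSeconds) {
		ScalingQueue queue = new ScalingQueue();
		ThreadPoolExecutor executor = new ThreadPoolExecutor(corePoolSize, maxPoolSize,
				keepAliveSeconds, TimeUnit.SECONDS, queue, new ForceQueuePolicy());
		queue.setExecutor(executor);
		return executor;
	}
}

Note that a production-ready version also has to deal with tasks rejected after shutdown() and, as mentioned below, with older Java versions.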

If you try to implement these ideas you'll likely discover a few technical complications, such as the LinkedBlockingDeque class not being available in Java 1.5. If you're interested in my concrete solution please have a look at the source code of my good thread pool. Enjoy...


by Eike Stepper (noreply@blogger.com) at July 23, 2015 06:38 PM

Eclipse Newsletter - Exploring Mars

July 23, 2015 01:48 PM

Explore the top 10 Mars features, learn about Thym, rediscover Jubula, and get up to date on Sirius 3.0.

July 23, 2015 01:48 PM

Defining source editors with a DSL

by Tom Schindl at July 23, 2015 11:02 AM

As of today it is quite cumbersome to define source editors that are built on top of the Eclipse Text Framework.

In the upcoming 2.1 release of e(fx)clipse we’ll ship a first set of components making it dead simple to define editors for e4 on JavaFX applications (adding support for Eclipse 4.x and e4 on SWT would be possible in future as well).

One of the central components of the new code editing support is a DSL that allows you to define all relevant parts of your code editor (as of 2.1 it only deals with lexical syntax highlighting).

This blog explains the definition of a source editor for Google Dart. Follow-up blog posts will make use of the definition created in this post to build a complete editor like this:

[screenshot of the Dart editor]

Overview

Let’s start from the bottom up:

The following file, stored as “dart.ldef”, defines the complete setup required for a source code editor with syntax highlighting:

package org.eclipse.fx.code.dart.text

dart {
	partitioning {
		partition __dftl_partition_content_type
		partition __dart_singlelinedoc_comment
		partition __dart_multilinedoc_comment
		partition __dart_singleline_comment
		partition __dart_multiline_comment
		partition __dart_string
		rule {
			single_line __dart_string "'" => "'"
			single_line __dart_string '"' => '"'
			single_line __dart_singlelinedoc_comment '///' => ''
      			single_line __dart_singleline_comment '//' => ''
      			multi_line __dart_multilinedoc_comment '/**' => '*/'
      			multi_line  __dart_multiline_comment '/*' => '*/'
		}
	}
	lexical_highlighting {
		rule __dftl_partition_content_type whitespace javawhitespace {
			default dart_default
			dart_operator {
				character [ ';', '.', '=', '/', '\\', '+', '-', '*', '<', '>', ':', '?', '!', ',', '|', '&', '^', '%', '~' ]
			}
			dart_bracket {
				character [ '(', ')', '{', '}', '[', ']' ]
			}
			dart_keyword {
				keywords [ 	  "break", "case", "catch", "class", "const", "continue", "default"
							, "do", "else", "enum", "extends", "false", "final", "finally", "for"
							,  "if", "in", "is", "new", "null", "rethrow", "return", "super"
							, "switch", "this", "throw", "true", "try", "var", "void", "while"
							, "with"  ]
			}
			dart_keyword_1 {
				keywords [ 	  "abstract", "as", "assert", "deferred"
							, "dynamic", "export", "external", "factory", "get"
							, "implements", "import", "library", "operator", "part", "set", "static"
							, "typedef" ]
			}
			dart_keyword_2 {
				keywords [ "async", "async*", "await", "sync*", "yield", "yield*" ]
			}
			dart_builtin_types {
				keywords [ "num", "String", "bool", "int", "double", "List", "Map" ]
			}
		}
		rule __dart_singlelinedoc_comment {
			default dart_doc
			dart_doc_reference {
				single_line "[" => "]"
			}
		}
		rule __dart_multilinedoc_comment {
			default dart_doc
			dart_doc_reference {
				single_line "[" => "]"
			}
		}
		rule __dart_singleline_comment {
			default dart_single_line_comment
		}
		rule __dart_multiline_comment {
			default dart_multi_line_comment
		}
		rule __dart_string {
			default dart_string
			dart_string_inter {
				single_line "${" => "}"
				//TODO We need a $ => IDENTIFIER_CHAR rule
			}
		}
	}
}

As you can see, the file is split into two big sections.

package org.eclipse.fx.code.dart.text

dart {
	partitioning {
		
	}
	lexical_highlighting {

	}
}
  • partitioning: this section defines how different partitions in your file can be identified. Most likely you have partitions for the code, comments, … but ultimately it is up to you
  • lexical_highlighting: this section defines how a partition is tokenized to e.g. highlight keywords, operators, …

Partitioning

partitioning {
	partition __dftl_partition_content_type
	partition __dart_singlelinedoc_comment
	partition __dart_multilinedoc_comment
	partition __dart_singleline_comment
	partition __dart_multiline_comment
	partition __dart_string
	rule {
		single_line __dart_string "'" => "'"
		single_line __dart_string '"' => '"'
		single_line __dart_singlelinedoc_comment '///' => ''
      		single_line __dart_singleline_comment '//' => ''
      		multi_line __dart_multilinedoc_comment '/**' => '*/'
      		multi_line  __dart_multiline_comment '/*' => '*/'
	}
}

The partitioning section starts with a list of all available partitions. At least the __dftl_partition_content_type is required but for dart we define:

  • __dftl_partition_content_type: This is probably the most important partition because this is where all your code goes:
    class Rectangle {
      num left;   
      num top; 
      num width; 
      num height; 
      
      num get right             => left + width;
          set right(num value)  => left = value - width;
      num get bottom            => top + height;
          set bottom(num value) => top = value - height;
    }
    
  • __dart_singlelinedoc_comment: A single line documentary comment
    /// This is single line doc [aNumber]
    printNumber(num aNumber) {}
    
  • __dart_multilinedoc_comment: A multi line documentary comment
    /**
     * A multi line document comment [aNumber]
     */
    printNumber(num aNumber) {}
    
  • __dart_singleline_comment: A single line comment
    var a = 12; // Single line comment
    
  • __dart_multiline_comment: A multi line comment
    /*
     * A multi line comment
     */
    var a = 12;
    
  • __dart_string:
    var a = 0;
    var s1 = 'A string ${a} with interpolation';
    var s2 = "A string ${a} with interpolation";
    

Once the list of partitions is defined we need to define the rules used to identify the partitions. As of now we support 2 rule types:

  • Single line rule: rule starts if start sequence is detected and ends with a line break or if the end sequence is found
    single_line __dart_singleline_comment '//' => ''
    
  • Multi line rule: rule starts if start sequence is detected and rule ends with the end of file or if the end sequence is found
    multi_line  __dart_multiline_comment '/*' => '*/'
    

Lexical highlighting

lexical_highlighting {
	rule __dftl_partition_content_type whitespace javawhitespace {
		default dart_default
		dart_operator {
			character [ ';', '.', '=', '/', '\\', '+', '-', '*', '<', '>', ':', '?', '!', ',', '|', '&', '^', '%', '~' ]
		}
		dart_bracket {
			character [ '(', ')', '{', '}', '[', ']' ]
		}
		dart_keyword {
			keywords [ 
				"break", "case", "catch", "class", "const", "continue", "default"
				, "do", "else", "enum", "extends", "false", "final", "finally", "for"
				,  "if", "in", "is", "new", "null", "rethrow", "return", "super"
				, "switch", "this", "throw", "true", "try", "var", "void", "while"
				, "with"  ]
		}
		dart_keyword_1 {
			keywords [ 	  
				"abstract", "as", "assert", "deferred"
				, "dynamic", "export", "external", "factory", "get"
				, "implements", "import", "library", "operator", "part", "set", "static"
				, "typedef" ]
			}
		dart_keyword_2 {
			keywords [ "async", "async*", "await", "sync*", "yield", "yield*" ]
		}
		dart_builtin_types {
			keywords [ "num", "String", "bool", "int", "double", "List", "Map" ]
		}
	}
	rule __dart_singlelinedoc_comment {
		default dart_doc
		dart_doc_reference {
			single_line "[" => "]"
		}
	}
	rule __dart_multilinedoc_comment {
		default dart_doc
		dart_doc_reference {
			single_line "[" => "]"
		}
	}
	rule __dart_singleline_comment {
		default dart_single_line_comment
	}
	rule __dart_multiline_comment {
		default dart_multi_line_comment
	}
	rule __dart_string {
		default dart_string
		dart_string_inter {
			single_line "${" => "}"
			//TODO We need a $ => IDENTIFIER_CHAR rule
		}
	}
}

As you can see, for each partition from above we define how it is split into tokens (e.g. dart_default, dart_keyword, …) so that we can color them differently (e.g. keywords, …).

We currently support the following tokenizer rules:

  • character: This rule allows you to define single-value tokens like operators, block definitions, …
    dart_bracket {
    	character [ '(', ')', '{', '}', '[', ']' ]
    }
    
  • keywords: This rule allows you to define a list of keywords
    dart_keyword {
    	keywords [ "break", "case", "catch", "class", "const", "continue", "default", ... ]
    }
    
  • single_line: This rule starts with the start sequence and ends with a new line or if the end sequence is matched
    dart_doc_reference {
    	single_line "[" => "]"
    }
    
  • multi_line: This rule starts with the start sequence and ends with an EOF or if the end sequence is matched. We don’t require this rule for dart

From the token to the colored string

Now that we have defined how we want our source code to be tokenized, we need the final step: defining what a token like “dart_keyword” means in the UI. For JavaFX we simply map the token names to JavaFX CSS class selectors.

.styled-text-area .dart.dart_default {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_operator {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_bracket {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_keyword {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_keyword_1 {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_keyword_2 {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_single_line_comment {
	-styled-text-color: rgb(63, 127, 95);
}

.styled-text-area .dart.dart_multi_line_comment {
	-styled-text-color: rgb(63, 127, 95);
}

.styled-text-area .dart.dart_string {
	-styled-text-color: rgb(42, 0, 255);
}

.styled-text-area .dart.dart_string_inter {
	-styled-text-color: rgb(42, 0, 255);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_builtin_types {
	-styled-text-color: #74a567;
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_doc {
	-styled-text-color: rgb(63, 95, 191);
}

.styled-text-area .dart.dart_doc_reference {
	-styled-text-color: rgb(63, 95, 191);
	-fx-font-weight: bold;
}

The rule for making up the selector is trivial. As an example, let's look at:

.styled-text-area .dart.dart_keyword {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}
...
dart {
	lexical_highlighting {
		rule __dftl_partition_content_type whitespace javawhitespace {
			dart_keyword {
				...
			}
		}
	}
}
...
  • .styled-text-area: Selector to narrow the window where the real token selector is applicable
  • .dart.dart_keyword: Selector made up from the language name (dart) and the token name (dart_keyword)


by Tom Schindl at July 23, 2015 11:02 AM

Vert.x Application Configuration

by cescoffier at July 20, 2015 12:00 AM

Previously in ‘Introduction to Vert.x’

In the previous post, we developed a very simple Vert.x 3 application and saw how this application can be tested, packaged and executed. That was nice, wasn't it? Well, ok, that was only the beginning. In this post, we are going to enhance our application to support external configuration.

So just to remind you, we have an application starting an HTTP server on port 8080 and replying with a polite “Hello” message to all HTTP requests. The previous code is available here. The code developed in this post is in the post-2 branch.

So, why do we need configuration?

That’s a good question. The application works right now, but well, let’s say you want to deploy it on a machine where the port 8080 is already taken. We would need to change the port in the application code and in the test, just for this machine. That would be sad. Fortunately, Vert.x applications are configurable.

Vert.x configurations use the JSON format, so don't expect anything complicated. They can be passed to a verticle either from the command line or using an API. Let's have a look.

No ‘8080’ anymore

The first step is to modify the io.vertx.blog.first.MyFirstVerticle class to not bind to the port 8080, but to read it from the configuration:

public void start(Future fut) {
  vertx
      .createHttpServer()
      .requestHandler(r -> {
        r.response().end("<h1>Hello from my first " +
            "Vert.x 3 application</h1>");
      })
      .listen(
          // Retrieve the port from the configuration,
          // default to 8080.
          config().getInteger("http.port", 8080),
          result -> {
            if (result.succeeded()) {
              fut.complete();
            } else {
              fut.fail(result.cause());
            }
          }
      );
}

So, the only difference with the previous version is config().getInteger("http.port", 8080). Here, our code now requests the configuration and checks whether the http.port property is set. If not, the port 8080 is used as a fall-back. The retrieved configuration is a JsonObject.

As we are using the port 8080 by default, you can still package our application and run it as before:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

Simple, right?

API-based configuration - Random port for the tests

Now that the application is configurable, let’s try to provide a configuration. In our test, we are going to configure our application to use the port 8081. So, previously we were deploying our verticle with:

vertx.deployVerticle(MyFirstVerticle.class.getName(), context.asyncAssertSuccess());

Let’s now pass some deployment options:

port = 8081;
DeploymentOptions options = new DeploymentOptions()
    .setConfig(new JsonObject().put("http.port", port)
);
vertx.deployVerticle(MyFirstVerticle.class.getName(), options, context.asyncAssertSuccess());

The DeploymentOptions object lets us customize various parameters. In particular, it lets us inject the JsonObject retrieved by the verticle when using the config() method.

Obviously, the test connecting to the server needs to be slightly modified to use the right port (port is a field):

vertx.createHttpClient().getNow(port, "localhost", "/", response -> {
  response.handler(body -> {
    context.assertTrue(body.toString().contains("Hello"));
    async.complete();
  });
});

Ok, well, this does not really fix our issue. What happens when the port 8081 is used too? Let's now pick a random port:

ServerSocket socket = new ServerSocket(0);
port = socket.getLocalPort();
socket.close();

DeploymentOptions options = new DeploymentOptions()
    .setConfig(new JsonObject().put("http.port", port)
    );

vertx.deployVerticle(MyFirstVerticle.class.getName(), options, context.asyncAssertSuccess());

So, the idea is very simple. We open a server socket that picks a random port (that's why we pass 0 as the parameter). We retrieve the chosen port and close the socket. Be aware that this method is not perfect and may fail if the picked port gets used between the close() call and the start of our HTTP server. However, it works fine in the vast majority of cases.

With this in place, our tests now use a random port. Execute them with:

mvn clean test

External configuration - Let’s run on another port

Ok, well, a random port is not what we want in production. Can you imagine the face of your production team if you told them that your application picks a random port? It could actually be funny, but we should never mess with the production team.

So for the actual execution of your application, let's pass the configuration in an external file. The configuration is stored in a JSON file.

Create the src/main/conf/my-application-conf.json with the following content:

{
  "http.port" : 8082
}

And now, to use this configuration just launch your application with:

java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar -conf src/main/conf/my-application-conf.json

Open a browser on http://localhost:8082, and here it is!

How does that work? Remember, our fat jar uses the Starter class (provided by Vert.x) to launch our application. This class reads the -conf parameter and creates the corresponding deployment options when deploying our verticle.

Conclusion

After having developed your first Vert.x application, we have now seen how to make this application configurable, without adding any complexity to it. Next time we are going to see how we can use vertx-web to develop a small application serving static pages and a REST API. A bit fancier, but still very simple.

Happy Coding & Stay Tuned!


by cescoffier at July 20, 2015 12:00 AM

Eclipse, the IDE for IoT

by Doug Schaefer at July 19, 2015 06:00 PM

One of my hobbies recently has been playing with my Arduino Uno. I just love how accessible microcontrollers have become along with all the little sensors and LEDs and other hardware components. It’s so easy now to make your own “smart” electronics project. Lots of tutorials on the Web and a handful of on-line electronic shops to help you get started.

What we’re also starting to see is cheap wifi chips that you can add to your project. Not only that, but chips that have the microcontroller and wifi on the same silicon and SDKs to write programs that hook up your sensors and LEDs to the internet. Anyone can jump on the Internet of Things bandwagon.

But what is the Internet of Things that everyone's talking about? My definition is more than just accessible hardware. It includes accessible web servers and web clients as well. I have a server I pay $5/month for, and it was dead easy to set up. It's small but it does what I need for now. Web development is exploding with easy-to-use frameworks, and mobile continues to provide open, or at least freely accessible, tools.

And that brings me back to Eclipse. I’m using CDT for my devices, especially as I work to make the Arduino C++ plug-ins mature enough for others to use and then add support for my other devices, the ESP8266 and Raspberry Pi. I’ve written web servers in both node.js and vert.x. The Nodeclipse plugins have helped with node, and vert.x is just plain Java and I use m2e to manage my Maven pom files and builds. And client side, well, Eclipse needs a bit of work to better support the newer JavaScript languages like React.js’s JSX and Angular 2’s TypeScript. And I’m happy to see the Andmore project keeping the Android plug-ins alive. And, of course, I have the BB10 plug-ins we’ve built as part of Momentics so I can program my phone.

That's why I love Eclipse. I can build my entire IoT stack from device to client with the same IDE in the same workspace. I can debug the entire stack at once. We should also have the capability to navigate code from the client to the device as well. What line of C++ code in my device software triggered that React component to light up in my browser? With the great static analysis that an integrated development environment can bring you, we should be able to see that.

Eclipse is uniquely positioned to be the IDE for IoT. It’s not only the technology that brings all these environments together, it’s the community. What other community has the diversity of device developers, web developers and mobile developers all working together on a common tools platform? That’s what makes Eclipse an exciting community to be a part of.


by Doug Schaefer at July 19, 2015 06:00 PM

Presentation: Building Business UIs with EMF Forms

by Maximilian Koegel at July 17, 2015 09:41 PM

Maximilian Koegel introduces declarative UI modeling, the EMF Forms framework and its tooling to create view models, sharing from his experience applying the concept to commercial projects.

By Maximilian Koegel

by Maximilian Koegel at July 17, 2015 09:41 PM

Getting started with JSONForms

by Maximilian Koegel and Jonas Helming at July 17, 2015 08:03 AM

JSONForms is an AngularJS-based framework that simplifies the creation of forms for data entry and editing in web applications. It allows you to declaratively define the data and layout of a form and to embed the form into your HTML with one simple <jsonforms> tag. If you would like to know more about JSONForms, here is an introduction and here is an explanation of its core principles. Also, the JSONForms homepage is a good starting point.

This time we will be more hands-on: We’ll demonstrate how JSONForms can be integrated into an existing AngularJS application and how it eases the development process of web forms.

 

We'll create a simple form for a user based on a JSON schema and a custom UI schema. The rendered result will be bound to the underlying model by making use of Angular's two-way databinding. At the end of this tutorial we'll have a form describing the basic properties of a user:

[screenshot of the generated user form]

We’ll also change the UI schema to demonstrate the benefits of the declarative approach.

So, let’s get started. First of all, please clone this github repository:

https://github.com/edgarmueller/jsonforms-seed.git

This repository contains a basic JSONForms project template to get you started. It contains an index.html, CSS stylesheets as well as an app.js file containing the application logic. The index.html specifies all relevant dependencies as well as some boilerplate HTML. To retrieve all dependencies we'll use Bower. If you don't have Bower installed yet (or have never heard of it), please follow these instructions. Then navigate to the cloned repository and execute the following command from within your shell:

bower install

This will install all relevant dependencies.

Now open the js/app.js file. You'll see that the file already contains a predefined controller called MyController with a schema describing a user. In this example we'll use a stripped-down version of the schema we outlined last time, where a user has only four properties: name, age, gender and birth date. To extend this project template with forms that allow us to show and edit instances of users, we'll add the UI schema describing our form to the controller and bind it to the scope. You can just copy and paste the UI schema into the controller for now. Writing schemas by hand shouldn't happen very often, since we will provide tooling for creating UI schemas, but it is beneficial to know that UI schemas are just regular JSON.

app.controller('MyController', ['$scope', function($scope) {
 
// [..]
 
  $scope.uiSchema = {
    "elements": [
      {
        "type": "HorizontalLayout",
        "elements": [
          {
            "type": "VerticalLayout",
            "elements": [
              {
                "type": "Control",
                "label": "Name",
                "scope": {
                  "$ref": "#/properties/name"
                }
              },
              {
                "type": "Control",
                "label": "Age",
                "scope": {
                  "$ref": "#/properties/age"
                }
              }
            ]
          },
          {
            "type": "VerticalLayout",
            "elements": [
              {
                "type": "Control",
                "label": "Height",
                "scope": {
                  "$ref": "#/properties/height"
                }
              },
              {
                "type": "Control",
                "label": "Gender",
                "scope": {
                  "$ref": "#/properties/gender"
                }
              }
            ]
          }
        ]
      }
    ]
  };
}]);

The layout above can be best illustrated with this image:

[illustration of the horizontal and vertical layout nesting]

As one might expect, the VerticalLayout lays out all its children vertically while the HorizontalLayout orders its children horizontally.

With the UI schema in place, all that is left to do is to wire things up. This happens via a custom directive provided by JSONForms. Open the index.html and replace the TODO comment with this line:

<jsonforms schema="schema" ui-schema="uiSchema" data="data"/>

The schema and ui-schema attributes specify the (data) schema and the UI schema used to generate the form, respectively. The data attribute specifies the JSON instance that should be bound to the generated form. Note that JSONForms makes use of Angular two-way databinding for the data attribute, hence the underlying model will change while editing the form.

Also, you might have noticed the jsf class attribute in the div that wraps the jsonforms element in the template. Setting this CSS class (which is part of jsonforms.css) is crucial for every JSONForms application since it is needed to lay out the generated form.

Once the jsonforms directive has been added, you can already run this example and try out the generated form. Open the index.html in a browser and you should see the following page. You can make any changes within the form and the actual data, which is visualized as a JSON object, will update accordingly.

 

[screenshot of the generated user form next to the JSON data]

If we now want to rearrange the form, for instance, to align all elements vertically, we can do so by changing a single line in the UI schema. Change the type property of the top-level element from HorizontalLayout to VerticalLayout and you'll see the form pictured beneath.

[screenshot of the form with all elements aligned vertically]

This should give you a good impression of why one would like to use JSONForms: Changes to the UI are easy to make and HTML templates don't have to be touched. Also, once tooling support for creating UI schemas is available, writing complex forms will be a lot less time-consuming.

You are encouraged to play around with the UI schema and move elements around. We are looking forward to receiving your feedback!
In the next blog post, we’ll have a look at how JSONForms can be integrated with REST services providing the data for your forms.

This tutorial is also available on the JSONForms webpage, where you can find more information on JSONForms. Play on and stay tuned!

Guest Blog Post
Guest Author: Edgar Müller



Tagged with emf, emfforms, javascript, JSON, JSONForms


by Maximilian Koegel and Jonas Helming at July 17, 2015 08:03 AM