OpenChrom 1.0.0 “Aston” GA

by eselmeister at August 03, 2015 11:52 AM

I proudly announce the official release of OpenChrom 1.0.0 “Aston”.
Some new data converters and a lot of improvements have been added.

https://www.openchrom.net/download
https://en.wikipedia.org/wiki/OpenChrom


by eselmeister at August 03, 2015 11:52 AM

Meet EGerrit

by Maximilian Koegel and Jonas Helming at August 03, 2015 10:32 AM

Gerrit is not only the first name of the Dutch computer scientist Gerrit Blaauw, who was a key engineer behind the IBM System/360 project in the early 1960s; Gerrit is also the name of one of the most popular code review tools. Just as Gerrit Blaauw made the successful case for the 8-bit byte (as opposed to the prevalent 6-bit units at that time), Gerrit, the code review tool, established code review as a means of efficient collaborative software development at large scale and significantly changed the way software developers collaborate in many major open-source projects, such as Android, LibreOffice, and Eclipse. Gerrit closely integrates with Git and provides a web-based user interface. As we developers like to use top-notch IDEs, such as Eclipse, the integration of Gerrit with Eclipse would finally allow us to stop switching back and forth between Eclipse and a web browser showing the Gerrit review we are currently working on. Therefore, in the spirit of EGit, which addresses the seamless integration of Git with the Eclipse IDE, EGerrit aims to bring the benefits of supporting the entire Gerrit review workflow within Eclipse. EGerrit is currently in the Project Proposal Phase and is just about to be born as a new member of the Eclipse Technology Project family; reason enough to have a look at the intents of EGerrit and investigate what to expect.

The Gerrit Code Review Workflow

Before we dig into EGerrit, however, we recap the main features and the workflow of Gerrit. If you are a seasoned Gerrit user though, you may safely skip this section.

Gerrit is built around a shared Git repository and manages each contribution to this shared Git repository as a review of its own. A review goes through the following states: a new contribution is in the state “needs review” until all necessary reviewers’ approvals are present, which typically means at least one review confirming that the contribution is approved; once they are present, its state changes to “ready to submit” and it can now be merged into the development branch. For improved traceability, the information of already merged reviews, including their associated discussions and revisions, remains accessible in Gerrit, but the review is closed with the state “merged”. If the review decision, however, indicates that the contribution should not be merged, reviewers (or the authoring contributor) may change the review’s state to “abandoned”.

Figure 1: Gerrit’s web interface showing an open review with conflicts and related changes

In Gerrit, the unit of a review is always a single Git commit. This leverages a powerful mechanism of Git for code review, because the developer may thereby easily group changes into logical units by simply putting them into consecutive commits. As each review concerns exactly one commit, review dependencies can also be derived from the dependencies of the respective Git commits: if an open review exists for a commit and the same commit is the ancestor of a new commit that is pushed for review, Gerrit will mark the new review as dependent on the review of its ancestor commit (cf. Related Changes in Figure 1). Gerrit relies on Git also to handle another important mechanism: if the commit of a review is modified (using git commit --amend), for instance because of reviewer comments, all depending reviews are obviously affected. Therefore, Gerrit may rebase the commits of the depending reviews on top of the changed ancestor commit with one click. If the rebase cannot be done automatically due to conflicts, Gerrit will mark the review as conflicting (cf. Cannot Merge in Figure 1).

Figure 2: The Gerrit Code Review Workflow

Figure 2 illustrates the basic code review workflow of Gerrit. When developers wish to make contributions to the shared Git repository, they first clone the Git repository ((1) in Figure 2), or pull the latest version of the code base if they have cloned it already, and start their work by applying changes.

Once the changes are ready to be shared and reviewed, they are committed to the local Git repository ((2) in Figure 2), typically on top of a new branch dedicated to the respective feature to be contributed (featureX).

To publish a commit for review, the developer simply pushes the commit to the Gerrit server using the standard means of Git; that is, git push <gerrit-url/gerrit-remote-name> HEAD:refs/for/<target-branch> ((3) in Figure 2). Gerrit acts as a common Git remote repository. However, in contrast to a common Git repository, Gerrit opens a new review for each pushed commit with the initial state “needs review”. Optionally, the developer can add specific reviewers to the opened review explicitly.

Reviewers may now start their work either by going through the changes in the diffs shown in the web interface of Gerrit or by pulling the commit to be reviewed into their local repository ((4) in Figure 2). To this end, Gerrit offers dedicated Git “refs” for each review and each revision within a review. Thus, the changes of a review may be fetched using git fetch <gerrit-url/gerrit-remote-name> refs/changes/<change-id>/<revision-id>.

Figure 3: The diff viewer with commenting capabilities of the Gerrit web interface

Once finished with the code review, reviewers may provide an overall rating ((5) in Figure 2), as well as add comments on lines in changed files using the web interface as shown in Figure 3.

As soon as the review is uploaded, the original developer is notified about the review results. Depending on the decision, the developer may happily enjoy the positive review or, and this is the more common case, improve his or her contribution by addressing the review comments. Once the developer is done, the changed files are amended to the original commit and again pushed to Gerrit, which will detect that this is a revision of an ongoing review (based on a change id in the commit message) and the process (steps (3) to (6)) starts over again.

Gerrit offers another neat feature: it provides specific hooks that, for instance, allow starting the build and automated tests of the pushed commit on a continuous integration server ((7) in Figure 2). The continuous integration server may act as an automatic reviewer and add the results of the build to the review. If the build and the automated tests succeed, it adds a +1 Verified rating to the review.

Finally, if all necessary reviewers’ approvals are added to the review, which is usually configured to be at least one +2 code review and no -2 code review, Gerrit enables merging the commit into the development branch with a single click ((8) in Figure 2).

Why Integrate Gerrit with Eclipse? Why EGerrit?

While the web interface of Gerrit is simple and straightforward to use, there are several reasons why a tight integration with Eclipse makes the reviewers’ and contributors’ work more enjoyable. To begin with, there simply is no better way of reading and browsing through code than using the top-notch IDE we are all used to working with every day. Syntax highlighting, code navigation, such as jumping to classes or method declarations, problem markers, and other features of the Eclipse IDE are as valuable for code review as they are for software development itself. Furthermore, many reviewers’ tasks, such as investigating the effects of a change, also require stepping through the code with a debugger or running ad-hoc tests. All these features are available in the Eclipse IDE but not in the web interface of Gerrit. Needless to say, switching back and forth between the Eclipse IDE and the Gerrit web interface to, for instance, add line comments to a review interrupts the workflow and makes the reviewing process less efficient. EGerrit is hence a natural complement of the Gerrit web interface and aims at providing a more convenient way of performing reviews for users of the Eclipse IDE.

Some of the functionality of Gerrit has already been integrated with the Eclipse IDE in the Mylyn Gerrit connector as part of the Mylyn top-level project. So why would starting another project with partly overlapping goals be a good idea? First of all, diversity and the freedom of choice are what make open source unique and awesome. Mylyn is great, and the Gerrit integration within the Mylyn reviews sub-project offers very useful features. However, the major goal of Mylyn reviews is to provide generic means for integrating code review systems at some point. In contrast, the primary goal of EGerrit is to achieve feature parity with the Gerrit web interface. While generality is a good thing, implementing an outstanding integration of Gerrit within the Eclipse IDE is challenging enough on its own without having to worry about a generic framework and a generic workflow. Ultimately, feature parity with Gerrit and providing a generic review framework are competing, incompatible goals. Following the same spirit as EGit for integrating Git, EGerrit is a dedicated project with a clear focus on providing a high-quality integration of Gerrit and only Gerrit, including its specific interfaces, protocols, and workflows, which made Gerrit as popular as it is right now.

EGerrit: What to expect

EGerrit intends to provide feature parity with the Gerrit web front-end, supporting Gerrit version 2.9 and higher. It targets novice Gerrit users who are seasoned Eclipse users, as well as experienced Gerrit users who are new to Eclipse. EGerrit focuses on the developers’ needs in their day-to-day code reviewing tasks and aims at increasing the developers’ productivity when performing code reviews and when receiving feedback from ongoing reviews as a contributor who pushed a change. Gerrit’s administrative tasks, such as user and project administration, are left out for now, as integrating these tasks with Eclipse would bring only limited added value in comparison to performing them in Gerrit’s web interface.

Figure 4: Mockups of the EGerrit user interface

From a user perspective, EGerrit consists of a dedicated Gerrit change viewer and a Gerrit dashboard view (cf. Figure 4). The dashboard view is a direct integration of the dashboard available in Gerrit’s web interface, which gives a great overview of reviews of interest. The EGerrit dashboard will also allow performing queries over reviews with the same query syntax as in the Gerrit web interface; e.g., status:open (reviewer:self OR owner:self) will give you the list of open reviews to which you are either assigned as a reviewer or which you have pushed for review yourself as a contributor. Obviously, double-clicking a review in the EGerrit dashboard will open up EGerrit’s change viewer.

The change viewer features four tabs: Summary, Message, Files, and History. The Summary tab provides all relevant meta information of a change under review at a glance, such as the state of the review (Needs Code-Review), the project and the target branch, the reviewers and their votes, as well as related and conflicting changes. Depending on the state of the review, the Gerrit change viewer enables users to submit, abandon, and rebase the change, as well as to checkout, pull, or cherry-pick the change into the local workspace with one click. Wherever necessary, these actions will be integrated closely with EGit. The Message tab shows the commit message and potentially allows you to modify it directly.

Figure 5: The diff viewer of EGerrit

The Files tab will probably be one of the most important tabs, as it presents the list of files that have been changed with this commit. From there you can open up the diff viewer to investigate the fine-grained differences of each file. The EGerrit team plans to spend significant effort on providing a diff viewer within the Eclipse IDE that provides several of the popular features available in the Gerrit web interface, such as the line-by-line way of showing changed lines: instead of drawing connector lines between the left- and right-hand side text blocks, as the diff viewer in the Eclipse IDE currently does, corresponding lines will be aligned at the same horizontal position, and blank lines will be shown if a line has been added or deleted on the opposite side (cf. Figure 5); this has the effect of a cleaner look in case of many changes. The diff viewer will also be able to show the differences among versions, such as among different revisions of the same change, as well as to diff against the current workspace version. The functionality of adding and viewing line comments will also be integrated into EGerrit’s diff viewer. Moreover, it will provide different means of navigating through the diffs, such as file-by-file, change-by-change, and comment-by-comment. Finally, the History tab will list all the updates of a review, such as new comments and ratings, new revisions (called patch sets in Gerrit), etc.

Another aspect that will be in the focus of the EGerrit team is performance. A clever design of the data exchange protocol between the EGerrit client and the Gerrit server is key to efficiently working with Gerrit in Eclipse. Therefore, different strategies will be investigated to figure out the optimal amount of data to be pre-fetched and cached to ensure a fluent and responsive user interface, while at the same time avoiding a lack of synchronicity between the data cached in the client and the data stored on the Gerrit server.

Outlook: Model Review

We are also very happy to announce that EGerrit will be the home of the review capabilities for EMF models that we plan to develop as part of the joint Collaborative Modeling Initiative. The model review extension of EGerrit will allow users to use Gerrit seamlessly also for changes that involve EMF-based models.

Figure 6: Initial prototype for model review

That includes reviewing model changes with EMF Compare instead of the text diff viewer, as well as annotating changed models with model-based comments that are linked to model elements instead of lines (cf. Figure 6).

Figure 7: Initial prototype for UML model review

Besides model-based comments that can be added and viewed in the generic tree-based editor of EMF, we also plan to provide extensions for showing and editing comments in GMF and Papyrus diagram viewers. To this end, these extensions will also be able to provide dedicated notations, which specify how the comment shall be visualized. For instance, in Papyrus UML diagrams, we plan to reuse the notation of UML comments for review comments to ensure a familiar look and feel. Needless to say, the information on the comments and their positions on the canvas will be saved alongside the review and not inside the model under review (cf. Figure 7).

Conclusions

In summary, there is a lot to look forward to as EGerrit evolves and grows up to be a mature part of the Eclipse tooling family. Given the popularity of Gerrit in general and in the Eclipse community in particular, dedicated and optimized support for Gerrit is certainly something that may have a significant impact and a large number of potential users. As always, it is hard to say when we will see a first stable release. The EGerrit team announced the second quarter of 2015 for an initial version. We are definitely looking forward to the results, will continue our work on integrating model review with EGerrit early on, and will keep you posted on the overall progress. Keep an eye on the Collaborative Modeling Initiative and the EGerrit project proposal web page.

Guest Blog Post
Guest Author: Philip Langer




by Maximilian Koegel and Jonas Helming at August 03, 2015 10:32 AM

Unit and Integration Tests

by cescoffier at August 03, 2015 12:00 AM

Previously in “introduction to vert.x”

Let’s refresh our minds about what we have developed so far in the introduction to vert.x series. In the first post, we developed a very simple Vert.x 3 application and saw how this application can be tested, packaged and executed. In the second post, we saw how this application became configurable and how we can use a random port in tests and another configurable port in production. Finally, the previous post showed how to use vertx-web and how to implement a small REST API. However, we forgot an important task: we didn’t test the API. In this post we will increase the confidence we have in this application by implementing unit and integration tests.

The code of this post is available in the post-4 branch of the project. The starting point, however, is the code available in the post-3 branch.

Tests, Tests, Tests…

This post is mainly about tests. We distinguish two types of tests: unit tests and integration tests. Both are equally important, but they have different focuses. Unit tests ensure that one component of your application, generally a class in the Java world, behaves as expected. The application is not tested as a whole, but piece by piece. Integration tests are more black box, in the sense that the application is started and generally tested externally.

In this post we are going to start with some more unit tests as a warm-up session and then focus on integration tests. If you have already implemented integration tests before, you may be a bit scared, and that makes sense. But don’t worry, with Vert.x there are no hidden surprises.

Warmup: Some more unit tests

Let’s start slowly. Remember, in the first post we implemented a unit test with vertx-unit. The test we wrote was dead simple:

  1. we started the application before the test
  2. we checked that it replies “Hello”

Just to refresh your mind, let’s have a look at the code:

@Before
  public void setUp(TestContext context) throws IOException {
    vertx = Vertx.vertx();
    ServerSocket socket = new ServerSocket(0);
    port = socket.getLocalPort();
    socket.close();
    DeploymentOptions options = new DeploymentOptions()
        .setConfig(new JsonObject().put("http.port", port)
        );
    vertx.deployVerticle(MyFirstVerticle.class.getName(), options, context.asyncAssertSuccess());
  }

The setUp method is invoked before each test (as instructed by the @Before annotation). It first creates a new instance of Vert.x. Then, it gets a free port and deploys our verticle with the right configuration. Thanks to context.asyncAssertSuccess(), it waits until the verticle has been deployed successfully.

The tearDown is straightforward and just closes the Vert.x instance. It automatically un-deploys the verticles:

@After
  public void tearDown(TestContext context) {
    vertx.close(context.asyncAssertSuccess());
  }

Finally, our single test is:

@Test
  public void testMyApplication(TestContext context) {
    final Async async = context.async();
    vertx.createHttpClient().getNow(port, "localhost", "/", response -> {
      response.handler(body -> {
        context.assertTrue(body.toString().contains("Hello"));
        async.complete();
      });
    });
   }

It is only checking that the application replies “Hello” when we emit an HTTP request on `/`.

Let’s now try to implement some unit tests checking that our web application and the REST API behave as expected. Let’s start by checking that the `index.html` page is correctly served. This test is very similar to the previous one:

@Test
  public void checkThatTheIndexPageIsServed(TestContext context) {
    Async async = context.async();
    vertx.createHttpClient().getNow(port, "localhost", "/assets/index.html", response -> {
      context.assertEquals(response.statusCode(), 200);
      context.assertEquals(response.headers().get("content-type"), "text/html");
      response.bodyHandler(body -> {
        context.assertTrue(body.toString().contains("My Whisky Collection"));
        async.complete();
      });
    });
  }

We retrieve the `index.html` page and check:

  1. it’s there (status code 200)
  2. it’s an HTML page (content type set to "text/html")
  3. it has the right title ("My Whisky Collection")

Retrieving content

As you can see, we can test the status code and the headers directly on the HTTP response, but to ensure that the body is right, we need to retrieve it. This is done with a body handler that receives the complete body as a parameter. Once the last check is made, we release the `async` by calling `complete`.

Ok, great, but this actually does not test our REST API. Let’s ensure that we can add a bottle to the collection. Unlike the previous tests, this one is using `post` to _post_ data to the server:

@Test
  public void checkThatWeCanAdd(TestContext context) {
    Async async = context.async();
    final String json = Json.encodePrettily(new Whisky("Jameson", "Ireland"));
    final String length = Integer.toString(json.length());
    vertx.createHttpClient().post(port, "localhost", "/api/whiskies")
        .putHeader("content-type", "application/json")
        .putHeader("content-length", length)
        .handler(response -> {
          context.assertEquals(response.statusCode(), 201);
          context.assertTrue(response.headers().get("content-type").contains("application/json"));
          response.bodyHandler(body -> {
            final Whisky whisky = Json.decodeValue(body.toString(), Whisky.class);
            context.assertEquals(whisky.getName(), "Jameson");
            context.assertEquals(whisky.getOrigin(), "Ireland");
            context.assertNotNull(whisky.getId());
            async.complete();
          });
        })
        .write(json)
        .end();
  }

First we create the content we want to add. The server consumes JSON data, so we need a JSON string. You can either write your JSON document manually or use the Vert.x method (Json.encodePrettily) as done here. Once we have the content, we create a post request. We need to configure some headers to be correctly read by the server. First, we say that we are sending JSON data, and we also set the content length. We also attach a response handler very close to the checks made in the previous test. Notice that we can rebuild our object from the JSON document sent by the server using the Json.decodeValue method. It’s very convenient as it avoids lots of boilerplate code. At this point the request has not been emitted; we need to write the data and call the end() method. This is done using .write(json).end();.

The order of the methods is important. You cannot write data if you don’t have a response handler configured. Finally, don’t forget to call end.

So, let’s try this. You can run the test using:

mvn clean test
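
In the same style, a further unit test could check that requesting an unknown bottle yields a 404. This is only a sketch: the 404 status for a missing resource matches the behavior exercised by the integration tests later in this post, and the id 9999 is simply assumed not to exist:

@Test
  public void checkThatUnknownWhiskyReturns404(TestContext context) {
    Async async = context.async();
    // 9999 is assumed not to be the id of any existing bottle
    vertx.createHttpClient().getNow(port, "localhost", "/api/whiskies/9999", response -> {
      context.assertEquals(response.statusCode(), 404);
      async.complete();
    });
  }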

We could continue writing more unit tests like that, but it could become quite complex. Let’s see how we could continue our tests using integration tests.

IT hurts

Well, I think we need to make that clear: integration testing hurts. If you have experience in this area, can you remember how long it took to set everything up correctly? I get new white hairs just thinking about it. Why are integration tests more complicated? It’s basically because of the setup:

  1. We must start the application in a close-to-production way
  2. We must then run the tests (and configure them to hit the right application instance)
  3. We must stop the application

That does not sound insurmountable, but if you need Linux, MacOS X and Windows support, it quickly gets messy. There are plenty of great frameworks easing this, such as Arquillian, but let’s do it without any framework to understand how it works.

We need a battle plan

Before rushing into the complex configuration, let’s think a minute about the tasks:

Step 1 - Reserve a free port
We need to get a free port on which the application can listen, and we need to inject this port in our integration tests.

Step 2 - Generate the application configuration
Once we have the free port, we need to write a JSON file configuring the application HTTP Port to this port.

Step 3 - Start the application
Sounds easy, right? Well, it’s not that simple, as we need to launch our application in a background process.

Step 4 - Execute the integration tests
Finally, the central part: run the tests. But before that, we should implement some integration tests. Let’s come back to that later.

Step 5 - Stop the application
Once the tests have been executed, regardless of whether there are failures or errors in the tests, we need to stop the application.

There are multiple ways to implement this plan. We are going to use a generic one. It’s not necessarily the best, but it can be applied almost everywhere. The approach is tied to Apache Maven. If you want to propose an alternative using Gradle or a different tool, I will be happy to add your way to the post.

Implement the plan

As said above, this section is Maven-centric, and most of the code goes into the pom.xml file. If you have never used the different Maven lifecycle phases, I recommend looking at the introduction to the Maven lifecycle.

We need to add and configure a couple of plugins. Open the pom.xml file and, in the <plugins> section, add:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <version>1.9.1</version>
  <executions>
    <execution>
      <id>reserve-network-port</id>
      <goals>
        <goal>reserve-network-port</goal>
      </goals>
      <phase>process-sources</phase>
      <configuration>
        <portNames>
          <portName>http.port</portName>
        </portNames>
      </configuration>
    </execution>
  </executions>
</plugin>

We use the build-helper-maven-plugin (a plugin worth knowing about if you use Maven often) to pick a free port. Once found, the plugin assigns the http.port variable to the picked port. We execute this plugin early in the build (during the process-sources phase), so we can use the http.port variable in the other plugins. That was the first step.

Two actions are required for the second step. First, in the pom.xml file, just below the opening <build> tag, add:

<testResources>
  <testResource>
    <directory>src/test/resources</directory>
    <filtering>true</filtering>
  </testResource>
</testResources>

This instructs Maven to filter resources from the src/test/resources directory. Filtering means replacing placeholders with actual values. That’s exactly what we need, as we now have the http.port variable. So create the src/test/resources/my-it-config.json file with the following content:

{
  "http.port": ${http.port}
}

This configuration file is similar to the one we used in previous posts. The only difference is ${http.port}, which is the (default) Maven syntax for filtering. When Maven processes our file, it will replace ${http.port} with the selected port. That’s all for the second step.

Steps 3 and 5 are a bit more tricky. We need to start and stop the application. We are going to use the maven-antrun-plugin to achieve this. In the pom.xml file, below the build-helper-maven-plugin, add:


<plugin>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.8</version>
  <executions>
    <execution>
      <id>start-vertx-app</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- Launch the application as we would do in production -->
          <exec executable="${java.home}/bin/java"
                dir="${project.build.directory}"
                spawn="true">
            <arg value="-jar"/>
            <arg value="${project.artifactId}-${project.version}-fat.jar"/>
            <arg value="-conf"/>
            <arg value="${project.build.directory}/test-classes/my-it-config.json"/>
          </exec>
        </target>
      </configuration>
    </execution>
    <execution>
      <id>stop-vertx-app</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <!-- Kill the application process -->
        <target>
          <exec executable="bash"
                dir="${project.build.directory}"
                spawn="false">
            <arg value="-c"/>
            <arg value="ps ax | grep -i '${project.artifactId}' | awk 'NR==1{print $1}' | xargs kill -SIGTERM"/>
          </exec>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>

That’s a huge piece of XML, isn’t it? We configure two executions of the plugin. The first one, happening in the pre-integration-test phase, executes a command that starts the application. It basically executes:

java -jar my-first-app-1.0-SNAPSHOT-fat.jar -conf .../my-it-config.json

Is the fat jar created?
The fat jar embedding our application is created in the package phase, which precedes pre-integration-test, so yes, the fat jar is created.

As mentioned above, we launch the application as we would in a production environment.

Once the integration tests are executed (step 4, which we didn’t look at yet), we need to stop the application (so in the post-integration-test phase). To close the application, we invoke some shell magic to find our process with the ps command and send it the SIGTERM signal. It is equivalent to:

ps
.... -> find your process id
kill -SIGTERM your_process_id

And Windows?
I mentioned it above: we want Windows to be supported, and these commands are not going to work on Windows. Don’t worry, the Windows configuration is below…

We should now do the fourth step we (silently) skipped. To execute our integration tests, we use the maven-failsafe-plugin. Add the following plugin configuration to your pom.xml file:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.18.1</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
      <configuration>
        <systemProperties>
          <http.port>${http.port}</http.port>
        </systemProperties>
      </configuration>
    </execution>
  </executions>
</plugin>

As you can see, we pass the http.port property as a system variable, so our tests are able to connect to the right port.

That’s all! Wow… Let’s try this (Windows users, you will need to be patient or jump to the last section):

mvn clean verify

We should not use mvn integration-test, because the application would still be running. The verify phase comes after the post-integration-test phase and analyses the integration-test results. Build failures caused by failed assertions in the integration tests are reported in this phase.

Hey, we don’t have integration tests!

And that’s right. We have set up everything, but we don’t have a single integration test. To ease the implementation, let’s use two libraries: AssertJ and Rest Assured.

AssertJ proposes a set of assertions that you can chain and use fluently. Rest Assured is a framework to test REST APIs.
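
To give you a first taste of both libraries, here is a minimal, illustrative snippet; the static imports are the usual entry points of the two libraries, and the whisky endpoint is the one used throughout this post:

import static com.jayway.restassured.RestAssured.get;
import static org.assertj.core.api.Assertions.assertThat;

public class FirstTaste {
  public static void main(String[] args) {
    // AssertJ: fluent, chainable assertions
    assertThat("Jameson").startsWith("Jam").endsWith("son");
    // Rest Assured: a declarative check against a REST endpoint
    // (assumes RestAssured.baseURI/port point at the running application)
    get("/api/whiskies").then().assertThat().statusCode(200);
  }
}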

In the pom.xml file, add the two following dependencies just before </dependencies>:

<dependency>
  <groupId>com.jayway.restassured</groupId>
  <artifactId>rest-assured</artifactId>
  <version>2.4.0</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.assertj</groupId>
  <artifactId>assertj-core</artifactId>
  <version>2.0.0</version>
  <scope>test</scope>
</dependency>

Then, create the src/test/java/io/vertx/blog/first/MyRestIT.java file. Unlike unit tests, integration tests end with IT. It’s a convention of the Failsafe plugin to distinguish unit tests (starting or ending with Test) from integration tests (starting or ending with IT). In the created file, add:

package io.vertx.blog.first;

import com.jayway.restassured.RestAssured;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class MyRestIT {

  @BeforeClass
  public static void configureRestAssured() {
    RestAssured.baseURI = "http://localhost";
    RestAssured.port = Integer.getInteger("http.port", 8080);
  }

  @AfterClass
  public static void unconfigureRestAssured() {
    RestAssured.reset();
  }
}

The methods annotated with @BeforeClass and @AfterClass are invoked once before / after all the tests of the class. Here, we just retrieve the http port (passed as a system property) and configure Rest Assured.

It’s now time to implement a real test. Let’s check we can retrieve an individual product:

@Test
public void checkThatWeCanRetrieveIndividualProduct() {
// Get the list of bottles, ensure it's a success and extract the id of a specific bottle.
final int id = get("/api/whiskies").then()
    .assertThat()
    .statusCode(200)
    .extract()
    .jsonPath().getInt("find { it.name=='Bowmore 15 Years Laimrig' }.id");
// Now get the individual resource and check the content
get("/api/whiskies/" + id).then()
    .assertThat()
    .statusCode(200)
    .body("name", equalTo("Bowmore 15 Years Laimrig"))
    .body("origin", equalTo("Scotland, Islay"))
    .body("id", equalTo(id));
}

Here you can appreciate the power and expressiveness of Rest Assured. We retrieve the list of products, ensure the response is correct, and extract the id of a specific bottle using a JSON (Groovy) Path expression.

Then, we try to retrieve the metadata of this individual product, and check the result.

Let’s now implement a more sophisticated scenario. Let’s add and delete a product:

@Test
public void checkWeCanAddAndDeleteAProduct() {
  // Create a new bottle and retrieve the result (as a Whisky instance).
  Whisky whisky = given()
      .body("{\"name\":\"Jameson\", \"origin\":\"Ireland\"}").request().post("/api/whiskies").thenReturn().as(Whisky.class);
  assertThat(whisky.getName()).isEqualToIgnoringCase("Jameson");
  assertThat(whisky.getOrigin()).isEqualToIgnoringCase("Ireland");
  assertThat(whisky.getId()).isNotZero();
  // Check that it has created an individual resource, and check the content.
  get("/api/whiskies/" + whisky.getId()).then()
      .assertThat()
      .statusCode(200)
      .body("name", equalTo("Jameson"))
      .body("origin", equalTo("Ireland"))
      .body("id", equalTo(whisky.getId()));
  // Delete the bottle
  delete("/api/whiskies/" + whisky.getId()).then().assertThat().statusCode(200);
  // Check that the resource is not available anymore
  get("/api/whiskies/" + whisky.getId()).then()
      .assertThat()
      .statusCode(404);
}

So, now that we have integration tests, let’s try:

mvn clean verify

Simple, no? Well, simple once the setup is done right… You can continue implementing other integration tests to be sure that everything behaves as you expect.

Dear Windows users…

This section is the bonus part for Windows users, or people wanting to run their integration tests on Windows machines too. The command we execute to stop the application is not going to work on Windows. Luckily, it’s possible to extend the pom.xml with a profile executed on Windows.

In your pom.xml, just after </build>, add:

<profiles>
  <profile>
    <id>windows</id>
    <activation>
      <os>
        <family>windows</family>
      </os>
    </activation>
    <build>
      <plugins>
        <plugin>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.8</version>
          <executions>
            <execution>
              <id>stop-vertx-app</id>
              <phase>post-integration-test</phase>
              <goals>
                <goal>run</goal>
              </goals>
              <configuration>
                <target>
                  <exec executable="wmic"
                        dir="${project.build.directory}"
                        spawn="false">
                    <arg value="process"/>
                    <arg value="where"/>
                    <arg value="CommandLine like '%${project.artifactId}%' and not name='wmic.exe'"/>
                    <arg value="delete"/>
                  </exec>
                </target>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

This profile replaces the actions described above to stop the application with a version working on Windows. The profile is automatically enabled on Windows. As on other operating systems, execute with:

mvn clean verify

Conclusion

Wow, what a trip! We are done… In this post we have seen how we can gain confidence in Vert.x applications by implementing both unit and integration tests. Unit tests, thanks to vertx-unit, are able to check the asynchronous aspects of Vert.x applications, but they can become complex for large scenarios. Thanks to Rest Assured and AssertJ, integration tests are dead simple to write… but the setup is not straightforward. This post has shown how it can be configured easily. Obviously, you can also use AssertJ and Rest Assured in your unit tests.

Next time, we are going to replace the in memory backend with a database, and use asynchronous integration with this database.

Stay Tuned & Happy Coding !


by cescoffier at August 03, 2015 12:00 AM

Article: EIP Designer: Bridging the Gap Between EA and Development

by Laurent Broudoux at August 02, 2015 09:50 AM

This article presents the EIP Designer project, an Eclipse-based tool for introducing integration patterns into an EA design, providing fluidity and continuity while filling the gap between EA practices and concrete software development.

By Laurent Broudoux

by Laurent Broudoux at August 02, 2015 09:50 AM

Presentation: Eclipse & Gradle–The Best of Both Worlds

by Hans Dockter, Etienne Studer at July 31, 2015 07:10 PM

Hans Dockter, Etienne Studer present an Eclipse plug-in for Gradle, demonstrating the integration between Eclipse and Gradle.

By Hans Dockter, Etienne Studer

by Hans Dockter, Etienne Studer at July 31, 2015 07:10 PM

Beta2 brings some Bower to Mars

by akazakov at July 31, 2015 07:01 AM

Today a new beta is available from our download and update sites!

Remember that since Beta1 we require Java 8 for installing and using JBoss Tools. We still support developing and running applications using older Java runtimes. See more in the Beta1 blog.

Installation

JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.5 (Mars) but we recommend using the Eclipse 4.5 Mars JEE Bundle since then you get most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "JBoss Developer Studio".

We are now using the Eclipse Marketplace feature of having just one marketplace entry for all versions.

For JBoss Tools, you can also use our update site directly if you are up for it.

http://download.jboss.org/jbosstools/mars/development/updates

Note: Integration Stack tooling will become available from JBoss Central at a later date.

What is new?

Full info is at this page. Some highlights are below.

Bower

We’ve added support for easy setup and invocation of Bower using your locally installed bower command line tool on Windows, OS X and Linux.

We provide a Bower Init wizard for getting started.


Once your project has a bower.json file you can now easily run bower update by right-clicking on the file and selecting Run As ▸ Bower Update.

We are working on contributing this and additional Javascript integration to Eclipse JSDT. We will keep you posted!

OpenShift v3

We continue to work on improving OpenShift v3 tooling and this release has a few new features and important bug fixes but overall OpenShift v3 tooling is still in very early stages.

Manage your OpenShift v3 Projects

You can now create and delete OpenShift v3 projects.


If you try to create a new application but you have no project yet, the tools will prompt you to create one first. Once you are done, you can always go back and manage your OpenShift projects via a link in the application wizard.

Manually Trigger Builds

You can manually trigger builds when selecting your Build Configs in the OpenShift Explorer.


Once you have triggered a build, you should see it appear in the Builds category in the OpenShift Explorer. You can see its state next to its name or in the Properties view. Refreshing the Explorer will show you when the build completes.

Port Forwarding

Assuming that your application exposes ports you can now forward those to your local machine with JBoss Tools 4.3.0.Beta2.


More details on the new OpenShift v3 features are in What’s New.

Java EE Batch Tooling

The batch tooling now has hyperlink support for @BatchProperty to navigate between classes and their relevant job .xml files.


There is more news in What’s New.

Exploded nested jars

In WildFly 8.2 there is now support for hot-loading resources from exploded jars inside deployments, i.e. a jar inside your WEB-INF/lib.

This allows you to have faster reload times for modular web applications using resources from nested jars.

Thanks to a patch from Vsevolod Golovanov, we now support this when you are using our server tools. Thanks, Vsevolod!

Deploy Hybrid project to FeedHenry

You can now take a hybrid mobile project created with Thym and deploy it to a FeedHenry cloud.


Enjoy!

Alexey Kazakov


by akazakov at July 31, 2015 07:01 AM

EASE @ EclipseCon Europe 2015

by Christian Pontesegger (noreply@blogger.com) at July 30, 2015 10:06 AM



This year’s EclipseCon will be very exciting for me. Not only was my talk proposal I love scripting accepted as an early bird pick, other parties are also starting to propose talks concerning EASE.

I love to see this project starting to fly and would like to give you some insights and an outlook on what to expect at EclipseCon.

How it all began

I remember my first EclipseCon in 2012. I had this intriguing idea in my mind about a scripting framework in Eclipse, but no idea whom to talk to about it. By chance I met Wayne Beaton who offered help and encouraged me to find some interested parties to support my idea.

It took me almost a year to provide a prototype and start advertising this idea, but soon after my first post on this topic on this blog, Polarsys got interested and things started to move very quickly. In summer 2013 we agreed to introduce scripting (called EScript at that time) to the e4 incubator. Thanks to Paul Webster we got a nice place to play and learn about the Eclipse way of working on a software project.

When the call for papers opened for EclipseCon Europe 2013 I thought ‘What the hell’ and sent a proposal for a talk on scripting. I never expected it to get accepted, but a few months later I found myself talking about scripting at EclipseCon.

Encouraged by the positive feedback, Arthur Daussy and I worked hard on the implementation. Finally we moved out of the e4 incubator in 2014, and now you may install this great scripting environment in your IDE.

The role of EclipseCon

Meeting users, committers and Eclipse staff members in person helped me a lot to make this project happen. In the beginning it was the encouragement and good advice I got from experienced members. Now I am looking forward to hearing the user side: your success stories, your fresh ideas.

If you have read this far you are definitely interested in scripting, so take the chance and meet me in person @ EclipseCon. If there is enough interest, we may have a BOF on scripting topics there. Please leave me a note if you would like to see this happen.


I love scripting - what we will cover

This talk will give an overview of what scripting can do for you and how you can implement it in your IDE or RCP. We will start with a short look at the architecture (1%) and then immediately switch to a live demo (99%). There we will start with simple script commands, continue with Java integration and introduce scripting libraries. Next we will learn how to provide custom libraries for your RCP.

Once the basics are clear, we will start to extend the IDE with user scripts, write scripted unit tests and finally show rapid prototyping by dynamically instantiating Java files from your workspace.

Expect a fast roller coaster ride through the features of EASE. I am sure you want more of it after your first ride!

see you at EclipseCon

by Christian Pontesegger (noreply@blogger.com) at July 30, 2015 10:06 AM

Keynote Speakers Announced for the 10th EclipseCon Europe Conference

July 29, 2015 01:00 PM

Join us November 3-5, 2015 at the Forum am Schlosspark in Ludwigsburg, Germany.

July 29, 2015 01:00 PM

Sphinx’ ResourceSetListener, Notification Processing and URI change detection

by Andreas Graf at July 29, 2015 11:35 AM

The Sphinx framework adds functionality for model management to the base EMF tooling. Since it was developed within the Artop/AUTOSAR activities, it also includes code to make sure that the name-based references of AUTOSAR models are always correct. This includes some post-processing after model changes by working on the notifications that are generated within a transaction.

Detecting changes in reference URIs and the dirty state of resources

Assume that we have two resources, SR.arxml (source) and TR.arxml (target), in the same ResourceSet (and on disk). TR contains an element TE with the qualified name /AUTOSAR/P/P2/Target, which is referenced from a source element SE that is contained in SR. That means that the string “/AUTOSAR/P/P2/Target” is to be found somewhere in SR.arxml.

Now what happens if some code changes TE’s name to NewName and saves the resource TR.arxml? If only TR.arxml were saved, we would now have an inconsistent model on disk, since SR.arxml would still contain “/AUTOSAR/P/P2/Target” as a reference, which could not be resolved the next time the model is loaded.

We see that there are some model modifications that affect not only the resource of the modified elements, but also referencing resources. Sphinx determines the “affected” other resources and marks them as “dirty”, so that they are serialized the next time the model is written, to make sure that name-based references are still correct.

But obviously, only changes that affect the URIs of referenced elements should cause other resources to be set to dirty. Features that are not involved should not have that effect. Sphinx offers the possibility to specify dedicated strategies per metamodel. The interface is IURIChangeDetectorDelegate.

The default implementation for XMI-based resources is:

@Override
	public List<URIChangeNotification> detectChangedURIs(Notification notification) {
		List<URIChangeNotification> uriChangeNotifications = new ArrayList<>();

		Object notifier = notification.getNotifier();
		if (notifier instanceof EObject) {
			EObject eObject = (EObject) notifier;
			uriChangeNotifications.add(new URIChangeNotification(eObject, EcoreResourceUtil.getURI(eObject)));
		}

		return uriChangeNotifications;
	}

This will cause a change notification to be generated for any EObject that is modified; of course, we do not want that. In contrast, the beginning of the Artop implementation of AutosarURIChangeDetectorDelegate looks like this:

public List<URIChangeNotification> detectChangedURIs(Notification notification) {
		List<URIChangeNotification> notifications = new ArrayList<>();
		if (notification.getNotifier() instanceof EObject) {
			EObject eObject = (EObject) notification.getNotifier();
			if (IdentifiableUtil.isIdentifiable(eObject)) {
				Object feature = notification.getFeature();
				if (feature instanceof EAttribute) {
					EAttribute attribute = (EAttribute) feature;
					// we check that the modified feature corresponds to the shortName of the identifiable
					if ("shortName".equals(attribute.getName())) { //$NON-NLS-1$
						// ... creation of the URIChangeNotification etc. (not shown here)
A notification on a feature is only processed if the modified object is an Identifiable and the modified feature is the one used for URI calculation (shortName in AUTOSAR). There is additional code in this fragment to detect changes in the containment hierarchy, which is not shown here.

So if you use Sphinx for your own metamodel with name-based referencing, have a look at AutosarURIChangeDetectorDelegate and create your own custom implementation for efficiency.
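
As a starting point, a custom delegate could look like the following sketch. This is hypothetical code for a metamodel whose URIs are derived from a "name" attribute; the type names follow the Sphinx/Artop fragments above (imports omitted as in those fragments), so check them against the signatures of your Sphinx version:

public class MyURIChangeDetectorDelegate implements IURIChangeDetectorDelegate {

	@Override
	public List<URIChangeNotification> detectChangedURIs(Notification notification) {
		List<URIChangeNotification> notifications = new ArrayList<>();
		if (notification.getNotifier() instanceof EObject) {
			EObject eObject = (EObject) notification.getNotifier();
			Object feature = notification.getFeature();
			// only react if the modified feature contributes to the URI
			if (feature instanceof EAttribute && "name".equals(((EAttribute) feature).getName())) {
				notifications.add(new URIChangeNotification(eObject, EcoreResourceUtil.getURI(eObject)));
			}
		}
		return notifications;
	}
}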

LocalProxyChangeListener

In addition, Sphinx detects objects that have been removed from the containment tree and updates references by turning the removed objects into proxies! That might be unexpected if you work with the removed object later on. The rationale is well explained in the Javadoc of LocalProxyChangeListener:
Detects {@link EObject model object}s that have been removed from their
{@link org.eclipse.emf.ecore.EObject#eResource() containing resource} and/or their {@link EObject#eContainer
containing object} and turns them as well as all their directly and indirectly contained objects into proxies. This
offers the following benefits:

  • After removal of an {@link EObject model object} from its container, all other {@link EObject model object}s that
    are still referencing the removed {@link EObject model object} will know that the latter is no longer available but
    can still figure out its type and {@link URI}.
  • If a new {@link EObject model object} with the same type as the removed one is added to the same container again
    later on, the proxies which the other {@link EObject model object}s are still referencing will be resolved as usual
    and therefore get automatically replaced by the newly added {@link EObject model object}.
  • In big models, this approach can yield significant advantages in terms of performance because it helps avoiding
    full deletions of {@link EObject model object}s involving expensive searches for their cross-references and those of
    all their directly and indirectly contained objects. It does all the same not lead to references pointing at
    “floating” {@link EObject model object}s, i.e., {@link EObject model object}s that are not directly or indirectly
    contained in a resource.
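
To illustrate what this means for client code, here is a small hypothetical snippet (plain EMF, not Sphinx API): once the listener has processed the removal, the removed object can be recognized via the standard EMF proxy check:

import org.eclipse.emf.ecore.EObject;

public final class RemovedObjectCheck {

	// After LocalProxyChangeListener has processed the removal, the removed
	// object and all its directly and indirectly contained objects are
	// proxies: they still know their type and URI, but are no longer part
	// of the containment tree.
	public static boolean wasRemovedFromModel(EObject eObject) {
		return eObject.eIsProxy();
	}
}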

 

 

 

 


by Andreas Graf at July 29, 2015 11:35 AM

Running SWTBot tests in Travis

by Lorenzo Bettini at July 28, 2015 08:11 AM

The problem I was having when running SWTBot tests in Travis CI was that I could not use the new container-based infrastructure of Travis, which allows caching things like the local Maven repository. This was not possible since to run SWTBot tests you need a window manager (on Linux, you can use metacity), and so you had to install it during the Travis build; this requires sudo, and using sudo prevents the use of the container-based infrastructure. Not using the cache means that each build would download all the Maven artifacts from the start.

Now things have changed :)

When running in the container-based infrastructure, you’re still allowed to use Travis’ APT sources and packages extensions, as long as the package you need is in their whitelist. Metacity was not there, but I opened a request for that, and now metacity is available :)

Now you can use the container-based infrastructure and install metacity at the same time (note that you won’t be able to cache installed apt packages, so each time the build runs, metacity will have to be reinstalled, but installing metacity is much faster than downloading all the Maven/Tycho artifacts).

The steps to run SWTBot tests in Travis can be summarized as follows:

sudo: false

language: java

jdk: oraclejdk7

cache:
  directories:
  - $HOME/.m2

env: DISPLAY=:99.0

install: true

addons:
  apt:
    packages:
    - metacity

#before_install:
# - sudo apt-get update
# - sudo apt-get install metacity

before_script:
 - sh -e /etc/init.d/xvfb start
 - metacity --sm-disable --replace 2> metacity.err &

I left the old steps “before_install” commented out, just as a comparison.

  • “sudo: false” enables the container based infrastructure
  • “cache:” ensures that the Maven repository is cached
  • “env:” enables the use of graphical display
  • “addons:apt:packages” uses the extensions that allow you to install whitelisted APT packages (metacity in our case).
  • “before_script:” starts the virtual framebuffer and then metacity.

Then, you can specify the Maven command to run your build.

Happy SWTBot testing! :)

 

 


by Lorenzo Bettini at July 28, 2015 08:11 AM

How to update Eclipse to bleeding edge (Rawhide)

by Leo Ufimtsev at July 27, 2015 08:30 PM


Sometimes you want the latest and greatest Eclipse on Fedora, as it includes the latest bug fixes.

To do so, you will have to install the Rawhide repository (1) and then update the Eclipse packages to those of Rawhide (2).

1) Download Rawhide Repository

The command below will download the repository file into your /etc/yum.repos.d/ (for both yum and dnf). (This will not make dnf use Rawhide for all your packages; it simply gives it access to the repo on demand.)

dnf install fedora-repos-rawhide

2) Update Eclipse to Rawhide

This command will update all Eclipse packages on your system to those of Rawhide:

sudo dnf --enablerepo=rawhide update $(sudo dnf list installed | grep eclipse | cut -f1 -d " ")

Explanation:
– This lists all installed Eclipse packages: sudo dnf list installed | grep eclipse
– This gets the first column, delimited by spaces: cut -f1 -d " "
– This enables Rawhide only for this update command: --enablerepo=rawhide

How to downgrade back

I haven’t actually done this myself yet, so downgrade only at your own risk :-).

But you can downgrade by removing the new packages and then re-installing them from the regular repos. Use the same command as above, but instead of ‘update’ use ‘remove’.
If you have success with downgrading, please feel free to post a comment to let me know and I’ll update this post.

Updating Eclipse in the future

If you want to update Eclipse in the future, run the command above again. I haven’t gotten around to figuring out a way to make Fedora always use Rawhide for Eclipse updates; if you know one, please feel free to leave a comment.



by Leo Ufimtsev at July 27, 2015 08:30 PM

#OSCON 2015 and the Rise of Open Source Offices

by Chris Aniszczyk at July 27, 2015 04:18 PM

I had a fantastic time at OSCON last week. It was a crazy busy week for Twitter, announcing that we are helping form the Cloud Native Computing Foundation and unifying some of the work that has been going on in the Kubernetes and Mesos ecosystems.

It’s rare that you see two communities and the large companies behind them put their egos aside and do what is better for everyone in the long term in the infrastructure space. We also formally joined the Open Container Initiative and plan on donating an AppC C++ implementation in the future.

Thank you to everyone who came to our ping pong tournament party and learned a bit more about the sport of table tennis.

We also had a great @TODOGroup panel at OSCON discussing how different companies are running and establishing open source offices… along with what works and some lessons learned.

Finally, thank you to everyone who came to my talk about lessons learned from Twitter creating its open source office on Thursday.

It’s always amazing to see how many companies are starting to form open source offices; in my talk I tried to highlight some of the better-known ones, from larger companies and even startups (along with their mission statements).

I really expect this trend to continue in the future; for example, Box is looking to hire their first Head of Open Source, and Guy Martin was just hired to create and run an open source office at Autodesk… Autodesk!

At the end of the day, as more businesses become software companies to some degree, they will naturally depend on a plethora of open source software. Businesses will look for ways to build better relationships with the open source communities their software depends on to maximize value for their business; it’s in their best interest.


by Chris Aniszczyk at July 27, 2015 04:18 PM

Developing a source code editor in JavaFX (on the Eclipse 4 Application Platform)

by Tom Schindl at July 27, 2015 11:26 AM

You’ve chosen e4 and JavaFX as the technologies to implement your cool rich client application. Congrats!

In the first iteration you’ve implemented all the form UIs you need in your application which you made flashy with the help of CSS, animations for perspective switches, lightweight dialogs as introduced in e(fx)clipse 2.0, … .

In the 2nd iteration your task now might be to implement some scripting support, but for that you require an editor that at least supports lexical syntax highlighting. So you now have multiple choices:

  • Use a WebView and an editor written in JavaScript like Orion
  • Use the StyledTextArea shipped with e(fx)clipse 2.0 and implement all the hard stuff like partitioning, tokenizing, … yourself
  • Get e(fx)clipse 2.1 and let the IDE generate the editor for you

If you decide to go with the last option, the following explains how to get that going by developing an e4 JavaFX application

application

as shown in this video

Get e(fx)clipse 2.1.0

As of this writing e(fx)clipse 2.1 has not been released, so you need to grab the nightly builds, e.g. by simply downloading our all-in-one build from http://downloads.efxclipse.bestsolution.at/downloads/nightly/sdk/.

Set up a target platform

We have a self-contained target platform feature (org.eclipse.fx.code.target.feature) available from our runtime-p2 repository, so getting started is super easy.

target_1

target_2

target_3

target_4

Warning: Make sure you uncheck “Include required software” because the target won’t resolve if you have that checked!

target_5

target_6

Setup the project

The project setup works the same way as for any other e4 on JavaFX application.

project_1

project_2

project_3

The wizard should have created:

  • at.bestsolution.sample.code.app: The main application module
  • at.bestsolution.sample.code.app.feature: The feature making up the main application module
  • at.bestsolution.sample.code.app.product: The product definition required for exporting
  • at.bestsolution.sample.code.app.releng: The release engineering project driving the build

Now we need to add some dependencies to your MANIFEST.MF:

  • org.eclipse.fx.core: Some Core APIs
  • org.eclipse.fx.code.editor: Core (=UI Toolkit independent) APIs for code editors
  • org.eclipse.fx.code.editor.fx: JavaFX dependent APIs for code editors
  • org.eclipse.text: Core APIs for text parsing, …
  • org.eclipse.fx.text: Core APIs for text parsing, highlighting, …
  • org.eclipse.fx.text.ui: JavaFX APIs for text parsing, highlighting, …
  • org.eclipse.fx.ui.controls: Additional controls, e.g. for a File-System-Viewer
  • org.eclipse.osgi.services: OSGi service APIs we make use of when generating DS services
  • org.eclipse.fx.core.di: Dependency Inject addons
  • org.eclipse.fx.code.editor.e4: code editor integration to e4
  • org.eclipse.fx.code.editor.fx.e4: JavaFX code editor integration to e4

For export reasons also add all those bundles to the feature.xml in at.bestsolution.sample.code.app.feature.

Generate editor infrastructure

Having everything configured appropriately, we can start developing:

  • Create package at.bestsolution.sample.code.app.editor
  • Create a file named dart.ldef and copy the following content into it
    package at.bestsolution.sample.code
    
    dart {
    	partitioning {
    		partition __dftl_partition_content_type
    		partition __dart_singlelinedoc_comment
    		partition __dart_multilinedoc_comment
    		partition __dart_singleline_comment
    		partition __dart_multiline_comment
    		partition __dart_string
    		rule {
    			single_line __dart_string "'" => "'"
    			single_line __dart_string '"' => '"'
    			single_line __dart_singlelinedoc_comment '///' => ''
          		single_line __dart_singleline_comment '//' => ''
          		multi_line __dart_multilinedoc_comment '/**' => '*/'
          		multi_line  __dart_multiline_comment '/*' => '*/'
    		}
    	}
    	lexical_highlighting {
    		rule __dftl_partition_content_type whitespace javawhitespace {
    			default dart_default
    			dart_operator {
    				character [ ';', '.', '=', '/', '\\', '+', '-', '*', '<', '>', ':', '?', '!', ',', '|', '&', '^', '%', '~' ]
    			}
    			dart_bracket {
    				character [ '(', ')', '{', '}', '[', ']' ]
    			}
    			dart_keyword {
    				keywords [ 	  "break", "case", "catch", "class", "const", "continue", "default"
    							, "do", "else", "enum", "extends", "false", "final", "finally", "for"
    							,  "if", "in", "is", "new", "null", "rethrow", "return", "super"
    							, "switch", "this", "throw", "true", "try", "var", "void", "while"
    							, "with"  ]
    			}
    			dart_keyword_1 {
    				keywords [ 	  "abstract", "as", "assert", "deferred"
    							, "dynamic", "export", "external", "factory", "get"
    							, "implements", "import", "library", "operator", "part", "set", "static"
    							, "typedef" ]
    			}
    			dart_keyword_2 {
    				keywords [ "async", "async*", "await", "sync*", "yield", "yield*" ]
    			}
    			dart_builtin_types {
    				keywords [ "num", "String", "bool", "int", "double", "List", "Map" ]
    			}
    		}
    		rule __dart_singlelinedoc_comment {
    			default dart_doc
    			dart_doc_reference {
    				single_line "[" => "]"
    			}
    		}
    		rule __dart_multilinedoc_comment {
    			default dart_doc
    			dart_doc_reference {
    				single_line "[" => "]"
    			}
    		}
    		rule __dart_singleline_comment {
    			default dart_single_line_comment
    		}
    		rule __dart_multiline_comment {
    			default dart_multi_line_comment
    		}
    		rule __dart_string {
    			default dart_string
    			dart_string_inter {
    				single_line "${" => "}"
    				//TODO We need a $ => IDENTIFIER_CHAR rule
    			}
    		}
    	}
    	integration {
    		javafx {
    			java "at.bestsolution.sample.code.app.editor.generated"
    			e4 "at.bestsolution.sample.code.app.editor.generated"
    		}
    	}
    
    }
    
  • Xtext will prompt you to add the Xtext nature to your project
    ldef_2
  • The sources are generated into the src-gen folder; you should add that folder to your build path
    add-source
  • It’s important to note that besides the files generated by the ldef language, 2 files are generated into your OSGI-INF folder by the DS tooling from ca.ecliptical.pde.ds

I won’t explain the details of the dart.ldef file because there’s already a blog post with a detailed description of it.

The only part that is new is the integration section:

integration {
	javafx {
		java "at.bestsolution.sample.code.app.editor.generated"
		e4 "at.bestsolution.sample.code.app.editor.generated"
	}
}

which configures the code generator to:

  • Generate Java code for the partitioning and tokenizing
  • Generate e4 registration information in the form of OSGi services

In contrast to the last blog post, where we ran everything in a non-OSGi/non-e4 world and had to wire things up ourselves, this is not needed this time because the Eclipse DI container takes care of it!

Define a Filesystem-Viewer-Part

To browse the filesystem we need a viewer which might look like this:

package at.bestsolution.sample.code.app;

import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

import javax.annotation.PostConstruct;
import javax.inject.Inject;
import javax.inject.Named;

import org.eclipse.e4.core.di.annotations.Optional;
import org.eclipse.e4.ui.di.PersistState;
import org.eclipse.fx.code.editor.services.TextEditorOpener;
import org.eclipse.fx.core.Memento;
import org.eclipse.fx.ui.controls.filesystem.FileItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceEvent;
import org.eclipse.fx.ui.controls.filesystem.ResourceItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceTreeView;

import javafx.collections.FXCollections;
import javafx.scene.layout.BorderPane;

public class ResourceViewerPart {
	@Inject
	TextEditorOpener opener;

	private Path rootDirectory;

	private ResourceTreeView viewer;

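	// Called by the e4 DI container once the part is created; restores the root directory from the memento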
	@PostConstruct
	void init(BorderPane parent, Memento memento) {
		viewer = new ResourceTreeView();

		if( rootDirectory == null ) {
			String dir = memento.get("root-dir", null);
			if( dir != null ) {
				rootDirectory = Paths.get(URI.create(dir));
			}
		}

		if( rootDirectory != null ) {
			viewer.setRootDirectories(FXCollections.observableArrayList(ResourceItem.createObservedPath(rootDirectory)));
		}
		viewer.addEventHandler(ResourceEvent.openResourceEvent(), this::handleOpenResource);
		parent.setCenter(viewer);
	}

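	// Re-injected by the DI container whenever the "rootDirectory" window variable changes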
	@Inject
	@Optional
	public void setRootDirectory(@Named("rootDirectory") Path rootDirectory) {
		this.rootDirectory = rootDirectory;
		if( viewer != null ) {
			viewer.setRootDirectories(FXCollections.observableArrayList(ResourceItem.createObservedPath(rootDirectory)));
		}
	}

	private void handleOpenResource(ResourceEvent<ResourceItem> e) {
		e.getResourceItems()
			.stream()
			.filter( r -> r instanceof FileItem)
			.map( r -> (FileItem)r)
			.filter( r -> r.getName().endsWith(".dart"))
			.forEach(this::handle);
	}

	private void handle(FileItem item) {
		opener.openEditor(item.getUri());
	}

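	// Called by the framework so the part can persist its state (restored in init via the memento)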
	@PersistState
	public void rememberState(Memento memento) {
		if( rootDirectory != null ) {
			memento.put("root-dir", rootDirectory.toFile().toURI().toString());
		}
	}
}

Define the application

As you already know, e4 applications are not defined in code but with the help of the e4 application model, which is stored by default in e4xmi files. The final model has to look like this:

model

The important parts are:

  • DirtyStateTrackingAddon: A special addon that tracks the dirty state; it is added to the application’s Addons section
  • Handler: We use the framework handler org.eclipse.fx.code.editor.e4.handlers.SaveFile
  • Window-Variables: We have 2 special variables defined at the window level (activeInput, rootDirectory)
    window-variables
  • PartStack-Tags: We tagged the part stack that is hosting the editors with editorContainer
    stack-tags
  • Resource Viewer Part: We register the resource viewer implementation from above in the part definition
  • Root Directory Handler: To set the root directory we have a handler that looks like this
    package at.bestsolution.sample.code.app.handler;
    
    import java.io.File;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    
    import org.eclipse.e4.core.di.annotations.Execute;
    import org.eclipse.fx.core.di.ContextValue;
    
    import javafx.beans.property.Property;
    import javafx.stage.DirectoryChooser;
    import javafx.stage.Stage;
    
    public class SetRootDirectory {
    
    	@Execute
    	public void setRootDirectory(@ContextValue("rootDirectory") Property<Path> rootDirectory, Stage stage) {
    		DirectoryChooser chooser = new DirectoryChooser();
    		File directory = chooser.showDialog(stage);
    		if( directory != null ) {
    			rootDirectory.setValue(Paths.get(directory.getAbsolutePath()));
    		}
    	}
    }
    


by Tom Schindl at July 27, 2015 11:26 AM

Some Rest with Vert.x

by cescoffier at July 27, 2015 12:00 AM

Previously in this blog series

This post is part of the Introduction to Vert.x series. So, let’s have a quick look at the content of the previous posts. In the first post, we developed a very simple Vert.x 3 application, and saw how this application can be tested, packaged and executed. In the last post, we saw how this application became configurable and how we can use a random port in tests.

Well, nothing fancy… Let’s go a bit further this time and develop a CRUD-ish application: an application exposing an HTML page that interacts with the backend using a REST API. The level of RESTfulness of the API is not the topic of this post; I let you decide, as it’s a very slippery topic.

So, in other words we are going to see:

  • Vert.x Web - a framework that lets you create Web applications easily using Vert.x
  • How to expose static resources
  • How to develop a REST API

The code developed in this post is available on the post-3 branch of this Github project. We are going to start from the post-2 codebase.

So, let’s start.

Vert.x Web

As you may have noticed in the previous posts, dealing with complex HTTP applications using only Vert.x Core would be kind of cumbersome. That’s the main reason behind Vert.x Web. It makes the development of Vert.x-based web applications really easy, without changing the philosophy.

To use Vert.x Web, you need to update the pom.xml file to add the following dependency:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-web</artifactId>
  <version>3.0.0</version>
</dependency>

That’s the only thing you need to use Vert.x Web. Sweet, no?

Let’s now use it. Remember, in the previous post, when we requested http://localhost:8080, we replied with a nice Hello World message. Let’s do the same with Vert.x Web. Open the io.vertx.blog.first.MyFirstVerticle class and change the start method to be:

@Override
public void start(Future<Void> fut) {
  // Create a router object.
  Router router = Router.router(vertx);

  // Bind "/" to our hello message - so we are still compatible.
  router.route("/").handler(routingContext -> {
    HttpServerResponse response = routingContext.response();
    response
        .putHeader("content-type", "text/html")
        .end("<h1>Hello from my first Vert.x 3 application</h1>");
  });

  // Create the HTTP server and pass the "accept" method to the request handler.
  vertx
      .createHttpServer()
      .requestHandler(router::accept)
      .listen(
          // Retrieve the port from the configuration,
          // default to 8080.
          config().getInteger("http.port", 8080),
          result -> {
            if (result.succeeded()) {
              fut.complete();
            } else {
              fut.fail(result.cause());
            }
          }
      );
}

You may be surprised by the length of this snippet (in comparison to the previous code). But as we are going to see, it will put our app on steroids, just be patient.

As you can see, we start by creating a Router object. The router is the cornerstone of Vert.x Web. This object is responsible for dispatching the HTTP requests to the right handler. Two other concepts are very important in Vert.x Web:

  • Routes - which let you define how requests are dispatched
  • Handlers - which are the actual action processing the requests and writing the result. Handlers can be chained.

If you understand these 3 concepts, you have understood everything in Vert.x Web.

Let’s focus on this code first:

router.route("/").handler(routingContext -> {
  HttpServerResponse response = routingContext.response();
  response
      .putHeader("content-type", "text/html")
      .end("

Hello from my first Vert.x 3 application

"
); });

It routes requests arriving on “/“ to the given handler. Handlers receive a RoutingContext object. This handler is quite similar to the code we had before, and it’s quite normal as it manipulates the same type of object: HttpServerResponse.

Let’s now have a look to the rest of the code:

vertx
    .createHttpServer()
    .requestHandler(router::accept)
    .listen(
        // Retrieve the port from the configuration,
        // default to 8080.
        config().getInteger("http.port", 8080),
        result -> {
          if (result.succeeded()) {
            fut.complete();
          } else {
            fut.fail(result.cause());
          }
        }
    );
}

It’s basically the same code as before, except that we change the request handler. We pass router::accept to the handler. You may not be familiar with this notation. It’s a reference to a method (here the method accept from the router object). In other words, it instructs Vert.x to call the accept method of the router when it receives a request.
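
If the notation is new to you: a method reference is just shorthand for a one-argument lambda, so the following two lines are equivalent:

// these two do exactly the same thing
vertx.createHttpServer().requestHandler(router::accept);
vertx.createHttpServer().requestHandler(request -> router.accept(request));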

Let’s try to see if this works:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

By opening http://localhost:8080 in your browser you should see the Hello message. As we didn’t change the behavior of the application, our tests are still valid.

Exposing static resources

Ok, so we have a first application using Vert.x Web. Let’s see some of the benefits. Let’s start with serving static resources, such as an index.html page. Before we go further, I should start with a disclaimer: “the HTML page we are going to see here is ugly as hell: I’m not a UI guy”. I should also add that there are probably plenty of better ways to implement this and a myriad of frameworks I should try, but that’s not the point. I tried to keep things simple and rely only on jQuery and Twitter Bootstrap, so if you know a bit of JavaScript you can understand and edit the page.

Let’s create the HTML page that will be the entry point of our application. Create an index.html page in src/main/resources/assets with the content from here. As it’s just an HTML page with a bit of JavaScript, we won’t detail the file here. If you have questions, just post comments.

Basically, the page is a simple CRUD UI to manage my collection of not-yet-finished bottles of Whisky. It was made in a generic way, so you can transpose it to your own collection. The list of products is displayed in the main table. You can create a new product, edit one or delete one. These actions rely on a REST API (that we are going to implement) through AJAX calls. That’s all.

Once this page is created, edit the io.vertx.blog.first.MyFirstVerticle class and change the start method to be:

@Override
public void start(Future<Void> fut) {
  Router router = Router.router(vertx);
  router.route("/").handler(routingContext -> {
    HttpServerResponse response = routingContext.response();
    response
        .putHeader("content-type", "text/html")
        .end("<h1>Hello from my first Vert.x 3 application</h1>");
  });

  // Serve static resources from the /assets directory
  router.route("/assets/*").handler(StaticHandler.create("assets"));

  vertx
      .createHttpServer()
      .requestHandler(router::accept)
      .listen(
          // Retrieve the port from the configuration,
          // default to 8080.
          config().getInteger("http.port", 8080),
          result -> {
            if (result.succeeded()) {
              fut.complete();
            } else {
              fut.fail(result.cause());
            }
          }
      );
}

The only difference with the previous code is the router.route("/assets/*").handler(StaticHandler.create("assets")); line. So, what does this line mean? It’s actually quite simple. It routes requests on “/assets/*” to resources stored in the “assets” directory. So our index.html page is going to be served using http://localhost:8080/assets/index.html.

Before testing this, let’s take a few seconds on the handler creation. All processing actions in Vert.x Web are implemented as handlers. To create a handler, you always call the create method.

So, I’m sure you are impatient to see our beautiful HTML page. Let’s build and run the application:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

Now, open your browser to http://localhost:8080/assets/index.html. Here it is… Ugly right? I told you.

As you may notice too… the table is empty. This is because we didn’t implement the REST API yet. Let’s do that now.

REST API with Vert.x Web

Vert.x Web makes the implementation of REST API really easy, as it basically routes your URL to the right handler. The API is very simple, and will be structured as follows:

  • GET /api/whiskies => get all bottles (getAll)
  • GET /api/whiskies/:id => get the bottle with the corresponding id (getOne)
  • POST /api/whiskies => add a new bottle (addOne)
  • PUT /api/whiskies/:id => update a bottle (updateOne)
  • DELETE /api/whiskies/:id => delete a bottle (deleteOne)
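
Collected in one place, the wiring in the start method could look like the following sketch (getAll, addOne and deleteOne are implemented below; getOne and updateOne are in the repository):

// Sketch of the complete route setup - the individual lines are introduced step by step below.
router.get("/api/whiskies").handler(this::getAll);
router.route("/api/whiskies*").handler(BodyHandler.create()); // enables body reading for POST/PUT
router.post("/api/whiskies").handler(this::addOne);
router.get("/api/whiskies/:id").handler(this::getOne);
router.put("/api/whiskies/:id").handler(this::updateOne);
router.delete("/api/whiskies/:id").handler(this::deleteOne);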

We need some data…

But before going further, let’s create our data object. Create the src/main/java/io/vertx/blog/first/Whisky.java with the following content:

package io.vertx.blog.first;

import java.util.concurrent.atomic.AtomicInteger;

public class Whisky {

  private static final AtomicInteger COUNTER = new AtomicInteger();

  private final int id;

  private String name;

  private String origin;

  public Whisky(String name, String origin) {
    this.id = COUNTER.getAndIncrement();
    this.name = name;
    this.origin = origin;
  }

  public Whisky() {
    this.id = COUNTER.getAndIncrement();
  }

  public String getName() {
    return name;
  }

  public String getOrigin() {
    return origin;
  }

  public int getId() {
    return id;
  }

  public void setName(String name) {
    this.name = name;
  }

  public void setOrigin(String origin) {
    this.origin = origin;
  }
}

It’s a very simple bean class (so with getters and setters). We chose this format because Vert.x relies on Jackson to handle the JSON format. Jackson automates the serialization and deserialization of bean classes, making our code much simpler.

Now, let’s create a couple of bottles. In the MyFirstVerticle class, add the following code:

// Store our products
private Map<Integer, Whisky> products = new LinkedHashMap<>();

// Create some products
private void createSomeData() {
  Whisky bowmore = new Whisky("Bowmore 15 Years Laimrig", "Scotland, Islay");
  products.put(bowmore.getId(), bowmore);
  Whisky talisker = new Whisky("Talisker 57° North", "Scotland, Island");
  products.put(talisker.getId(), talisker);
}

Then, in the start method, call the createSomeData method:

@Override
public void start(Future<Void> fut) {

  createSomeData();

  // Create a router object.
  Router router = Router.router(vertx);

  // Rest of the method
}

As you have noticed, we don’t really have a backend here; it’s just an in-memory map. Adding a backend will be covered in another post.

Get our products

Enough decoration, let’s implement the REST API. We are going to start with GET /api/whiskies. It returns the list of bottles in a JSON Array.

In the start method, add this line just below the static handler line:

router.get("/api/whiskies").handler(this::getAll);

This line instructs the router to handle the GET requests on “/api/whiskies” by calling the getAll method. We could have inlined the handler code, but for clarity reasons let’s create another method:

private void getAll(RoutingContext routingContext) {
  routingContext.response()
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(products.values()));
}

Like every handler, our method receives a RoutingContext. It populates the response by setting the content-type and the actual content. Because our content may contain weird characters, we force the charset to UTF-8. To create the actual content, there is no need to compute the JSON string ourselves. Vert.x lets us use the Json API: Json.encodePrettily(products.values()) computes the JSON string representing the set of bottles.

We could have used Json.encodePrettily(products), but to make the JavaScript code simpler, we just return the set of bottles and not an object containing ID => Bottle entries.

With this in place, we should be able to retrieve the set of bottles from our HTML page. Let’s try it:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

Then open the HTML page in your browser (http://localhost:8080/assets/index.html), and you should see:


I’m sure you are curious, and want to actually see what is returned by our REST API. Let’s open a browser to http://localhost:8080/api/whiskies. You should get:

[ {
  "id" : 0,
  "name" : "Bowmore 15 Years Laimrig",
  "origin" : "Scotland, Islay"
}, {
  "id" : 1,
  "name" : "Talisker 57° North",
  "origin" : "Scotland, Island"
} ]

Create a product

Now that we can retrieve the set of bottles, let’s create a new one. Unlike the previous REST API endpoint, this one needs to read the request’s body. For performance reasons, body reading must be explicitly enabled. Don’t be scared… it’s just a handler.

In the start method, add these lines just below the line ending with getAll:

router.route("/api/whiskies*").handler(BodyHandler.create());
router.post("/api/whiskies").handler(this::addOne);

The first line enables the reading of the request body for all routes under “/api/whiskies”. We could have enabled it globally with router.route().handler(BodyHandler.create()).

The second line maps POST requests on /api/whiskies to the addOne method. Let’s create this method:

private void addOne(RoutingContext routingContext) {
  final Whisky whisky = Json.decodeValue(routingContext.getBodyAsString(),
      Whisky.class);
  products.put(whisky.getId(), whisky);
  routingContext.response()
      .setStatusCode(201)
      .putHeader("content-type", "application/json; charset=utf-8")
      .end(Json.encodePrettily(whisky));
}

The method starts by retrieving the Whisky object from the request body: it just reads the body into a String and passes it to the Json.decodeValue method. Once created, it adds the bottle to the backend map and returns it as JSON.

Let’s try this. Rebuild and restart the application with:

mvn clean package
java -jar target/my-first-app-1.0-SNAPSHOT-fat.jar

Then, refresh the HTML page and click on the Add a new bottle button. Enter the data such as: “Jameson” as name and “Ireland” as origin (purists would have noticed that this is actually a Whiskey and not a Whisky). The bottle should be added to the table.

Status 201?
As you can see, we have set the response status to 201. It means CREATED and is the status generally used in REST APIs that create an entity. By default, Vert.x Web sets the status to 200, meaning OK.

Finishing a bottle

Well, bottles do not last forever, so we should be able to delete a bottle. In the start method, add this line:

router.delete("/api/whiskies/:id").handler(this::deleteOne);

In the URL, we define a path parameter :id. So, when handling a matching request, Vert.x extracts the path segment corresponding to the parameter and lets us access it in the handler method. For instance, /api/whiskies/0 maps id to 0.

Let’s see how the parameter can be used in the handler method. Create the deleteOne method as follows:

private void deleteOne(RoutingContext routingContext) {
  String id = routingContext.request().getParam("id");
  if (id == null) {
    // no id given -> Bad Request
    routingContext.response().setStatusCode(400).end();
  } else {
    Integer idAsInteger = Integer.valueOf(id);
    products.remove(idAsInteger);
    // deleted -> No Content (the response must only be ended once)
    routingContext.response().setStatusCode(204).end();
  }
}

The path parameter is retrieved using routingContext.request().getParam("id"). The method checks whether it’s null (not set) and in that case returns a Bad Request response (status code 400). Otherwise, it removes the bottle from the backend map and responds with 204.

Status 204?
As you can see, we have set the response status to 204 - NO CONTENT. Responses to the HTTP DELETE verb generally have no content.

The other methods

We won’t detail getOne and updateOne, as they are straightforward and very similar; their implementations are available on GitHub.
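
That said, a minimal sketch of getOne in the same style might look like this (the code in the repository is authoritative):

private void getOne(RoutingContext routingContext) {
  String id = routingContext.request().getParam("id");
  if (id == null) {
    routingContext.response().setStatusCode(400).end();
    return;
  }
  Whisky whisky = products.get(Integer.valueOf(id));
  if (whisky == null) {
    // unknown id -> Not Found
    routingContext.response().setStatusCode(404).end();
  } else {
    routingContext.response()
        .putHeader("content-type", "application/json; charset=utf-8")
        .end(Json.encodePrettily(whisky));
  }
}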

Cheers !

It’s time to conclude this post. We have seen how Vert.x Web lets you implement a REST API easily and how it can serve static resources. A bit more fancy than before, but still pretty easy.

In the next post, we are going to improve our tests to cover the REST API.

Stay Tuned & Happy Coding!


by cescoffier at July 27, 2015 12:00 AM

AUTOSAR: OCL, Xtend, oAW for validation

by Andreas Graf at July 25, 2015 05:13 PM

In a recent post, I wrote about model-to-model transformation with Xtend. In addition to M2M transformation, Xtend and the new Sphinx Check framework are a good pair for model validation. There are other frameworks, such as OCL, which are also candidates. Xpand (which originated in openArchitectureWare, oAW) is used in COMASSO. This blog post sketches some questions and issues to consider when choosing a framework for model validation.

Support for unsettable attributes

EMF supports attributes that can have the status “unset” (i.e. they have never been explicitly set), as well as default values. When accessing such an attribute with the standard getter method, you will not be able to distinguish whether it has been explicitly set to the same value as the default or never been touched.

If this kind of check is relevant, the validation technology should provide access to the EMF methods that explicitly indicate whether a value has been set.
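
As a minimal illustration (assuming EMF on the classpath; the helper name is mine, not part of any framework), the reflective variant looks like this:

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;

public final class UnsetChecks {
	// True only if the feature was explicitly set; generated model classes
	// additionally offer typed isSetX() methods for unsettable features.
	public static boolean isExplicitlySet(EObject element, String featureName) {
		EStructuralFeature feature = element.eClass().getEStructuralFeature(featureName);
		return feature != null && element.eIsSet(feature);
	}
}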

Inverse References

With AUTOSAR, a large number of checks will involve some logic to see, if a given element is referenced by other elements (e.g. checks like “find all signals that are not referenced from a PDU”). Usually, these references are uni-directional and traversal of the model is required to find referencing elements. In these cases, performance is heavily influenced by the underlying framework support. A direct solution would be to traverse the entire model or use the utility functions of EMF. However, if the technology allows access to frameworks like IncQuery, a large number of queries / checks can be significantly sped up.
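
For illustration, EMF’s built-in cross referencer can compute such inverse references, at the cost of traversing the scope on every call (the helper name is mine):

import java.util.Collection;

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.EStructuralFeature;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.util.EcoreUtil;

public final class InverseReferences {
	// Finds all settings referencing the given element anywhere in the resource set.
	// For mass checks, an index such as IncQuery or a long-lived
	// ECrossReferenceAdapter scales much better than re-traversing per element.
	public static Collection<EStructuralFeature.Setting> referencesTo(EObject element, ResourceSet scope) {
		return EcoreUtil.UsageCrossReferencer.find(element, scope);
	}
}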

Error Reporting

Error Reporting is central for the usability of the generated error messages. This involves a few aspects that can be explained at a simple example: Consider that we want to check that each PDU has a unique id.

Context

In Xpand (and similar in OCL), a check could look like:

context PDU ERROR "Duplicate ID in "+this.shortName:
this.parent.pdu.exists(e|e.id==this.id && e != this)

This results in quadratic runtime, since the list of PDUs is fully traversed for each PDU that is checked. This can be improved in several ways:

  1. Keep the context on the PDU level, but allow some efficient caching so that the code is not executed so often. However, that involves some additional effort in making sure that the caches are created / torn down at the right time (e.g. after model modification)
  2. Move the context up to package or model level and have a framework that allows generating errors/warnings not only for the elements in the context, but also for any other (sub-)elements. The Sphinx Check framework supports this; see the sketch below.
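
To make the second option concrete, here is an illustrative, framework-independent sketch of the linear-time variant in plain Java (Pdu is a stand-in for the real AUTOSAR model type):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class DuplicateIdCheck {

	public static final class Pdu {
		final String shortName;
		final int id;

		public Pdu(String shortName, int id) {
			this.shortName = shortName;
			this.id = id;
		}
	}

	// Group all PDUs of a package by id once, then report every group with
	// more than one entry - O(n) instead of the quadratic per-PDU check above.
	public static void check(List<Pdu> pdusOfPackage) {
		Map<Integer, List<Pdu>> byId = new HashMap<>();
		for (Pdu pdu : pdusOfPackage) {
			byId.computeIfAbsent(pdu.id, k -> new ArrayList<>()).add(pdu);
		}
		for (List<Pdu> group : byId.values()) {
			if (group.size() > 1) {
				for (Pdu pdu : group) {
					System.out.println("ERROR: Duplicate ID in " + pdu.shortName);
				}
			}
		}
	}
}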

Calculation intensive error messages

Sometimes the calculation of a check is quite complex and, in addition, generating a meaningful error message might need some results from that calculation. Consider e.g. this example from the OCL documentation:

invariant SufficientCopies('There are '
+ library.loans->select((book = self))->size().toString()
+ ' loans for the ' + copies.toString() + ' copies of \'' + name + '\''):
library.loans->select((book = self))->size() <= copies;

The fragment

library.loans->select((book = self))->size()

is used in both the error message as well as the calculation of the predicate. In this case, this is no big deal, but when the calculation gets more complex, this can be annoying. Approaches are

  1. Factor the code into a helper function and call it twice. Under the assumption that the error message is only evaluated when a check fails, that should not incur much overhead. However, it moves the actual predicate away from the invariant statement.
  2. In Xpand, any information can be attached to any model elements. So in the check body, the result of the complex calculation can be attached to the model element and the information is retrieved in the calculation of the error message.
  3. In the Sphinx Check framework, error messages can be calculated from within the check body.

User documentation for checks

Most validation frameworks support the definition of at least an error code (error id) and a descriptive message. However, more detailed explanations of the checks are often required for the users to be able to work with and fix check results. For the development process, it is beneficial if that kind of description is stored close to the actual tests. This could be achieved by analysing comments near the validations, tools like Javadoc, etc. The Sphinx framework describes ids, messages, severities and user documentation in an EMF model. At runtime of an Eclipse RCP, it is possible to use the dynamic help functionality of Eclipse Help to generate documentation for all registered checks on the fly.

Here are some additional features of the Xtend language that come in handy when writing validations:

  • Comfort: Xtend has a number of features that make writing checks very concise and comfortable. The most important is the concise syntax for navigating over models, which helps to avoid the loops that would be required when implementing in Java:

    val r = eAllContents.filter(EcucChoiceReferenceDef).findFirst[
        shortName == "DemMemoryDestinationRef"]

  • Performance: Xtend compiles to plain Java. This gives higher performance than many interpreted transformation languages. In addition, you can use any Java profiler (such as YourKit or JProfiler) to find bottlenecks in your transformations.
  • Long-Term Support: Xtend compiles to plain Java. You can just keep the compiled Java code for safety and be totally independent of the Xtend project itself.
  • Test Support: Xtend compiles to plain Java. You can just use any testing tools (such as the JUnit integration in Eclipse or mvn/surefire). We have extensive test cases for the transformation, documented in nice reports that are generated with standard Java tooling.
  • Code Coverage: Xtend compiles to plain Java. You can just use any code coverage tools (such as JaCoCo).
  • Debugging: Debugger integration is fully supported to step through your code.
  • Extensibility: Xtend is fully integrated with Java. It does not matter whether you write your code in Java or Xtend.
  • Documentation: You can use standard Javadoc in your Xtend transformations and use the standard tooling to get reports.
  • Modularity: Xtend integrates with dependency injection. Systems like Google Guice can be used to configure combinations of model transformations.
  • Active Annotations: Xtend supports the customization of its mapping to Java with active annotations. That makes it possible to adapt and extend the transformation system to custom requirements.
  • Full EMF Support: The Xtend transformations operate on the generated EMF classes. That makes it easy to work with unsettable attributes etc.
  • IDE Integration: The Xtend editors support essential operations such as “Find References”, “Go To Declaration” etc.



by Andreas Graf at July 25, 2015 05:13 PM

Eclipse Hackathon Hamburg – 2015 Q3

by eselmeister at July 25, 2015 10:11 AM

Yesterday, we had our third Eclipse Hackathon (2015) in Hamburg, Germany. It was a great meeting :-).
https://wiki.eclipse.org/Hackathon_Hamburg_2015_Q3

Eclipse Hackathon Hamburg Q3 2015
* Photo (C) by Tobias Baumann

Stay tuned, the next Hackathon will be in approx. three months.
https://wiki.eclipse.org/Hackathon_Hamburg_2015_Q4


by eselmeister at July 25, 2015 10:11 AM

DemoCamp Mars in Stuttgart: Great People, Talks, and Food

by Niko Stotz at July 24, 2015 01:11 PM

Speakers

We had a nice DemoCamp in Stuttgart for the Eclipse Mars release train. About 50 people had a great time alongside great food.

The full agenda, including links to all slides, can be found in the Eclipse Wiki.

Matthias

The first talk by Matthias Zimmermann showed the Business Application Framework Scout, especially the new features of Mars and the upcoming next release.

Martin

Afterwards, Martin Schreiber presented their experience with Tycho and some practical solutions.

Jinying

Jinying Yu gave an overview of the CloudScale project. It analyzes, estimates and simulates the scalability of a software system, especially for moving it to a cloud service.

Marco

After some refreshments and discussions during the break, Marco Eilers impressed with an Xtend interpreter, including tracing between the input model of a transformation and the resulting model or text – in both directions!

Miro

The next talk by Miro Spönemann showed the current state of Xtext on the Web and in IntelliJ. As Miro is the lead developer of the Web variant, he could easily answer all questions in-depth.

Harald

Finally, Harald Mackamul gave an overview of the APP4MC project. They extend the findings of the Amalthea project to multi- and many-core systems.

We finished the DemoCamp with more discussions, food, and beer.

I’d like to thank all the great speakers and the attendees for making this DemoCamp fun as ever.


by Niko Stotz at July 24, 2015 01:11 PM

Honored to join the Eclipse Architecture Council

by Lars Vogel at July 24, 2015 11:52 AM

I have recently been elected to the Eclipse Architecture Council. The Eclipse Architecture Council serves the community by identifying and tackling any issues that hinder Eclipse’s continued technological success and innovation, widespread adoption, and future growth.

Looking forward to helping here.


by Lars Vogel at July 24, 2015 11:52 AM

Copyright Headers from the Git History

by Eike Stepper (noreply@blogger.com) at July 24, 2015 09:47 AM

The other day Vincent Zurczak blogged about Updating Copyright Mentions with Eclipse, and his way of maintaining legal headers in software artifacts is very similar to what we've always done in CDO. We had the exact same header in all artifacts and used search and replace once per year to update them all. For us that had several disadvantages:
  1. Most importantly, it means modifying files that have no other (real) changes in that year.
  2. It was hard to find the files with missing legal headers.
  3. It was hard (well, mostly because I wasn't smart enough) to support different copyright owners.
What I always envisioned was a tool that identifies files that could or should have legal headers, consults the Git history for these files and assembles a copyright line as follows:

Copyright (c) 2008, 2009, 2011-2013 Owner and others.
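
The interesting bit is folding the years from a file's commit history into that compact form. Here is a small self-contained sketch of such folding (the names are mine; CDO's actual logic lives in UpdateCopyrightsAction.java):

import java.util.Arrays;
import java.util.SortedSet;
import java.util.TreeSet;

public final class CopyrightYears {

	// Folds e.g. [2008, 2009, 2011, 2012, 2013] into "2008, 2009, 2011-2013":
	// runs of three or more consecutive years become a range, shorter runs are listed singly.
	public static String fold(SortedSet<Integer> years) {
		StringBuilder result = new StringBuilder();
		Integer runStart = null;
		Integer previous = null;

		for (int year : years) {
			if (runStart == null) {
				runStart = year;
			} else if (year != previous + 1) {
				appendRun(result, runStart, previous);
				runStart = year;
			}
			previous = year;
		}
		if (runStart != null) {
			appendRun(result, runStart, previous);
		}
		return result.toString();
	}

	private static void appendRun(StringBuilder result, int start, int end) {
		if (end - start >= 2) {
			append(result, start + "-" + end);
		} else {
			for (int year = start; year <= end; year++) {
				append(result, Integer.toString(year));
			}
		}
	}

	private static void append(StringBuilder result, String text) {
		if (result.length() > 0) {
			result.append(", ");
		}
		result.append(text);
	}

	public static void main(String[] args) {
		SortedSet<Integer> years = new TreeSet<>(Arrays.asList(2008, 2009, 2011, 2012, 2013));
		// Prints: Copyright (c) 2008, 2009, 2011-2013 Owner and others.
		System.out.println("Copyright (c) " + fold(years) + " Owner and others.");
	}
}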

Yesterday I've finally finished this tool:



A simple “Check Copyrights” run for missing copyrights ends with:

Copyrights missing: 0
Copyrights rewritten: 0
Files visited: 22722
Time needed: 5.73 seconds

If there are copyrights missing, the tool produces a list of the paths and can optionally open them in editors. The Update Copyrights action takes approx. 35 minutes on the same working tree and results in files with beautiful legal headers that are totally in line with the Git history.

If you are interested in the code have a look at UpdateCopyrightsAction.java. There are just a few places that are CDO-specific and I would be happy to review your patches to make the tool more flexible.

by Eike Stepper (noreply@blogger.com) at July 24, 2015 09:47 AM

Developing a source code editor in JavaFX (without e4 and OSGi)

by Tom Schindl at July 24, 2015 09:36 AM

In my last blog post I introduced the DSL we’ll ship with e(fx)clipse 2.1 in August 2015.

Our main deployment platform is of course e4 on JavaFX, but because we have a clean architecture based on IoC and services, our components don’t know about OSGi and hence can be used in any Java(FX) application, no matter whether you run on OSGi or not.

To show you that those are not idle words, this first blog post showing the code editor components in action uses plain Java and Maven as the build tool – for those who want to use them in OSGi, a post will follow soon.

app

The following video demonstrates the final application in action

Step 1: Install e(fx)clipse 2.1 or later

At the time of this writing e(fx)clipse 2.1 has not been released, so the best option to get started is to download our all-in-one nightly build.

Step 2: Create a new maven project

maven

maven2

maven3

maven4

Step 3: Modify the pom.xml

First we need to modify the source and target version for the Java compiler by adding:

<!-- ... -->
<build>
	<plugins>
		<plugin>
			<artifactId>maven-compiler-plugin</artifactId>
			<configuration>
				<source>1.8</source>
				<target>1.8</target>
			</configuration>
		</plugin>
	</plugins>
</build>
<!-- ... -->

Because the dependencies have not yet been released we need to add the Sonatype snapshot repository with:

<!-- ... -->
<repositories>
	<repository>
		<id>sonatype-snapshots</id>
		<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
	</repository>
</repositories>
<!-- ... -->

And finally we add the JavaFX-Code editor component with:

<!-- ... -->
<dependencies>
	<dependency>
		<groupId>at.bestsolution.eclipse</groupId>
		<artifactId>org.eclipse.fx.code.editor.fx</artifactId>
		<version>2.1.0-SNAPSHOT</version>
	</dependency>
</dependencies>
<!-- ... -->

At the end your pom.xml should look like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>at.bestsolution.sample</groupId>
	<artifactId>at.bestsolution.sample.code</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<repositories>
		<repository>
			<id>sonatype-snapshots</id>
			<url>https://oss.sonatype.org/content/repositories/snapshots/</url>
		</repository>
	</repositories>

	<dependencies>
		<dependency>
			<groupId>at.bestsolution.eclipse</groupId>
			<artifactId>org.eclipse.fx.code.editor.fx</artifactId>
			<version>2.1.0-SNAPSHOT</version>
		</dependency>
	</dependencies>

	<build>
		<plugins>
			<plugin>
				<artifactId>maven-compiler-plugin</artifactId>
				<configuration>
					<source>1.8</source>
					<target>1.8</target>
				</configuration>
			</plugin>
		</plugins>
	</build>
</project>

Step 4: Language definition

Now that we have configured our project appropriately, we can start defining our application. We start by creating a package like “at.bestsolution.sample.code” and adding a file named “dart.ldef”

ldef

After the file is created, Eclipse will prompt you to add the Xtext nature to your project, because the DSL is of course implemented with the help of Xtext.

ldef_2

The last step before we start defining our language is to make some modifications to the project: open the project properties, navigate to LDef/Compiler, check “Enable project specific settings” and change the Output Folder/Directory value to src/main/java.

ldef_3

Now paste the following content to the dart.ldef file:

package at.bestsolution.sample.code

dart {
	partitioning {
		partition __dftl_partition_content_type
		partition __dart_singlelinedoc_comment
		partition __dart_multilinedoc_comment
		partition __dart_singleline_comment
		partition __dart_multiline_comment
		partition __dart_string
		rule {
			single_line __dart_string "'" => "'"
			single_line __dart_string '"' => '"'
			single_line __dart_singlelinedoc_comment '///' => ''
      		single_line __dart_singleline_comment '//' => ''
      		multi_line __dart_multilinedoc_comment '/**' => '*/'
      		multi_line  __dart_multiline_comment '/*' => '*/'
		}
	}
	lexical_highlighting {
		rule __dftl_partition_content_type whitespace javawhitespace {
			default dart_default
			dart_operator {
				character [ ';', '.', '=', '/', '\\', '+', '-', '*', '<', '>', ':', '?', '!', ',', '|', '&', '^', '%', '~' ]
			}
			dart_bracket {
				character [ '(', ')', '{', '}', '[', ']' ]
			}
			dart_keyword {
				keywords [ 	  "break", "case", "catch", "class", "const", "continue", "default"
							, "do", "else", "enum", "extends", "false", "final", "finally", "for"
							,  "if", "in", "is", "new", "null", "rethrow", "return", "super"
							, "switch", "this", "throw", "true", "try", "var", "void", "while"
							, "with"  ]
			}
			dart_keyword_1 {
				keywords [ 	  "abstract", "as", "assert", "deferred"
							, "dynamic", "export", "external", "factory", "get"
							, "implements", "import", "library", "operator", "part", "set", "static"
							, "typedef" ]
			}
			dart_keyword_2 {
				keywords [ "async", "async*", "await", "sync*", "yield", "yield*" ]
			}
			dart_builtin_types {
				keywords [ "num", "String", "bool", "int", "double", "List", "Map" ]
			}
		}
		rule __dart_singlelinedoc_comment {
			default dart_doc
			dart_doc_reference {
				single_line "[" => "]"
			}
		}
		rule __dart_multilinedoc_comment {
			default dart_doc
			dart_doc_reference {
				single_line "[" => "]"
			}
		}
		rule __dart_singleline_comment {
			default dart_single_line_comment
		}
		rule __dart_multiline_comment {
			default dart_multi_line_comment
		}
		rule __dart_string {
			default dart_string
			dart_string_inter {
				single_line "${" => "}"
				//TODO We need a $ => IDENTIFIER_CHAR rule
			}
		}
	}
	integration {
		javafx {
			java "at.bestsolution.sample.code.generated"
		}
	}

}

I won’t explain most of the file’s content because I’ve done that already in the last blog post “Defining source editors with a DSL”, but in brief it holds the partitioning and tokenizing rules required to provide lexical highlighting for Google Dart.

The only new section is:

package at.bestsolution.sample.code

dart {
....
	integration {
		javafx {
			java "at.bestsolution.sample.code.generated"
		}
	}

}

This section holds the configuration for the code generator. In our case we instruct it to generate some Java code for us, and your project explorer should show something similar to this.

parser_stuff

For us the 2 most important classes are:

  • DartPartitioner: This one is responsible for partitioning your source file into e.g. comment sections, code sections, string sections, …
  • DartPresentationReconciler: This one is responsible for tokenizing the different partitions, e.g. to create keyword tokens, …

Step 5: Setup an editor control

Now that the parsing infrastructure is in place we can create our setup for the editor control. For that we create a new class “DartEditor” in “at.bestsolution.sample.code” and make it extend “org.eclipse.fx.code.editor.fx.TextEditor“.

package at.bestsolution.sample.code;

import org.eclipse.fx.code.editor.StringInput;
import org.eclipse.fx.code.editor.fx.TextEditor;

import at.bestsolution.sample.code.generated.DartPartitioner;
import at.bestsolution.sample.code.generated.DartPresentationReconciler;

public class DartEditor extends TextEditor {
	public DartEditor(StringInput input) {
		setInput(input);                       // where the content comes from
		setDocument(new InputDocument(input)); // the text buffer behind the editor
		setPartitioner(new DartPartitioner()); // splits the source into partitions
		setSourceViewerConfiguration(
			// responsible for syntax highlighting, error display and auto-completion
			new DefaultSourceViewerConfiguration(input, new DartPresentationReconciler(), null, null, null)
		);
	}
}

The above is the minimal configuration you need when creating an editor:

  • org.eclipse.fx.code.editor.Input: The abstraction for retrieving and storing the content, e.g. on the filesystem, …
  • org.eclipse.jface.text.Document: Is the text buffer behind the text editor
  • org.eclipse.jface.text.IDocumentPartitioner: The component responsible to partition the source file
  • org.eclipse.jface.text.source.SourceViewerConfiguration: The component responsible for syntax highlighting, error displaying and auto-completion

The final application

Now that we have an editor component we can build our final application which needs to have a filesystem browser on the left and a tab folder with editors on the right.

package at.bestsolution.sample.code;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.eclipse.fx.code.editor.SourceFileInput;
import org.eclipse.fx.ui.controls.filesystem.FileItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceEvent;
import org.eclipse.fx.ui.controls.filesystem.ResourceItem;
import org.eclipse.fx.ui.controls.filesystem.ResourceTreeView;

import javafx.application.Application;
import javafx.beans.binding.Bindings;
import javafx.beans.binding.StringExpression;
import javafx.beans.property.ReadOnlyBooleanProperty;
import javafx.collections.FXCollections;
import javafx.event.ActionEvent;
import javafx.scene.Scene;
import javafx.scene.control.Menu;
import javafx.scene.control.MenuBar;
import javafx.scene.control.MenuItem;
import javafx.scene.control.Tab;
import javafx.scene.control.TabPane;
import javafx.scene.input.KeyCode;
import javafx.scene.input.KeyCodeCombination;
import javafx.scene.input.KeyCombination;
import javafx.scene.layout.BorderPane;
import javafx.stage.DirectoryChooser;
import javafx.stage.Stage;

public class DartEditorSample extends Application {

	private TabPane tabFolder;
	private ResourceTreeView viewer;

	static class EditorData {
		final Path path;
		final DartEditor editor;

		public EditorData(Path path, DartEditor editor) {
			this.path = path;
			this.editor = editor;
		}
	}

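	// JavaFX entry point: menu bar on top, filesystem browser on the left, editor tabs in the center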
	@Override
	public void start(Stage primaryStage) throws Exception {
		BorderPane p = new BorderPane();
		p.setTop(createMenuBar());

		viewer = new ResourceTreeView();
		viewer.addEventHandler(ResourceEvent.openResourceEvent(), this::handleOpenResource);
		p.setLeft(viewer);

		tabFolder = new TabPane();
		p.setCenter(tabFolder);

		Scene s = new Scene(p, 800, 600);
		s.getStylesheets().add(getClass().getResource("default.css").toExternalForm());

		primaryStage.setScene(s);
		primaryStage.show();
	}

	private MenuBar createMenuBar() {
		MenuBar bar = new MenuBar();

		Menu fileMenu = new Menu("File");

		MenuItem rootDirectory = new MenuItem("Select root folder ...");
		rootDirectory.setOnAction(this::handleSelectRootFolder);

		MenuItem saveFile = new MenuItem("Save");
		saveFile.setAccelerator(new KeyCodeCombination(KeyCode.S,KeyCombination.META_DOWN));
		saveFile.setOnAction(this::handleSave);


		fileMenu.getItems().addAll(rootDirectory, saveFile);

		bar.getMenus().add(fileMenu);

		return bar;
	}

	private void handleSelectRootFolder(ActionEvent e) {
		DirectoryChooser chooser = new DirectoryChooser();
		File directory = chooser.showDialog(viewer.getScene().getWindow());
		if( directory != null ) {
			viewer.setRootDirectories(
					FXCollections.observableArrayList(ResourceItem.createObservedPath(Paths.get(directory.getAbsolutePath()))));
		}
	}

	private void handleSave(ActionEvent e) {
		Tab t = tabFolder.getSelectionModel().getSelectedItem();
		if( t != null ) {
			((EditorData)t.getUserData()).editor.save();
		}
	}

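	// Opens (or re-focuses) an editor tab for every .dart file opened in the browser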
	private void handleOpenResource(ResourceEvent<ResourceItem> e) {
		e.getResourceItems()
			.stream()
			.filter( r -> r instanceof FileItem)
			.map( r -> (FileItem)r)
			.filter( r -> r.getName().endsWith(".dart"))
			.forEach(this::handle);
	}

	private void handle(FileItem item) {
		Path path = (Path) item.getNativeResourceObject();

		Tab tab = tabFolder.getTabs().stream().filter( t -> ((EditorData)t.getUserData()).path.equals(path) ).findFirst().orElseGet(() -> {
			return createAndAttachTab(path, item);
		});
		tabFolder.getSelectionModel().select(tab);
	}

	private Tab createAndAttachTab(Path path, FileItem item) {
		BorderPane p = new BorderPane();
		DartEditor editor = new DartEditor(new SourceFileInput(path, StandardCharsets.UTF_8));
		editor.initUI(p);

		ReadOnlyBooleanProperty modifiedProperty = editor.modifiedProperty();
		StringExpression titleText = Bindings.createStringBinding(() -> {
			return modifiedProperty.get() ? "*" : "";
		}, modifiedProperty).concat(item.getName());

		Tab t = new Tab();
		t.textProperty().bind(titleText);
		t.setContent(p);
		t.setUserData(new EditorData(path, editor));
		tabFolder.getTabs().add(t);
		return t;
	}

	public static void main(String[] args) {
		Application.launch(args);
	}
}

The last thing that needs to be done is to fill the “at/bestsolution/sample/code/default.css” with life:

.styled-text-area .list-view {
	-fx-background-color: white;
}

.styled-text-area .dart.dart_default {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_operator {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_bracket {
	-styled-text-color: rgb(0, 0, 0);
}

.styled-text-area .dart.dart_keyword {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_keyword_1 {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_keyword_2 {
	-styled-text-color: rgb(127, 0, 85);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_single_line_comment {
	-styled-text-color: rgb(63, 127, 95);
}

.styled-text-area .dart.dart_multi_line_comment {
	-styled-text-color: rgb(63, 127, 95);
}

.styled-text-area .dart.dart_string {
	-styled-text-color: rgb(42, 0, 255);
}

.styled-text-area .dart.dart_string_inter {
	-styled-text-color: rgb(42, 0, 255);
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_builtin_types {
	-styled-text-color: #74a567;
	-fx-font-weight: bold;
}

.styled-text-area .dart.dart_doc {
	-styled-text-color: rgb(63, 95, 191);
}

.styled-text-area .dart.dart_doc_reference {
	-styled-text-color: rgb(63, 95, 191);
	-fx-font-weight: bold;
}


by Tom Schindl at July 24, 2015 09:36 AM