New Eclipse feature: See return value during debugging

by leoufimtsev at September 23, 2016 07:16 PM


With a recent patch, Eclipse can now show you the return value of a method during a debug session.

For years, when I was debugging and needed to see the return value of a method, I would change code like:

return function();

into:

String retVal = function();
return retVal;

And then step through the code and inspect the value of “retVal”.
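The workaround can be sketched as a complete example (the class and method names here are invented for illustration):

```java
// Illustrative sketch of the temp-variable workaround; class and
// method names are made up.
public class ReturnValueDemo {

    static String function() {
        return "computed-value";
    }

    // Rewritten form: set a breakpoint on the "return retVal;" line
    // and the debugger can inspect retVal before the method returns.
    static String inspectableReturn() {
        String retVal = function();
        return retVal;
    }

    public static void main(String[] args) {
        System.out.println(inspectableReturn());
    }
}
```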

Recently [September 2016] a patch was merged to support this feature. Now, when you return from a method, the Variables view shows, in the calling method, the return value of the call that just finished:


As a side note, the reason this was not implemented sooner is that the Java virtual machine debugger did not provide this information until Java 1.6.

If your version of Eclipse doesn’t yet have that feature, try downloading a recent integration or nightly build.

Happy debugging.




JBoss Tools and Red Hat Developer Studio Maintenance Release for Eclipse Neon

by jeffmaury at September 21, 2016 04:04 PM

JBoss Tools 4.4.1 and Red Hat JBoss Developer Studio 10.1 for Eclipse Neon are here waiting for you. Check it out!



JBoss Developer Studio comes with everything pre-bundled in its installer. Simply download it from our JBoss Products page and run it like this:

java -jar jboss-devstudio-<installername>.jar

JBoss Tools or Bring-Your-Own-Eclipse (BYOE) JBoss Developer Studio require a bit more:

This release requires at least Eclipse 4.6 (Neon), but we recommend using the latest Eclipse Neon JEE bundle, since it comes with most of the dependencies preinstalled.

Once you have installed Eclipse, you can either find us on the Eclipse Marketplace under "JBoss Tools" or "Red Hat JBoss Developer Studio".

For JBoss Tools, you can also use our update site directly.

What is new?

Our main focus for this release was improvements for container based development and bug fixing.

Improved OpenShift 3 and Docker Tools

We continue to work on providing a better experience for container-based development in JBoss Tools and Developer Studio. Let’s go through a few interesting updates here.

Support for Container Labels

Users can now specify labels when running a container. The labels are saved in the launch configuration and can also be edited before relaunching the container.

Container Labels

Automatically detect known Docker daemon connections

When the Docker Explorer view is opened, the list of existing connections (saved from a previous session) is reloaded. In addition to this behaviour, the view will also attempt to find new connections using default settings such as the 'unix:///var/run/docker.sock' Unix socket or the 'DOCKER_HOST', 'DOCKER_CERT_PATH' and 'DOCKER_TLS_VERIFY' environment variables. This means that by default, in a new workspace, if a Docker daemon is reachable using one of those methods, the user does not have to use the "New Connection" wizard to get a connection.

Extension point for Docker daemon connection settings

An extension point has been added to the Docker core plugin to allow for custom connection settings provisioning.

Support for Docker Compose

Support for Docker Compose has finally landed!

Users can select a docker-compose.yml file and start Docker Compose from the context menu, using the Run > Docker Compose launcher shortcut.

The Docker Compose process displays its logs (with support for text coloring based on ANSI escape codes) and provides a stop button to stop the underlying process.

Docker Compose

Also, as with the support for building and running containers, a launch configuration is created after the first call to Docker Compose on the selected docker-compose.yml file.
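For reference, a minimal docker-compose.yml of the kind selected here might look like the following; the service names, images and ports are placeholders, not taken from the tooling:

```yaml
# Illustrative compose file; services, images and ports are examples only.
version: '2'
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: example
```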

Docker Image Hierarchy View Improvements

The new Docker Image Hierarchy view not only shows the relationships between images (which is particularly interesting when an image is built from a Dockerfile), but also includes containers based on those images in the tree view, along with all relevant commands (in the context menu) for containers and images.

Docker Image Hierarchy View

Server templates can now be displayed / edited

Server templates are now displayed in the property view under the Templates tab:

property view template

You can access/edit the content of the template with the Edit command.

Events can now be displayed

Events generated as part of the application lifecycle are now displayed in the property view under the Events tab (available at the project level):

property view event

You can refresh the content of the event with the Refresh command or open the event in the OpenShift web console with the Show In → Web Console command.

Volume claims can now be displayed

Volume claims are now displayed in the property view under the Storage tab (available at the project level):

property view storage1

You can create a new volume claim using a resource file like the following:

{
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {
        "name": "claim1"
    },
    "spec": {
        "accessModes": [ "ReadWriteOnce" ],
        "resources": {
            "requests": {
                "storage": "1Gi"
            }
        }
    }
}

If you deploy such a resource file with the New → Resource command at the project level, the Storage tab will be updated:

property view storage2

You can access/edit the content of the volume claim with the Edit command or open the volume claim in the OpenShift web console with the Show In → Web Console command.

Server Tools

QuickFixes now available in runtime detection

Runtime detection has been a feature of JBoss Tools for a long while; however, it would sometimes create runtimes and server adapters with configuration errors without alerting the user. Now, the user will have an opportunity to execute quickfixes before completing the creation of their runtimes and servers.

JBIDE 15189 rt detect 1

To see this in action, we can first open up the runtime-detection preference page. We can see that our runtime-detection will automatically search three paths for valid runtimes of any type.

JBIDE 15189 rt detect 2

Once we click search, the runtime-detection’s search dialog appears, with results it has found. In this case, it has located an EAP 6.4 and an EAP 7.0 installation. However, we can see that both have errors. If we click on the error column for the discovered EAP 7.0, the error is expanded, and we see that we’re missing a valid / compatible JRE. To fix the issue, we should click on this item.

JBIDE 15189 rt detect 3

When we click on the problem for EAP 7, the new JRE dialog appears, allowing us to add a compatible JRE. The dialog helpfully informs us of the restrictions for this specific runtime. In this case, we’re asked to define a JRE with a minimum version of Java 8.

JBIDE 15189 rt detect 4

If we continue along with the process by locating and adding a Java 8 JRE, as shown above, and finish the dialog, we’ll see that all the errors will disappear for both runtimes. In this example, the EAP 6.4 required a JRE of Java 7 or higher. The addition of the Java 8 JRE fixed this issue as well.

JBIDE 15189 rt detect 5

Hopefully, this will help users discover and fix configuration problems preemptively, before being hit with surprising errors when trying to use the created server adapters.

Support for WildFly 10.1

The WildFly 10.0 Server adapter has been renamed to WildFly 10.x. It has been tested and verified to work for WildFly 10.1 installations.

Hibernate Tools

Hibernate Runtime Provider Updates

A number of additions and updates have been performed on the available Hibernate runtime providers.

New Hibernate 5.2 Runtime Provider

With final releases available in the Hibernate 5.2 stream, the time was right to make available a corresponding Hibernate 5.2 runtime provider. This runtime provider incorporates Hibernate Core version 5.2.2.Final and Hibernate Tools version 5.2.0.Beta1.

hibernate 5 2
Figure 1. Hibernate 5.2 is available
Other Runtime Provider Updates

The Hibernate 4.3 runtime provider now incorporates Hibernate Core version 4.3.11.Final and Hibernate Tools version 4.3.5.Final.

The Hibernate 5.0 runtime provider now incorporates Hibernate Core version 5.0.10.Final and Hibernate Tools version 5.0.2.Final.

The Hibernate 5.1 runtime provider now incorporates Hibernate Core version 5.1.1.Final and Hibernate Tools version 5.1.0.CR1.

Forge Tools

Added Install addon from the catalog command

From Forge 3.3.0.Final onwards, it is possible to query and install addons listed on the Forge addons page.

addon install from catalog

Forge Runtime updated to 3.3.1.Final

The included Forge runtime is now 3.3.1.Final. Read the official announcement here.



Freemarker 2.3.25

The FreeMarker library included in FreeMarker IDE was updated to the latest available version, 2.3.25.

flth / fltx file extensions added

The new flth and fltx extensions have been added and associated with FreeMarker IDE. flth stands for HTML content, whereas fltx stands for XML content.

Overhaul of the plugin template parser

The parser that FreeMarker IDE uses to extract IDE-centric information (needed for syntax highlighting, related tag highlighting, auto-completion, the outline view, etc.) was overhauled. Several bugs were fixed, and support for newer template language features was added. Also, syntax highlighting is now more detailed inside expressions.

Fixed the issue where the (by default) yellow highlighting of related FTL tags would shift away from under the tag as you type.

Showing whitespace, block selection mode

The standard "Show whitespace characters" and "Toggle block selection mode" icons are now available when editing a template.

Improved automatic finishing of FreeMarker constructs

When you type <#, <@, ${, #{ or <#--, the FreeMarker editor now automatically closes them.
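For instance, in a small template fragment like this one (invented for illustration), each of those constructs gets its closing counterpart inserted automatically:

```
<#-- Illustrative FreeMarker fragment -->
<#if user??>
  Hello ${user.name}!
<#else>
  Hello guest!
</#if>
```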

When a FreeMarker exception is printed to the console, the error position in it is a link that navigates to the error. This worked long ago, but had been broken for quite a while.

Fixed auto-indentation

When hitting Enter, the new line sometimes did not inherit the indentation of the previous line. This has been fixed.

Updated the "database" used for auto completion

Auto completion now knows all directives and "built-ins" up to FreeMarker 2.3.25.

What is next?

With JBoss Tools 4.4.1 and Developer Studio 10.1 out, we are already working on the next maintenance release, for Eclipse Neon.1.


Jeff Maury


Native browser for GTK on linux

by Christian Pontesegger at September 21, 2016 09:14 AM

Internal browser support often does not work out of the box on Linux. You can check the status by opening your Preferences/General/Web Browser settings. If the radio button Use internal web browser is enabled (not necessarily selected), internal browser support is working; otherwise it is not.

Most annoyingly, without internal browser support, help hovers in your text editors use a fallback mode that renders neither links nor images.

To solve this issue you may first check the SWT FAQ. For me, working on Gentoo Linux, the following command fixed the problem:
emerge net-libs/webkit-gtk:2
It is important not to install only the latest version of webkit-gtk, as that version will not be recognized by Eclipse. After installation, restart Eclipse and your browser should work. Verified on Eclipse Neon.


Creating My First Web App with Angular 2 in Eclipse

by dimitry at September 20, 2016 02:00 PM

Angular 2 is a framework for building desktop and mobile web applications. After hearing rave reviews about Angular 2, I decided to check it out and take my first steps into modern web development. In this article, I’ll show you how to create a simple master-details application using Angular 2, TypeScript, Angular CLI and Eclipse […]

The post Creating My First Web App with Angular 2 in Eclipse appeared first on Genuitec.


Eclipse 4.7 M2 is out with a focus on usability

by Lars Vogel at September 19, 2016 08:23 AM

Eclipse 4.7 M2 is out with a focus on usability.

From simplified filter functionality in the Problems, Bookmarks and Tasks views, improved color usage for popups, simplified editor assignments for file extensions, enhancements to quick access, a configurable compare direction in the compare editor, and more, you will find lots of nice goodies which will increase your love for the Eclipse IDE.

The background jobs API has also been improved: jobs still run fast even if you do a lot of status updates in your job implementation.

Check out the Eclipse 4.7 M2 New and Noteworthy for the details.


Eclipse basics for Java development

by leoufimtsev at September 19, 2016 03:10 AM

Just a basic intro to Eclipse, aimed at people who are new to Java. It covers creating a new project, debugging, common shortcuts/navigation, and Git.


A workspace contains your settings, e.g. your keyboard shortcut preferences and the list of your open projects. You can have multiple workspaces.


You can switch between workspaces via File -> Switch Workspace.


A project is essentially an application, or a library used by an application. Projects can be opened or closed. The contents of closed projects don’t appear in searches.

Hello world Project

To run some basic Java code:

  • File -> New -> Java Project
  • Give the project a name -> Finish.
  • Right click on src -> New -> Class
  • Give your class a name and check “public static void main(String[] args)”
  • Add a “Hello World” print line:
    System.out.println("Hello world");
  • Right click on your class file -> Run As -> Java Application
  • Output is printed in the Console.
  • Next time, you can run the file via the Run button
  • Or via Ctrl+F11
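Put together, the steps above produce a class roughly like this (the class name is whatever you chose in the wizard):

```java
// Minimal class resulting from the steps above.
public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello world");
    }
}
```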


Set a breakpoint by double clicking on the line numbers in the margin, then click on the bug icon, or right click and select “Debug As” -> “Java Application”.

For more info on debugging, head over to Vogella:

Switching perspectives

Eclipse has the notion of Perspectives. One is for Java development, one for debugging (others could be C++ development, task planning, etc.). It’s basically a customisation of features and layout.

When you finish debugging, you can switch back to the Java perspective:

Common keyboard shortcuts

  • Ctrl+/ – comment code “//”
  • Ctrl+Shift+/ – comment code “/* … */”
  • Ctrl+F11 – run the last run configuration
  • Ctrl+Shift+L – keyboard reminder cue sheet (type to search)
  • Ctrl+Shift+L, then Ctrl+Shift+L again – open keyboard preferences
  • Ctrl+O – Java quick method outline:
    Note: Regex and camel-case search work. E.g. “*Key” will find “getBackgroundColorKey()”, and so will “gBCK”.
  • Ctrl+Shift+R – search for a resource (navigate between your classes)
  • Ctrl+Shift+F – automatically format the selected code (or all code if nothing is selected)

For more on shortcuts, head over to Vogella:

Source code navigation

Right click on a method/variable to bring up a context menu, from there select:

Open Declaration (F3)

This is one of the most used functions. It’s a universal “jump to where the method/variable/class/constant is defined”.


Open Call hierarchy

See where a variable or method is called.

Tip: For variables, you can narrow down the Field Access so that it only shows where a field is read/written.


Quick outline (Ctrl+O)

The quick outline is a quick way to find a function in your class. It has regex and camel-case search support. E.g. “*size” will find any method with ‘size’ in it, and “cSI” will find ‘computeSizeInPixels’.
Tip: Press Ctrl+O again and you will also be shown methods inherited from parent classes.


Navigate to super/implementation (Ctrl+click)

Sometimes you may want to see which sub-classes override a method. You can hover over the method, Ctrl+click, and then click on “Open Implementation”.


You will be presented with a list of sub-implementations.


You can similarly navigate to parent classes.

Code completion

Predicts variable names, method names, and more.

Start typing something and press Ctrl+Space.


It can also complete by camel case: e.g. if you type “mOF” and press Ctrl+Space, it will expand to “myOtherFunction()”.


Typing “System.out.println();” is tedious. Instead, you can type “syso” and then press Ctrl+Space; Eclipse will fill in the template code.
You can find more on templates in the Eclipse Preferences.

Git integration

99% of my git workflow happens inside Eclipse.

You will want to open three useful views:

Window -> Show View -> Other…

  • Team -> History
  • Git -> Git Repositories
  • Git -> Git Staging

You can manage git repositories in the “Git Repositories” view:


You can add changed files in the “Git Staging” view via drag and drop, and fill in the commit message. You can view your changes by double clicking on the files:


In the “History” view, you can create new branches, cherry-pick commits, check out older versions, compare current files to previous versions, etc.



More on Eclipse

If you want to know more about the Eclipse interface, feel free to head over to Vogella’s in-depth Eclipse tutorial:

Also feel free to leave comments with questions.


Pushing the Eclipse IDE Forward

by Doug Schaefer at September 17, 2016 06:40 PM

It’s been a crazy week if you follow the ide-dev mailing list at Eclipse. We’ve had many posts over the years discussing our competitive relationship with IntelliJ and the depression that sets in when we try to figure out how to make Eclipse better so people don’t hate on it so much, and then how nothing changes.

This time, though, it was sparked by what seemed to be an innocent post by Mickael Istria about yet another claim that IntelliJ has better content assist (which, from what I’ve seen, it actually does). It sparked a huge conversation, with many Eclipse contributors chiming in with their thoughts about where we are with the Eclipse IDE and what needs to be done to make things better. A great summary of the last few days has been captured in a German-language Jaxenter article.

The difference this time is that it’s actually sparked action. Mickael, Pascal Rapicault, and others have switched some of their focus to the low-hanging user experience issues and are providing fixes for them. The community has been activated and I love seeing it.

Someone asked why the Architecture Council at Eclipse doesn’t step in and help guide some of this effort and after discussing it at our monthly call, we’ve decided to do just that. Dani Megert and I will revive the UI Guidelines effort and update the current set and extend it to more general user experience guidance. We’ll use the UI Best Practices group mailing list to hold public discussions to help with that. Everyone is welcome to participate. And I’m sure the ide-dev list will continue to be busy as contributors discuss implementation details.

Eclipse became the number one Java IDE with little marketing. Back in the 2000s, developers were hungry for a good Java IDE, and since Eclipse was free, easy to set up (yes, unzipping the IDE wasn’t that bad an experience), worked well, and had great static analysis and refactoring, they fell in love with it.

Other IDEs have caught up and in certain areas passed Eclipse and, yes, IntelliJ has become more popular. It’s not because of marketing. Developers decide what they like to use by downloading it and trying it out. As long as we keep our web presence in good enough shape that developers can find the IDE, especially the Java one, and then keep working to make it functionally the best IDE we can, we’ll be able to continue to serve the needs of developers for a long time.

Our best marketing comes from our users. That’s the same with all technology these days. I’d rather hear from someone who’s tried Docker Swarm than believe what the Docker people are telling me (for example). That’s how we got Eclipse to number one, and where we need to focus to keep the ball rolling. And as a contributor community, we’re working hard to get them something good to talk about.


Me as text?

by tevirselrahc at September 16, 2016 07:05 AM

Over the last few days, a large group of my minions and admirers met in Sweden at EMD2017 to talk about me…in all my incarnations.

One of the most polarizing discussions was about whether I should stay graphical or also become textual. For those who do not know, I am a UML-based modeling tool and therefore graphical by nature.

However, some of my minions think that I would be more usable if I also allowed them to create/edit models using text (just like this posting, but in a model instead of a blog post).

During the meeting, there was a lot of discussion about whether it was a good idea or not, whether it was useful or not, whether I was even able to support this!

The main point made by the pro-text minions was that many things are simply easier to do by writing text rather than drawing images, but that both could be supported. Other minions were saying that it was simply impossible.

Now, this is all a bit strange to me. After all, when I look at my picture, I am an image, but then I can express myself in text (again, like in this posting).

Regardless, any new capability given to me makes me happy!

And I wonder how I would look as text…


I think I like myself better as an image, but it’s good to have a choice. In the end, I trust my minions.


Filed under: Papyrus, Textual Tagged: modeling, Textual, uml


Install "Plug-in Spy" in your Eclipse Neon IDE

September 15, 2016 10:00 PM

There is a lot of documentation about the Eclipse "Plug-in Spy" feature (Plug-in Spy for UI parts or Eclipse 3.5 - Plug-in Spy and menus). In my opinion, one piece of information is missing: what you need to install to use the Spy feature in your Eclipse Neon IDE. Here is my small how-to.

Select "Install new Software…​" in the "Help" Menu. In the dialog, switch to the "The Eclipse Project Updates" update site (or enter its location). Filter with "PDE" and select "Eclipse PDE Plug-in Developer Resources". Validate your choices with "Next" and "Finish"; Eclipse will install the feature and ask for a restart.

2016 09 16 install dialog
Figure 1. Install new Software in Eclipse

If you prefer the Oomph way, you can paste the snippet contained in Listing 1 in your installation.setup file (Open it with the Menu: Navigate ▸ Open Setup ▸ Installation).

<?xml version="1.0" encoding="UTF-8"?>

Your Oomph editor should look like Figure 2. Save the file and select "Perform Setup Tasks…​" (in the Help menu). Oomph will update your installation and ask for a restart.

2016 09 16 installation oomph editor
Figure 2. Oomph setup Editor: installation.setup File

In both cases, after the restart, you can press Alt+Shift+F1 and use the Plug-in Spy as shown in Figure 3.

2016 09 16 plugin spy
Figure 3. Plug-in Spy in Eclipse Neon


Oomph 04: P2 install tasks

by Christian Pontesegger at September 15, 2016 02:15 PM

From this tutorial onwards, we are going to extend our project setup step by step. Today we look at how to install additional plugins and features in our setup.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.  

For a list of all Oomph related tutorials see my Oomph Tutorials Overview.

Step 1: Add the Repository

Open your Oomph setup file and create a new task of type P2 Director. To install components, we need to provide a p2 site location and the components to install from that location. So create a new Repository child under our task. When it is selected, the Properties view will ask for a URL. Point it to the p2 location you want to install from. Leave Type set to Combined; if you do not know about repository types, you definitely do not need to change this setting.
When you are working with a p2 site provided by Eclipse, Oomph can help to identify update site locations.

Step 2: Add features

Once you have added the repository, you can start to add features to be installed from that site. The manual way requires you to create a new child node of type Requirement. Go to its Properties and set Name to the feature id you want to install. You may add version ranges or make installs optional (meaning no error is thrown when the feature cannot be installed).
The tricky part is finding out the name of the feature you want to install. I like to use the target platform editor from Mickael Barbero, with its nice code completion features. An even simpler way is to use the Repository Explorer from Oomph:

Right click on your Repository node and select Explore. The Repository Explorer view comes up and displays features the same way as you might know from the Eclipse p2 installer. Now you can drag and drop entries onto your P2 Director task.
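The resulting fragment in the setup file then looks roughly like the following sketch; the feature id and repository URL are examples, and attribute details may vary between Oomph versions:

```xml
<setupTask
    xsi:type="setup.p2:P2Task">
  <requirement
      name="org.eclipse.egit.feature.group"/>
  <repository
      url="http://download.eclipse.org/egit/updates"/>
</setupTask>
```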


Keynotes, Tracks and Sponsors Announced for the 11th Annual EclipseCon Europe Conference

September 15, 2016 01:19 PM

The Eclipse Foundation is pleased to announce EclipseCon Europe 2016.


Contribute to the vogella Android tutorials via Github pull requests

by Lars Vogel at September 15, 2016 06:25 AM

If you want to contribute an improvement to the vogella Android tutorials, we provide their AsciiDoc source code via GitHub. Please clone the repository and send your pull requests.

vogella Android tutorial at Github.

Thanks for your contributions and let’s kill all these typos. :-)


Java 9 module-info Files in the Eclipse IDE

by waynebeaton at September 14, 2016 07:31 PM

Note that this post is not intended to be a status update; it’s just a quick update based on some experimenting that I’ve been doing with the beta code.

It’s been a while, but I’m back to experimenting in Java 9 support in the Eclipse IDE.

For testing purposes, I downloaded the most recent Oxygen (4.7) integration build (I20160914-0800) from the Eclipse Project downloads page, and the latest Java 9 JRE build (135).

I configured the Eclipse IDE to run on the Java 9 JVM. This still requires a minor change in the eclipse.ini file: to launch successfully, you must add an entry to the vmargs section (I expect this to be resolved before Java 9 support is officially released; see Bug 493761 for more information). I used the Install new software… dialog to pull in updates from the BETA_JAVA9 SDK builds repository (see the Java9 Eclipsepedia page for more information).

I created a very simple Java application with a file. Content assist is available for this file.


Note that there is an error indicated on the import of java.awt.Frame. This error exists because the module info file does not provide visibility to that class (AWT is not included with java.base).

If we change that requires statement, the visibility issue is resolved and the compiler is happy. Well, mostly happy. Apparently not using declared variables gets you a stern warning (this is, of course, configurable).
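As a sketch, a module declaration that resolves the error would look something like this (the module name is hypothetical; java.awt lives in the java.desktop module):

```java
// Illustrative module-info.java; the module name is made up.
module demo.app {
    // java.base is required implicitly but does not contain AWT;
    // requiring java.desktop makes java.awt.Frame visible.
    requires java.desktop;
}
```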


The Eclipse Project is planning to ship support as part of an Eclipse Neon update release that coincides with the official release date of Java 9. I’ll be talking a bit about this during my JavaOne talk and demonstrating this (and more Java topics) at the Eclipse Foundation’s booth.

Conference: JavaOne
Session Type: Conference Session
Session ID: CON6469
Session Title: Developing Java Applications with Eclipse Neon
Room: Hilton—Continental Ballroom 6
Date and Time: 09/19/16, 11:00:00 AM – 12:00:00 PM

The call for papers for Devoxx US is open. Devoxx is a community conference from developers for developers. Submit your proposal now.


Vert.x 3.3.3 is released !

by cescoffier at September 12, 2016 12:00 AM

We have just released Vert.x 3.3.3, a bug fix release of Vert.x 3.3.x.

Since the release of Vert.x 3.3.2, quite a few bugs have been reported. We would like to thank you all for reporting these issues.

Vert.x 3.3.3 release notes:

The event bus client using the SockJS bridge is available from NPM, Bower and as a WebJar:

Docker images are also available on the Docker Hub. The Vert.x distribution is also available from SDKMan and HomeBrew.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding!


Qt World Summit 2016 San Francisco Conference App: Behind The Scenes

by ekkescorner at September 09, 2016 12:02 PM

Qt World Summit 2016

Meet me at this year’s Qt World Summit 2016 in San Francisco


I’ll speak about the development of the upcoming Qt World Summit Conference App, running on

  • BlackBerry 10 (Qt 4.8, Cascades)
  • Qt 5.7 (Qt Quick Controls 2)
    • Android
    • iOS
    • Windows 10

My Session

See how easy it is to develop cross-platform mobile apps using Qt 5.7+ and the new Qt Quick Controls 2


BlackBerry 10 Cascades development?

Already have BlackBerry 10 apps (Cascades)? Learn how to save your investment: most C++ code for business logic, REST / web services, and persistence (SQLite, JSON) can be re-used, and the app architecture is similar thanks to the Qt SIGNALS – SLOTS concept.

cu in San Francisco

Filed under: BB10, C++, Cascades, mobile, Qt


Running nightly Eclipse for the impatient

by leoufimtsev at September 08, 2016 06:21 PM



If you’re an Eclipse developer, you might consider running a nightly version of Eclipse so that you can easily test the latest patches. Bleeding edge is the cool stuff, right? It’s actually surprisingly stable.

The advantage of this setup is that you won’t have to re-download a new version and all the plugins over and over. You configure everything once and just click ‘Check for Updates’ once in a while.

The setup is a little bit counter-intuitive. This article is not just ‘follow these steps’; it’s more about understanding the mechanism and workflow.

My Experience with using nightly for 4 months

Eclipse doesn’t actually auto-update on its own. You manually trigger an update by going to Help -> Check for Updates. So you never really have the situation where one day Eclipse randomly stops working.
I don’t actually update my Eclipse every day. Perhaps only once every 2-3 weeks, or when I want to run the latest patch. I’ve never had Eclipse break on me during the process, but nevertheless I tend to back up my Eclipse before every update via a bash script.

Pre-requisite: Understanding update sites

Instead of re-downloading Eclipse each time, you can simply configure it to pull its packages from update sites. There are different update sites for different parts of Eclipse.

The list of core update sites can be found via:
Google: “Eclipse update sites” ->

Types of update sites

There are two important update sites that you have to be aware of: "Update" and "Release".
"Update" contains core Eclipse components, e.g. platform.ui:

"Release" contains additional plugins like Mylyn, etc.

But there are others: CDT, Orbit, etc. You can often find them by googling “PLUGIN-NAME update site”.

Which update sites to pick for ‘nighties’?

In general, when you develop version N+1 (where N+1 is not released yet), you point to the update sites of N (because the N+1 repositories are not released yet). Once N+1 is released, it becomes N and you change your update sites accordingly.

For example, I currently work on Oxygen, but I’m pointing my update sites to pull from ‘Neon’ (where Neon is older than Oxygen, N < O).


Now you’re ready to roll. You can install your desired plugins (e.g. Mylyn, SWT Tools, Git integration, etc.).

To update this business, make a backup of your downloaded Eclipse (and maybe your workspace), then go to Help -> Check for Updates.

If you have questions / feedback / suggestions, please post comments.

For more details on updating, check out Vogella:

by leoufimtsev at September 08, 2016 06:21 PM

Centralized logging for Vert.x applications using the ELK stack

by ricardohmon at September 08, 2016 12:00 AM

This post entry describes a solution to achieve centralized logging of Vert.x applications using the ELK stack, a set of tools including Logstash, Elasticsearch, and Kibana that are well known to work together seamlessly.

Table of contents


This post was written in the context of the project “DevOps tooling for Vert.x applications”, one of the Vert.x projects taking place during the 2016 edition of Google Summer of Code, a program that aims to bring students together with open source organizations, in order to help them gain exposure to software development practices and real-world challenges.


Centralized logging is an important topic in a Microservices architecture and a step toward adopting the DevOps culture. Having an overall solution partitioned into a set of services distributed across the Internet can make it challenging to monitor the log output of each of them; hence, a tool that helps accomplish this proves very helpful.


As shown in the diagram below, the general centralized logging solution comprises two main elements: the application server, which runs our Vert.x application; and a separate server, hosting the ELK stack. Both elements are linked by Filebeat, a highly configurable tool capable of shipping our application logs to the Logstash instance, i.e., our gateway to the ELK stack.

Overview of centralized logging with ELK

App logging configuration

The approach described here is based on a Filebeat + Logstash configuration; that means we first need to make sure our app logs to a file, whose records will be shipped to Logstash by Filebeat. Luckily, Vert.x provides the means to configure alternative logging frameworks (e.g., Log4j, Log4j2, and SLF4J) besides the default JUL logging. However, we can use Filebeat independently of the logging framework chosen.

Log4j Logging

The demo that accompanies this post relies on Log4j2 as the logging framework. We instructed Vert.x to use this framework following the guidelines, and we made sure our logging calls are asynchronous, since we don’t want them to block our application. For this purpose, we opted for the AsyncAppender and included it in the Log4j2 configuration, together with the log output format, in an XML configuration file available in the application’s resources folder.

<Configuration>
  <Appenders>
    <RollingFile name="vertx_logs" append="true" fileName="/var/log/vertx.log" filePattern="/var/log/vertx/$${date:yyyy-MM}/vertx-%d{MM-dd-yyyy}-%i.log.gz">
      <PatternLayout pattern="%d{ISO8601} %-5p %c:%L - %m%n" />
    </RollingFile>
    <Async name="vertx_async">
      <AppenderRef ref="vertx_logs"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="DEBUG">
      <AppenderRef ref="vertx_async" />
    </Root>
  </Loggers>
</Configuration>
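As a side note, one way to make Vert.x pick the Log4j2 delegate is a JVM system property at launch (property and class name taken from the Vert.x 3 documentation; the jar name is a placeholder for your application’s fat jar):

```
java -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.Log4j2LogDelegateFactory \
     -jar my-vertx-app.jar
```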

Filebeat configuration

Now that we have configured the log output of our Vert.x application to be stored in the file system, we delegate to Filebeat the task of forwarding the logs to the Logstash instance. Filebeat can be configured through a YAML file containing the logs output location and the pattern to interpret multiline logs (i.e., stack traces). Also, the Logstash output plugin is configured with the host location and a secure connection is enforced using the certificate from the machine hosting Logstash. We set the document_type to the type of instance that this log belongs to, which could later help us while indexing our logs inside Elasticsearch.

filebeat:
  prospectors:
    -
      document_type: trader_dashboard
      paths:
        - /var/log/vertx.log
      multiline:
        pattern: "^[0-9]+"
        negate: true
        match: after
output:
  logstash:
    enabled: true
    hosts:
      - elk:5044
    timeout: 15
    tls:
      insecure: false
      certificate_authorities:
        - /etc/pki/tls/certs/logstash-beats.crt
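With that file in place, Filebeat itself is typically launched along these lines (flags per the Filebeat documentation; adjust the config path to your setup):

```
filebeat -e -c /etc/filebeat/filebeat.yml
```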

ELK configuration

To take full advantage of the ELK stack with respect to Vert.x and our app logs, we need to configure each of its individual components, namely Logstash, Elasticsearch, and Kibana.


Logstash is the component within the ELK stack that is in charge of aggregating the logs from each of the sources and forwarding them to the Elasticsearch instance.
Configuring Logstash is straightforward with the help of the specific input and output plugins for Beats and Elasticsearch, respectively. In the previous section we mentioned that Filebeat could be easily coupled with Logstash. Now we see that this is done by specifying beats as the input plugin and setting the parameters needed to be reachable by our shippers (listening port, SSL key and certificate location).

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}

Now that we are ready to receive logs from the app, we can use Logstash’s filtering capabilities to specify the format of our logs and extract the fields so they can be indexed more efficiently by Elasticsearch.
The grok filtering plugin comes in handy in this situation. This plugin declares the log format using predefined and customized patterns based on regular expressions, and extracts new fields from the information in each log line. In the following block, we instruct Logstash to recognize our Log4j pattern inside the message field, which contains the log message shipped by Filebeat. After that, the date filtering plugin parses the timestamp_string field extracted in the previous step and uses it to replace the event timestamp that Filebeat set when reading the log output file.

filter {
  grok {
    break_on_match => false
    match => [ "message", "%{LOG4J}" ]
  }
  date {
    match => [ "timestamp_string", "ISO8601" ]
    remove_field => [ "timestamp_string" ]
  }
}

The Log4j pattern is not included within the Logstash configuration by default; however, we can define it on top of the predefined data formats shipped with Logstash and adapt it to the specific log format required by our application, as shown next.

# Pattern to match our Log4j format
SPACING (?:[\s]+)
LOGGER (?:[a-zA-Z$_][a-zA-Z$_0-9]*\.)*[a-zA-Z$_][a-zA-Z$_0-9]*
LINE (?:[0-9]+)
LOG4J %{TIMESTAMP_ISO8601:timestamp_string} %{LOGLEVEL:log_level}%{SPACING}%{LOGGER:logger_name}:%{LINE:loc_line} - %{JAVALOGMESSAGE:log_message}
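As a quick sanity check (not from the original post), the pattern’s shape can be exercised with a rough grep -E approximation against a sample line in the configured layout; grok’s TIMESTAMP_ISO8601 and JAVALOGMESSAGE are more permissive than this hand-written regex, and the sample contents are invented for illustration:

```shell
# A sample line in the %d{ISO8601} %-5p %c:%L - %m%n layout, matched with a
# rough regex equivalent of the LOG4J grok pattern above.
sample='2016-09-08 12:00:00,123 DEBUG io.vertx.example.MainVerticle:42 - Server started'
echo "$sample" |
  grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2}[T ][0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]+ +[A-Z]+ +([A-Za-z$_][A-Za-z$_0-9]*\.)*[A-Za-z$_][A-Za-z$_0-9]*:[0-9]+ - .*'
```

Note that every event line starts with a digit, which is exactly what the multiline pattern "^[0-9]+" in the Filebeat configuration relies on to glue stack-trace continuation lines to the preceding event.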

Finally, we take a look at Logstash’s output configuration. It simply points to our Elasticsearch instance, instructs it to discover the full list of cluster nodes (sniffing), defines the name pattern for our indices, assigns the document type according to the metadata coming from Filebeat, and provides a custom index template for our data.

output {
  elasticsearch {
    hosts => ["localhost"]
    sniffing => true
    manage_template => true
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    template => "/etc/filebeat/vertx_app_filebeat.json"
    template_overwrite => true
  }
}


Elasticsearch is the central component that enables the efficient indexing and real-time search capabilities of the stack. To get the most out of Elasticsearch, we can provide an indexing template for our incoming logs, which helps optimize the data storage and match the queries issued by Kibana later on.
In the example below, we see an index template that is applied to any index matching the pattern filebeat-*. Additionally, we declare our new log fields type, host, log_level, logger_name, and log_message. The first three are set to not_analyzed, while the last two are set to analyzed, which allows queries based on regular expressions rather than being restricted to full-text search.

  "mappings": {
    "_default_": {
      "_all": {
        "enabled": true,
        "norms": {
          "enabled": false
      "dynamic_templates": [
          "template1": {
            "mapping": {
              "doc_values": true,
              "ignore_above": 1024,
              "index": "not_analyzed",
              "type": "{dynamic_type}"
            "match": "*"
      "properties": {
        "@timestamp": {
          "type": "date"
        "offset": {
          "type": "long",
          "doc_values": "true"
        "type": { "type": "string", "index": "not_analyzed" },
        "host": { "type": "string", "index": "not_analyzed" },
        "log_level": { "type": "string", "index": "not_analyzed" },
        "logger_name": { "type": "string", "index": "analyzed" },
        "log_message": { "type": "string", "index": "analyzed" }
  "settings": {
    "index.refresh_interval": "5s"
  "template": "filebeat-*"


Although we could fetch all our logs from Elasticsearch through its API, Kibana is a powerful tool that allows friendlier querying and visualization. Besides the option to query our data through the indexed field names and search boxes that accept specific queries, Kibana allows creating our own Visualizations and Dashboards. Combined, they represent a powerful way to display data and gain insight in a customized manner. The accompanying demo ships with a couple of sample dashboards and visualizations that take advantage of the log fields specified in our index template and provide valuable insight. This includes: visualizing the number of log messages received by ELK, observing the proportion of messages each log source produces, and directly finding the sources of error logs.

Kibana Dashboard

Log shipping challenge

The solution presented here relied on Filebeat to ship log data to Logstash. However, if you are familiar with the Log4j framework, you may be aware that there exists a SocketAppender that writes log events directly to a remote server over a TCP connection. Although including the Filebeat + Logstash combination may sound like unnecessary overhead in the logging pipeline, it provides a number of benefits compared with the Log4j socket alternative:

  • The SocketAppender relies on the specific serialization of Log4j’s LogEvent objects, which is not an interchangeable format like the JSON used by the Beats solution. Although there are attempts to output logs in a JSON format for Logstash, that approach doesn’t support multiline logs, which results in messages being split into different events by Logstash. In addition, there is no official or stable Logstash input plugin for Log4j version 2.
  • While enabling Log4j’s async logging mode delegates logging operations to separate threads, since those threads coexist in the same JVM there is still a risk of data loss if the JVM terminates suddenly without properly closing the log channel.
  • Filebeat is a data shipper designed to deal reliably with many of the constraints that arise in distributed environments, so it provides options to tailor and scale this operation to our needs: load balancing between multiple Logstash instances, specifying the number of simultaneous Filebeat workers that ship log files, and specifying a compression level to reduce the consumed bandwidth. Besides that, logs can be shipped in specific batch sizes, with a maximum number of retries and a connection timeout.
  • Lastly, although Filebeat can forward logs directly to Elasticsearch, using Logstash as an intermediary offers the possibility to collect logs from diverse sources (e.g., system metrics).


This post is accompanied by a demo based on the Vert.x Microservices workshop, where each microservice is shipped in a Docker container, simulating a distributed system composed of independently addressable nodes.
Also, the ELK stack is provisioned using a preconfigured Docker image by Sébastien Pujadas.

Following the guidelines in this post, the demo configures each of the Microservices of the workshop and sets up a Filebeat process on each of them to ship the logs to a central container hosting the ELK stack.


In order to run this demo, it is necessary to have Docker installed, then proceed with:

  • Cloning or downloading the demo repository.
  • Separately, obtaining the source code of the branch of the Microservices workshop adapted for this demo.

Building the example

The Docker images belonging to the Vert.x Microservices workshop need to be built before this project can be launched.

Building the Vert.x Microservices workshop Docker images

Build the root project and the Trader Dashboard followed by each of the modules contained in the solution folder. Issue the following commands for this:

mvn clean install
cd trader-dashboard
mvn package docker:build
cd ../solution/audit-service
mvn package docker:build
cd ../compulsive-traders
mvn package docker:build
cd ../portfolio-service
mvn package docker:build
cd ../quote-generator/
mvn package docker:build

Running the example

After building the previous images, build and run the example in vertx-elk using the following command:

docker-compose up

The demo

You can watch the demo in action in the following screencast:


The ELK stack is a powerful set of tools that ease the aggregation of logs coming from distributed services into a central server. Its main pillar, Elasticsearch, provides the indexing and search capabilities of our log data. Also, it is accompanied by the convenient input/output components: Logstash, which can be flexibly configured to accept different data sources; and Kibana, which can be customized to present the information in the most convenient way.

Logstash has been designed to work seamlessly with Filebeat, the log shipper, which represents a robust solution that can be adapted to our applications without significant changes to our architecture. In addition, Logstash can accept varied types of sources, filter the data, and process it before delivering it to Elasticsearch. This flexibility comes at the price of extra elements in our log aggregation pipeline, which can mean added processing overhead or an additional point of failure. This overhead could be avoided if an application were capable of delivering its log output directly to Elasticsearch.

Happy logging!

by ricardohmon at September 08, 2016 12:00 AM

ECF 3.13.2

by Scott Lewis ( at September 05, 2016 07:16 PM

ECF 3.13.2 is now available.

This is a maintenance/bug fix release, but it includes new documentation on the growing set of ECF distribution providers that support our implementation of OSGi Remote Services.

New and Noteworthy here.

by Scott Lewis ( at September 05, 2016 07:16 PM

Back to school update on FEEP

September 05, 2016 10:00 AM

You remember the Friends of Eclipse Enhancement Program, right? It is a program that utilizes all the donations made through the Friends of Eclipse program to make significant and meaningful improvements and enhancements to the Eclipse IDE/Platform. I think it is a good time for me to provide you with an update about what we have done in the last quarter with this program.

One of the major efforts we have been focused on is the triage of key Eclipse Platform UI bugs. The bid was awarded to Patrik Suzzi, and I must say that the Eclipse Platform team and the Eclipse Foundation have been delighted to work with him. Since the beginning of April, he has triaged about 400 bugs and fixed, or contributed to fixing, 70 bugs in the Platform. This earned him Eclipse Platform UI committer status. Congratulations!

Among others, Patrik has fixed some very annoying bugs, like the broken feedback when dragging and dropping overflown editor tabs, and the inclusion of an Eclipse help search in the quick access field. By the way, if you don’t know what quick access is, I urge you to have a look at it.


Another area I’ve been working on is progress monitor performance. When projects did heavy progress reporting, the reporting itself slowed down the running task by a huge factor. Have a look at how much faster and smoother it now runs.


Many more fixes and improvements can be made to the Eclipse IDE/Platform with this program. Obviously, the pace of development depends on the amount of donations received. You can help improve the Eclipse Platform and make a difference; you only need to donate today!

September 05, 2016 10:00 AM

The second edition of the Xtext book has been published

by Lorenzo Bettini at September 05, 2016 07:04 AM

The second edition of the Xtext book, Implementing Domain-Specific Languages with Xtext and Xtend, was published at the end of August. So… get it while it’s hot 🙂


Please, see my previous post for details about the novelties in this edition.

Sources of the examples are on github:

Hope you’ll enjoy the book!


by Lorenzo Bettini at September 05, 2016 07:04 AM