Announcing Extras for Eclipse

by Rüdiger Herrmann at February 10, 2016 08:00 AM


Over the last months I finally found the time to implement some of the features that I missed most when working with the Eclipse IDE and that I was able to solve with reasonable effort.

The outcome is Extras for Eclipse, a collection of small extensions for the Eclipse IDE which include a launch dialog, a JUnit status bar, a launch configuration housekeeper, and little helpers to accomplish recurring tasks with keyboard shortcuts.

They have been developed and have proven useful in my daily work over the last months, so I thought they might be useful for others, too. In this post I will briefly walk through each of the features.

The most noteworthy Extras for Eclipse at a glance:

  • A JUnit progress meter in the main status bar and a key binding to open the JUnit View
  • A dialog to quickly start or edit arbitrary launch configurations, think of Open Resource for launch configurations
  • An option to remove generated launch configurations when they are no longer needed
  • A key binding for the Open With… menu to choose the editor for the selected file
  • And yet another key binding to delete the currently edited file

The listed components can be installed separately so that you are free to choose whatever fits your needs.

Installation

Extras for Eclipse is available from the Eclipse Marketplace. If you want to give it a try, just drag the icon to your running Eclipse:

Drag to your running Eclipse workspace to install Extras for Eclipse

If you prefer, you can also install Extras for Eclipse directly from this software repository:

In the Eclipse main menu, select Help > Install New Software…, then enter the URL above and select Extras for the Eclipse IDE. Expand the item to select only certain features for installation.

Please note that a JRE 8 or later and Eclipse Luna (4.4) or later are required to run this software.

Extras for JUnit

If you are using the JUnit View like Frank does and have it minimized in a fast view, the only progress indicator is the animated view icon. While it basically provides the desired information – progress and failure/success – I found it a little too subtle. But having the JUnit View always open is quite a waste of space.

That brought me to the JUnit Status Bar: a progress meter in the main status bar that mirrors the bar of the JUnit View but saves the space of the entire view that is only useful when diagnosing test failure.

Extras for Eclipse: JUnit Status Bar      Extras for Eclipse: JUnit Status Bar      Extras for Eclipse: JUnit Status Bar

Together with a key binding (Alt+Shift+Q U) to open the JUnit View when needed, this makes running tests even a little more convenient.

If you would like to hide the status bar, go to the Java > JUnit > JUnit Status Bar preference page. Note that due to a bug in Eclipse Mars (4.5) and later you need to resize the workbench window afterwards to make the change appear.

Launching with Keys

When working on Eclipse plug-ins I usually run tests with Ctrl+R or Alt+Shift+D T/P, but from time to time I also launch the application for a reality check. And then Ctrl+F11/F11 to relaunch the previously launched application often isn’t the right choice. Nor does Launch the selected resource always pick the right one.

Hence, leave the keyboard and grab the mouse, go to the main toolbar, find the Run/Debug tool item and select the appropriate launch configuration, if it is still among the most recently used ones. Otherwise open the launch dialog, …

Therefore, I was looking for quicker access and came up with the Start Launch Configuration dialog. It works much the same as Open Type or Open Resource: a filtered list shows all available launch configurations. Favorites and recently used launch configurations are listed first. With Return the selected launch configuration(s) can be debugged. A different launch mode (e.g. run or profile) can be chosen from the drop-down menu next to the filter text.

And most important, there is a key binding: Alt+F11. Or if you prefer Ctrl+3, the command is named Open Launch Dialog.

The screenshot below shows the Start Launch Configuration dialog in action:

Extras for Eclipse: Start Launch Configuration Dialog

If there are launch configurations that have currently running instances, their image is decorated with a running (Extras for Eclipse: Running Launch Configuration Decorator) symbol. And if you need to modify a launch configuration prior to running it, the Edit button gives you quick access to its properties.

Launch Configuration Housekeeping

With launch configurations there is another annoyance: each test run generates a launch configuration, and if you code test-driven, you will end up with many launch configurations. So many that they obscure the two or so manually created master test suites that actually matter.

That gave me the idea to remove generated launch configurations when they are no longer needed. Currently, no longer needed means that another launch configuration has been run. This still allows re-running an application with Ctrl+F11/F11 but limits the number of launch configurations to those that are relevant.

The term generated applies to all launch configurations that weren’t explicitly created in the launch configuration dialog, for example those created through the Run As > JUnit Test or Debug As > Java Application commands.

With the Run/Debug > Launching > Clean Up preference page, you can specify which launch configuration types should be considered when cleaning up.

Extras for Eclipse: Clean up Launch Configurations Preference Page

Open With… Key Binding

Extras for Eclipse: Open With... Key Binding

Sometimes I had the need to open a file with a different editor than the default one.

To work around the broken PDE target definition editor, for example, I used to open target definition files with a text editor. While this particular editor has improved since its Mars release, I still have occasional use for Open With….

As an extension of the F3 or Return key that opens the selected file in the respective default editor, there is now Shift+F3 that shows the Open With… menu to choose an alternative editor.

Delete the Currently Edited File

For a while now I have noticed that I use the Package Explorer less and less. At times it even gets in the way and might as well make a good candidate for a fast view.

I find the package explorer – or more generally navigation views – a useful tool to get to know the structure of a software project, but once you are comfortable with it the view adds less and less value while occupying much screen real estate.

Extras for Eclipse: Delete File in Editor

To navigate the sources I mostly use Open Type (Ctrl+Shift+T), Open Declaration (F3), the Quick Type Hierarchy (Ctrl+T), or the editor’s breadcrumb bar.

But to delete a file I have to go back to a navigation view, select the resource in question and hit the Del key.

This detour can be spared with yet another key binding, Alt+Del, that invokes the regular delete operation so that the behavior is the same as if the edited file was deleted from one of the navigation views.

Concluding Extras for Eclipse

This article describes the features that I found most noteworthy. For a complete list, please visit the project page.

For some of the extensions introduced here I have opened enhancement requests at Eclipse (see this Bugzilla query). If there is enough interest and support, I will eventually contribute them to the respective Eclipse projects.

I prefer an IDE that is as slim as possible and consists only of those plug-ins that are actually necessary for the task at hand (mostly Java Tools, Plug-in Development Tools, MoreUnit, Maven integration, Git integration, EclEmma and, of course, Extras for Eclipse) – and in this environment the components have proven stable.

Therefore I would be grateful for hints if an Extra feature collides with plug-ins in a different setup. If you find a bug or would like to propose an enhancement please file an issue here:

Or if you even want to contribute to Extras for Eclipse, please read the Contributing Guidelines that list the few rules for contributing and explain how to set up the development environment.

The post Announcing Extras for Eclipse appeared first on Code Affine.



Using the TypeScript LanguageService to build a JS/TypeScript IDE in Java

by Tom Schindl at February 09, 2016 06:34 PM

Yesterday I blogged about my endeavors into loading and using the TypeScript Language Service (V8 and Nashorn) and calling it from Java to get things like an outline, auto-completions, … .

Today I connected these headless pieces to my JavaFX-Editor-Framework. The result can be seen in the video below.

To make the TypeScript LanguageService feel responsible not only for TypeScript files but also for JavaScript, I used the 1.8 beta.

As you can see, JS support is not yet at a stage where it can replace e.g. Tern, but I guess things are going to improve in the future.




JSDT Project Structure

by psuzzi at February 09, 2016 11:07 AM

This post explains the JSDT project structure; it is the result of my direct experience.

This page also serves as a partial basis for discussion of JSDT development. Kudos to all those who comment and leave constructive feedback: here and on the JSDT Bugzilla. [486037477020]

By reading this article you will be able to understand where the JSDT projects are located, which git repositories are related, and how to get the source code to work with one of those projects, e.g. JSDT Editor, Nodejs, Bower, JSON Editor, Gulp, Grunt, HTML>js, JSP>js, etc.

JSDT Repositories

The image below represents the current structure of JSDT repositories.

Almost all of the links to the above source code repositories are accessible via the https://projects.eclipse.org/projects/webtools/developer page.

Description:

  • eclipse.platform.runtime : [gerrit, Browse repo] source repo needed to quiet some IDE validation errors at compile time.
  • webtools : [Browse repo] contains the website for all the webtools projects. It’s big, but needed to update the project page.
  • webtools.jsdt : [gerrit, Browse repo, github] source repo containing the most up-to-date code for JSDT
  • webtools.jsdt.[core, debug, tests] : old source repos containing outdated code (last commit: 2013)
  • webtools.sourceediting : [gerrit, Browse repo] source repo for JSDT Web and JSON

Note: the Gerrit [Review With Gerrit] icons link to the repos accepting Gerrit contributions, so anybody can easily contribute.

Early Project Structure

According to older documentation, JSDT was split into four areas: Core, Debug, Tests and Web. The source of the first three was directly accessible under project source control, while the latter, because of its wider extent, was part of the parent project.

Dissecting the old jsdt_2010.psf, we see the original project structure.

structure-of-jsdt-projects-2010-cvs

Current Project Structure

The current project structure is based on the old structure, but it has additional projects. To simplify, I split the projects into four sets:

  • JSDT Core, Debug, Docs (& Tests): under the webtools.jsdt source repository; contains data similar to the old project.
  • JSDT.js : also under the webtools.jsdt source repo, but contains the Node.js-related projects.
  • wst.json : under webtools.sourceediting, contains the projects needed to parse/edit JSON
  • wst.jsdt.web : also under the webtools.sourceediting repo, contains the projects to include JSDT in web editors

The image below represents simultaneously all the above project sets, as visible in my workspace.

JSDT-projects-structure_edit

A complete Project Set

Here you can find the complete project set, containing the four project sets above, plus the Platform dependencies and the webtools project.

wst.jsdt.allProjects_20160209.psf

After importing, you should see the project sets below.

The full list of projects in my workspace is visible in the image below.

JSDT-all-related-projects

JSDT Development

At this point, to start with JSDT development, you will need to:

  1. Clone the needed repositories to your local machine
  2. Set up the development environment, as explained in my previous article
  3. Import the referenced project set
  4. Launch the inner Eclipse with the source plug-ins you want

Note:

Your comments and suggestions are very welcome. Thanks for your feedback !

References:



OSGi – bundles / fragments / dependencies

by Dirk Fauth at February 09, 2016 08:02 AM

In the last weeks I needed to look at several issues regarding OSGi dependencies in different products. A lot of these issues were IMHO related to wrong usage of OSGi bundle fragments. As I needed to search for various solutions, I will publish my results and my opinion on the usage of fragments in this post. Partly also to remind myself about it in the future.

What is a fragment?

As explained in the OSGi Wiki, a fragment is a bundle that makes its contents available to another bundle. And most importantly, a fragment and its host bundle share the same classloader.

Looking at this from a more abstract point of view, a fragment is an extension to an existing bundle. This might be a simplified statement, but keeping it in mind helped me solve several issues.

What are fragments used for?

I have seen a lot of different usage scenarios for fragments. Considering the above statement, some of them were wrong by design. But before explaining when not to use fragments, let’s look at when they are the tool of choice. Basically, fragments need to be used whenever a resource needs to be accessible to the classloader of the host bundle. There are several use cases for that, most of them relying on technologies and patterns that are based on standard Java. For example:

  • Add configuration files to a third-party-plugin
    e.g. provide the logging configuration (log4j.xml for the org.apache.log4j bundle)
  • Add new language files for a resource bundle
    e.g. a properties file for locale fr_FR that needs to be located next to the other properties files by specification
  • Add classes that need to be dynamically loaded by a framework
    e.g. provide a custom logging appender
  • Provide native code
    This can be done in several ways, but more on that shortly.

In short: fragments are used to customize a bundle

When are fragments the wrong choice?

To explain this we will look at the different ways to provide native code as an example.

One way is to use the Bundle-NativeCode manifest header. This way the native code for all environments is packaged in the same bundle. So no fragments here, but it is sometimes not easy to set up. At least I struggled with this approach some years ago.
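For illustration, a Bundle-NativeCode header might look roughly like this (a sketch; the library names and directory layout are hypothetical, the osname/processor attributes are the ones defined by the OSGi specification):

```
Bundle-NativeCode: lib/linux-x86_64/libfoo.so; osname=Linux; processor=x86-64,
 lib/win32-x86_64/foo.dll; osname=Win32; processor=x86-64
```

The framework selects the matching clause for the running platform at resolve time, which is why everything can live in a single bundle.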

A more common approach is to use fragments. For every supported platform there is a corresponding fragment that contains the platform-specific native library. The host bundle on the other side typically contains the Java code that loads the native library and provides the interface to access it (e.g. via JNI). This scenario is IMHO a good example of using fragments to provide native code. The fragment only extends the host bundle without exposing anything public.

Another approach is the SWT approach. The difference to the above scenario is that the host bundle org.eclipse.swt is an almost empty bundle that only contains the OSGi meta-information in the MANIFEST.MF. The native libraries as well as the corresponding Java code are supplied via platform-dependent fragments. Although SWT is often referred to as the reference for dealing with native libraries in OSGi, I think that approach is wrong.

To elaborate why I think the approach org.eclipse.swt is using is wrong, we will have a look at a small example.

  1. Create a host bundle in Eclipse via File -> New -> Plug-in Project and name it org.fipro.host. Make sure not to create an Activator or anything else.
  2. Create a fragment for that host bundle via File -> New -> Other -> Plug-in Development -> Fragment Project and name it org.fipro.host.fragment. Specify the host bundle org.fipro.host on the second wizard page.
  3. Create the package org.fipro.host in the fragment project.
  4. Create the following simple class (yes, it has nothing to do with native code in fragments, but it also shows the issues).
    package org.fipro.host;
    
    public class MyHelper {
    	public static void doSomething() {
    		System.out.println("do something");
    	}
    }
    

So far, so good. Now let’s consume the helper class.

  1. Create a new bundle via File -> New -> Plug-in Project and name it org.fipro.consumer. This time let the wizard create an Activator.
  2. In Activator#start(BundleContext) try to call MyHelper#doSomething()

Now the fun begins. Of course MyHelper cannot be resolved at this time. We first need to make the package consumable in OSGi. This can be done in the fragment or the host bundle. I personally tend to configure Export-Package in the bundle/fragment where the package is located. We therefore add the Export-Package manifest header to the fragment. To do this open the file org.fipro.host.fragment/META-INF/MANIFEST.MF, switch to the Runtime tab and click Add… to add the package org.fipro.host.
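The resulting fragment manifest would then look roughly like this (a sketch; the version numbers are assumptions):

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.fipro.host.fragment
Bundle-Version: 1.0.0.qualifier
Fragment-Host: org.fipro.host
Export-Package: org.fipro.host
```

The Fragment-Host header is what ties the fragment to org.fipro.host and makes both share one classloader.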

Note: As a fragment is an extension to a bundle, you can also specify the Export-Package header for org.fipro.host in the host bundle org.fipro.host. org.eclipse.swt is configured this way. But notice that the fragment packages are not automatically resolved using the PDE Manifest Editor and you need to add the manifest header manually.

After that the package org.fipro.host can be consumed by other bundles. Open the file org.fipro.consumer/META-INF/MANIFEST.MF and switch to the Dependencies tab. At this point it doesn’t matter whether you use Required Plug-ins or Imported Packages, although Import-Package should always be preferred, as we will see shortly.

Although the manifest headers are configured correctly, the MyHelper class cannot be resolved. The reason for this is the PDE tooling. It needs additional information to construct proper class paths for building. This can be done by adding the following line to the manifest file of org.fipro.host:

Eclipse-ExtensibleAPI: true

After this additional header is added, the compilation errors are gone.

Note: This additional manifest header is not necessary and not used at runtime. At runtime a fragment is always allowed to add additional packages, classes and resources to the API of the host.

After the compilation errors are gone in our workspace and the application runs fine, let’s try to build it using Maven Tycho. I don’t want to walk through the whole process of setting up a Tycho build, so let’s simply assume you have a running Tycho build and include the three projects in that build. Using POM-less Tycho this simply means adding the three projects to the modules section of the build.

You can find further information on Tycho here:
Eclipse Tycho for building Eclipse Plug-ins and RCP applications
POM-less Tycho builds for structured environments

Running the build will fail because of a Compilation failure. The Activator class does not compile because the import org.fipro.host cannot be resolved. Similar to PDE, Tycho is not aware of the build dependency to the fragment. This can be solved by adding an extra. entry to the build.properties of the org.fipro.consumer project.

extra.. = platform:/fragment/org.fipro.host.fragment

See the Plug-in Development Environment Guide for further information about build configuration.

After that entry was added to the build.properties of the consumer bundle, also the Tycho build succeeds.

What is wrong with the above?

At first sight it is quite obvious what is wrong with the above solution. You need to configure the tooling at several places to make the compilation and the build work. These workarounds even introduce dependencies where there shouldn’t be any. In the above example this might be not a big issue, but think about platform dependent fragments. Do you really want to configure a build dependency to a win32.win32.x86 fragment on the consumer side?

The above scenario even introduces issues for installations with p2. Using the empty host with implementations in the fragments forces you to ensure that at least (or exactly) one fragment is installed together with the host. Which is another workaround in my opinion (see Bug 361901 for further information).

OSGi purists will say that the main issue is located in PDE tooling and Tycho, because the build dependencies are kept as close as possible to the runtime dependencies (see for example here). And using tools like Bndtools you don’t need these workarounds. In principle I agree with that. But unfortunately it is not possible (or only hard to achieve) to use Bndtools for Eclipse application development, mainly because Eclipse features, applications and products are not known in plain OSGi. Therefore the feature-based update mechanism of p2 is not usable either. But I don’t want to start the discussion PDE vs. Bndtools. That is worth another post (or series of posts).

In my opinion the real issue in the above scenario, and therefore also in org.eclipse.swt, is the wrong usage of fragments. Why is there a host bundle that only contains the OSGi meta information? After thinking a while about this, I realized that the only reason can be laziness! Users want to use Require-Bundle instead of configuring the several needed Import-Package entries. IMHO this is the only reason that the org.eclipse.swt bundle with the multiple platform dependent fragments exists.

Let’s try to think about possible changes. Make every platform-dependent fragment a bundle and configure the Export-Package manifest header for every bundle. That’s it on the provider side. If you wonder about the Eclipse-PlatformFilter manifest header: that works for bundles as well as for fragments, so we don’t lose anything here. On the consumer side we need to ensure that Import-Package is used instead of Require-Bundle. This way we declare dependencies on the functionality, not on the bundle where the functionality originated. That’s all! Using this approach, the workarounds mentioned above can be removed. PDE and Tycho work as intended, as they can simply resolve bundle dependencies. I have to admit that I’m not sure about p2 with regard to the platform-dependent bundles. I would need to check this separately.
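On the consumer side this boils down to a manifest that declares the package dependency instead of a bundle dependency (a sketch; the version range is an assumption):

```
Bundle-SymbolicName: org.fipro.consumer
Import-Package: org.fipro.host;version="[1.0.0,2.0.0)"
```

The resolver is then free to wire the import to whichever bundle exports org.fipro.host on the current platform.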

Conclusion

Having a look at the two initial statements about fragments

  • a fragment is an extension to an existing bundle
  • fragments are used to customize a bundle

it is IMHO wrong to make API publicly available from a fragment. These statements could even be modified to become the following:

  • a fragment is an optional extension to an existing bundle

Having that statement in mind, things get even clearer when thinking about fragments. Here is another example to strengthen my statement. Suppose you have a host bundle that already exports a package org.fipro.host. Now you have a fragment that adds an additional public class to that package, and in a consumer bundle that class is used. Using Bndtools or the workarounds for PDE and Tycho shown above, this should compile and build fine. But what if the fragment is not deployed at runtime? Since there is no constraint for the consumer bundle that would identify the missing fragment, the consumer bundle would still start. And you will get a ClassNotFoundException at runtime.

Personally I think that every time a direct dependency to a fragment is introduced, there is something wrong.

There might be exceptions to that rule. One could be a custom logging appender that needs to be accessible in other places, e.g. for programmatic configuration. As the logging appender needs to be in the same classloader as the logging framework (e.g. org.apache.log4j), it needs to be provided via a fragment. And to access it programmatically, a direct dependency to the fragment is needed. But honestly, even in such a case a direct dependency to the fragment can be avoided with a good module design. Such a design could, for example, make the appender an OSGi service. The service interface would be defined in a separate API bundle and the programmatic access would be implemented against the service interface. Therefore no direct dependency to the fragment would be necessary.

As I struggled for several days searching for solutions to fragment dependency issues, I hope this post can help others solve such issues. Basically my solution is to get rid of all fragments that export API and either make them separate bundles or let them provide their API via services.

If someone with a deeper knowledge in OSGi ever comes by this post and has some comments or remarks about my statements, please let me know. I’m always happy to learn something new or getting new insights.



The buzz around Eclipse Che

by Ian Skerrett at February 08, 2016 10:56 PM

Just over two weeks ago the Eclipse Che project released a beta version of Che 4.0. We published an article introducing Eclipse Che in our Eclipse Newsletter so readers can learn more about the highlights of Che.

The feedback in the community has been pretty exciting to watch. On twitter, people are certainly creating a buzz about the future of the IDE.

 

InfoWorld is calling Eclipse Che the launch of the cloud IDE revolution.

The Eclipse Che GitHub repo has 1500 stars and 200 forks.

There have been over 100,000 downloads of the Che beta so people are trying it out.

The buzz is certainly growing around Eclipse Che. At EclipseCon in March you will be able to experience Eclipse Che first hand, including Tyler Jewell’s keynote address on the Evolution and Future of the IDE. If you are interested in the future of cloud IDEs then plan to attend EclipseCon.





5 open source IoT projects to watch in 2016

by Benjamin Cabé at February 08, 2016 09:57 PM

The IoT industry is slowly but steadily moving from a world of siloed, proprietary solutions, to embracing more and more open standards and open source technologies.
What’s more, the open source projects for IoT are becoming more and more integrated, and you can now find one-stop-shop open source solutions for things like programming your IoT micro controller, or deploying a scalable IoT broker in a cloud environment.

Here are the Top 5 Open Source IoT projects that you should really be watching this year.

  • #1 – The Things Network

    LP-WAN technologies are going to be a hot topic for 2016. It's unclear who will win, but the availability of an open-source ecosystem around them is going to be key. The Things Network is a crowdsourced worldwide community for bringing LoRaWAN to the masses. Most of their backend is open source and on GitHub.
Note: you can click on the pictures to learn more!

 

What about you? What are the projects you think are going to make a difference in the months to come?

In case you missed it, the upcoming IoT Summit, co-located with EclipseCon North America, is a great opportunity for you to learn about some of the projects mentioned above, so make sure to check it out!



JavaScript Performance V8 vs Nashorn (for Typescript Language Service)

by Tom Schindl at February 08, 2016 02:18 PM

Over the weekend I worked on my API to interface with the TypeScript language service from my Java code.

While the initial version I developed some months ago used the “tsserver” to communicate with the LanguageService, I decided to rewrite that and interface with the service directly (in memory or through an extra process).

For the in-memory version I implemented two possible ways to load the JavaScript sources and call them:

  • Nashorn
  • V8(with the help of j2v8)

I already expected Nashorn to be slower than V8, but after implementing a small (non-scientific) performance sample, the numbers show that Nashorn is between 2 and 4 times slower than V8 (there’s only one call that is faster in Nashorn).

The sample code looks like this:

public static void main(String[] args) {
  try {
    System.err.println("V8");
    System.err.println("============");
    executeTests(timeit("Bootstrap", () -> new V8Dispatcher()));
    System.err.println();
    System.err.println("Nashorn");
    System.err.println("============");
    executeTests(timeit("Nashorn", () -> new NashornDispatcher()));
  } catch (Throwable e) {
    e.printStackTrace();
  }
}

private static void executeTests(Dispatcher dispatcher) throws Exception {
  timeit("Project", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "createProject", String.class, "MyProject").get());

  timeit("File", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "addFile", String.class, "p_0", DispatcherPerformance.class.getResource("sample.ts")).get());

  timeit("File", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "addFile", String.class, "p_0", DispatcherPerformance.class.getResource("sample2.ts")).get());

  timeit("Outline", () -> dispatcher.sendMultiValueRequest(
    "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class, "p_0", "f_0").get());

  timeit("Outline", () -> dispatcher.sendMultiValueRequest(
    "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class, "p_0", "f_1").get());
}

Running this provides the following numbers:

V8
============
Bootstrap : 386
Project : 72
File : 1
File : 0
Outline : 40
Outline : 10

Nashorn
============
Nashorn : 4061
Project : 45
File : 29
File : 2
Outline : 824
Outline : 39

The important numbers to compare are:

  • Bootstrap: ~400ms vs ~4000ms
  • 2nd Outline: ~10ms vs ~40ms
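The timeit helper isn't shown in the post; a plausible reconstruction, assuming it simply runs the task, prints the label together with the elapsed milliseconds in the "<label> : <ms>" format of the output above, and hands back the task's result (the Supplier-based signature is my assumption):

```java
import java.util.function.Supplier;

public class TimeitDemo {
    // Runs the task, prints "<label> : <elapsed ms>" on stderr
    // (matching the format of the numbers listed above) and returns the result.
    static <T> T timeit(String label, Supplier<T> task) {
        long start = System.nanoTime();
        T result = task.get();
        System.err.println(label + " : " + (System.nanoTime() - start) / 1_000_000);
        return result;
    }

    public static void main(String[] args) {
        int answer = timeit("Demo", () -> 6 * 7);
        System.out.println(answer); // prints 42 after the timing line
    }
}
```

Such a helper makes each measurement a one-liner, which is why the sample code above reads as a flat list of timed calls.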

So performance indicates that the service should go with j2v8, but requiring that as a hard dependency has the following disadvantages:

  • you need to ship different native binaries for each OS you want to run on
  • you need to ship V8, which might or might not be a problem

So the strategy internally is: if j2v8 is available we use V8, and if not we fall back to the slower Nashorn. A strategy I would probably recommend for your own projects as well.
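A minimal sketch of that fallback, assuming the j2v8 binding can be detected by probing for its entry-point class (com.eclipsesource.v8.V8 is the class name j2v8 ships; adjust if your version differs):

```java
public class EngineSelector {
    // Picks "v8" when the j2v8 binding is on the classpath, otherwise "nashorn".
    // Probing with Class.forName avoids a compile-time dependency on j2v8.
    public static String selectEngine() {
        try {
            Class.forName("com.eclipsesource.v8.V8");
            return "v8";
        } catch (ClassNotFoundException e) {
            return "nashorn";
        }
    }

    public static void main(String[] args) {
        System.out.println("Selected engine: " + selectEngine());
    }
}
```

The dispatcher construction (V8Dispatcher vs. NashornDispatcher) can then branch on this result, keeping j2v8 an optional dependency.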

If there are any Nashorn experts around, feel free to help me fix my implementation.




Branch by Abstraction and OSGi

by David Bosschaert (noreply@blogger.com) at February 08, 2016 10:02 AM

Inspired by my friend Philipp Suter, who pointed me at this Wired article http://www.wired.com/2016/02/rebuilding-modern-software-is-like-rebuilding-the-bay-bridge which relates to Martin Fowler's Branch by Abstraction, I was thinking: how would this work in an OSGi context?

Leaving aside the remote nature of the problem for the moment, let's focus on the pure API aspect here. Whether remote or not is really orthogonal... I'll work through this with example code that can be found here: https://github.com/coderthoughts/primes

Let's say you have an implementation to compute prime numbers:
public class PrimeNumbers {
  public int nextPrime(int n) {
    // computes next prime after n - see
    // https://github.com/coderthoughts/primes for details
    return p;
  }
}
And a client program that regularly uses the prime number generator. I have chosen a client that runs in a loop to reflect a long-running program, similar to a long-running process communicating with a microservice:
public class PrimeClient {
  private PrimeNumbers primeGenerator = new PrimeNumbers();
  private void start() {
    new Thread(() -> {
      while (true) {
        System.out.print("First 10 primes: ");
        for (int i=0, p=1; i<10; i++) {
          if (i > 0) System.out.print(", ");
          p = primeGenerator.nextPrime(p);
          System.out.print(p);
        }
        System.out.println();
        try { Thread.sleep(1000); } catch (InterruptedException ie) {}
      }
    }).start();
  }
 
  public static void main(String[] args) {
    new PrimeClient().start();
  }
}
If you have the source code cloned or forked using git, you can run this example easily by checking out the stage1 branch and using Maven:
.../primes> git checkout stage1
.../primes> mvn clean install
... maven output
[INFO] ------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------
Then run it from the client submodule:
.../primes/client> mvn exec:java -Dexec.mainClass=\
org.coderthoughts.primes.client.PrimeClient
... maven output
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
... and so on ...
Ok, so our system works. It keeps printing out prime numbers, but as you can see there is a bug in the output. We also want to be able to replace the implementation with another one in the future. This is what the Branch by Abstraction pattern is about.

In this post I will look at how to do this with OSGi Services. OSGi Services are just POJOs registered in the OSGi Service Registry. OSGi Services are dynamic: they can come and go, and OSGi service consumers react to these changes dynamically, as we'll see. In the following few steps we'll change the implementation into an OSGi Service. Then we'll update the service at runtime to fix the bug above, without even stopping the service consumer. Finally, we'll replace the service implementation with a completely different implementation, again without stopping the client.

Turn the application into OSGi bundles

We'll start by turning the program into an OSGi application that contains 2 bundles: the client bundle and the impl bundle. We'll use the Apache Felix OSGi Framework and OSGi Declarative Services (DS), which provides a nice dependency injection model for working with OSGi Services.

You can see all this on the git branch called stage2:
.../primes> git checkout stage2
.../primes> mvn clean install
The client code is quite similar to the original client, except that it now contains some annotations to instruct DS to start and stop it. Also, the PrimeNumbers class is now injected via the @Reference annotation instead of constructed directly. The GREEDY policyOption instructs the injector to re-inject if a better match becomes available:
@Component
public class PrimeClient {
  @Reference(policyOption=ReferencePolicyOption.GREEDY)
  private PrimeNumbers primeGenerator;
  private volatile boolean keepRunning = false;
 
  @Activate
  private void start() {
    keepRunning = true;
    new Thread(() -> {
      while (keepRunning) {
        System.out.print("First 10 primes: ");
        for (int i=0, p=1; i<10; i++) {
          if (i > 0) System.out.print(", ");
          p = primeGenerator.nextPrime(p);
          System.out.print(p);
        }
        System.out.println();
        try { Thread.sleep(1000); } catch (InterruptedException ie) {}
      }
    }).start();
  }
 
  @Deactivate
  private void stop() {
    keepRunning = false;
  }
}
The prime generator implementation code is the same except for an added annotation. We register the implementation class in the Service Registry so that it can be injected into the client:
@Component(service=PrimeNumbers.class)
public class PrimeNumbers {
  public int nextPrime(int n) {
    // computes next prime after n
    return p;
  }
}
As it's now an OSGi application, we run it in an OSGi framework. I'm using the Apache Felix Framework version 5.4.0, but any other OSGi R6 compliant framework will do.
> java -jar bin/felix.jar
g! start http://www.eu.apache.org/dist/felix/org.apache.felix.scr-2.0.2.jar
g! start file:/.../clones/primes/impl/target/impl-0.1.0-SNAPSHOT.jar
g! install file:/.../clones/primes/client/target/client-0.1.0-SNAPSHOT.jar
Now you should have everything installed that you need:
g! lb
START LEVEL 1
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (5.4.0)|5.4.0
    1|Active     |    1|Apache Felix Bundle Repository (2.0.6)|2.0.6
    2|Active     |    1|Apache Felix Gogo Command (0.16.0)|0.16.0
    3|Active     |    1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
    4|Active     |    1|Apache Felix Gogo Shell (0.10.0)|0.10.0
    5|Active     |    1|Apache Felix Declarative Services (2.0.2)|2.0.2
    6|Active     |    1|impl (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT
    7|Installed  |    1|client (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT
We can start the client bundle:
g! start 7
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
... and so on ...
You can now also stop the client:
g! stop 7
Great - our OSGi bundles work :)
Now we'll do what Martin Fowler calls creating the abstraction layer.

Introduce the Abstraction Layer: the OSGi Service

Go to the branch stage3 for the code:
.../primes> git checkout stage3
.../primes> mvn clean install
The abstraction layer for the Branch by Abstraction pattern is provided by an interface that we'll use as a service interface. This interface is in a new maven module that creates the service OSGi bundle.
public interface PrimeNumberService {
    int nextPrime(int n);
}
We'll turn our prime number generator into an OSGi Service. The only difference here is that our PrimeNumbers implementation now implements the PrimeNumberService interface. Also, the @Component annotation does not need to declare the service in this case: because the component implements an interface, it will automatically be registered as a service under that interface:
@Component
public class PrimeNumbers implements PrimeNumberService {
    public int nextPrime(int n) {
      // computes next prime after n
      return p;
    }
}
Run everything in the OSGi framework. The result is still the same but now the client is using the OSGi Service:
g! lb
START LEVEL 1
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (5.4.0)|5.4.0
    1|Active     |    1|Apache Felix Bundle Repository (2.0.6)|2.0.6
    2|Active     |    1|Apache Felix Gogo Command (0.16.0)|0.16.0
    3|Active     |    1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
    4|Active     |    1|Apache Felix Gogo Shell (0.10.0)|0.10.0
    5|Active     |    1|Apache Felix Declarative Services (2.0.2)|2.0.2
    6|Active     |    1|service (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
    7|Active     |    1|impl (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
    8|Resolved   |    1|client (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
g! start 8
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
You can introspect your bundles too and see that the client is indeed wired to the service provided by the service implementation:
g! inspect cap * 7
org.coderthoughts.primes.impl [7] provides:
-------------------------------------------
...
service; org.coderthoughts.primes.service.PrimeNumberService with properties:
   component.id = 0
   component.name = org.coderthoughts.primes.impl.PrimeNumbers
   service.bundleid = 7
   service.id = 22
   service.scope = bundle
   Used by:
      org.coderthoughts.primes.client [8]
Great - now we can finally fix that annoying bug in the service implementation: it missed 2 as a prime! While we're doing this we'll just keep the bundles in the framework running...

Fix the bug in the implementation without stopping the client

The prime number generator is fixed in the code in stage4:
.../primes> git checkout stage4
.../primes> mvn clean install
It's a small change to the impl bundle. The service interface and the client remain unchanged. Let's update our running application with the fixed bundle:
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
g! update 7 file:/.../clones/primes/impl/target/impl-1.0.1-SNAPSHOT.jar
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
Great - finally our service is fixed! And notice that the client did not need to be restarted! The DS injection, via the @Reference annotation, handles all of the dynamics for us. The client code simply uses the service as a POJO.
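If the bug was that 2 was never considered as a candidate, the fix boils down to testing every integer after n. This is my own sketch of a corrected implementation, not the actual stage4 diff:

```java
interface PrimeNumberService {
  int nextPrime(int n);
}

class PrimeNumbers implements PrimeNumberService {
  // FIXED: every candidate after n is tested, so nextPrime(1) now returns 2.
  public int nextPrime(int n) {
    for (int c = n + 1; ; c++) {
      if (isPrime(c)) {
        return c;
      }
    }
  }

  private static boolean isPrime(int x) {
    if (x < 2) return false;
    for (int d = 2; d * d <= x; d++) {
      if (x % d == 0) return false;
    }
    return true;
  }
}
```

Starting from p=1, repeated calls now yield 2, 3, 5, 7, ... as in the console output above.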

The branch: change to an entirely different service implementation without client restart

Being able to fix a service without even restarting its users is already immensely useful, but we can go even further. Using the same mechanism, I can write an entirely new and different service implementation and migrate the client to it, without restarting the client.

This code is on the branch stage5 and contains a new bundle, impl2, that provides an implementation of the PrimeNumberService that always returns 1.
.../primes> git checkout stage5
.../primes> mvn clean install
While the impl2 implementation obviously does not produce correct prime numbers, it does show how you can completely change the implementation. In the real world a totally different implementation could work with a different back-end, use a new algorithm, be a service migrated from a different department, etc.

Alternatively, you could create a façade service implementation that round-robins across a number of back-end services, or that selects a backing service based on the features the client should be getting.
Either way, the solution always ends up being an alternative service in the Service Registry that the client can dynamically switch to.
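Such a round-robin façade can be sketched in plain Java; the class name is hypothetical and the DS wiring of the delegates is omitted for brevity:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

interface PrimeNumberService {
  int nextPrime(int n);
}

// Hypothetical façade: each call is delegated to the next backing service
// in round-robin order. In a real deployment the delegates would be
// injected by DS rather than passed to the constructor.
class RoundRobinPrimeService implements PrimeNumberService {
  private final List<PrimeNumberService> backends;
  private final AtomicInteger next = new AtomicInteger();

  RoundRobinPrimeService(List<PrimeNumberService> backends) {
    this.backends = backends;
  }

  @Override
  public int nextPrime(int n) {
    int idx = Math.floorMod(next.getAndIncrement(), backends.size());
    return backends.get(idx).nextPrime(n);
  }
}
```

Because the façade itself implements PrimeNumberService, it can be registered in the Service Registry like any other implementation and the client never knows the difference.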

So let's start that new service implementation:
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
g! start file:/.../clones/primes/impl2/target/impl2-1.0.0-SNAPSHOT.jar
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
g! stop 7
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
Above you can see that when you install and start the new bundle, initially nothing happens. At this point both services are installed at the same time. The client is still bound to the original service: it is still there, and there is no reason to rebind, as the new service is no better a match than the original. But when the bundle that provides the initial service is stopped (bundle 7), the client switches over to the implementation that always returns 1. This switchover can happen at any point, even halfway through the production of the list, so you might even be lucky enough to see something like:
First 10 primes: 2, 3, 5, 7, 11, 13, 1, 1, 1, 1
I hope I have shown that OSGi services provide an excellent mechanism for implementing the Branch by Abstraction pattern, and even make it possible to switch between suppliers without stopping the client!

In the next post I'll show how we can add aspects to our services, still without modifying or even restarting the client. These can be useful for debugging, tracking or measuring how a service is used.

PS - Oh, and on the remote thing: this will work just as well locally or remotely. Use OSGi Remote Services to turn your local service into a remote one... For available Remote Services implementations see https://en.wikipedia.org/wiki/OSGi_Specification_Implementations#100:_Remote_Services

With thanks to Carsten Ziegeler for reviewing and providing additional ideas.

by David Bosschaert (noreply@blogger.com) at February 08, 2016 10:02 AM

Bug 75981 is fixed!

February 05, 2016 11:00 PM

Like many of my Eclipse stories, it starts during a coffee break.

  • Have you seen the new TODO template I have configured for our project?

  • Yes. It is nice…​

2016-02-06_todo-template-old

  • But I hate having to set the date manually.

  • I know but it is not possible with Eclipse.

  • …​

A quick search on Google pointed me to Bug 75981. I was not the only one looking for a solution to this issue:

By analyzing the Bugzilla history I noticed that two contributors had already started to work on this (a long time ago), and the latest patch never got any feedback. I reworked the last proposal... and...

I am happy to tell you that you can now do the following:

2016-02-06_templates-preferences

Short description of the possibilities:

  • As before you can use the date variable with no argument. Example: ${date}

  • You can use the variable with additional arguments. In this case you will need to name the variable (since you are not reusing the date somewhere else, the name of the variable doesn’t matter). Example: ${mydate:date}

    • The first parameter is the date format. Example: ${d:date('yyyy-MM-dd')}

    • The second parameter is the locale. Example: ${maDate:date('EEEE dd MMMM yyyy HH:mm:ss Z', 'fr')}
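Under the hood, these two arguments map onto Java's SimpleDateFormat pattern and Locale. The French example above is roughly equivalent to this standalone snippet (class name mine):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateTemplateDemo {
  public static void main(String[] args) {
    // Equivalent of ${maDate:date('EEEE dd MMMM yyyy HH:mm:ss Z', 'fr')}
    SimpleDateFormat fmt = new SimpleDateFormat(
        "EEEE dd MMMM yyyy HH:mm:ss Z", Locale.forLanguageTag("fr"));
    fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
    // Prints something like "samedi 06 février 2016 08:00:00 +0000"
    System.out.println(fmt.format(new Date()));
  }
}
```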

Back to our use case, it now works as expected:

2016-02-06_todo-template-new

Do not hesitate to try the feature and to report any issue you find. The fix is included in the M5 milestone of Eclipse Neon. You can download this version here:

This experiment was also a great opportunity for me to see how much the development process at Eclipse has improved:

  • With Eclipse Oomph (a.k.a. the Eclipse Installer) it is possible to set up the workspace to work on "Platform Text" very quickly

  • With Gerrit it is much easier for me (a simple contributor) to work with the committers of the project (propose a patch, discuss each line, push a new version, rebase on top of HEAD...)

  • With the Maven build, the build is reproducible (I never tried to build the platform with the old PDE Build, but I believe that was not possible for somebody like me)

Where I spent most of my time:

  • Analyzing the proposed patches and existing feedback in Bugzilla

  • Figuring out how I could add some unit tests (for the existing behaviour and for the new use cases).

This was a great experience for me and I am really happy to have contributed this fix.


February 05, 2016 11:00 PM

Jubula 8.2.2 has been released

February 05, 2016 10:37 AM

Our first official Jubula standalone release of the year is 8.2.2 - and it's got a lot of exciting new features!

From "beta" to "official"

Just before Christmas, we released a Jubula beta version that had some pretty awesome stuff in it (I'll get to what it is in a moment). I was so excited about the new features that we decided to add a couple more that were in progress, and then release it as an official version. That version is 8.2.2, and it can now be downloaded from the testing portal.

The highlights

The short version is that everything you've seen in beta releases since the end of October 2015 is now in the release. The longer version is much more exciting.

Copy and paste

I actually never thought I'd write these lines, but we have indeed added copy and paste support to the Jubula ITE. You can now copy Test Cases, Test Suites, Test Steps, and Event Handlers between editors. Why now? Well, I have been listening to the people who have requested this over the years, and we have a new team member who needed a nice starter topic to work on. I still personally think it's evil ;-) - you all know by now that we'd much prefer you to structure tests to be reusable and readable. Nevertheless, we hope you enjoy the new feature :-)

Time reduction when saving

We've moved our completeness checks to a background job, so saving things doesn't block your continuing work as it had done previously.

Set Object Mapping Profile for individual technical names

Our standard object mapping profile is pretty amazing - it's heuristic, so even unnamed components can be located in an application. Sometimes though, you end up having to remap individual items more frequently and you ask the developers to name them. Now it's possible to specify for individual technical names that the component recognition for this name should only be based on its name. That way, you don't have to name everything, but can use the "Given Names" profile for technical names you know are set. This function is also available in the Client API.

New Test Steps for executing Java methods in the application context

Sometimes you just want to directly call a method you know is available in your application, or for a specific component. The new invoke method actions let you do just that. You can specify the class name and method name, as well as parameters - and you can execute the action either on the application in general or on specific components.

Multi-line comments in editors

There is a new option to add a comment node in the Test Case Editor and Test Suite Editor. The comments are shown directly in the editor, and you can use them to comment following nodes. This is in contrast to the descriptions, which are only shown for a selected node.

New dbtool options

The dbtool, for executing actions directly on the database, has two new options. You can now delete all test result summaries (including details) for a specific time or project, and you can just delete details for test result summaries for a time frame or project.

Oomph setup

In case you missed it, there is also an Oomph setup for Jubula.

As you can see, it's been a busy few months. Development continues, and our next beta release will contain updates to the JaCoCo support and HTML support, amongst other things.

Happy testing!


February 05, 2016 10:37 AM

Vert.x 3.2.1 is released !

by cescoffier at February 05, 2016 12:00 AM

We are pleased to announce the release of Vert.x 3.2.1!

The release contains many bug fixes and a ton of small improvements, such as future composition, improved Ceylon support, Stomp virtual host support, performance improvements… Full release notes can be found here:

https://github.com/vert-x3/wiki/wiki/3.2.1---Release-Notes

Breaking changes are here:

https://github.com/vert-x3/wiki/wiki/3.2.1---Breaking-Changes

The event bus client using the SockJS bridge is available from NPM, Bower, and as a WebJar:

Docker images are also available on Docker Hub. The Vert.x distribution is also available from SDKMan.

Many thanks to all the committers and community whose contributions made this possible.

Next stop is Vert.x 3.3.0 which we hope to have out in May 2016.

The artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

Happy coding !


by cescoffier at February 05, 2016 12:00 AM

Presentation: Developing Cloud-native Applications with the Spring Tool Suite

by Kris De Volder, Martin Lippert at February 04, 2016 10:30 PM

Kris De Volder and Martin Lippert show how to work effectively with Spring projects in Eclipse and the Spring Tool Suite (STS). They demo all the latest enhancements in the tools including features like much smarter property file editing, as well as new features in the Eclipse 4.5 (Mars) platform.

By Kris De Volder, Martin Lippert

by Kris De Volder, Martin Lippert at February 04, 2016 10:30 PM

New in the RAP Incubator: Charts with d3 and nvd3

by Ralf Sternberg at February 04, 2016 11:57 AM

Some time ago we’ve created a draft for a Chart widget for RAP, based on the famous D3.js. Together with one of our partners, we decided to carry this work forward and make it available in the RAP Incubator.

Charts Examples

D3 is a very flexible framework to turn data into dynamic charts. Looking at their examples, it’s amazing how many different types of charts there are. Whatever diagram you can think of, it can probably be done with d3.

Such great freedom comes at the price of some complexity. When all you need is a simple bar chart with two axes, you may not want to first dive into the theory of scales, domains, layouts, and selections. D3 offers a lot of tools, but no ready-to-use chart types. Happily, there are charting libraries built on top of d3; in fact, there are dozens of them.

We’ve decided to implement some basic chart widgets for the most common chart types based on nvd3, a library that provides good-looking charts for most of the common needs. Currently, there is a PieChart, a BarChart, and a LineChart widget with a basic set of properties, that is going to be extended. But we also kept the base classes, Chart and NvChart extensible to allow you to implement your own chart widgets for other d3 or nvd3 chart types with very little effort.

On the application side, creating a simple bar chart is fairly simple:

BarChart barChart = new BarChart( parent, SWT.NONE );
barChart.setItems(
  new BarItem( 759.3, "Chrome", blue ),
  new BarItem( 633.5, "Firefox", orange ),
  new BarItem( 384.6, "Edge", green )
);

To keep things lightweight, the items are just data objects, not widgets. Colors can be specified as RGB objects. Since the widget is in incubation, the API may still change slightly while we (and you) gather more insight.

The widget is now available in the RAP Incubator; it works with RAP 3.0 and 3.1. We hope you like it and we're happy to hear what you think.




by Ralf Sternberg at February 04, 2016 11:57 AM

Eclipse Community Awards | Vote for a Deserving Project or Individual

February 03, 2016 09:32 AM

The Eclipse Community Awards voting deadline is Monday, February 8. Vote now!

February 03, 2016 09:32 AM

Beta2 for Eclipse Mars.2

by akazakov at February 02, 2016 04:51 PM

The second Beta of JBoss Tools 4.3.1 and JBoss Developer Studio 9.1.0 for our maintenance Mars release is available.

jbosstools jbdevstudio blog header
Remember that since JBoss Tools 4.3.0 we require Java 8 for installing and using JBoss Tools. We still support developing and running applications using older Java runtimes. See more in the Beta1 blog.

What is New?

Full info is at this page. Some highlights are below.

Eclipse Mars.2

JBoss Tools and JBoss Developer Studio now target the latest Eclipse Mars.2 as their running platform, with many issues fixed compared to the previous Mars.1 release.

OpenShift 3

More than 60 issues targeting OpenShift 3 support have been fixed in this release. The OpenShift 3 integration was introduced as a technology preview feature in JBDS 9.0.0.GA but will graduate to a supported feature in the upcoming JBDS 9.1.0.GA release.

Incremental publishing

The OpenShift 3 server adapter now respects the auto-publish settings as declared in the server editor, giving the user the option to automatically publish on workspace changes, build events, or only when the user requests it. The server adapter is also able to incrementally deploy the server’s associated project with a quick call to rsync, ensuring minimal over-the-wire transfers and a fast turnaround for testing your project.

Support for Java EE projects

Experimental support for Java EE projects (Web and EAR) is now available. When the workspace project associated with the OpenShift 3 server is a Dynamic Web or Enterprise Application project, the server adapter builds an exploded version of the archive in a temporary local directory and replaces the version deployed on the remote OpenShift pod. The Pod Deployment Path is now inferred automatically from the image stream tags on the remote pod. A .dodeploy marker file is created so that the remote server redeploys the module if necessary (for EAP/WildFly servers that support it).

Support for LiveReload

The new tooling includes LiveReload support for OpenShift 3 server adapters. It is accessible from the Show In > Web Browser via LiveReload Server menu. When a file is published to the server adapter, the browser connected to the LiveReload server instance will automatically refresh.

openshift3-livereload-menu

This is particularly effective in conjunction with the Auto Publish mode for the OpenShift 3 server adapters, as all it takes to reload a web resource is saving the file being edited (Ctrl+S, or Cmd+S on Mac).

Simplified OpenShift Explorer view

Previously, the OpenShift 3 resources representation exposed a large amount of unnecessary information about OpenShift. The Explorer view has now been simplified, made much more robust, and focuses on an application-centric view.

simplified-openshift3-view

Everything that is no longer displayed directly under the OpenShift Explorer is accessible in the Properties view.

Red Hat Container Development Kit server adapter

The Red Hat Container Development Kit (CDK) server adapter now provides menus to quickly access the Docker Explorer and the OpenShift Explorer. Right-click on a running CDK server adapter and select an option in the Show In menu:

cdk-server-show-in-menus

Forge Tools

Forge Runtime updated to 3.0.0.Beta3

The included Forge runtime is now 3.0.0.Beta3. Read the official announcement here.

Stack support

Forge now supports choosing a technology stack when creating a project:

stack-new-project

In addition to setting up your project, choosing a stack automatically hides some input fields in the existing wizards, such as the JPA Version in the JPA: Setup wizard:

What is Next

We are approaching the final release of our first maintenance update for Eclipse Mars.2. It's time to polish things up and prepare a release candidate.

Enjoy!

Alexey Kazakov


by akazakov at February 02, 2016 04:51 PM

EclipseCon France 2016 | Call for Papers

February 02, 2016 07:15 AM

Submit your talk for EclipseCon France taking place in Toulouse on June 7-9, 2016.

February 02, 2016 07:15 AM

ANCIT's Upcoming Classes in February 2016

by Its_Me_Malai (noreply@blogger.com) at February 02, 2016 02:02 AM

This February it's Modeling Month @ ANCIT. We are offering a series of public classroom sessions with advanced one-day workshops on various Eclipse topics.


These classes are open for registration and available to non-Bangalore associates through online training. Interested in our classes? Please feel free to contact us at training@ancitconsulting.com.

by Its_Me_Malai (noreply@blogger.com) at February 02, 2016 02:02 AM

Swing by to Talk about SWT and JavaFX

by waynebeaton at February 01, 2016 06:27 PM

Eclipse SWT support for GTK3 has improved dramatically since the Eclipse Mars release: the latest milestone and nightly builds work and look great on my Fedora 22 box. There’s still some work to do, but the progress since Mars is impressive. Fixing SWT/GTK issues requires a special set of skills: if you have those skills, you might find our work item to Improve GTK 3 Support interesting.

Or, if you just want to see how you can help, drop by the EclipseCon 2016 Hackathon and we can try to hammer out a fix or two together. A few actual Eclipse Platform committers will be at the conference, so I'm using "we" in the royal sense.

If JavaFX is more to your tastes, consider attending the two EclipseCon sessions being led by the Eclipse community's JavaFX expert and Eclipse e(fx)clipse open source project lead, Tom Schindl: Develop Slick Rich Client Applications with Eclipse 4 on JavaFX, and Smart, slim and good looking – Building Smart Editors with Eclipse and JavaFX.

emf_treeview_dnd

EMF Edit UI for JavaFX allows you to view your EMF models in JavaFX TextFields, ListViews, TreeViews and TableViews with only a few lines of code.
(from the project website)

Tom can help you get started building JavaFX applications using Eclipse; he’s also the best person to help you build Eclipse Rich Client Platform applications, editors, and IDEs with JavaFX as a base.

You can also learn about some fascinating work being done in the Eclipse Integrated Computational Environment (ICE) project in the form of Adventures in 3D with Eclipse ICE and JavaFX (ICE leverages the JavaFX/SWT integration layer, FXCanvas). Speakers Tom McCrary and Robert Smith, as members of our Science Working Group, are involved in some very cool new work being done at the Eclipse Foundation.

2015-p03234

Alex McCaskey and project lead Jay Billings running ICE on ORNL’s EVEREST Powerwall
(from Jay’s post from July of last year).

See you at EclipseCon!


EclipseCon NA 2016



by waynebeaton at February 01, 2016 06:27 PM

Eclipse Foundation Announces Ericsson as a Strategic Member

February 01, 2016 04:38 PM

We're pleased to announce that Ericsson has become a strategic member of the Eclipse Foundation.

February 01, 2016 04:38 PM

Filling up the Dance Card – EclipseCon 2016 Reston VA

by Doug Schaefer at February 01, 2016 04:23 PM

I can tell EclipseCon is getting close by the level of my panic. Five weeks to go. And it’s getting pretty tight.

I have a whole bunch of things I want to demo and hopefully get the community excited about. The preview of the Arduino C++ IDE, a showcase for the new build and launch systems I've been working on, is nearing its end on the Mars stream, and it's time to get it working on Neon. I have a talk on that on Tuesday entitled "Build Arduino Apps like a Pro". It is a really cool application of all the great work we've done on the CDT over the years, and I have some cool places I want to take it.

I also have a very ambitious talk on Thursday called simply “Eclipse, the IDE for IoT”. In that talk I will demo and describe all the Eclipse IDE plug-ins you can use to build an IoT system, from an Arduino to the cloud services that analyze the data coming from that Arduino, to the Web and Android clients you use to view the results of that analysis. This is what has always driven my work and passion for the Eclipse IDE. It’s a truly Integrated development environment with a capital ‘I’ that can do everything I need to build systems. And I’m proud to show that off to anyone who wants to see it.

Aside from my talks, it's going to be a very busy week. On Monday we have the CDT Summit, where anyone who is interested can join as we talk about CDT issues of the day and work on plans for the future. We're such a diverse group and have built up a family of contributors, not only working on the CDT project itself, but on our siblings in Linux Tools and Parallel Tools, with the occasional discussion about Target Management. I have a good start on my ideas for a new CDT build system that I'll talk about, and how it's used for Qt, CMake, and Arduino. And we have a Hackathon scheduled Tuesday night where we should be able to get deep and dirty into the code.

For me, the Internet of Things Summit on Tuesday and Wednesday is of particular interest. I work for an RTOS company, so of course we have an interest in IoT, since our customers do. Actually, we have customers who have been doing IoT for years; we just now have a label for what they've been doing. But I'm interested to hear what kinds of things people are building that fit the IoT label, and what kinds of open platforms they are using to build those things that are on the Internet. There's been a lot of hype around IoT, and we need to filter it and see what people are really doing with it.

Thursday is Eclipse Council day. I am a member of both the Architecture Council and the Planning Council. These councils have a tricky job: you have a grand title, but you don't really have much power over what projects do. We're really there to help projects, and much of our discussion will be about how to help them better and, through that, make Eclipse better. Our reputation is based on the technology the projects produce, and we really need to find ways to make sure the quality of that technology is world class.

But the highlight of every EclipseCon isn't just the sessions and summits. It's the people. It's the chance to talk about things you care about with other people who also care about those things. It's a chance to put faces to the e-mail addresses you see on the mailing lists and in Bugzilla. It's a chance to feel part of the bigger team that is the Eclipse community. We are a friendly bunch, often found at the bar in the evenings (well, OK, always found at the bar in the evenings). If you get a chance, please come down, stop me or anyone else you know in the group, say hi, and be a part of our great community.


by Doug Schaefer at February 01, 2016 04:23 PM