Published update to my Practical Eclipse book

by Dinis Cruz (noreply@blogger.com) at February 12, 2016 07:57 PM

You can get the latest version from https://leanpub.com/Practical_Eclipse for FREE by choosing the $0 minimum price.

Here is the email I sent to my readers:

    Hi, thanks for being a reader of my Practical Eclipse Plug-in development book. I just released a new version which contains a large number of content and image fixes (it has 294 pages).

    This version is very similar to the previous release, but I'm planning to make big changes in the coming months.

    I would really like your feedback, so please don't hesitate to contact me at dinis.cruz@owasp.org or directly at the GitHub repo that contains all the content and the current Issues: https://github.com/DinisCruz/Book_Practical_Eclipse/issues

    For reference, here is the updated table of contents:

      Leanpub book, originally based on Blog posts
      Change log:
    1. August 2013
      Programming Eclipse in Real-Time (using a ‘Groovy based’ Eclipse Plug-in)
    2. September 2013
      Opening up a native Chrome Browser window inside Eclipse (raw version)
      Injecting HP Fortify Eclipse Plug-in Views into HP’s WebInspect UI
    3. November 2013
      SI Open Sources the Eclipse Plugin-development toolkit that I developed for TeamMentor
      TeamMentor Plugin and Builder v1.5.6 (Source Code and Eclipse Update site)
    4. December 2013
      Installing Eclipse Plugin Builder, accessing Eclipse objects and adding a new Menu Item that opens Owasp.org website
      How to word-wrap a word without breaking it (when using bootstrap)
      Installing, compiling and failing to use DCE VM for Eclipse Plugin development
      Using JRebel to HotSwap new Static String fields in an Eclipse Plugin (without Eclipse restart)
      Adding and using new API methods, that are consumed by an Eclipse Plugin under development (without Eclipse restart)
      Groovy Script to create a view that shows the images available in the ISharedImages class
      Executing Eclipse Plugin JUnit tests in real-time without needing to restart Eclipse (with no mocking)
      XStream “Remote Code Execution” exploit on code from “Standard way to serialize and deserialize Objects with XStream” article
      How to create (using Eclipse) JavaDocs that looks good? (My current approach is not working)
    5. January 2014
      Updating the GitHub repos for the 1.6.0 release of the Eclipse Fortify Plugin
      Saga to sign an eclipse plugin with a code cert
      Fixing Coding Signing issue where Eclipse Plugin didn’t install in Indigo
      Eclipse Groovy script to remove the ‘busy’ image from the WebBrowser Editor
      Viewing Eclipse and SWT objects (Workbench, Display and Shell) using Groovy’s ObjectBrowser and using TeamMentor’s Plugin ObjectBrowser
    6. February 2014
      Creating an Eclipse UI to run AngularJS e2e tests using Karma
      Using Firebase to sync data with a webpage (via Javascript, REST and Firebase Admin panel)
      XSS considerations when developing with Firebase
      Eclipse Groovy REPL script to sync a Browser with file changes (with recursive folder search via   Java’s WatchService)
      Really SIMPLE and clean AngularJS + Firebase example
      Using AngularJS in Eclipse, Part 1) The Basics
    7. March 2014
      Programmatically changing an AngularJS scope variable and adding Firebug Lite to an AngularJs app
      Why doesn’t Eclipse community stand-up more to IntelliJ?

by Dinis Cruz (noreply@blogger.com) at February 12, 2016 07:57 PM

The real benefits of living a connected life

by Benjamin Cabé at February 12, 2016 07:50 PM

It might be a bit late in the year for a write-up on The Next Big Thing for IoT in 2016, but that doesn’t mean I can’t take a moment to step back from the many recent announcements around the consumer Internet of Things and focus on the cornerstone of a truly sustainable IoT ecosystem: Over-the-air Device Management. Of course it is important for the IoT industry at large to agree on the protocols and frameworks that will enable IoT application development, but are we not jumping the gun here? I think we should all be thinking first about how we want to leverage over-the-air management capabilities to really revolutionize the way we interact with the Things around us.

In fact, here are some of the reasons why you need to think of the IoT in terms of how it really changes the way we, as humans, interact with physical devices on a day-to-day basis, before even thinking about all the fancy things you will do with the data you collect, or the smart sensor networks you will build.

Over-the-air provisioning can greatly reduce time-to-market

Do you remember how, just a few decades ago, you would need dozens of floppy disks to set up your brand-new PC? This workflow has greatly improved over the years, and it is now very common to do the initial provisioning of an Ubuntu or OS X system completely over the air, with the only manual steps essentially limited to entering your name into the system. When you think about it, this is incredibly beneficial both to the computer manufacturer, who no longer needs to worry about making sure its machines leave the factory with the latest OS version containing the newest features the market is asking for, and to us as end users, since we always get the most up-to-date system, in terms of available features as well as security patches.

In the context of IoT, now that more and more devices have strong cryptographic capabilities and start including Trusted Platform Modules, it becomes possible to perform a fully secured bootstrap of any IoT device.

Think about it: when it comes to healthcare or automotive, you would not want to have a single doubt that a device is indeed running software that originates from (i.e., is signed by) the manufacturer. A compromised smart thermostat in your living room is one thing; a compromised car is a totally different story. What’s more, being able to delay the moment embedded software is finally put onto a device leads directly to a shorter time-to-market, while allowing a few extra cycles to innovate on the software front before the device reaches its user.

Fighting planned obsolescence thanks to over-the-air upgrades

Over-the-air provisioning is only the first step in providing a seamless experience for end users. What we are starting to witness with, e.g., Tesla is something the car industry has been secretly dreaming of for a century and is now finally able to achieve: your car can be fixed (or better: its performance improved) without requiring you to stop by the repair shop.

Indeed, Tesla takes the concept of decoupling hardware and software to a totally different level. Over the years, the upgrades they deployed went from somewhat minor improvements in the user interface of the on-board computer to much more advanced fixes to the steering or braking system. It is a huge shift for the whole manufacturing industry: hardware design decisions at day one will no longer limit the capabilities of a device over time, and over-the-air updates will allow for actual improvements in safety, stability and overall user experience.

Bypassing some physical constraints we never thought we could

The lifespan of a house is measured in decades, which means that in many cases smart sensors and devices are literally set in concrete. Should a buggy sensor really be condemned forever, when it could be upgraded over the air?

Not only can over-the-air updates help fix faulty hardware, they can also help in situations where you don’t have physical access to the culprit. Technologies like 6LoWPAN allow battery-powered devices to communicate wirelessly for years, so why not just deploy a new PID temperature-control algorithm to that smart thermostat of yours, rather than spending yet another couple hundred dollars on a new one (…just like you did last year, and the year before that!)?

Avoiding vendor lock-in for a truly interoperable IoT

While it is understandable that some vendors may want to lock customers into their platform, I really believe that in the long run the manufacturers who rely on open device management standards and give their users some control over their devices will be the ones that stay ahead of the game.

Would we really expect to buy an electric appliance without being sure that the plug would fit our wall socket? That is exactly what we are talking about here, and that’s why standardization initiatives like what the Open Mobile Alliance is doing with LightweightM2M (LWM2M), or the work of the AllSeen Alliance and the Open Interconnect Consortium, are really important. In the mobile and handset industry, a standard like OMA-DM is used in literally billions of devices and, because it is an open standard that practically every mobile handset implements, it provides large corporations with a standard way to manage their fleets of phones, without any vendor lock-in.

For the Internet of Things, vendor neutrality is going to be key to finally getting to the point where the general public starts seeing the Things of the Internet of Things as equipment they control (as opposed to scary black boxes wirelessly tethered to a company’s backend via an obscure communication protocol). Luckily, many players in the IoT industry understand the importance of device management, and I am hopeful that with Eclipse IoT we are doing our share by providing a framework for open innovation and open-source implementations of technologies like LWM2M, MQTT and CoAP.


This post was brought to you by HARMAN’s Engineering a Connected Life program. The views and opinions expressed in this post are my own and don’t necessarily represent HARMAN’s positions, strategies or opinions.


by Benjamin Cabé at February 12, 2016 07:50 PM

3 Reasons I Love Eclipse

by Tracy M at February 12, 2016 11:13 AM

I love Eclipse. Not just the IDE, but the whole community and philosophy of it. There is no shortage of folks who will tell you how much they hate Eclipse. For a change, I’d like to tell you why I love it.

  1. Polyglot IDE – the Eclipse IDE is really great because you can use the same open environment for multiple languages. Java, Python and C are three of our staple languages, and I often end up using them in combination. Eclipse understands each of these really, really well and allows for some powerful integrations. There’s nowhere I’d rather run some Python on a target processor while debugging a Linux kernel.
  2. Determined developers – with all its functionality and extensibility, maximising the power of Eclipse is pretty hard. The people in the community who do use Eclipse and master its complexity are full of grit; problem-solvers who won’t give up on their end goals easily. As a result, it’s a great community to be part of.
  3. Continuous evolution – early in my career I worked with hardware, then I switched to software, and today my work involves keeping up with several different technologies. Likewise, over the years Eclipse has evolved a lot. Early on it was just an IDE, then a rich-client framework, and today it features many thriving working groups in areas like IoT. Adapting and learning can be quite a journey, so it’s great to have a community along for the ride.


by Tracy M at February 12, 2016 11:13 AM

Terminate and Relaunch in Eclipse IDE

February 11, 2016 11:00 PM

Are you using the "Terminate and Relaunch" context menu in the Debug view?

[Screenshot: the Debug view context menu]

Last year Martin Lippert did a demo of the "Spring Tool Suite" at the Eclipse Demo Camp in Zurich. He presented a great addition to the Eclipse Toolbar: the possibility to terminate and relaunch an application from anywhere.

[Screenshot: the toolbar with the terminate-and-relaunch button]

After his presentation we discussed whether this feature could be integrated into the Eclipse IDE. I think it would be great, and I imagine I would use it every day (for now I switch to the Debug perspective to do it). I have opened Bug 487554 for that. Feel free to share your opinion there.


February 11, 2016 11:00 PM

EMF Forms 1.8.0 Feature: Ecore Editor Reloaded

by Maximilian Koegel and Jonas Helming at February 11, 2016 04:43 PM

With Mars.2, we release EMF Forms 1.8.0. EMF Forms makes it really simple to create forms that edit your data based on an EMF model. To get started with EMF Forms, please refer to our getting started tutorial. In this post, we want to outline one of the new features: a new version of the Ecore editor based on EMF Forms. We will also give a talk about it at EclipseCon North America 2016. Along with the new version of the standard Ecore editor, we also provide a generic editor to be used with your custom model, an alternative to the “generated editor”.

Ecore Editor

Most developers who use EMF have used the good old default Ecore editor:

[Screenshot: the default Ecore editor]

While there are other tools, such as the graphical Ecore Tools or Xcore, to create models, the tree-based editor is still extremely useful. It is simple to use and allows you to specify models fairly efficiently. However, the implementation is over 10 years old and hasn’t received much “love” since then. EMF Forms was created to build form-based UIs based on existing EMF data models. Those forms are often used in editors, which allow the modification of models. Luckily, there is an EMF model for Ecore itself; that is, Ecore is just another EMF model. Therefore, we were able to use the full power of EMF Forms to build a new version of the standard Ecore editor:

[Screenshot: the new EMF Forms-based Ecore editor]

As you can see, we got rid of the properties view and combined the tree and the properties in one editor. We ordered the properties by importance (instead of the former alphabetical order), grouped them, and combined some fields, such as lower and upper bounds. Where it made sense, we added custom controls, e.g. to support auto-completion. Finally, we added shortcuts and dialogs for the creation of new elements. This enables you to specify a model without leaving the keyboard and therefore improves efficiency.

[Screenshot: creating new elements via dialogs and shortcuts]

Of course, an Ecore editor does not make much sense without a Genmodel editor, so we reworked this as well:

[Screenshot: the new Genmodel editor]

Both editors are unfortunately not yet part of an Eclipse package, but they are part of the EMF Client Platform 1.8.0 release. Please see this tutorial on how to install and use them.

As the implementation of both editors shares most of the code, we even went a step further and implemented a completely generic editor, which is capable of opening any kind of custom model. This editor is described in the following section.

Generic Editor

Another tool that is frequently used by EMF developers is the generated editor. It allows you to create instances of a given EMF model and thereby provides an easy way of testing a defined model. Like the Ecore editor, the generated editor has room for improvement. It was initially only meant as a simple code example. It lacks extensibility and adaptability and is therefore not a very good starting point for the implementation of a custom editor. To get around this, we also provide a generic editor based on EMF Forms, which works with any custom model. It supports features such as loading/saving, drag and drop, and undo out of the box. Further, it uses EMF Forms for rendering, so you can adapt the controls and layouts used in the detail views as you wish. To make that clear: it provides you with a fully functional Eclipse editor for creating and modifying instances of your custom model, without any coding, code generation, or adaptation!

[Screenshot: the generic editor]

All three editors, the Ecore editor, the Genmodel editor and the generic editor, are currently under active development and will be contributed to Neon. If you are interested in trying them out, please follow this tutorial. Please provide feedback by submitting bugs or feature requests, or contact us if you are interested in enhancements or support.

If you want to see these editors live in action, join us at EclipseCon North America for our talk about them.



Tagged with eclipse, emf, EMF Client Platform, emfforms


by Maximilian Koegel and Jonas Helming at February 11, 2016 04:43 PM

Meet me at MobileTechCon 2016 Munich

by ekkescorner at February 11, 2016 03:52 PM

At MTC I’ll talk about Mobile App Development and Security:


MobileTechCon 2016


Filed under: Eclipse

by ekkescorner at February 11, 2016 03:52 PM

EclipseCon 2016 | Learn and Network

February 11, 2016 07:46 AM

EclipseCon 2016 is less than a month away! Learn more about Eclipse technologies this March.

February 11, 2016 07:46 AM

LaunchBar and User Experience

by Doug Schaefer at February 10, 2016 08:03 PM

[Screenshot: the LaunchBar]

In an effort to make the LaunchBar more “Eclipse standard”, I am trying to make the icons for the build, launch, and stop buttons 24 pixels square. They were 32, which I admit made the entire tool bar a little too fat. At 24 pixels high it is much more streamlined, and you barely notice that it’s still a couple of pixels higher than without the launch bar. I think users can live with that. And I’ll find out soon enough, as I always do.

Now, the icons I have there now are ugly. My claim to fame is writing parsers and build systems, not graphic design. I just whipped these together to show what they could look like and to get a feel for how the new size helps with the overall visual. We’ll get someone (and Tony McCrary has volunteered) to make them look more professional.

But I still get the argument from a few people that the UX is bad because the Launch Bar doesn’t look like the rest of the toolbar. Well, no, it doesn’t. And it wasn’t meant to. I think I need to tell the story of how the Launch Bar came about to help explain why.

It started when we were working on BlackBerry Momentics, the IDE for BB10. Our manager hooked us up with a manager in our Sweden office, whose team were designers from the former TAT (The Astonishing Tribe) – the people responsible for the beautiful user experience that became the Cascades framework for BB10. They sent over a handful of “developer experience” designers to a workshop in Ottawa, and we brainstormed about how we could make Eclipse beautiful and, more importantly, a great experience, especially for developers new to BB10 development.

It was an eye-opening experience. They were very cautious about our feelings toward the Eclipse UX but were very candid about what they thought. And we didn’t really argue. They took special aim at the tool bar and the “ridiculous” 16-bit icons that were supposed to somehow be meaningful to a new user. It creates an overwhelming feeling that only intimidates these poor people, and we really want to make sure they become successful using our tools. The first recommendation they gave us was to turn off all of the tool bar buttons.

Then we took aim at the launch experience. I have to admit we took some inspiration from popular IDEs that we all had experience with. But in the end, isn’t launching the thing you’re coding the most important thing you do with an IDE, other than typing in your code? We felt it deserved a place front and center. So the Ottawa gang put forward the general layout of the Launch Bar, and the Swedes provided the icons and the spacing around the whole widget. They made it big on purpose and made the buttons soft, so that the user interface wasn’t so intimidating and was easy to understand.

The feedback we got was tremendous. I’ve told this story before, but when our product manager presented the new look at a developers conference, one of the attendees went up to him after and gave him a hug for making his life so much better. That kind of feedback for a tools developer is hard to beat and something we should all strive for.

As we move forward and focus on the general QNX developer, bringing them more and more of what’s available in the Eclipse ecosystem, we felt it important to push the Launch Bar upstream and work to enable it for more and more use cases. Of course, when you try to address a larger audience, not all of them are going to appreciate the look and feel. It is different from most things on the tool bar. But by design it was supposed to dominate the tool bar; remember, there wasn’t supposed to be anything else there. It’s actually the old tool bar icons that don’t fit. When I set up my environment now, I turn off everything I can, and it results in a very clean look.

I appreciate that not everyone is going to like the Launch Bar or find it useful. We are striving to make it an optional feature. But as we work to support different types of targets you can launch on, we’re finding it hard to do without the Launch Bar. So we will get lots of haters, and, if you work at all in open source or tools in general, that’s just part of the game and something you get used to. You can’t make everyone happy. But on the other hand, you also need to make sure you don’t make everyone sad. And those UX guys we worked with were very sad about the Eclipse UX, and I’m just trying to keep their effort to fix things alive.


by Doug Schaefer at February 10, 2016 08:03 PM

Publish an Eclipse p2 composite repository on Bintray

by Lorenzo Bettini at February 10, 2016 03:46 PM

In a previous post I showed how to manage an Eclipse composite p2 repository and how to publish it on SourceForge. In this post I’ll show a similar procedure to publish an Eclipse p2 composite repository on Bintray. The procedure is part of the Maven/Tycho build, so it is fully automated. Moreover, the pom.xml and the Ant files can be fully reused in your own projects (just a few properties have to be adapted).

The complete example is at https://github.com/LorenzoBettini/p2composite-bintray-example.

First of all, this procedure is quite different from the ones shown in other blogs (e.g., this one, this one and this one): in those approaches the p2 metadata (i.e., artifacts.jar and content.jar) are uploaded independently of a version, always into the same directory, thus overwriting the existing metadata. As a consequence, only the latest version of the published features and bundles will be available to the end user. This goes against the idea that old versions should still be available and that, in general, all versions should be available to end users, especially if a new version has some breaking change and the user is not willing to update (see p2’s do’s and don’ts). For this reason, I always publish p2 composite repositories.

Quoting from https://wiki.eclipse.org/Equinox/p2/Composite_Repositories_(new)

The goal of composite repositories is to make this task easier by allowing you to have a parent repository which refers to multiple children. Users are then able to reference the parent repository and the children’s content will transparently be available to them.

In order to achieve this, all published p2 repositories must be available, each one with their own p2 metadata that should never be overwritten.

On the contrary, the metadata that we will overwrite is the composite metadata, i.e., compositeContent.xml and compositeArtifacts.xml.

In this example, all the binary artifacts can be found here: https://dl.bintray.com/lorenzobettini/p2-composite-example/.

Directory Structure

What I aim at is to have the following remote paths on Bintray:

  • releases: in this directory all p2 simple repositories will be uploaded, each one in its own directory, named after version.buildQualifier, e.g., 1.0.0.v20160129-1616/ etc. Your Eclipse users can then use the URL of one of these single update sites to stick to that specific version.
  • updates: in this directory the composite metadata will be uploaded. The URL https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/ should be used by your Eclipse users to install the features in their Eclipse or for target platform resolution (depending on the kind of projects you’re developing). All versions will be available from this composite update site; I call this the main composite. Moreover, you can provide the URL to a child composite update site that includes all versions for a given major.minor stream, e.g., https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.0/, https://dl.bintray.com/lorenzobettini/p2-composite-example/updates/1.1/, etc. I call each of these a child composite.
  • zipped: in this directory we will upload the zipped p2 repository for each version.

Summarizing, we’ll end up with a remote directory structure like the following:

root
|-- releases
|   |-- 1.0.0.v2016...
|   |   |-- artifacts.jar
|   |   |-- content.jar
|   |   |-- features
|   |   |   |-- your feature.jar
|   |   |-- plugins
|   |   |   |-- your bundle.jar
|   |-- 1.1.0.v2016...
|   |-- 1.1.1.v2016...
|   |-- 2.0.0.v2016...
|   ...
|-- updates
|   |-- compositeContent.xml
|   |-- compositeArtifacts.xml
|   |-- 1.0
|   |   |-- compositeContent.xml
|   |   |-- compositeArtifact.xml
|   |-- 1.1
|   |   |-- compositeContent.xml
|   |   |-- compositeArtifact.xml
|   |-- 2.0 ...
|   ...
|-- zipped
    |-- your site 1.0.0.v2016....zip
    |-- your site 1.1.0.v2016....zip
    ...

Uploading using REST API

In the posts mentioned above, the typical command to upload content with the REST API has the following shape

curl -X PUT -T $f \
   -u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
   https://api.bintray.com/content/${BINTRAY_OWNER}/${BINTRAY_REPO}/$f;publish=1

for metadata, and

curl -X PUT -T $f \
  -u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
  https://api.bintray.com/content/${BINTRAY_OWNER}/${BINTRAY_REPO}/\
  ${PCK_NAME}/${PCK_VERSION}/$f;publish=1

for features and plugins.

But this has the drawback I mentioned above.

Thanks to Bintray support, I managed to use a different scheme that allows me to store the p2 metadata for a single p2 repository in the same directory as the p2 repository itself, and to keep that metadata separate for each single release.

To achieve this, we need to use another URL scheme for uploading, using matrix params options or header options.

This means that we’ll upload everything with this URL:

curl -XPUT -T $f \
  -u${BINTRAY_USER}:${BINTRAY_API_KEY} \
  "https://api.bintray.com/content/${BINTRAY_OWNER}/${BINTRAY_REPO}/\
  ${TARGET_PATH}/$f;bt_package=${PCK_NAME};bt_version=${PCK_VERSION};publish=1"

On the contrary, for uploading the p2 composite metadata we’ll use the scheme of the other approaches, i.e., we will not associate it with any specific version; we just need to specify the desired remote path where we’ll upload the main and the child composite metadata.
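
For illustration, here is a sketch of what that composite upload looks like, mirroring the URL used in the push-to-bintray Ant targets shown later (the updates path is the default remote layout described above):

# upload the main composite metadata; no bt_package/bt_version matrix
# parameters, so it is not tied to any release and simply overwrites
# the previous file
curl -X PUT -T compositeContent.xml \
  -u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
  "https://api.bintray.com/content/${BINTRAY_OWNER}/${BINTRAY_REPO}/updates/compositeContent.xml;publish=1"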

Building Steps

During the build, we’ll have to update the composite site metadata, and we’ll have to do that locally.

The steps that we’ll perform during the Maven/Tycho build, which will rely on some Ant scripts can be summarized as follows:

  • Retrieve the remote composite metadata compositeContent/Artifacts.xml, both for the main composite and the child composite. If this metadata cannot be found remotely, we fail gracefully: it means that it is the first time we release or, if only the child composite cannot be found, that we’re releasing a new major.minor version. The metadata will be downloaded into the directories target/composite-main and target/composite-child respectively; these directories will be created in any case.
  • Preprocess the downloaded composite metadata, if any: if this property is present
    <property name='p2.atomic.composite.loading' value='true'/>

    we must temporarily set it to false; otherwise we will not be able to add additional elements to the composite site with the p2 Ant tasks.
  • Update the composite metadata using the version information passed from the Maven/Tycho build using the p2 Ant tasks for composite repositories
  • Post process the composite metadata (i.e., put the property p2.atomic.composite.loading above to true, see https://bugs.eclipse.org/bugs/show_bug.cgi?id=356561 for further details about this property)
  • Upload everything to Bintray: the new p2 repository, its zipped version and all the composite metadata.

IMPORTANT: the pre- and post-processing of composite metadata that we’ll implement assumes that the metadata is not compressed. In any case, I always prefer not to compress the composite metadata, since that makes it easier to manually change or review it later.

Technical Details

You can find the complete example at https://github.com/LorenzoBettini/p2composite-bintray-example. Here I’ll sketch the main parts. First of all, all the mechanisms for updating the composite metadata and pushing to Bintray (i.e., the steps detailed above) are in the project p2composite.example.site, which is a Maven/Tycho project with eclipse-repository packaging.

The pom.xml has some properties that you should adapt to your project, and some other properties that can be left as they are if you’re OK with the defaults:

<properties>
	<!-- The name of your own Bintray repository -->
	<bintray.repo>p2-composite-example</bintray.repo>
	<!-- The name of your own Bintray repository's package for releases -->
	<bintray.package>releases</bintray.package>
	<!-- The label for the Composite sites -->
	<site.label>Composite Site Example</site.label>

	<!-- If the Bintray repository is owned by someone different from your
		user, then specify the bintray.owner explicitly -->
	<bintray.owner>${bintray.user}</bintray.owner>
	<!-- Define bintray.user and bintray.apikey in some secret place,
		like .m2/settings.xml -->

	<!-- Default values for remote directories -->
	<bintray.releases.path>releases</bintray.releases.path>
	<bintray.composite.path>updates</bintray.composite.path>
	<bintray.zip.path>zipped</bintray.zip.path>
	<!-- note that the following must be consistent with the path schema
		used to publish child composite repositories and actual released p2 repositories -->
	<child.repository.path.prefix>../../releases/</child.repository.path.prefix>
</properties>

If you change the default remote paths, it is crucial that you update child.repository.path.prefix consistently. In fact, this prefix is used to update the composite metadata for the composite children. For example, with the default properties the composite metadata will look like the following (here we show only compositeContent.xml):

<?xml version='1.0' encoding='UTF-8'?>
<?compositeMetadataRepository version='1.0.0'?>
<repository name='Composite Site Example 1.0' type='org.eclipse.equinox.internal.p2.metadata.repository.CompositeMetadataRepository' version='1.0.0'>
  <properties size='2'>
    <property name='p2.timestamp' value='1454086165279'/>
    <property name='p2.atomic.composite.loading' value='true'/>
  </properties>
  <children size='3'>
    <child location='../../releases/1.0.0.v20160129-1625'/>
    <child location='../../releases/1.0.0.v20160129-1630'/>
    <child location='../../releases/1.0.0.v20160129-1649'/>
  </children>
</repository>

You can also see two crucial properties, bintray.user and, in particular, bintray.apikey, which should not be made public. You should keep these hidden; for example, you can put them in your local .m2/settings.xml file, associated with the Maven profile that you use for releasing (as illustrated in the following). This is an example of settings.xml:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                      http://maven.apache.org/xsd/settings-1.0.0.xsd">
    <profiles>
        <profile>
            <id>release-composite</id>
            <activation>
                    <activeByDefault>false</activeByDefault>
            </activation>
            <properties>
                <bintray.user>YOUR BINTRAY USER HERE</bintray.user>
                <bintray.apikey>YOUR BINTRAY APIKEY HERE</bintray.apikey>
            </properties>
        </profile>
    </profiles>

</settings>

In the pom.xml of this project there is a Maven profile, release-composite, that should be activated when you want to perform the release steps described above.

We also make sure that the generated zipped p2 repository has a name with the fully qualified version:

<!-- make sure that zipped p2 repositories have the fully qualified version -->
<plugin>
	<groupId>org.eclipse.tycho</groupId>
	<artifactId>tycho-p2-repository-plugin</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<finalName>${project.artifactId}-${qualifiedVersion}</finalName>
	</configuration>
</plugin>

In the release-composite Maven profile, we use the maven-antrun-plugin to execute some Ant targets (note that the Maven properties are automatically passed to the Ant tasks): one to retrieve the remote composite metadata, if it exists, and another one, as the final step, to deploy the p2 repository, its zipped version and the composite metadata to Bintray:

<plugin>
	<artifactId>maven-antrun-plugin</artifactId>
	<version>${maven-antrun-plugin.version}</version>
	<executions>
		<execution>
			<!-- Retrieve possibly existing remote composite metadata -->
			<id>update-local-repository</id>
			<phase>prepare-package</phase>
			<configuration>
				<target>
					<ant antfile="${basedir}/bintray.ant" target="get-composite-metadata">
					</ant>
				</target>
			</configuration>
			<goals>
				<goal>run</goal>
			</goals>
		</execution>
		
		<execution>
			<!-- Deploy p2 repository, p2 composite updated metadata and zipped p2 repository -->
			<id>deploy-repository</id>
			<phase>verify</phase>
			<configuration>
				<target>
					<ant antfile="${basedir}/bintray.ant" target="push-to-bintray">
					</ant>
				</target>
			</configuration>
			<goals>
				<goal>run</goal>
			</goals>
		</execution>
	</executions>
</plugin>

The Ant tasks are defined in the file bintray.ant. Please refer to the example for the complete file; here we sketch the main parts.

This Ant file relies on some properties with default values, and on other properties that are expected to be passed in when running these tasks, i.e., from the pom.xml:

<!--
These must be set from outside
<property name="bintray.user" value="" />
<property name="bintray.apikey" value="" />
<property name="bintray.repo" value="" />
<property name="bintray.package" value="" />
<property name="bintray.releases.path" value="" />
<property name="bintray.composite.path" value="" />
<property name="bintray.zip.path" value="" />
-->

<property name="bintray.url" value="https://dl.bintray.com/${bintray.owner}/${bintray.repo}" />
<property name="bintray.package.version" value="${unqualifiedVersion}.${buildQualifier}" />
<property name="bintray.releases.target.path" value="${bintray.releases.path}/${bintray.package.version}" />

<property name="main.composite.url" value="${bintray.url}/${bintray.composite.path}" />
<property name="target" value="target" />
<property name="composite.repository.directory" value="composite-child" />
<property name="main.composite.repository.directory" value="composite-main" />

<property name="compositeArtifacts" value="compositeArtifacts.xml" />
<property name="compositeContent" value="compositeContent.xml" />

<property name="local.p2.repository" value="target/repository" />

To retrieve the existing remote composite metadata, we execute the following, using the standard Ant get task. Note that if there is no composite metadata (e.g., it’s the first release that we execute, or we are releasing a new major.minor version so there’s no child composite for that version yet), we ignore the error; however, we still create the local directories for the composite metadata:

<!-- Take from the remote URL the possible existing metadata -->
<target name="get-composite-metadata" depends="getMajorMinorVersion" >
	<get-metadata url="${main.composite.url}" dest="${target}/${main.composite.repository.directory}" />
	<get-metadata url="${main.composite.url}/${majorMinorVersion}" dest="${target}/${composite.repository.directory}" />
	<antcall target="preprocess-metadata" />
</target>

<macrodef name="get-metadata" description="Retrieve the p2 composite metadata">
	<attribute name="url" />
	<attribute name="dest" />
	<sequential>
		<echo message="Creating directory @{dest}..." />
		<mkdir dir="@{dest}" />
		<get-file file="${compositeArtifacts}" url="@{url}" dest="@{dest}" />
		<get-file file="${compositeContent}" url="@{url}" dest="@{dest}" />
	</sequential>
</macrodef>

<macrodef name="get-file" description="Use the Ant Get task to retrieve a file">
	<attribute name="file" />
	<attribute name="url" />
	<attribute name="dest" />
	<sequential>
		<!-- If the remote file does not exist then fail gracefully -->
		<echo message="Getting @{file} from @{url} into @{dest}..." />
		<get dest="@{dest}" ignoreerrors="true">
			<url url="@{url}/@{file}" />
		</get>
	</sequential>
</macrodef>

For preprocessing/postprocessing the composite metadata (in order to deal with the property p2.atomic.composite.loading, as explained in the previous section) we have:

<!-- p2.atomic.composite.loading must be set to false otherwise we won't be able
	to add a child to the composite repository without having all the children available -->
<target name="preprocess-metadata" description="Preprocess p2 composite metadata">
	<replaceregexp byline="true">
		<regexp pattern="property name='p2.atomic.composite.loading' value='true'" />
		<substitution expression="property name='p2.atomic.composite.loading' value='false'" />
		<fileset dir="${target}">
			<include name="${composite.repository.directory}/*.xml" />
			<include name="${main.composite.repository.directory}/*.xml" />
		</fileset>
	</replaceregexp>
</target>

<!-- p2.atomic.composite.loading must be set to true
	see https://bugs.eclipse.org/bugs/show_bug.cgi?id=356561 -->
<target name="postprocess-metadata" description="Postprocess p2 composite metadata">
	<replaceregexp byline="true">
		<regexp pattern="property name='p2.atomic.composite.loading' value='false'" />
		<substitution expression="property name='p2.atomic.composite.loading' value='true'" />
		<fileset dir="${target}">
			<include name="${composite.repository.directory}/*.xml" />
			<include name="${main.composite.repository.directory}/*.xml" />
		</fileset>
	</replaceregexp>
</target>

Finally, to push everything to Bintray, we execute curl with the appropriate URLs, as described in the previous section about the REST API. The individual push tasks are similar, so we only show the one for uploading the p2 repository associated with a specific version, and the one for uploading the p2 composite metadata. As detailed at the beginning of the post, we use different URL shapes.

<target name="push-to-bintray" >
	<antcall target="postprocess-metadata" />
	<antcall target="push-p2-repo-to-bintray" />
	<antcall target="push-p2-repo-zipped-to-bintray" />
	<antcall target="push-composite-to-bintray" />
	<antcall target="push-main-composite-to-bintray" />
</target>

<target name="push-p2-repo-to-bintray">
	<apply executable="curl" parallel="false" relative="true" addsourcefile="false">
		<arg value="-XPUT" />
		<targetfile />

		<fileset dir="${local.p2.repository}" />

		<compositemapper>
			<mergemapper to="-T" />
			<globmapper from="*" to="${local.p2.repository}/*" />
			<mergemapper to="-u${bintray.user}:${bintray.apikey}" />
			<globmapper from="*" to="https://api.bintray.com/content/${bintray.owner}/${bintray.repo}/${bintray.releases.target.path}/*;bt_package=${bintray.package};bt_version=${bintray.package.version};publish=1" />
		</compositemapper>
	</apply>
</target>

<target name="push-composite-to-bintray" depends="getMajorMinorVersion" >
	<apply executable="curl" parallel="false" relative="true" addsourcefile="false">
		<arg value="-XPUT" />
		<targetfile />

		<fileset dir="${target}/${composite.repository.directory}" />

		<compositemapper>
			<mergemapper to="-T" />
			<globmapper from="*" to="${target}/${composite.repository.directory}/*" />
			<mergemapper to="-u${bintray.user}:${bintray.apikey}" />
			<globmapper from="*" to="https://api.bintray.com/content/${bintray.owner}/${bintray.repo}/${bintray.composite.path}/${majorMinorVersion}/*;publish=1" />
		</compositemapper>
	</apply>
</target>

To update the composite metadata, we execute an Ant task using the tycho-eclipserun-plugin. This way, we can run the Eclipse application org.eclipse.ant.core.antRunner, which allows us to execute the p2 Ant tasks for managing composite repositories.

ATTENTION: in the following snippet, for the sake of readability, I split the <appArgLine> into several lines, but in your pom.xml it must be exactly one (long) line.

<plugin>
	<groupId>org.eclipse.tycho.extras</groupId>
	<artifactId>tycho-eclipserun-plugin</artifactId>
	<version>${tycho-version}</version>
	<configuration>
		<!-- Update p2 composite metadata or create it -->
		<!-- IMPORTANT: DO NOT split the arg line -->
		<appArgLine>-application org.eclipse.ant.core.antRunner 
-buildfile packaging-p2composite.ant p2.composite.add 
-Dsite.label="${site.label}" 
-Dproject.build.directory=${project.build.directory} 
-DunqualifiedVersion=${unqualifiedVersion} 
-DbuildQualifier=${buildQualifier} 
-Dchild.repository.path.prefix="${child.repository.path.prefix}"</appArgLine>
		<repositories>
			<repository>
				<id>mars</id>
				<layout>p2</layout>
				<url>http://download.eclipse.org/releases/mars</url>
			</repository>
		</repositories>
		<dependencies>
			<dependency>
				<artifactId>org.eclipse.ant.core</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.apache.ant</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.eclipse.equinox.p2.repository.tools</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
			<dependency>
				<artifactId>org.eclipse.equinox.p2.core.feature</artifactId>
				<type>eclipse-feature</type>
			</dependency>
			<dependency>
				<artifactId>org.eclipse.equinox.p2.extras.feature</artifactId>
				<type>eclipse-feature</type>
			</dependency>
			<dependency>
				<artifactId>org.eclipse.equinox.ds</artifactId>
				<type>eclipse-plugin</type>
			</dependency>
		</dependencies>
	</configuration>
	<executions>
		<execution>
			<id>add-p2-composite-repository</id>
			<phase>package</phase>
			<goals>
				<goal>eclipse-run</goal>
			</goals>
		</execution>
	</executions>
</plugin>

The file packaging-p2composite.ant is similar to the one I showed in a previous post. We use the p2 Ant tasks for adding a child to a composite p2 repository (recall that if there is no existing composite repository, the task for adding a child also creates new compositeContent.xml/Artifacts.xml files; if a child with the same name already exists, the Ant task will not add anything new).

<?xml version="1.0"?>
<project name="project">

	<target name="getMajorMinorVersion">
		<script language="javascript">
			<![CDATA[

	                // strip the last (micro) segment from the unqualified version,
	                // e.g., "1.0.0" becomes majorMinorVersion "1.0"
	                buildnumber = project.getProperty("unqualifiedVersion");
	                index = buildnumber.lastIndexOf(".");
	                counter = buildnumber.substring(0, index);
	                project.setProperty("majorMinorVersion", counter);

	            ]]>
		</script>
	</target>

	<target name="test_getMajorMinor" depends="getMajorMinorVersion">
		<echo message="majorMinorVersion: ${majorMinorVersion}" />
	</target>

	<!--
		site.label						The name/title/label of the created composite site
		unqualifiedVersion 				The version without any qualifier replacement
		buildQualifier					The build qualifier
		child.repository.path.prefix	The path prefix to access the actual p2 repo from the
										child repo, e.g., if child repo is in /updates/1.0 and
										the p2 repo is in /releases/1.0.0.something then this property
										should be "../../releases/"
	-->
	<target name="compute.child.repository.data" depends="getMajorMinorVersion">
		<property name="full.version" value="${unqualifiedVersion}.${buildQualifier}" />

		<property name="site.composite.name" value="${site.label} ${majorMinorVersion}" />
		<property name="main.site.composite.name" value="${site.label} All Versions" />

		<!-- composite.base.dir	The base directory for the local composite metadata,
			e.g., from Maven, ${project.build.directory}
		-->
		<property name="composite.base.dir" value="target"/>

		<property name="main.composite.repository.directory" location="${composite.base.dir}/composite-main" />
		<property name="composite.repository.directory" location="${composite.base.dir}/composite-child" />

		<property name="child.repository" value="${child.repository.path.prefix}${full.version}" />
	</target>

	<target name="p2.composite.add" depends="compute.child.repository.data">
		<add.composite.repository.internal composite.repository.location="${main.composite.repository.directory}" composite.repository.name="${main.site.composite.name}" composite.repository.child="${majorMinorVersion}" />
		<add.composite.repository.internal composite.repository.location="${composite.repository.directory}" composite.repository.name="${site.composite.name}" composite.repository.child="${child.repository}" />
	</target>

	<!-- = = = = = = = = = = = = = = = = =
          macrodef: add.composite.repository.internal          
         = = = = = = = = = = = = = = = = = -->
	<macrodef name="add.composite.repository.internal">
		<attribute name="composite.repository.location" />
		<attribute name="composite.repository.name" />
		<attribute name="composite.repository.child" />
		<sequential>

			<echo message=" " />
			<echo message="Composite repository       : @{composite.repository.location}" />
			<echo message="Composite name             : @{composite.repository.name}" />
			<echo message="Adding child repository    : @{composite.repository.child}" />

			<p2.composite.repository>
				<repository compressed="false" location="@{composite.repository.location}" name="@{composite.repository.name}" />
				<add>
					<repository location="@{composite.repository.child}" />
				</add>
			</p2.composite.repository>

			<echo file="@{composite.repository.location}/p2.index">version=1
metadata.repository.factory.order=compositeContent.xml,\!
artifact.repository.factory.order=compositeArtifacts.xml,\!
</echo>

		</sequential>
	</macrodef>


</project>

Removing Released Artifacts

If you want to remove an existing released version: since we upload the p2 repository and its zipped version as part of a package’s version, we just need to delete that version using the Bintray web UI. However, this procedure will never remove the metadata, i.e., artifacts.jar and content.jar. The same holds if you want to remove the composite metadata. For these metadata files you need to use the REST API, e.g., with curl. I put a shell script in the example to quickly remove all the metadata files from a given remote Bintray directory.
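
For example, here is a minimal sketch of such a removal with curl, assuming Bintray’s delete-content endpoint (the version directory shown is only illustrative):

# delete a leftover metadata file from a released version's directory
curl -X DELETE \
  -u ${BINTRAY_USER}:${BINTRAY_API_KEY} \
  "https://api.bintray.com/content/${BINTRAY_OWNER}/${BINTRAY_REPO}/releases/1.0.0.v20160129-1616/artifacts.jar"

The same call, repeated for content.jar (and, in the composite directories, for compositeContent.xml and compositeArtifacts.xml), is essentially what the shell script in the example automates.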

Performing a Release

To perform a release, you just need to run

mvn clean verify -Prelease-composite

on the p2composite.example.tycho project.

Concluding Remarks

As I said, the procedure shown in this example is meant to be easily reusable in your own projects. The Ant files can simply be copied as they are. The same holds for the Maven profile; you only need to set the Maven properties with the values for your specific project and adjust your settings.xml with sensitive data like the Bintray API key.

Happy Releasing! :)


by Lorenzo Bettini at February 10, 2016 03:46 PM

CFP: MesosCon 2016

by Chris Aniszczyk at February 10, 2016 03:34 PM

MesosCon is happening again, and I’m happy to be involved with the Program Committee. MesosCon 2016 will be in Denver on June 1st–2nd.

The CFP is open until March 9th and the schedule will be announced on April 4th!



by Chris Aniszczyk at February 10, 2016 03:34 PM

TypeFox - The Xtext Company

by Sven Efftinge (noreply@blogger.com) at February 10, 2016 11:14 AM

As many of you have already noticed, we had to find a company name without 'Xtext' in it. Long story short, we finally decided on TypeFox (it still has an 'x', ey? ;-)).  We are still all about Xtext and Xtend, of course.

The website is online now and reveals some additional details about what we do. We also have a blog there, which will be updated regularly with useful content around Xtext and Xtend and, more generally, about language engineering, code generators and so on. If you want to get notified about the content, there is a monthly newsletter. It will contain information about the latest blog posts, upcoming Xtext and Xtend releases and upcoming events. The sign-up form is on the blog page.

Jan also joined this month as a co-founder, and five more friends (Xtext committers) will be joining TypeFox in the coming weeks.

Finally, I want to say thank you for all the good wishes and for the trust of our partners who already do business with us. It has all started very well, and I am very thankful for that.

How do you like our logo?

[Image: the TypeFox logo]

by Sven Efftinge (noreply@blogger.com) at February 10, 2016 11:14 AM

Announcing Extras for Eclipse

by Rüdiger Herrmann at February 10, 2016 08:00 AM

Written by Rüdiger Herrmann

Over the last months I wrote some extensions for the Eclipse IDE that I found were missing and could be implemented with reasonable effort.

The outcome is Extras for Eclipse, a collection of small extensions for the Eclipse IDE, which includes a launch dialog, a JUnit status bar, a launch configuration housekeeper, and little helpers to accomplish recurring tasks with keyboard shortcuts.

I have developed them and found them useful in my daily work over the last months, so I thought they might be useful for others, too. In this post I will walk through each of the features briefly.

The most noteworthy Extras for Eclipse at a glance:

  • A JUnit progress meter in the main status bar and a key binding to open the JUnit View
  • A dialog to quickly start or edit arbitrary launch configurations, think of Open Resource for launch configurations
  • An option to remove generated launch configurations when they are no longer needed
  • A key binding for the Open With… menu to choose the editor for the selected file
  • And yet another key binding to delete the currently edited file

The listed components can be installed separately so that you are free to choose whatever fits your needs.

Installation

Extras for Eclipse is available from the Eclipse Marketplace. If you want to give it a try, just drag the icon to your running Eclipse:

Drag to your running Eclipse workspace to install Extras for Eclipse

If you prefer, you can also install Extras for Eclipse directly from this software repository:

In the Eclipse main menu, select Help > Install New Software…, then enter the repository URL and select Extras for the Eclipse IDE. Expand the item to select only certain features for installation.

Please note that a JRE 8 or later and Eclipse Luna (4.4) or later are required to run this software.

Extras for JUnit

If you are using the JUnit View like Frank does and have it minimized in a fast view, the only progress indicator is the animated view icon. While it basically provides the desired information – progress and failure/success – I found it a little too subtle. But having the JUnit View always open is quite a waste of space.

That brought me to the JUnit Status Bar: a progress meter in the main status bar that mirrors the bar of the JUnit View but saves the space of the entire view, which is only useful when diagnosing test failures.

Extras for Eclipse: JUnit Status Bar

Together with a key binding (Alt+Shift+Q U) to open the JUnit View when needed this made running tests even a little more convenient.

If you would like to hide the status bar, go to the Java > JUnit > JUnit Status Bar preference page. Note that due to a bug in Eclipse Mars (4.5) and later you need to resize the workbench window afterwards to make the change appear.

Launching with Keys

When working on Eclipse plug-ins I usually run tests with Ctrl+R or Alt+Shift+D T/P, but from time to time I also launch the application for a reality check. And then Ctrl+F11/F11 to relaunch the previously launched application often isn’t the right choice. Nor does Launch the selected resource always pick the right one.

Hence: leave the keyboard, grab the mouse, go to the main toolbar, find the Run/Debug tool item and select the appropriate launch configuration – if it is still among the most recently used ones. Otherwise, open the launch dialog, …

Therefore, I was looking for quicker access and came up with the Start Launch Configuration dialog. It works much like Open Type or Open Resource: a filtered list shows all available launch configurations. Favorites and recently used launch configurations are listed first. With Return, the selected launch configuration(s) can be debugged. A different launch mode (i.e. run or profile) can be chosen from the drop-down menu next to the filter text.

And most importantly, there is a key binding: Alt+F11. Or, if you prefer Ctrl+3, the command is named Open Launch Dialog.

The screenshot below shows the Start Launch Configuration dialog in action:

Extras for Eclipse: Start Launch Configuration Dialog

If there are launch configurations that have currently running instances, their image is decorated with a running symbol. And if you need to modify a launch configuration prior to running it, the Edit button gives you quick access to its properties.

Launch Configuration Housekeeping

With launch configurations there is another annoyance: each test that is run generates a launch configuration, and if you develop test-driven, you will end up with many launch configurations. So many that they obscure the two or so manually created master test suites that actually matter.

That gave me the idea to remove generated launch configurations once they are no longer needed. “No longer needed” is currently reached when another launch configuration is run. This still gives you the ability to re-run an application with Ctrl+F11/F11, but limits the number of launch configurations to those that are relevant.

The term generated applies to all launch configurations that aren’t explicitly created in the launch configuration dialog, such as those created through the Run As > JUnit Test or Debug As > Java Application commands, for example.

With the Run/Debug > Launching > Clean Up preference page, you can specify which launch configuration types should be considered when cleaning up.

Extras for Eclipse: Clean up Launch Configurations Preference Page

Open With… Key Binding

Extras for Eclipse: Open With... Key Binding

Sometimes I needed to open a file with an editor other than the default one.

To work around the broken PDE target definition editor, for example, I used to open target definition files with a text editor. While this particular editor has improved since its Mars release, I still have occasional use for Open With….

As an extension of the F3 or Return key, which opens the selected file in the respective default editor, there is now Shift+F3, which shows the Open With… menu to choose an alternative editor.

Delete the Currently Edited File

For a while now I have noticed that I use the Package Explorer less and less. At times it even gets in the way, and it may as well make a good candidate for a fast view.

I find the Package Explorer – or, more generally, navigation views – a useful tool to get to know the structure of a software project, but once you are comfortable with it, the view adds less and less value while occupying much screen real estate.

Extras for Eclipse: Delete File in Editor

To navigate the sources I mostly use Open Type (Ctrl+Shift+T), Open Declaration (F3), the Quick Type Hierarchy (Ctrl+T), or the editor’s breadcrumb bar.

But to delete a file I have to go back to a navigation view, select the resource in question and hit the Del key.

This detour can be spared with yet another key binding, Alt+Del, which invokes the regular delete operation, so the behavior is the same as if the edited file were deleted from one of the navigation views.

Concluding Extras for Eclipse

This article describes the features that I found most noteworthy. For a complete list, please visit the project page.

For some of the extensions introduced here I have opened enhancement requests at Eclipse (see this Bugzilla query). If there is enough interest and support, I will eventually contribute them to the respective Eclipse projects.

I prefer an IDE that is as slim as possible and consists only of those plug-ins that are actually necessary for the task at hand (mostly Java Tools, Plug-in Development Tools, MoreUnit, Maven integration, Git integration, EclEmma and, of course, Extras for Eclipse) – and in this environment the components have proven stable.

Therefore I would be grateful for hints if an Extras feature collides with other plug-ins in a different setup. If you find a bug or would like to propose an enhancement, please file an issue in the project's issue tracker.

Or if you even want to contribute to Extras for Eclipse, please read the Contributing Guidelines, which list the few rules for contributing and explain how to set up the development environment.

The post Announcing Extras for Eclipse appeared first on Code Affine.


by Rüdiger Herrmann at February 10, 2016 08:00 AM

Using the TypeScript LanguageService to build a JS/TypeScript IDE in Java

by Tom Schindl at February 09, 2016 06:34 PM

Yesterday I blogged about my endeavors into loading and using the TypeScript Language Service (on V8 and Nashorn) and calling it from Java to get things like an outline, auto-completions, and more.
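For readers who missed that post: with Nashorn, loading JavaScript sources and calling into them from Java boils down to the javax.script API. The following is a minimal, illustrative sketch only; the file name and the invoked function are assumptions, not the actual code of my dispatcher:

import java.io.FileReader;
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class LanguageServiceLoader {
  public static void main(String[] args) throws Exception {
    // obtain the Nashorn engine that ships with JDK 8
    ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");
    // evaluate the JavaScript sources, e.g. a bundled TypeScript services file
    engine.eval(new FileReader("typescriptServices.js"));
    // call a function defined by the loaded script (hypothetical name)
    Invocable invocable = (Invocable) engine;
    Object result = invocable.invokeFunction("someServiceFunction", "sample.ts");
    System.out.println(result);
  }
}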

Today I connected these headless pieces to my JavaFX-Editor-Framework. The result can be seen in the video below.

To make the TypeScript LanguageService responsible not only for TypeScript files but also for JavaScript, I used the TypeScript 1.8 beta.

As you will notice, the JS support is not yet at a stage where it can replace e.g. Tern, but I guess things are going to improve in the future.



by Tom Schindl at February 09, 2016 06:34 PM

JSDT Project Structure

by psuzzi at February 09, 2016 11:07 AM

This post explains the JSDT project structure; it is the result of my direct experience.

This page also serves partly as a discussion of JSDT development. Kudos to all who comment and leave constructive feedback, here and on the JSDT Bugzilla.

By reading this article you will understand where the JSDT projects are located, which git repositories are related, and how to get the source code to work on one of those projects, i.e. JSDT Editor, Node.js, Bower, JSON Editor, Gulp, Grunt, HTML>js, JSP>js, etc.

JSDT Repositories

The image below represents the current structure of JSDT repositories.

Almost all of the links to the above source code repositories are accessible via the https://projects.eclipse.org/projects/webtools/developer page.

Description:

  • eclipse.platform.runtime : [Gerrit, Browse repo] source repo required for silencing some IDE validation at compile time.
  • webtools : [Browse repo] contains the website for all the webtools projects. It is big; it is needed to update the project page.
  • webtools.jsdt : [Gerrit, Browse repo, GitHub] source repo containing the most up-to-date code for JSDT
  • webtools.jsdt.[core, debug, tests] : old source repos containing outdated code (last commit: 2013)
  • webtools.sourceediting : [Gerrit, Browse repo] source repo for JSDT Web and JSON

Note: the Gerrit [Review with Gerrit] icons link to the repos accepting Gerrit contributions, so anybody can easily contribute.

Early Project Structure

According to older documentation, JSDT was split into four areas: Core, Debug, Tests and Web. The source of the first three was directly accessible under the project's source control, while the latter, because of its wider scope, was part of the parent project.

Dissecting the old jsdt_2010.psf, we see the original project structure.

structure-of-jsdt-projects-2010-cvs

Current Project Structure

The current project structure is based on the old one, but it has additional projects. To simplify, I split the projects into four sets:

  • JSDT Core, Debug, Docs (& Tests) : under the webtools.jsdt source repo; contains content similar to the old projects.
  • JSDT.js : also under the webtools.jsdt source repo, but contains the Node.js-related projects.
  • wst.json : under webtools.sourceediting; contains the projects needed to parse and edit JSON.
  • wst.jsdt.web : also under the webtools.sourceediting repo; contains the projects that embed JSDT in web editors.

The image below represents all of the above project sets simultaneously, as visible in my workspace.

JSDT-projects-structure_edit

A Complete Project Set

Here you can find the complete project set, containing the four project sets above, plus the Platform dependencies and the webtools project.

wst.jsdt.allProjects_20160209.psf

After importing, you should see the project sets below.

The full list of projects in my workspace is visible in the image below.

JSDT-all-related-projects

JSDT Development

At this point, to start with JSDT development, you will need to:

  1. Clone the needed repositories locally (an illustrative clone command follows below)
  2. Set up the development environment, as explained in my previous article
  3. Import the referenced project set
  4. Launch the inner Eclipse with the source plug-ins you want
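For step 1, the clone itself is a standard git operation. The URL below is illustrative only; the real repository URLs are listed on the pages linked in the JSDT Repositories section above:

git clone https://git.eclipse.org/r/jsdt/webtools.jsdt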

Your comments and suggestions are very welcome. Thanks for your feedback!



by psuzzi at February 09, 2016 11:07 AM

OSGi – bundles / fragments / dependencies

by Dirk Fauth at February 09, 2016 08:02 AM

In the last weeks I needed to look at several issues regarding OSGi dependencies in different products. A lot of these issues were IMHO related to wrong usage of OSGi bundle fragments. As I needed to search for various solutions, I will publish my results and my opinion on the usage of fragments in this post, partly also to remind myself about it in the future.

What is a fragment?

As explained in the OSGi Wiki, a fragment is a bundle that makes its contents available to another bundle. And most importantly, a fragment and its host bundle share the same classloader.

Looking at this from a more abstract point of view, a fragment is an extension to an existing bundle. This might be a simplified statement, but keeping it in mind helped me solve several issues.

What are fragments used for?

I have seen a lot of different usage scenarios for fragments. Considering the above statement, some of them were wrong by design. But before explaining when not to use fragments, let's look at when they are the tool of choice. Basically, fragments need to be used whenever a resource needs to be accessible by the classloader of the host bundle. There are several use cases for that; most of them rely on technologies and patterns that are based on standard Java. For example:

  • Add configuration files to a third-party plug-in
    e.g. provide the logging configuration (log4j.xml for the org.apache.log4j bundle)
  • Add new language files for a resource bundle
    e.g. a properties file for the locale fr_FR that, by specification, needs to be located next to the other properties files
  • Add classes that need to be dynamically loaded by a framework
    e.g. provide a custom logging appender
  • Provide native code
    This can be done in several ways, but more on that shortly.

In short: fragments are used to customize a bundle.
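To make this concrete, here is a minimal sketch of the MANIFEST.MF of such a customizing fragment; the symbolic names are made up, and Fragment-Host is the header that ties the fragment to its host:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.fipro.logging.config
Bundle-Version: 1.0.0
Fragment-Host: org.apache.log4j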

When are fragments the wrong choice?

To explain this we will look at the different ways to provide native code as an example.

One way is to use the Bundle-NativeCode manifest header. This way the native code for all environments is packaged in the same bundle. So no fragments here, but it is sometimes not easy to set up; at least I struggled with this approach some years ago.
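For illustration, such a header could look like the following sketch (the library paths and platforms are examples; each clause maps a native library to the environment it serves):

Bundle-NativeCode: lib/linux/libexample.so; osname=Linux; processor=x86-64,
 lib/win32/example.dll; osname=Win32; processor=x86-64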

A more common approach is to use fragments. For every supported platform there is a corresponding fragment that contains the platform-specific native library. The host bundle, on the other side, typically contains the Java code that loads the native library and provides the interface to access it (e.g. via JNI). This scenario is IMHO a good example of using fragments to provide native code: the fragments only extend the host bundle without exposing anything publicly.

Another approach is the SWT approach. The difference to the above scenario is that the host bundle org.eclipse.swt is an almost empty bundle that only contains the OSGi meta-information in its MANIFEST.MF. The native libraries as well as the corresponding Java code are supplied via platform-dependent fragments. Although SWT is often referred to as the reference for dealing with native libraries in OSGi, I think that approach is wrong.

To elaborate on why I think the approach org.eclipse.swt is using is wrong, we will have a look at a small example.

  1. Create a host bundle in Eclipse via File -> New -> Plug-in Project and name it org.fipro.host. Make sure not to create an Activator or anything else.
  2. Create a fragment for that host bundle via File -> New -> Other -> Plug-in Development -> Fragment Project and name it org.fipro.host.fragment. Specify the host bundle org.fipro.host on the second wizard page.
  3. Create the package org.fipro.host in the fragment project.
  4. Create the following simple class (yes, it has nothing to do with native code in fragments, but it shows the same issues).
    package org.fipro.host;
    
    public class MyHelper {
    	public static void doSomething() {
    		System.out.println("do something");
    	}
    }
    

So far, so good. Now let’s consume the helper class.

  1. Create a new bundle via File -> New -> Plug-in Project and name it org.fipro.consumer. This time let the wizard create an Activator.
  2. In Activator#start(BundleContext), try to call MyHelper#doSomething().

Now the fun begins. Of course MyHelper cannot be resolved at this time. We first need to make the package consumable in OSGi. This can be done in the fragment or in the host bundle. I personally tend to configure Export-Package in the bundle/fragment where the package is located, so we add the Export-Package manifest header to the fragment. To do this, open the file org.fipro.host.fragment/META-INF/MANIFEST.MF, switch to the Runtime tab and click Add… to add the package org.fipro.host.

Note: As a fragment is an extension to a bundle, you can also specify the Export-Package header for org.fipro.host in the host bundle org.fipro.host; org.eclipse.swt is configured this way. But notice that the fragment packages are then not automatically resolved by the PDE Manifest Editor, and you need to add the manifest header manually.

After that, the package org.fipro.host can be consumed by other bundles. Open the file org.fipro.consumer/META-INF/MANIFEST.MF and switch to the Dependencies tab. At this point it doesn't matter whether you use Required Plug-ins or Imported Packages, although Import-Package should always be the preferred way, as we will see shortly.
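In the consumer manifest, the two variants would look roughly like this (only one of the two headers is needed; version constraints are omitted for brevity):

Require-Bundle: org.fipro.host

Import-Package: org.fipro.host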

Although the manifest headers are configured correctly, the MyHelper class cannot be resolved. The reason for this is the PDE tooling: it needs additional information to construct proper class paths for building. This can be done by adding the following line to the manifest file of org.fipro.host:

Eclipse-ExtensibleAPI: true

After this additional header is added, the compilation errors are gone.

Note: This additional manifest header is not necessary and not used at runtime. At runtime, a fragment is always allowed to add additional packages, classes and resources to the API of the host.

After the compilation errors are gone in our workspace and the application runs fine, let's try to build it with Maven Tycho. I don't want to walk through the whole process of setting up a Tycho build, so let's simply assume you have a running Tycho build and include the three projects in that build. Using POM-less Tycho, this simply means adding the three projects to the modules section of the build.

You can find further information on Tycho here:
Eclipse Tycho for building Eclipse Plug-ins and RCP applications
POM-less Tycho builds for structured environments

Running the build will fail with a compilation failure: the Activator class does not compile because the import org.fipro.host cannot be resolved. Similar to PDE, Tycho is not aware of the build dependency on the fragment. This can be solved by adding an extra. entry to the build.properties of the org.fipro.consumer project.

extra.. = platform:/fragment/org.fipro.host.fragment

See the Plug-in Development Environment Guide for further information about build configuration.

After that entry has been added to the build.properties of the consumer bundle, the Tycho build succeeds as well.

What is wrong with the above?

At first sight it is quite obvious what is wrong with the above solution: you need to configure the tooling in several places to make compilation and build work. These workarounds even introduce dependencies where there shouldn't be any. In the above example this might not be a big issue, but think about platform-dependent fragments. Do you really want to configure a build dependency on a win32.win32.x86 fragment on the consumer side?

The above scenario even introduces issues for installations with p2. Using the empty host with implementations in the fragments forces you to ensure that at least (or exactly) one fragment is installed together with the host, which is another workaround in my opinion (see Bug 361901 for further information).

OSGi purists will say that the main issue is located in the PDE tooling and Tycho, because the build dependencies are kept as close as possible to the runtime dependencies (see for example here), and that using tools like Bndtools you don't need these workarounds. In the first place I agree with that. But unfortunately it is not possible (or only hard to achieve) to use Bndtools for Eclipse application development, mainly because Eclipse features, applications and products are not known in plain OSGi. Therefore the feature-based update mechanism of p2 is not usable either. But I don't want to start the PDE vs. Bndtools discussion; that is worth another post (or series of posts).

In my opinion the real issue in the above scenario, and therefore also in org.eclipse.swt, is the wrong usage of fragments. Why is there a host bundle that only contains the OSGi meta-information? After thinking a while about this, I realized that the only reason can be laziness: users want to use Require-Bundle instead of configuring the several needed Import-Package entries. IMHO this is the only reason why the org.eclipse.swt bundle with its multiple platform-dependent fragments exists.

Let’s try to think about possible changes. Make every platform dependent fragment a bundle and configure the Export-Package manifest header for every bundle. That’s it on the provider side. If you wonder about the Eclipse-PlatformFilter manifest header, that works for bundles aswell as for fragments. So we don’t loose anything here. On the consumer side we need to ensure that Import-Package is used instead of Require-Bundle. This way we declare dependencies on the functionality, not the bundle where the functionality originated. That’s all! Using this approach, the workarounds mentioned above can be removed. PDE and Tycho are working as intended, as they can simply resolve bundle dependencies. I have to admit that I’m not sure about p2 regarding the platform dependent bundles. Would need to check this separately.

Conclusion

Having a look at the two initial statements about fragments

  • a fragment is an extension to an existing bundle
  • fragments are used to customize a bundle

it is IMHO wrong to make API publicly available from a fragment. These statements could even be modified into the following:

  • a fragment is an optional extension to an existing bundle

Having that statement in mind, things get even clearer when thinking about fragments. Here is another example to strengthen my statement. Suppose you have a host bundle that already exports a package org.fipro.host. Now you have a fragment that adds an additional public class to that package, and a consumer bundle that uses that class. Using Bndtools or the workarounds for PDE and Tycho shown above, this will compile and build fine. But what if the fragment is not deployed or started at runtime? Since there is no constraint for the consumer bundle that would identify the missing fragment, the consumer bundle will start anyway, and you will get a ClassNotFoundException at runtime.

Personally I think that every time a direct dependency on a fragment is introduced, there is something wrong.

There might be exceptions to that rule. One could be a custom logging appender that needs to be accessible in other places, e.g. for programmatic configuration. As the logging appender needs to be in the same classloader as the logging framework (e.g. org.apache.log4j), it needs to be provided via a fragment. And to access it programmatically, a direct dependency on the fragment is needed. But honestly, even in such a case a direct dependency on the fragment can be avoided with a good module design. Such a design could be, for example, to make the appender an OSGi service: the service interface would be defined in a separate API bundle, and the programmatic access would be implemented against the service interface. Therefore no direct dependency on the fragment would be necessary.
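A minimal sketch of such a design could look like the following. The LogAppender interface and both classes are hypothetical, and the registration of the fragment-hosted implementation as a service is omitted; the point is that the consumer is wired against the interface from the API bundle only:

// Hypothetical API bundle org.fipro.logging.api: defines only the contract
public interface LogAppender {
    void append(String message);
}

// Hypothetical consumer bundle: uses Declarative Services
// (org.osgi.service.component.annotations) to get the appender injected
@Component
public class AppenderConfigurator {

    @Reference
    private LogAppender appender;

    @Activate
    void activate() {
        // programmatic access happens against the service interface,
        // so no direct dependency on the providing fragment is needed
        appender.append("configured via the service interface");
    }
}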

As I struggled for several days searching for solutions to fragment dependency issues, I hope this post can help others solve such issues. Basically my solution is to get rid of all fragments that export API and either make them separate bundles or let them provide their API via services.

If someone with deeper knowledge of OSGi ever comes across this post and has comments or remarks about my statements, please let me know. I'm always happy to learn something new or gain new insights.


by Dirk Fauth at February 09, 2016 08:02 AM

The buzz around Eclipse Che

by Ian Skerrett at February 08, 2016 10:56 PM

Just over two weeks ago the Eclipse Che project released a beta version of Che 4.0. We published an article introducing Eclipse Che in our Eclipse Newsletter so readers can learn more about the highlights of Che.

The feedback in the community has been pretty exciting to watch. On Twitter, people are certainly creating a buzz about the future of the IDE.

InfoWorld is calling Eclipse Che the launch of the cloud IDE revolution.

The Eclipse Che GitHub repo has 1500 stars and 200 forks.

There have been over 100,000 downloads of the Che beta, so people are clearly trying it out.

The buzz is certainly growing around Eclipse Che. At EclipseCon in March you will be able to experience Eclipse Che first hand, including Tyler Jewell's keynote address on the Evolution and Future of the IDE. If you are interested in the future of cloud IDEs, then plan to attend EclipseCon.



by Ian Skerrett at February 08, 2016 10:56 PM

5 open source IoT projects to watch in 2016

by Benjamin Cabé at February 08, 2016 09:57 PM

The IoT industry is slowly but steadily moving from a world of siloed, proprietary solutions to embracing more and more open standards and open source technologies. What's more, the open source projects for IoT are becoming more and more integrated, and you can now find one-stop-shop open source solutions for things like programming your IoT microcontroller or deploying a scalable IoT broker in a cloud environment.

Here are the Top 5 Open Source IoT projects that you should really be watching this year.

  • #1 – The Things Network

    LP-WAN technologies are going to be a hot topic in 2016. It's unclear who will win, but the availability of an open-source ecosystem around them is going to be key. The Things Network is a crowdsourced worldwide community for bringing LoRaWAN to the masses. Most of their backend is open source and on GitHub.

What about you? What are the projects you think are going to make a difference in the months to come?

In case you missed it, the upcoming IoT Summit, co-located with EclipseCon North America, is a great opportunity for you to learn about some of the projects mentioned above, so make sure to check it out!


by Benjamin Cabé at February 08, 2016 09:57 PM

JavaScript Performance V8 vs Nashorn (for Typescript Language Service)

by Tom Schindl at February 08, 2016 02:18 PM

Over the weekend I worked on my API to interface with the TypeScript language service from Java code.

While the initial version I developed some months ago used the "tsserver" to communicate with the LanguageService, I decided to rewrite that and to interface with the service directly (in memory or through an extra process).

For the in-memory version I implemented two possible ways to load the JavaScript sources and call into them:

  • Nashorn
  • V8 (with the help of j2v8)

I already expected Nashorn to be slower than V8, but after implementing a small (non-scientific) performance sample, the numbers show that Nashorn is between 2 and 4 times slower than V8 (only one call is faster in Nashorn).

The sample code looks like this:

public static void main(String[] args) {
  try {
    System.err.println("V8");
    System.err.println("============");
    executeTests(timeit("Boostrap", () -> new V8Dispatcher()));
    System.err.println();
    System.err.println("Nashorn");
    System.err.println("============");
    executeTests(timeit("Nashorn", () -> new NashornDispatcher()));
  } catch (Throwable e) {
    e.printStackTrace();
  }
}

private static void executeTests(Dispatcher dispatcher) throws Exception {
  timeit("Project", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "createProject", String.class, "MyProject").get());

  timeit("File", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "addFile", String.class, "p_0", DispatcherPerformance.class.getResource("sample.ts")).get());

  timeit("File", () -> dispatcher.sendSingleValueRequest(
    "LanguageService", "addFile", String.class, "p_0", DispatcherPerformance.class.getResource("sample2.ts")).get());

  timeit("Outline", () -> dispatcher.sendMultiValueRequest(
    "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class, "p_0", "f_0").get());

  timeit("Outline", () -> dispatcher.sendMultiValueRequest(
    "LanguageService", "getNavigationBarItems", NavigationBarItemPojo.class, "p_0", "f_1").get());
}

This produces the following numbers:

V8
============
Bootstrap : 386
Project : 72
File : 1
File : 0
Outline : 40
Outline : 10

Nashorn
============
Nashorn : 4061
Project : 45
File : 29
File : 2
Outline : 824
Outline : 39

The important numbers to compare are:

  • Bootstrap: ~400ms vs ~4000ms
  • 2nd Outline: ~10ms vs ~40ms

So performance indicates that the service should go with j2v8, but requiring it as a hard dependency has the following disadvantages:

  • you need to ship different native binaries for each OS you want to run on
  • you need to ship V8, which might or might not be a problem

So the internal strategy is: if j2v8 is available we use V8; if not, we fall back to the slower Nashorn. That is a strategy I would probably recommend for your own projects as well.
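A sketch of how such a fallback could be implemented is shown below. Dispatcher, V8Dispatcher and NashornDispatcher are the types from the sample above; probing for the central j2v8 class is my assumption of how the availability check might be done:

static Dispatcher createDispatcher() {
  try {
    // probe whether j2v8 is on the classpath;
    // com.eclipsesource.v8.V8 is its central class
    Class.forName("com.eclipsesource.v8.V8");
    return new V8Dispatcher();
  } catch (ClassNotFoundException | LinkageError e) {
    // j2v8 (or its native library) is not available - fall back to Nashorn
    return new NashornDispatcher();
  }
}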

If there are any Nashorn experts around, feel free to help me fix my implementation.



by Tom Schindl at February 08, 2016 02:18 PM

Branch by Abstraction and OSGi

by David Bosschaert (noreply@blogger.com) at February 08, 2016 10:02 AM

Inspired by my friend Philipp Suter, who pointed me at this Wired article http://www.wired.com/2016/02/rebuilding-modern-software-is-like-rebuilding-the-bay-bridge which relates to Martin Fowler's Branch by Abstraction, I was thinking: how would this work in an OSGi context?

Leaving aside the remote nature of the problem for the moment, let's focus on the pure API aspect here. Whether remote or not is really orthogonal... I'll work through this with example code that can be found here: https://github.com/coderthoughts/primes

Let's say you have an implementation to compute prime numbers:
public class PrimeNumbers {
  public int nextPrime(int n) {
    // computes the next prime after n - see
    // https://github.com/coderthoughts/primes for details
    return p;
  }
}
And a client program that regularly uses the prime number generator. I have chosen a client that runs in a loop to reflect a long-running program, similar to a long-running process communicating with a microservice:
public class PrimeClient {
  private PrimeNumbers primeGenerator = new PrimeNumbers();
  private void start() {
    new Thread(() -> {
      while (true) {
        System.out.print("First 10 primes: ");
        for (int i=0, p=1; i<10; i++) {
          if (i > 0) System.out.print(", ");
          p = primeGenerator.nextPrime(p);
          System.out.print(p);
        }
        System.out.println();
        try { Thread.sleep(1000); } catch (InterruptedException ie) {}
      }
    }).start();
  }
 
  public static void main(String[] args) {
    new PrimeClient().start();
  }
}
If you have the source code cloned or forked using git, you can run this example easily by checking out the stage1 branch and using Maven:
.../primes> git checkout stage1
.../primes> mvn clean install
... maven output
[INFO] ------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------
Then run it from the client submodule:
.../primes/client> mvn exec:java -Dexec.mainClass=\
org.coderthoughts.primes.client.PrimeClient
... maven output
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
... and so on ...
Ok so our system works. It keeps printing out prime numbers, but as you can see there is a bug in the output. We also want to replace it in the future with another implementation. This is what the Branch by Abstraction Pattern is about.

In this post I will look at how to do this with OSGi Services. OSGi Services are just POJOs registered in the OSGi Service Registry. OSGi Services are dynamic: they can come and go, and OSGi service consumers dynamically react to these changes, as we'll see. In the following few steps we will change the implementation to an OSGi Service. Then we'll update the service at runtime to fix the bug above, without even stopping the service consumer. Finally we'll replace the service implementation with a completely different implementation, also without stopping the client.

Turn the application into OSGi bundles

We'll start by turning the program into an OSGi program that contains two bundles: the client bundle and the impl bundle. We'll use the Apache Felix OSGi Framework and OSGi Declarative Services, which provides a nice dependency-injection model for working with OSGi Services.

You can see all this on the git branch called stage2:
.../primes> git checkout stage2
.../primes> mvn clean install
The client code is quite similar to the original client, except that it now contains some annotations to instruct DS to start and stop it. Also, the PrimeNumbers class is now injected via the @Reference annotation instead of being constructed directly. The greedy policyOption instructs the injector to re-inject if a better match becomes available:
@Component
public class PrimeClient {
  @Reference(policyOption=ReferencePolicyOption.GREEDY)
  private PrimeNumbers primeGenerator;
  private volatile boolean keepRunning = false;
 
  @Activate
  private void start() {
    keepRunning = true;
    new Thread(() -> {
      while (keepRunning) {
        System.out.print("First 10 primes: ");
        for (int i=0, p=1; i<10; i++) {
          if (i > 0) System.out.print(", ");
          p = primeGenerator.nextPrime(p);
          System.out.print(p);
        }
        System.out.println();
        try { Thread.sleep(1000); } catch (InterruptedException ie) {}
      }
    }).start();
  }
 
  @Deactivate
  private void stop() {
    keepRunning = false;
  }
}
The prime generator implementation code is the same except for an added annotation. We register the implementation class in the Service Registry so that it can be injected into the client:
@Component(service=PrimeNumbers.class)
public class PrimeNumbers {
  public int nextPrime(int n) {
    // computes next prime after n
    return p;
  }
}
As it's now an OSGi application, we run it in an OSGi framework. I'm using the Apache Felix Framework version 5.4.0, but any other OSGi R6 compliant framework will do.
> java -jar bin/felix.jar
g! start http://www.eu.apache.org/dist/felix/org.apache.felix.scr-2.0.2.jar
g! start file:/.../clones/primes/impl/target/impl-0.1.0-SNAPSHOT.jar
g! install file:/.../clones/primes/client/target/client-0.1.0-SNAPSHOT.jar
Now you should have everything installed that you need:
g! lb
START LEVEL 1
ID|State |Level|Name
0|Active | 0|System Bundle (5.4.0)|5.4.0
1|Active | 1|Apache Felix Bundle Repository (2.0.6)|2.0.6
2|Active | 1|Apache Felix Gogo Command (0.16.0)|0.16.0
3|Active | 1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
4|Active | 1|Apache Felix Gogo Shell (0.10.0)|0.10.0
5|Active | 1|Apache Felix Declarative Services (2.0.2)|2.0.2
6|Active | 1|impl (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT
7|Installed | 1|client (0.1.0.SNAPSHOT)|0.1.0.SNAPSHOT
We can start the client bundle:
g! start 7
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
... and so on ..
You can now also stop the client:
g! stop 7
Great - our OSGi bundles work :)
Now we'll do what Martin Fowler calls creating the abstraction layer.

Introduce the Abstraction Layer: the OSGi Service

Go to the branch stage3 for the code:
.../primes> git checkout stage3
.../primes> mvn clean install
The abstraction layer for the Branch by Abstraction pattern is provided by an interface that we'll use as a service interface. This interface is in a new maven module that creates the service OSGi bundle.
public interface PrimeNumberService {
    int nextPrime(int n);
}
We'll turn our prime number generator into an OSGi Service. The only difference here is that our PrimeNumbers implementation now implements the PrimeNumberService interface. Also, the @Component annotation does not need to declare the service in this case: as the component implements an interface, it will automatically be registered as a service under that interface:
@Component
public class PrimeNumbers implements PrimeNumberService {
    public int nextPrime(int n) {
      // computes next prime after n
      return p;
    }
}
Run everything in the OSGi framework. The result is still the same but now the client is using the OSGi Service:
g! lb
START LEVEL 1
   ID|State      |Level|Name
    0|Active     |    0|System Bundle (5.4.0)|5.4.0
    1|Active     |    1|Apache Felix Bundle Repository (2.0.6)|2.0.6
    2|Active     |    1|Apache Felix Gogo Command (0.16.0)|0.16.0
    3|Active     |    1|Apache Felix Gogo Runtime (0.16.2)|0.16.2
    4|Active     |    1|Apache Felix Gogo Shell (0.10.0)|0.10.0
    5|Active     |    1|Apache Felix Declarative Services (2.0.2)|2.0.2
    6|Active     |    1|service (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
    7|Active     |    1|impl (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
    8|Resolved  |    1|client (1.0.0.SNAPSHOT)|1.0.0.SNAPSHOT
g! start 8
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
You can introspect your bundles too and see that the client is indeed wired to the service provided by the service implementation:
g! inspect cap * 7
org.coderthoughts.primes.impl [7] provides:
-------------------------------------------
...
service; org.coderthoughts.primes.service.PrimeNumberService with properties:
   component.id = 0
   component.name = org.coderthoughts.primes.impl.PrimeNumbers
   service.bundleid = 7
   service.id = 22
   service.scope = bundle
   Used by:
      org.coderthoughts.primes.client [8]
Great - now we can finally fix that annoying bug in the service implementation: that it missed 2 as a prime! While we're doing this we'll just keep the bundles in the framework running...

Fix the bug in the implementation without stopping the client

The prime number generator is fixed in the code in stage4:
.../primes> git checkout stage4
.../primes> mvn clean install
It's a small change to the impl bundle. The service interface and the client remain unchanged. Let's update our running application with the fixed bundle:
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
First 10 primes: 3, 5, 7, 11, 13, 17, 19, 23, 29, 31
g! update 7 file:/.../clones/primes/impl/target/impl-1.0.1-SNAPSHOT.jar
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
Great - finally our service is fixed! And notice that the client did not need to be restarted! The DS injection, via the @Reference annotation, handles all of the dynamics for us! The client code simply uses the service as a POJO.

The branch: change to an entirely different service implementation without client restart

Being able to fix a service without even restarting its users is already immensely useful, but we can go even further. I can write an entirely new and different service implementation and migrate the client to use it, without restarting the client, using the same mechanism.

This code is on the branch stage5 and contains a new bundle impl2 that provides an implementation of the PrimeNumberService that always returns 1. 
.../primes> git checkout stage5
.../primes> mvn clean install
While the impl2 implementation obviously does not produce correct prime numbers, it does show how you can completely change the implementation. In the real world a totally different implementation could be working with a different back-end, a new algorithm, a service migrated from a different department, etc.

Or alternatively you could write a façade service implementation that round-robins across a number of back-end services, or selects a backing service based on the features that the client should be getting. In the end the solution will always be an alternative service in the service registry that the client can dynamically switch to.

So let's start that new service implementation:
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
g! start file:/.../clones/primes/impl2/target/impl2-1.0.0-SNAPSHOT.jar
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
g! stop 7
First 10 primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29
First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
First 10 primes: 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
Above you can see that when you install and start the new bundle, initially nothing happens. At this point both services are installed at the same time. The client is still bound to the original service: it's still there, and there is no reason to rebind, as the new service is no better match than the original. But when the bundle that provides the initial service is stopped (bundle 7), the client switches over to the implementation that always returns 1. This switchover could happen at any point, even halfway through the production of the list, so you might even be lucky enough to see something like:
First 10 primes: 2, 3, 5, 7, 11, 13, 1, 1, 1, 1
I hope I have shown that OSGi services provide an excellent mechanism to implement the Branch by Abstraction pattern, and even make it possible to switch between suppliers without stopping the client!

In the next post I'll show how we can add aspects to our services, still without modifying or even restarting the client. These can be useful for debugging, tracking or measuring how a service is used.

PS - Oh, and on the remote thing: this will work just as well locally or remotely. Use OSGi Remote Services to turn your local service into a remote one. For available Remote Services implementations see https://en.wikipedia.org/wiki/OSGi_Specification_Implementations#100:_Remote_Services

With thanks to Carsten Ziegeler for reviewing and providing additional ideas.

by David Bosschaert (noreply@blogger.com) at February 08, 2016 10:02 AM

Bug 75981 is fixed!

February 05, 2016 11:00 PM

Like many of my Eclipse stories, it starts during a coffee break.

  • Have you seen the new TODO template I have configured for our project?

  • Yes. It is nice…​

2016-02-06_todo-template-old

  • But I hate having to set the date manually.

  • I know, but it is not possible with Eclipse.

  • …​

A quick search on Google pointed me to Bug 75981. I was not the only one looking for a solution to this issue.

By analyzing the Bugzilla history I noticed that two contributors had already started to work on this (a long time ago), and that the latest patch never got an answer to its requests for feedback. I reworked the last proposal… and…

I am happy to tell you that you can now do the following:

2016-02-06_templates-preferences

Short description of the possibilities:

  • As before you can use the date variable with no argument. Example: ${date}

  • You can use the variable with additional arguments. In this case you will need to name the variable (since you are not reusing the date somewhere else, the name of the variable doesn’t matter). Example: ${mydate:date}

    • The first parameter is the date format. Example: ${d:date('yyyy-MM-dd')}

    • The second parameter is the locale. Example: ${maDate:date('EEEE dd MMMM yyyy HH:mm:ss Z', 'fr')}
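For instance, the TODO template from the coffee-break story could now be defined along these lines (${user} and ${cursor} are the usual template variables; the exact template text is just an example):

    // TODO ${user} ${d:date('yyyy-MM-dd')}: ${cursor}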

Back to our use case, it now works as expected:

2016-02-06_todo-template-new

Do not hesitate to try the feature and report any issue you find. The fix is part of the M5 milestone of Eclipse Neon; you can download this version now.

This experiment was also a great opportunity for me to see how much the development process at Eclipse has improved:

  • With Eclipse Oomph (a.k.a. the Eclipse Installer) it is possible to set up a workspace to work on "Platform Text" very quickly

  • With Gerrit it is much easier for me (a simple contributor) to work with the committers of the project (propose a patch, discuss each line, push a new version, rebase on top of HEAD…)

  • With the Maven build, the build is reproducible (I never tried to build the platform with the old PDE Build, but I believe it was not possible for somebody like me)

Where I spent most of my time:

  • Analyzing the proposed patches and the existing feedback in Bugzilla

  • Figuring out how I could add some unit tests (for the existing behaviour and for the new use cases)

This was a great experience for me and I am really happy to have contributed this fix.


February 05, 2016 11:00 PM