Andmore 0.5-M7 available.

by kingargyle at May 04, 2016 06:11 PM


It’s been a while since I posted an update. Milestone 0.5-M7 is available for download. This is a pretty big upgrade to the underlying support libraries. You should now be able to work with Android versions up to M with this update. It is recommended that you update your SDK as well if you haven’t already.

Android N may work as well, and we’ll need people to test this out. The latest version is available in the Eclipse Marketplace:

https://marketplace.eclipse.org/content/andmore

The latest version can always be obtained from the following p2 url:

http://download.eclipse.org/andmore/latest/

Please do give this a tire kick. Thanks again to Matthew Piggot for contributing the update and to Kaloyan for getting this big change out.



by kingargyle at May 04, 2016 06:11 PM

Eclipse lambda driven UI creation

by Erdal Karaca (noreply@blogger.com) at May 04, 2016 05:52 PM

About

With the addition of lambda expressions in Java 8, you can now create nice APIs that reduce boilerplate code. One of the most tedious tasks has always been writing UI code, as it involves lots of boilerplate and requires keeping track of which control/widget is a child of which. This article introduces a small helper library I use in my projects: a thin lambda-driven API for creating structured UI code.

Conventional UI code

The following code will create a simple UI form:
private Text text;

public void createUIConventional(Composite parent) {
    parent.setLayout(GridLayoutFactory.swtDefaults().numColumns(3).create());

    Label label = new Label(parent, SWT.NONE);
    label.setText("Selection");

    ComboViewer viewer = new ComboViewer(parent, SWT.BORDER | SWT.SINGLE | SWT.READ_ONLY);
    customizeComboViewer(viewer);
    viewer.getCombo().setLayoutData(new GridData(GridData.FILL_HORIZONTAL));

    Button button = new Button(parent, SWT.NONE);
    button.setText("Apply");
    button.addSelectionListener(new SelectionAdapter() {
        @Override
        public void widgetSelected(SelectionEvent e) {
            text.setText("Selection: " + viewer.getCombo().getText());
        }
    });

    text = new Text(parent, SWT.READ_ONLY | SWT.BORDER);
    text.setLayoutData(
            GridDataFactory.swtDefaults().span(3, 1).grab(true, true).align(SWT.FILL, SWT.FILL).create());
}


As you can see, SWT requires that you always create a new Control/Widget as a child to an existing one, i.e. you have to provide the parent as the first constructor parameter.
Furthermore, if you have lots of UI elements to create, the readability of the code quickly decreases and you can hardly see the hierarchy of the UI widgets WITHIN the code.

In the past, I often used local code blocks to denote the hierarchy of UI controls within the code to achieve better readability.
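For illustration, here is a minimal sketch of that block style (my own reconstruction, not code from the library): plain Java blocks visually group the code for each child widget.

public void createUIWithBlocks(Composite parent) {
    parent.setLayout(GridLayoutFactory.swtDefaults().numColumns(3).create());
    {
        Label label = new Label(parent, SWT.NONE);
        label.setText("Selection");
    }
    {
        ComboViewer viewer = new ComboViewer(parent, SWT.BORDER | SWT.SINGLE | SWT.READ_ONLY);
        customizeComboViewer(viewer);
        viewer.getCombo().setLayoutData(new GridData(GridData.FILL_HORIZONTAL));
    }
    // ... one local block per child widget; the braces hint at the hierarchy
}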

Lambda driven UI code

The above example can now be rewritten as follows:

private SwtUI root;

public void createUI(Composite parent) {
    root = SwtUI.wrap(parent);
    root.layout(GridLayoutFactory.swtDefaults().numColumns(3).create())//
            .child(() -> SwtUI.create(Label::new)//
                    .text("Selection"))//
            .child(() -> ViewerUI.createViewer(ComboViewer::new, SWT.BORDER | SWT.SINGLE | SWT.READ_ONLY)//
                    .id("selectionCombo")//
                    .customizeViewer(this::customizeComboViewer)//
                    .layoutData(new GridData(GridData.FILL_HORIZONTAL)))//
            .child(() -> SwtUI.create(Button::new)//
                    .text("Apply")//
                    .on(SWT.Selection, this::onButtonClick))//
            .child(() -> SwtUI.create(Text::new, SWT.READ_ONLY | SWT.BORDER)//
                    .id("textField")//
                    .layoutData(GridDataFactory.swtDefaults().span(3, 1).grab(true, true).align(SWT.FILL, SWT.FILL)
                            .create()));
}

Now, because method references are used internally, instantiating a control/widget no longer requires passing its parent, as it will always be parented within the current context.

You can find the code at:

https://github.com/erdalkaraca/lambda-ui

Check it out into your workspace, experiment with it, and do not hesitate to give feedback.

by Erdal Karaca (noreply@blogger.com) at May 04, 2016 05:52 PM

Learning Vagrant, e.g. to try out GNOME instead of Unity on Ubuntu 16.04 for better Eclipse support

by Michael Vorburger (noreply@blogger.com) at May 03, 2016 04:05 PM

Recently I've started looking at a project which uses a bunch of Virtual Machines (VMs) to make an easy, ready-made development/test environment available. This project uses multiple Vagrant machines, and while I had heard of Vagrant before, I never got around to actually trying it out - until today. It's actually pretty cool:

1) Install Vagrant from vagrantup.com/downloads
2) Install VirtualBox from virtualbox.org/Downloads
3) Type "vagrant init hashicorp/precise64; vagrant up"
4) Type "vagrant ssh" and you're in the VM on CLI

Now I was wondering if I could also use this to try out the switch from Ubuntu Unity to GNOME, which I had wanted to make because Eclipse SWT is a PITA on Unity's tweaked GTK3 stack of Ubuntu 16.04 LTS (although the latest Eclipse Neon v4.6 fixes a lot of those problems, and I hear that Eclipse on GNOME Shell with the standard "Adwaita" theme just works smoother). I had been putting off the switch on my main work machine for fear of breaking it... but what better than a throw-away VM to play and learn? It turns out that Vagrant can easily be used both for the typical headless server-type VMs and for desktop environments:

1) vagrant init boxcutter/ubuntu1604-desktop; vagrant up <= brings up a standard Ubuntu install
2) vagrant ssh -- -X
3) gnome-terminal
4) sudo apt-get install gnome-shell gnome-themes-standard

At this point there will be a prompt asking you to choose between LightDM (the default "Display Manager" for the Unity Greeter in Ubuntu) and GDM (the GNOME Display Manager, used e.g. by Fedora, I think). I initially chose the "gdm3" option. Reboot the VM, and... oops, black screen. Hm, probably not quite a match made in heaven here - but whatever, with a test VM, you simply:

5) vagrant halt
6) vagrant destroy
7) vagrant box remove
8) rm -rf .vagrant/

and start over with step 1) vagrant init ... and try the "lightdm" option, which works! Now I'm comfortable making the same change on my main production station. Thank you Vagrant for this one-line playground set-up! (BTW: steps 7) and 8) above may not actually be needed - they are just shown as the way to completely reset things.)

Note that the specific project (OVSDB of ODL) used VMs with Vagrant because it needs an entire local "micro cloud", with OpenStack and mininet. Not all projects require full-blown VMs for demo and development environments; alternatives include: 1) Eclipse "Oomph" Installer-based workspace provisioning, if all you need is a ready Eclipse IDE; 2) an embedded Java database, web server and executable WAR, if all you need to solve is the dependency on an external database & container; 3) Docker-based approaches instead of full VMs.

by Michael Vorburger (noreply@blogger.com) at May 03, 2016 04:05 PM

Microsoft Graph Unifies Access to All APIs

by Jerome Louvel at May 03, 2016 03:00 PM

At the Microsoft Build conference in San Francisco, InfoQ had the opportunity to speak with Gareth Jones, API architect for the Microsoft Graph API which aims at making life easier for developers by providing a unified API endpoint. With the prevalence of Microsoft products in most businesses around the world, it is interesting to see how Microsoft solves this issue at their scale.

By Jerome Louvel

by Jerome Louvel at May 03, 2016 03:00 PM

Great Fixes from Nathan Ridge

by waynebeaton at May 03, 2016 01:13 AM

Our second winner in the Great Fixes for Eclipse Neon Skills Competition is Nathan Ridge. Nathan made eighty qualifying contributions to the Eclipse C/C++ Development Tools (CDT) project.

Nathan uses CDT both in his day job, which involves systems programming in C++, and in personal projects, which are mostly small fun projects primarily for learning, also in C++. He uses both gcc and clang, primarily on Linux, though most of the code that he writes is cross-platform. He likes the CDT editor’s code navigation features, which he finds invaluable for finding his way around large code bases.

Nathan started filing bugs against the CDT in 2010, and started fixing them in late 2012. He was nominated to become a committer on the project in January 2016 by committer Sergey Prigogin, was enthusiastically endorsed by the project team, and has joined the team. He has made more than thirty commits since becoming a project committer.

The Great Fixes Candidates page shows a list of the commits attributed to Nathan, along with those of the other Great Fix candidates.

Eclipse Neon is the eleventh named simultaneous release of Eclipse projects. Our first official simultaneous release, Eclipse Callisto, included ten open source projects; with Neon, we coordinate the release of 87 open source projects. Participating projects commit to implement a set of participation requirements and agree to produce regular milestone builds following the schedule established by the Eclipse Planning Council. Project teams engage in regular inter-project communication via the cross-project-issues-dev mailing list.

 



by waynebeaton at May 03, 2016 01:13 AM

Metamodel (Ecore) Design Checklist - part 1

by Cédric Brun (cedric.brun@obeo.fr) at May 02, 2016 12:00 AM

Be meticulous with the model describing your domain! So many aspects of your tool will trickle down from your Ecore model that it pays a lot to pause for a bit and do some basic sanity checks.

The Ecore model in the center is the basis for so many things!

Eclipse Modeling technologies enable you to build graphical, tree or textual editors, connectors to import or export data, and code generators, and all of these features in your tool are directly tied to or inferred from an Ecore model. Better get it right.

I compiled the following checklist based on my personal experience; it is not exhaustive and I expect it to live and get richer over time.

Most of the checks stated here are very easy to comply with when considered from the start. Later down the road, the gain/risk ratio should be evaluated, as some changes might require updating code or files, or might just be too much work to be worth it then. Because of this, and because we sometimes learned the hard way, you might quite easily find some Ecore models I authored which are not 100% compliant with this list ;).

By the way, feel free to tell me about your own rules, I might add them to the list!


Ground rules

☑ The purpose and audience of the models are stated

A model is a representation of a system for a given purpose. Just like Object Oriented Programming was never intended to help “structure code so that it’s close to the real world”, a metamodel doesn’t have to match the real world.

On the other hand, a metamodel should answer a specific set of questions. Start by stating those questions. And for whom.

The “who” matters, as it has important implications regarding the naming of the concepts. My tool of choice for defining the “who” is to take a few minutes and write down a Persona, so that I can get back to it when I need to justify a given choice. The persona description should document the user’s background, the vocabulary he is comfortable with, and the tools he is used to.

Example: Models from this metamodel will enable researchers in agriculture to answer the questions: how many resources (water, machines, humans) are needed for a given farm structure, in a given region, and for a given set of crops (wheat, sorghum…).



Example: Models from this metamodel will enable software architects to answer the questions: which services exist in my system, what are their non-functional characteristics and signatures, who owns them, and how are they related to each other?



or even

Example: Models from this metamodel will enable My Little Pony authors to answer the questions: how are the story and the characters evolving during the show, when is each character introduced, and is it consistent with the episodes previously aired?

☑ The nsURI is the definitive one and is consistent with your naming conventions

As part of this first step of setting up an identification card for your metamodel, you have to stop for a minute and come up with the EPackage nsURI. This nsURI will identify your Ecore model, starting now and forever. It is used in many places, in the generated Java code and in the plugin.xml file of your project, but more importantly, other tools or Ecore models are likely to use this URI to identify your Ecore model (in code generators, model transformers…).

Changing this is a pain. Make sure the nsURI you picked is sensible and matches your naming conventions in this regard.

The most important thing is to be consistent, and that’s not a given; see how we fail at consistent naming in Eclipse itself.

NsUris in the modeling package.

The same level of care should be used for your project name. Make sure you get it right quickly or be prepared for fiddling with identifiers in many different files.

☑ Nested EPackages are not used

There is no such thing as a sub-EPackage. Let’s just pretend this capability never existed in Ecore (and by the way, you can’t do this in Xcore).

— Comment #4 from Ed Merks Ed.Merks@gmail.com — Yes, that simply doesn’t work. It’s not possible to represent nested Ecore Package in Xcore.

Allowing the definition of subpackages within an EPackage was, in retrospect, a bad decision, as it introduced several different ways to reference a single domain. We should have a one-to-one mapping between a domain and an EPackage, the latter clearly identified by its nsURI. The notion of nested EPackages breaks this mapping, as you then have several different ways to access an EPackage. This led to slightly different interpretations among tools: one might declare a subpackage if a parent is declared, another the other way around, or not at all.

In a nutshell, one EPackage, one .ecore file, and your life will be simpler.

☑ Names are real ones, precise and consistent

Naming things is hard, and just like in every design activity it is of critical importance. For non-native English speakers it gets even harder, as we might lack some vocabulary or some subtle interpretation might escape us.

Tips and tricks:

  • use PowerThesaurus; make sure the name is the most precise you can get.
  • use the user’s background to pick the right name (having defined the Persona comes in handy).
  • try to avoid names which are so general or abstract that they could be interpreted in many different ways by your target users. Artifact and Element are probably fairly bad names (but again, use the context to decide).

In the My Little Pony world, an Element refers to the “Elements of Harmony” and has a very precise definition. The context matters.

☑ Reference and attribute names are consistent

Check that you stick with a consistent convention for your references. The main decisions in front of you:

  • do you pluralize the references with a many upper bound?
  • do you add a prefix like owned for any containment reference?
  • do you add a prefix like parent for any container reference?
  • do you add a prefix for any derived reference or attribute?

☑ All the non-abstract EClasses are supposed to be instantiated

In the very early phases it often happens that you start with a concept as an EClass and at some point you specialize it; in the end you have a conceptually abstract EClass but you just forgot to make it abstract, leaving it instantiable.

Hold on, go through all the concepts which you don’t want to be instantiable and make sure they are “abstract” or “interface”.

Introducing subclasses
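As a side note, such a check can also be done programmatically through the Ecore API. Here is a minimal sketch (class and helper names are my own) that reports concrete EClasses used as supertypes within the same EPackage, a frequent symptom of a forgotten abstract flag:

import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EClassifier;
import org.eclipse.emf.ecore.EPackage;

public class AbstractnessCheck {

    public static void reportSuspiciousEClasses(EPackage ePackage) {
        for (EClassifier classifier : ePackage.getEClassifiers()) {
            if (classifier instanceof EClass) {
                EClass eClass = (EClass) classifier;
                // a concrete EClass which is used as a supertype is worth a second look
                if (!eClass.isAbstract() && !eClass.isInterface() && hasSubtypes(eClass, ePackage)) {
                    System.out.println(eClass.getName() + " is concrete but has subtypes: intended?");
                }
            }
        }
    }

    private static boolean hasSubtypes(EClass candidate, EPackage ePackage) {
        for (EClassifier classifier : ePackage.getEClassifiers()) {
            if (classifier instanceof EClass && ((EClass) classifier).getESuperTypes().contains(candidate)) {
                return true;
            }
        }
        return false;
    }
}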

☑ 0..1 and 1..1 cardinalities have been reviewed

Go through all the attributes and references and think again: does an instance make any sense if this attribute is not valued?

EcoreTools uses bold typefaces for any required element

☑ Containment relationships have been reviewed

Ecore provides a notion of containment reifying the basic lifecycle of an instance. If an object A is contained in an object B, then whenever the object B is removed or deleted, the object A is too. Thinking about your model as a tree helps in those cases: either your object is expected to be at the root of a resource or it has to be contained by another object.

The goal here is to make a conscious decision about when should an instance disappear from the model and clearly identify the type of elements you expect as a root of a model file.

Also note that this containment relationship might be leveraged as part of the referencing of an element.

☑ Every validation rule which is not enforced by the Ecore model structure itself is named.

While designing, capture and name every validation rule which comes up. You should be able to come up with a name and hopefully a description of valid and invalid cases.

Constraints annotations in EcoreTools.

☑ The concepts are all documented.

Make sure you have documented all the EClasses or relationships which are not completely obvious. We use annotations directly in Ecore to capture the developer- or specifier-facing documentation.

Design doc annotation can be added in a diagram using EcoreTools
A table editor is also provided for convenience

Note that you can also set an attribute in the GenModel for the user documentation, and this information will be directly used by EMF in the tree editor.

☑ There are no Boolean monsters in the making

Over time a simple EClass with a couple of EAttributes can grow into a monster with many more, each one acting as a configuration “flag”. Identify such monsters in the making. Go through all the possible combinations of attribute values and make sure they are all valid; also confirm that each flag really has only two possible outcomes, true or false, and not more. Sometimes a couple of EEnumerations are better suited to capture such cross-cutting characteristics.

Booleans monster

Also check the naming of your boolean attributes. The EMF Java generator will add an “is” prefix in your API; you don’t have to add it yourself, but make sure isMyName reads well.
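For instance, assuming a boolean EAttribute named collapsible on an EClass Group (both names made up for illustration), the generated interface would look roughly like this:

public interface Group extends EObject {

    // the EMF generator prefixes boolean getters with "is" rather than "get"
    boolean isCollapsible();

    void setCollapsible(boolean value);
}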

Outside world

☑ I decided how instances should be referenced from the outside

Any EObject which is contained in a resource has a URI and might be referenced by others. But there are many ways to identify an instance. You roughly have to choose between resource-specific identification, like XMI IDs, and domain-related identification, by defining an id EAttribute or by using the EReference eKeys.

The default behavior uses the containment relationship and the index of the object within its containing reference. This solution is not suitable for 99% of the cases, as any addition or removal in a reference might break cross-references (but it is the default in EMF because it is the only one which assumes nothing about the serialization format or the EClasses).

EcoreTools will display the eKey with a small blue label on the target end
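To make the difference concrete, here is a minimal sketch (the fragments in the comments are illustrative) of how the value returned by Resource.getURIFragment(EObject) differs between the two strategies:

import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;

public class FragmentDemo {

    public static void printFragment(Resource resource, EObject object) {
        // Default: an index-based fragment such as "//@farms.0/@fields.2", which
        // breaks as soon as siblings are added, removed or reordered. With an
        // EAttribute flagged as ID (or with eKeys), EMF instead produces a stable
        // fragment such as "field-42".
        System.out.println(resource.getURIFragment(object));
    }
}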

☑ A user can’t introduce cyclic references in between model fragments


If you are planning to split your model across multiple files, or if part of it is to be referenced by other models, then you should make sure that introducing such references does not modify the referenced instance. These situations can easily arise when using EOpposite references.



Keep in mind that many EMF technologies will provide you with a way to easily and efficiently navigate inverse references which are not designed as such in the Ecore model.

For instance, using Sirius you might write queries like aql:self.eInverse(some::Type) to retrieve any instance of Type referencing the object self, or aql:self.eInverse(anEReferenceName) to navigate the inverse of the reference anEReferenceName from the object self.

☑ The dependencies in between EPackages are in control

Inheritance or references between EPackages can quickly get tricky (and the former even sooner than the latter). It is so easy to do with the modeling tools that one can easily abuse it, but in the end your Ecore model is translated into Java and OSGi components, and you’ll have to deal with the technical coordination.

As such, only introduce inter-EPackage relationships for compelling reasons, and when you do, either only reference EClasses, or, if you need to subclass, make sure you are able to cope with a strong coupling on the corresponding component.

Package dependencies diagram in EcoreTools

☑ The EClasses which might be extended by subtypes are clearly identified

This item is symmetric to the previous one: if one of your goals is for others to provide subtypes of your EClasses, explicitly design for it and document it.


That’s it for now, but the subject is far from exhausted. The next part will be more technical, with a focus on scalability and on the mapping between Ecore and Java. Follow me on Twitter to know when it gets published. In the meantime, feel free to give your feedback!

Metamodel (Ecore) Design Checklist - part 1 was originally published by Cédric Brun at CTO @ Obeo on May 02, 2016.


by Cédric Brun (cedric.brun@obeo.fr) at May 02, 2016 12:00 AM

Remote Services over (Unreliable) Networks

by Scott Lewis (noreply@blogger.com) at April 29, 2016 05:43 PM

In a previous post, I described how ECF Remote Services provided a way to create, implement, test, deploy and upgrade transport-independent remote services.

Note that 'transport-independent' does not mean 'transparent'.

For example, with network services it's always going to be relatively likely that a remote service (whether an OSGi Remote Service or any other kind of service) could fail at runtime. The truth of this is encapsulated in the first fallacy of distributed computing: The network is reliable.

A 'transparent' remote services distribution system would attempt to hide this fact from the service designer and implementer. The ECF Remote Services approach, in contrast, allows one to choose the distribution provider(s) that meet the reliability and availability requirements for that remote service, and/or potentially change that selection in the future if requirements change.

Also, the dynamics of OSGi services (inherited by Remote Services) allow network failure to be mapped by the distribution system to dynamic service departure. For example, using Declarative Services, responding at the application level to a network failure can be as simple as implementing a method:

void unbindStudentService(StudentService service) throws Exception {
    // Make service unavailable to application
    this.studentService = null;
    // Also could respond by using/binding to a backup service, removing
    // service from UI, etc
}
This is only possible if:
  1. The distribution system detects the failure
  2. The distribution system maps the detected failure to a service unregistration of the proxy

If those two things happen, the proxy unregistration will result in the unbind method above being called by DS. But these two requirements on the distribution system may not be satisfied by all distribution providers. For example, for a typical REST/http-based service, there may be no way for the distribution system to detect network failure, and so no unbinding of the service can/will occur.
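For completeness, here is a minimal DS sketch (the component and annotation wiring are my own, not from the post) showing how such an unbind method is hooked up as a dynamic, optional reference:

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component
public class StudentServiceConsumer {

    private volatile StudentService studentService;

    @Reference(cardinality = ReferenceCardinality.OPTIONAL,
            policy = ReferencePolicy.DYNAMIC,
            unbind = "unbindStudentService")
    void bindStudentService(StudentService service) {
        this.studentService = service;
    }

    void unbindStudentService(StudentService service) {
        // the proxy was unregistered (e.g. a network failure was detected): drop it
        this.studentService = null;
    }
}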

The use of transport-independent remote services, along with the ability to choose, use, or create custom distribution providers as appropriate, allows microservice developers to easily deal with the realities of distributed computing, as well as with changing service requirements.

by Scott Lewis (noreply@blogger.com) at April 29, 2016 05:43 PM

EMF Forms 1.8.0 Feature: New Group Rendering Options

by Maximilian Koegel and Jonas Helming at April 29, 2016 11:51 AM

With Mars.2, we released EMF Forms 1.8.0. EMF Forms makes it really simple to create forms that edit your data based on an EMF model. To get started with EMF Forms please refer to our tutorial. In this post, we wish to outline an important enhancement for rendering group elements in forms, which allows you to create form-based UIs even more efficiently.

The core of EMF Forms is the so-called view model, which essentially describes how a custom form-based UI should look. “Group” is one of the most frequently used layout elements in EMF Forms. A group can contain any view element, such as controls or other containers, and therefore enables you to structure a form. The group element is very flexible; it does not directly imply a certain way of being rendered. The standard EMF Forms renderer will render a group as an SWT Group:

(Screenshot: a group rendered as an SWT Group)

However, it is pretty common to provide custom renderers. This allows you to change the way groups are visualized in your custom product. The benefit is that you do not have to adapt your view models in any way; you just need to provide another renderer. As an example, groups can also be rendered as a Nebula PGroup, which makes them collapsible:

(Screenshot: a group rendered as a collapsible Nebula PGroup)

From various customer projects, we have learned that making a group collapsible is a fairly common need. To free adopters from the need to always implement a custom renderer, we have added this as an option in the view model itself. For every group you can now specifically configure its collapsibility. If you open a group within your view model, you can change the “Group Type” to “Collapsible”. Additionally, you can then set the initial collapsed state. The EMF Forms default renderer will then render it like this, using an SWT ExpandBar:

(Screenshot: a collapsible group rendered with an SWT ExpandBar)

Another common issue with groups was that they are independent elements, which are also rendered independently. While this makes sense from a conceptual point of view, it sometimes produces unexpected results when it comes to layout. The following screenshot shows two groups below each other. As you can see, the layout is calculated independently for both groups; therefore, the controls are not aligned.

(Screenshot: two independently rendered groups with unaligned controls)

While this behavior is fine for some use cases, in others users would expect the alignment. In that case, the renderer of the group has to embed itself into a parent GridLayout, which is calculated and rendered for both groups together. This is now also supported by the EMF Forms default renderer. If you configure the “Group Type” to “Embedded”, the renderer will not create independent grids for every group, but rather embed them, producing a more homogeneous layout:

(Screenshot: embedded groups sharing one grid, with aligned controls)

Of course there are many more possible adaptations available for groups and other elements. To keep the view model language simple, we try to only add options which are commonly used across projects. However, by enhancing the existing renderers, all types of customizations are possible. If you miss any feature or way to adapt it, please provide feedback by submitting bugs or feature requests, or contact us if you are interested in enhancements or support.





by Maximilian Koegel and Jonas Helming at April 29, 2016 11:51 AM

Presentation: Code in the Cloud with Eclipse Che and Docker

by Stevan Le Meur, Florent Benoit at April 28, 2016 07:47 PM

Stevan Le Meur and Florent Benoit explain how to set up a workspace in Eclipse Che, how to create the environment using Docker, and show some of the advanced features of the IDE.

By Stevan Le Meur, Florent Benoit

by Stevan Le Meur, Florent Benoit at April 28, 2016 07:47 PM

Configuring your Orion project

by Mike Rennie at April 28, 2016 04:00 PM

Everyone loves to customize stuff.

In Orion 11.0, we provided support for .tern-project files so you can do just that. A .tern-project file, for those unfamiliar with it, is a configuration file that lives at the root of your project, is entirely JSON, and tells Tern how and what to run with.

Let’s see an example file (the one used in the Orion client):

{
  "plugins": {
    "node": {},
    "requirejs": {},
    "express": {}
  },
  "ecmaVersion": 5,
  "libs": [
    "ecma5",
    "browser",
    "chai"
  ]
}

See? It’s not so bad. Now let’s talk about what all the parts mean, and why you would want to make one.

The first thing typically asked when talking about these files and configuring your project is: “what if I don’t have a .tern-project file?”. The short answer is: we start Tern with a default configuration that contains every plugin we pre-package in Orion.

The longer answer is:

  1. you get all of the required core services plugins from Orion (like HTML support, parsing plugins, etc)
  2. you get all of the pre-packaged plugins (mongodb, redis, mysql, express, amqp, postgres, node, requirejs and angular)
  3. you get a default ECMA parse level of 6
  4. you get a default set of definitions (ecma5, ecma6, browser and chai)

Basically, everything will work right out of the box, the downside is that you get a lot more stuff loaded in Tern than you might need (or want).

ecmaVersion

This is the most common element that people create a .tern-project file for, and it’s an easy one. If you are coding against ECMA 6, the defaults are fine. If you are not, it’s best to change this to 5.

libs

This entry describes the type libraries you want Tern to use. These libraries provide type information that is used for content assist and inferencing. At the moment, only the definitions that come pre-packaged in Orion can be entered in this list, which includes ecma5, ecma6, browser and chai.

I know what you are thinking. What if I have a definition not in the list I would like to use? We are working on support for custom definition files directly in the project, and providing a UI to install new ones.

plugins

This entry describes all of the plugins that you want Tern to run with. As mentioned, by default you get everything. Leave it out? You get everything. Empty plugins entry? You get everything.

Regardless of what you put in the plugins entry, you will always get the core services plugins that Orion needs to function – so no, you cannot break the tools.

While everything will work fine right out of the box with all plugins running, you can really improve the performance and memory usage of the tools if you tell us what support you actually need. For example, working on a node.js project? Only include the node plugin. Working on an AMD browser project? Only include the requirejs plugin. You get the idea – only load what you actually need.

At the moment, the list of plugins that can be enabled is amqp, angular, express, mongodb, mysql, node, postgres, redis, requirejs. And yes, we are working on ways to provide your own.

loadEagerly

This entry is a list of files that are so important that you just have to have Tern load them when it starts up. All joking aside, this entry is best for pointing Tern at the ‘main’ of your project. For example, say you were working on a web page. You could add index.html to the loadEagerly entry to have Tern automatically load that file and all the dependent scripts right away, so everything is primed and ready to go immediately (as opposed to Tern filling in its type information as you open and close files).
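A minimal sketch of such an entry (the file name is only an example):

{
  "loadEagerly": [
    "index.html"
  ]
}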

dependencyBudget

Don’t change this entry. Set too low, it will cause dependency resolution to time out (incomplete type information). Set too high, the IDE will wait longer before answering whatever you asked it (content assist, open declaration, etc).

So that’s it. Those are all of the supported entries that can appear in the .tern-project file. A pretty short list.

You can’t break the tools by setting a bad entry – we have sanity checking that will revert to a “something is wrong, load the defaults” state. We also have linting / sanity checking that will alert you if you have broken something in the file.

Tern project file linting

Broke the file and still navigated to another file (or maybe brought in a change from git that broke the file)? We will alert you that it’s bad in the banner, with a handy link to jump to the file and fix it!

Tern project file banner message

Remember, you can ignore all the warnings (if you really want to) and Tern will still start with the defaults as a safety measure.

Feeling a bit lazy and don’t want to type out a brand new file for your project? Just open up the new (empty) .tern-project file and hit Ctrl+Space, which will auto-magically insert a shiny new template for you.

Happy configuring!


by Mike Rennie at April 28, 2016 04:00 PM

Why it’s time to kill the Eclipse release names: Neon, Oxygen, etc

by Tracy M at April 28, 2016 01:26 PM


I thought I’d heard all the arguments for why developers choose IntelliJ IDEA over the Eclipse IDE, but this was a new one. I was at a meet-up with lots of Java developers, and inevitably the conversation turned to the topic of preferred Java IDEs. One developer raised the point: ‘I never understand the different versions of Eclipse, you know, Luna, Mars, Neon – what does what, which is the Java one I should download? With IntelliJ it’s just IntelliJ Community or Ultimate, I know what to get.’ I had to stop myself from launching into a let-me-explain-it’s-simple-and-alphabetic explanation and instead just listened and looked around to see others nodding along in agreement with the speaker.

Not long after that I was reading this article: Kill extra brand names to make your open source project more powerful by Chris Grams. In the article, Grams talks about the ‘mental brand tax’ incurred when projects have additional brand names users are expected to understand. This was the name for what the developers were expressing. As Grams explains, “…having a bunch of different brand names can be exclusionary to those who do not have the time to figure out what they all are and what they all do.” This sounded like those developers, who are busy solving their problems and keeping pace with the fast developments in software.

In my corporate days, engineering often had a working project name. For example, there were the projects named after US states: ‘New Jersey’, ‘California’, etc. However, when it came to release, these internal names were always scrubbed out by the product marketing department and never referred to from the user perspective. In those cases it was easy to see how the project names could cause real confusion out in the world.

In Eclipse, the names are part of the development flow. It’s a nice way for the community to get together to choose them, and it is a common language for us as Eclipse developers to use. Often we don’t differentiate between developer-users and developer-extenders. We expect all users to know the names are alphabetic and follow a similar theme. But if you think about it, isn’t that just another level of abstraction put onto Eclipse versioning? Should these names really be going out to the Eclipse users? Should we expect our users to know Neon is the same as Eclipse 4.6, which is the same as the version that was released in 2016? Ditto for all previous versions? (And that is before we get into the different flavours of the IDE, e.g. Java, C/C++, etc.)

So what could we use instead? I don’t have all the answers, but I want to kick off the conversation with a proposal. As Grams summarizes, “Sometimes the most powerful naming strategy is an un-naming strategy”. What if we did that? The Eclipse simultaneous release reliably happens once a year, so how about we use the year it comes out to version it? This year, Neon would be Eclipse IDE 2016; Oxygen becomes Eclipse IDE 2017, and so on. The added benefit to users is that it becomes immediately obvious how old previous versions are. So instead of ‘Are you going to fix my bug in Luna?’ someone might ask ‘Are you going to fix my bug in Eclipse 2014?’ It might be more straightforward for them to see they are already 2-3 years behind the latest release.

As we, as a community, move towards thinking of and treating Eclipse more as a product, this is a change that could be well worth the effort. As Grams notes: “Just because you have a weak confederation of unsupported brands today doesn’t mean you can’t change. Try killing some brand names, replacing them with descriptors, and channel that power back into your core brand.”



by Tracy M at April 28, 2016 01:26 PM

Modeling at EclipseCon France

by Maximilian Koegel and Jonas Helming at April 28, 2016 12:10 PM

EclipseCon France is only a couple of weeks away. I’m looking forward to this great conference with a carefully selected program in a beautiful city. And I’m definitely looking forward to presenting three topics!

On Tuesday the conference starts with the tutorials, and we begin with our tutorial on AngularJS – What every Java developer should know about AngularJS. This tutorial specifically addresses Java/Eclipse developers with no previous experience developing web frontends and provides a good hands-on introduction to AngularJS. So be sure to sign up and bring your own laptop!

If you would like to spice up your day with some more web development, we can also recommend the Che tutorial – Extending Eclipse Che to build custom cloud IDEs – about how to extend the new web-based IDE at Eclipse.

Getting back to AngularJS, we will also present an ignite talk on JSONForms. This component ships with EMF Forms and allows you to render an EMF-Forms-based form in an AngularJS-based application. With JSONForms, you can leverage the mature tooling of EMF Forms for modeling forms while developing state-of-the-art single-page web applications based on HTML5, CSS, JavaScript/TypeScript and JSON/JSON Schema.

If you are new to EMF Forms, you could also drop by my talk, “EMF, myself and UI”, on building UIs for EMF-based data. While preparing the presentation, I was amazed by all the features that have been added since I presented it at EclipseCon France last year. Here are just three of the important ones:

1. We have added AngularJS-based rendering as mentioned earlier.


2. We have made it really simple to use the Treeviewer and Tableviewer components standalone.


3. And finally, we built a brand-new Ecore Editor with improved usability based on EMF-Forms.


 

In the talk we will also give you a sneak preview of unpublished features in the pipeline, so don’t miss it!


Apart from the technical content, any EclipseCon is a good opportunity to meet the people behind the technology, to get in contact and maybe solve one or two technical problems on the spot.

We are looking forward to meeting you at EclipseCon France. Do not forget to register quickly as there is a discount if you register by May 10th. See you soon in Toulouse!





by Maximilian Koegel and Jonas Helming at April 28, 2016 12:10 PM

Samsung and Codenvy release Artik IDE for IoT

by Alex Blewitt at April 27, 2016 06:30 PM

Today at the Samsung Developers Conference, Codenvy announced the first public release of the Samsung Artik IDE, which allows building applications for the Samsung Artik IoT devices.

By Alex Blewitt

by Alex Blewitt at April 27, 2016 06:30 PM

Boost Productivity with MyEclipse—Project Setup

by Srivatsan Sundararajan at April 26, 2016 02:36 PM

MyEclipse supports a large number of features, ranging from Java EE support of specifications like JPA and JAX-RS, to modern web capabilities like AngularJS and CSS 3. Given the breadth of the IDE, it can be easy to miss key timesaving features that have been added to MyEclipse over the years. There are too […]

The post Boost Productivity with MyEclipse—Project Setup appeared first on Genuitec.


by Srivatsan Sundararajan at April 26, 2016 02:36 PM

Using an e4view in combination with Guice injection

by Stefan Winkler (stefan@winklerweb.net) at April 26, 2016 10:21 AM

I am currently working for a customer on an existing Eclipse RCP (based on Luna) which consists of 99% Eclipse 3.x API. The customer wants to migrate to e4 gradually, but there is no budget to migrate the existing code all at once. Instead, the plan is to start using e4 for new features and migrate the other code step by step.

So, when I was given the task of creating a new view, I wanted to use the "new" (in Luna, anyway) e4view element for the org.eclipse.ui.views extension point. The good thing about this is that you can easily write JUnit tests for the new class, because it is a POJO and does not have many dependencies.

My problem is that part of the customer's RCP uses Xtext and several components or "services" (not actual services in an OSGi sense) are available via Guice.

So I was confronted with the requirement to get a dependency available via Guice injected in an E4-style view implementation:

public class MyViewPart
{
    @Inject // <- should be injected via Guice
    ISomeCustomComponent component;

    @PostConstruct // <- should be called and injected via E4 DI
    public void createView(Composite parent)
    {
        // ...
    }
}

 

The usual way to get classes contributed via extension point injected by Guice is to use an implementation of AbstractGuiceAwareExecutableExtensionFactory like this:

<plugin>
   <extension
         point="org.eclipse.ui.views">
      <e4view
            class="my.app.MyExecutableExtensionFactory:my.app.MyViewPart"
            id="my.app.view"
            name="my view"
            restorable="true">
      </e4view>
   </extension>
</plugin>

The colon in the class attribute is usually interpreted by the framework in a way that the class identified before the colon is instantiated as an IExecutableExtensionFactory and the actual object is identified by the parameter (given after the colon) and created by that factory.

But I did not expect this to work, because I thought it would bypass the E4 class creation mechanism; and actually, it is the other way round: the e4view.class element seems to ignore the extension factory and creates the my.app.MyViewPart itself in order to inject it with E4 DI. The MyExecutableExtensionFactory is never called.

As I said, I didn't expect both DI frameworks to coexist without conflict, so I thought the solution to my problem would be to put those objects which I need injected into the E4 context. After googling a bit, I found multiple approaches, and I didn't know which one was the "correct" or "nice" one.

Among the approaches I have found, there were:

  1. providing context functions which delegate to the guice injector
  2. retrieving the objects from Guice and configure them as injector bindings
  3. retrieving the objects from Guice, obtain a context and put them in the context

(The first two approaches are mentioned in the "Configure Bindings" section of https://wiki.eclipse.org/Eclipse4/RCP/Dependency_Injection)

I ended up trying all three, but could only get the third alternative to work.

This is what I tried:

Context Functions

I tried to register the context functions as services in the Bundle Activator with this utility method:

private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
    IContextFunction func = new ContextFunction()
    {
        @Override
        public Object compute(final IEclipseContext context, final String contextKey)
        {
            return guiceInjector.getInstance(clazz);
        }
    };

    ServiceRegistration<IContextFunction> registration =
            context.registerService(IContextFunction.class, func,
                    new Hashtable<>(Collections.singletonMap(
                            IContextFunction.SERVICE_CONTEXT_KEY, clazz.getName())));
}

and called registerGuiceDelegatingInjection() in the BundleActivator's start() method for each class I needed to be retrieved via Guice.

For some reason, however, this did not work. The service itself was registered as expected (I checked via the OSGi console), but the context function was never called. Instead I got injection errors stating that the objects could not be found during injection.

Injector Bindings

I quickly found out that this solution does not work for me, because you can only specify an interface-class to implementation-class mapping in the form

InjectorFactory.getDefault().addBinding(IMyComponent.class).implementedBy(MyComponent.class)

You obviously cannot configure instances or factories this way, so this is not an option, because I need to delegate to Guice and get Guice-injected instances of the target classes...

Putting the objects in the context

Finally, the solution that worked for me was getting the IEclipseContext and putting the required classes there myself during the bundle activator's start() method.

private void registerGuiceDelegatingInjection(final BundleContext context, final Class<?> clazz)
{
  IServiceLocator s = PlatformUI.getWorkbench();
  IEclipseContext ctx = (IEclipseContext) s.getService(IEclipseContext.class);
  ctx.set(clazz.getName(), guiceInjector.getInstance(clazz));
}

This works, at least for now. I am not sure how it will work out in the future if more bundles directly put instances in the context; maybe in the long term named instances would be needed. Also, this works for me because the injected objects are singletons, so it does no harm to put single instances in the context.

I would have liked the context function approach better, but I could not get it to work so far.

Maybe one of you, the readers, can see my mistake. If so, please feel free to comment or to add an answer to my initial StackOverflow question.


by Stefan Winkler (stefan@winklerweb.net) at April 26, 2016 10:21 AM

IoT Standards and Remote Services

by Scott Lewis (noreply@blogger.com) at April 25, 2016 07:46 PM

In a recent posting, Ian Skerrett asks: Can open source solve the too-many-standards problem?

Even though it's clear that IoT developers want interoperability (the putative reason for many communications standards), there are other reasons that lead to multiple competing standards efforts (e.g. technical NIH, desire for emerging market dominance, different layering, etc).

I agree with Ian that open source provides one way out of this problem, as open implementations can provide interoperability much more quickly than formal standardization efforts.   One doesn't have to look very far to see that's been the case for previous communication and software standards efforts.

One complication for the IoT developer, however, is that they frequently need to make a choice...so that they can build their applications. This choice has risks: if the chosen communications protocol doesn't end up being widely adopted, does not provide interoperability with the necessary systems, or is a poor fit for the application-level or non-functional needs of the app (e.g. performance/bandwidth requirements, round trips, etc), then it could mean a very costly re-architecture and/or re-implementation of one's app or service.

One way to hedge this risk is provided by ECF's implementation of Remote Services. OSGi Remote Services is a simple specification for exposing an arbitrary service for remote access. The spec says nothing about how the communication is done (protocol, messaging pattern, serialization), but rather identifies a pluggable distribution provider role that must be present for a remote service to be exported. Each service can be exported with a distinct distribution provider, and the decision about which provider is to be used is made at service registration time.

One effect of this is that the remote service can be declared, implemented, tested, deployed, used, and versioned without ever binding to a distribution system. In fact, it's possible to use one distribution provider to develop and test a remote service, and then deploy with a completely different distribution provider, simply by changing the values of some service properties. With ECF's implementation, it's easy to either use an existing distribution provider or create your own (open source or not), using your favorite communications framework.
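As an illustrative sketch (the provider config id and the service names are examples, not mandated by the spec), exporting a remote service is plain OSGi service registration plus a few standard service properties:

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleContext;

public class StudentServiceExporter {

    public void export(BundleContext context) {
        Dictionary<String, Object> props = new Hashtable<>();
        // standard OSGi Remote Services property: which interfaces to export
        props.put("service.exported.interfaces", "*");
        // selects the distribution provider; switching providers later means
        // changing this value, not the service implementation
        props.put("service.exported.configs", "ecf.generic.server");
        context.registerService(StudentService.class, new StudentServiceImpl(), props);
    }
}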

ECF Remote Services allows IoT developers maximum flexibility to meet their application's technical needs, now and in the future, without having to commit permanently to a single communication framework, transport, or standard.








by Scott Lewis (noreply@blogger.com) at April 25, 2016 07:46 PM

A new interpreter for EASE (5): Support for script keywords

by Christian Pontesegger (noreply@blogger.com) at April 25, 2016 06:01 AM

EASE scripts registered in the preferences support a very cool feature: keywords in script headers. While this does not sound extremely awesome, it allows you to bind scripts to the UI and will allow for more fancy stuff in the near future. Today we will add support for keyword detection in registered BeanShell scripts.
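To give an idea of what such a header looks like (the exact keywords shown here are illustrative), EASE extracts 'keyword : value' pairs from the comment block at the top of a script:

// name        : Say Hello
// description : Prints a friendly greeting.
// toolbar     : Project Explorer

print("Hello from BeanShell!");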

Read all tutorials from this series.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online. 

Step 1: Provide a code parser

Code parser is a big word. Currently, all we need to detect in given script code are comments. As a corresponding base class already exists, all we need to do is provide a derived class indicating the comment tokens:
package org.eclipse.ease.lang.beanshell;

import org.eclipse.ease.AbstractCodeParser;

public class BeanShellCodeParser extends AbstractCodeParser {

    @Override
    protected boolean hasBlockComment() {
        return true;
    }

    @Override
    protected String getBlockCommentEndToken() {
        return "*/";
    }

    @Override
    protected String getBlockCommentStartToken() {
        return "/*";
    }

    @Override
    protected String getLineCommentToken() {
        return "//";
    }
}

Step 2: Register the code parser

Similar to registering the code factory, we also need to register the code parser. Open the plugin.xml, select the scriptType extension for BeanShell, and register the code parser from above there. Now EASE is able to parse script headers for keywords and interpret them accordingly.


by Christian Pontesegger (noreply@blogger.com) at April 25, 2016 06:01 AM

GEF4 Common Collections and Properties - Guava goes FX

by Alexander Nyßen (noreply@blogger.com) at April 25, 2016 03:20 AM

As a preparation for our upcoming Neon release, I have worked intensively on adopting JavaFX collections and properties across the entire GEF4 code base (see #484774).

We had already used them before, but only in those parts of the framework that directly depend on the JavaFX toolkit. In all other places we had used our own extensions to Java Beans properties, because we did not want to introduce JavaFX dependencies there. However, as JavaFX is part of JavaSE 1.7 and JavaSE 1.8, and its collections and properties are completely independent of the UI toolkit, we no longer considered the additional dependencies a real show stopper. Instead, we highly valued the chance to provide only a single notification mechanism to our adopters. The very rich possibilities offered by JavaFX's binding mechanism tipped the scales.

As JavaFX only provides observable variants of Set, Map, and List, I first had to create observable variants and related (collection) properties for the Google Guava collections we use (Multiset and SetMultimap). I had to dig deep into the implementation details of JavaFX collections and properties to achieve this, learned quite a lot, and found the one or other oddity that I think is worth sharing. I also implemented a couple of replacement classes to fix problems in JavaFX collections and properties, which are published as part of GEF4 Common.

But before going into details, let me shortly explain what JavaFX observable collections and properties are about.

Properties, Observable Collections, and (Observable) Collection Properties


In general, a JavaFX property may be regarded as a specialization of a Java Beans property. What it adds is support for lazy computation of its value, as well as notification of invalidation listeners (a lazily computed value may need to be re-computed) and change listeners, which may, in contrast to Java Beans, be registered directly at the property, not at the enclosing bean.

Using JavaFX properties implies a certain API style (similar to Java Beans): the bean is expected to provide a getter and setter for the property value, as well as an accessor for the property itself. Consider javafx.scene.Node as an example, which amongst various others provides a boolean pickOnBounds property that (is lazily created and) controls whether picking is computed by intersecting with the rectangular bounds of the node or not:

  public abstract class Node implements EventTarget, Stylable {
    ...
    private BooleanProperty pickOnBounds;
    
    public final void setPickOnBounds(boolean value) {
      pickOnBoundsProperty().set(value);
    }
    
    public final boolean isPickOnBounds() {
      return pickOnBounds == null ? false : pickOnBounds.get();
    }
    
    public final BooleanProperty pickOnBoundsProperty() {
      if (pickOnBounds == null) {
        pickOnBounds = new SimpleBooleanProperty(this, "pickOnBounds");
      }
      return pickOnBounds;
    }
  }

In addition to the notification support already mentioned, values of properties may be bound to values of others, or even to values computed by more complex expressions, via so-called bindings. This is a quite powerful mechanism that significantly reduces the need for custom listener implementations. If the pickOnBounds value of one node should, for instance, be kept equal to the pickOnBounds value of another, the following binding is all that is required:

  node1.pickOnBoundsProperty().bind(node2.pickOnBoundsProperty());

Binding it to a more complex boolean expression is pretty easy as well:

  node1.pickOnBoundsProperty().bind(node2.pickOnBoundsProperty().or(node3.visibleProperty()));

One can even define custom bindings that compute the property value (lazily) based on the values of arbitrary other properties:

  node1.pickOnBoundsProperty().bind(new BooleanBinding() {
    {
      // specify dependencies to other properties, whose changes
      // will trigger the re-computation of our value
      super.bind(node2.pickOnBoundsProperty());
      super.bind(node3.layoutBoundsProperty());
    }
    
    @Override
    protected boolean computeValue() {
      // some arbitrary expression based on the values of our dependencies
      return node2.pickOnBoundsProperty().get() &&
             node3.layoutBoundsProperty().get().isEmpty();
    }
  });

JavaFX provides property implementations for all Java primitives (BooleanProperty, DoubleProperty, FloatProperty, IntegerProperty, LongProperty, StringProperty), as well as a generic ObjectProperty, which can be used to wrap arbitrary object values. It is important to point out that an ObjectProperty will of course only notify invalidation and change listeners in case the property value is changed, i.e. it is altered to refer to a different object identity, not when changes are applied to the contained property value. Accordingly, an ObjectProperty that wraps a collection only notifies about changes in case a different collection is set as the property value, not when the currently observed collection is changed by adding elements to or removing elements from it:

  ObjectProperty<List<Integer>> observableListObjectProperty = new SimpleObjectProperty<>();
  observableListObjectProperty.addListener(new ChangeListener<List<Integer>>() {
    @Override
    public void changed(ObservableValue<? extends List<Integer>> observable,
        List<Integer> oldValue, List<Integer> newValue) {
      System.out.println("Change from " + oldValue + " to " + newValue);
    }
  });
  
  // change listener will be notified about identity change from 'null' to '[]'
  observableListObjectProperty.set(new ArrayList<Integer>());
  // change listener will not be notified
  observableListObjectProperty.get().addAll(Arrays.asList(1, 2, 3));

This is where JavaFX observable collections come into play. As Java does not provide notification support in its standard collections, JavaFX delivers dedicated observable variants: ObservableList, ObservableMap, and ObservableSet. They all support invalidation listener notification (as properties do) and in addition define their own respective change listeners (ListChangeListener, MapChangeListener, and SetChangeListener).

ObservableList also extends List by adding setAll(E... elements) and setAll(Collection<? extends E> c), which combine a clear() with an addAll(Collection<? extends E> c) into a single atomic replace operation, as well as a remove(int from, int to) that supports removal within an index interval. This helps to 'reduce noise', which is quite important for a graphical framework like GEF, where complex computations might be triggered by changes.

List changes are iterable, i.e. they may comprise several sub-changes, so that even a complex operation like setAll(Collection<? extends E> c) results in a single change notification:

  ObservableList<Integer> observableList = FXCollections.observableArrayList();
  observableList.addListener(new ListChangeListener<Integer>() {
    
    @Override
    public void onChanged(Change<? extends Integer> change) {
      while (change.next()) {
        int from = change.getFrom();
        int to = change.getTo();
        // iterate through the sub-changes
        if (change.wasReplaced()) {
          // replacement (simultaneous removal and addition in a continuous range)
          System.out.println("Replaced " + change.getRemoved()
                   + " with " + change.getAddedSubList() + ".");
        } else if (change.wasAdded()) {
          // addition (added sublist within from-to range)
          System.out.println("Added " + change.getAddedSubList()
                   + " within [" + from + ", " + to + ").");
        } else if (change.wasRemoved()) {
          // removal (change provides removed sublist and from index)
          System.out.println("Removed " + change.getRemoved() + " at index " + from + ".");
        } else if (change.wasPermutated()) {
          // permutation (change provides mapping of old indexes to new indexes)
          System.out.print("Permutated within [" + from + ", " + to + "):");
          for (int i = from; i < to; i++) {
            System.out.print((i == from ? " " : ", ")
                + i + " -> " + change.getPermutation(i)
                + (i == to - 1 ? ".\n" : ""));
          }
        }
      }
    }
  });
  
  // one comprised change: 'Added [3, 1, 2] within [0, 3).'
  observableList.setAll(Arrays.asList(3, 1, 2));
  
  // one comprised change: 'Permutated within [0, 3): 0 -> 2, 1 -> 0, 2 -> 1.'
  Collections.sort(observableList);
  
  // one comprised change: 'Replaced [1, 2, 3] with [4, 5, 6].'     
  observableList.setAll(Arrays.asList(4, 5, 6));
  
  // two comprised changes: 'Removed [4] at index 0.', 'Removed [6] at index 1.'
  observableList.removeAll(Arrays.asList(4, 6));

Similar to properties, observable collections may even be used to establish bindings, using so-called content bindings:

  // ensure that elements of list are synchronized with that of observableList
  List<Integer> list = new ArrayList<>();
  Bindings.bindContent(list, observableList);
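
Once established, the content binding keeps the plain list in sync with the observable one; a quick sketch continuing the snippet above:

  // changes to observableList are replayed on list
  observableList.add(7);
  // prints 'true'
  System.out.println(list.contains(7));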

As such, observable collections are quite usable on their own, even when not wrapped into a property. As long as the identity of an observable collection is not to be changed, it may be exposed directly, without being wrapped into a property. And that is exactly how JavaFX uses them in its own API. As an example, consider javafx.scene.Parent, which exposes its children via an ObservableList:

  public abstract class Parent extends Node {
    ...
    protected ObservableList<Node> getChildren() {
      return children;
    }
    
    @ReturnsUnmodifiableCollection
    public ObservableList<Node> getChildrenUnmodifiable() {
      return unmodifiableChildren;
    }
  }

Wrapping it into a property, however, is required if a collection's identity is to be changed (in a way transparent to listeners) or if properties are to be bound to it. In principle, an observable collection could be wrapped directly into an ObjectProperty, but this has the disadvantage that two listeners are required if collection changes are to be properly tracked.

Consider an ObservableList being wrapped into a SimpleObjectProperty as an example. While changes to the list can be observed by registering a ListChangeListener, a ChangeListener is required in addition to keep track of changes to the property's value itself (and to transfer the list change listener from an old property value to a new one):

  // initialized with a value, so that the list change listener can be registered below
  ObjectProperty<ObservableList<Integer>> observableListObjectProperty =
    new SimpleObjectProperty<>(FXCollections.<Integer> observableArrayList());
  
  final ListChangeListener<Integer> listChangeListener = new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> c) {
      // react to list changes
    }
  };
  
  // register list change listener at (current) property value
  observableListObjectProperty.get().addListener(listChangeListener);
  
  // register change listener to transfer list change listener
  observableListObjectProperty.addListener(new ChangeListener<ObservableList<Integer>>() {
      @Override
      public void changed(ObservableValue<? extends ObservableList<Integer>> observable,
                  ObservableList<Integer> oldValue,
                  ObservableList<Integer> newValue) {
        // transfer list change listener from old value to new one
        if(oldValue != null && oldValue != newValue){
          oldValue.removeListener(listChangeListener);
        }
        if(newValue != null && oldValue != newValue){
          newValue.addListener(listChangeListener);
        }
      }
    });

As this is quite cumbersome, JavaFX offers respective collection properties that can be used as an alternative: ListProperty, SetProperty, and MapProperty. They support invalidation and change listeners as well as the respective collection specific listeners and will even synthesize a collection change when the observed property value is changed:

  ListProperty<Integer> listProperty = new SimpleListProperty<>(
    FXCollections.<Integer> observableArrayList());
  
  final ListChangeListener<Integer> listChangeListener = new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      // handle list changes
    }
  };
  listProperty.addListener(listChangeListener);
  
  // forwarded list change: 'Added [1, 2, 3] within [0, 3).'
  listProperty.addAll(Arrays.asList(1, 2, 3));
  
  // synthesized list change: 'Replaced [1, 2, 3] with [4, 5, 6].'
  listProperty.set(FXCollections.observableArrayList(4, 5, 6));

In addition, collection properties define their own (read-only) properties for emptiness, equality, size, etc., so that advanced bindings can also be created:

  // bind boolean property to 'isEmpty'
  BooleanProperty someBooleanProperty = new SimpleBooleanProperty();
  someBooleanProperty.bind(listProperty.emptyProperty());
  
  // bind integer property to 'size'
  IntegerProperty someIntegerProperty = new SimpleIntegerProperty();
  someIntegerProperty.bind(listProperty.sizeProperty());

While observable properties are thus not strictly required to notify about collection changes (which is already possible using observable collections alone), they add quite some comfort when having to deal with situations where collections may be replaced, or where bindings have to rely on certain properties (emptiness, size, etc.) of a collection.


GEF4 Common Collections


As already mentioned, GEF4 MVC uses some of Google Guava's collection classes, while JavaFX only offers observable variants of Set, Map, and List. In order to establish a uniform style for property change notifications in our complete code base, observable variants had to be created up front. That actually involved more design decisions than I had expected. To my own surprise, I also ended up with a replacement class for ObservableList and a utility class to augment the original API, because that seemed quite necessary in a couple of places.

Obtaining atomic changes

As this might not be directly obvious from what was said before, let me point out that only ObservableList is indeed capable of notifying about changes atomically in the way laid out before. ObservableSet and ObservableMap notify their listeners for each elementary change individually. That is, an ObservableMap notifies change listeners independently for each affected key change, while ObservableSet will do likewise for each affected element. Calling clear() on an ObservableMap or ObservableSet can thus lead to various change notifications.
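
For illustration, here is a minimal sketch (plain JavaFX API): clearing an ObservableMap that holds two entries produces two separate removal notifications:

  ObservableMap<Integer, String> observableMap = FXCollections.observableHashMap();
  observableMap.addListener(new MapChangeListener<Integer, String>() {
    @Override
    public void onChanged(MapChangeListener.Change<? extends Integer, ? extends String> change) {
      if (change.wasRemoved()) {
        System.out.println("Removed " + change.getValueRemoved()
                         + " for key " + change.getKey() + ".");
      }
    }
  });
  
  observableMap.put(1, "1");
  observableMap.put(2, "2");
  
  // two independent notifications: 'Removed 1 for key 1.', 'Removed 2 for key 2.'
  observableMap.clear();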

I have no idea why the observable collections API was designed in such an inhomogeneous way (it is discussed at JDK-8092534 without providing much more insight), but I think that an observable collection should rather behave like ObservableList, i.e. fire only a single change notification for each method call. If all required operations can be performed atomically via dedicated methods, a client can fully control which notifications are produced. As already laid out, ObservableList follows this to some extent with the additionally provided setAll() methods, which combine clear() and addAll() into a single atomic operation that would otherwise yield two notifications. However, an atomic move() operation is still lacking for ObservableList, so that movement of elements currently cannot be performed atomically.
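
As a consequence, a move currently has to be expressed as a removal followed by an addition, which yields two change notifications instead of one. A minimal sketch (assuming the listener from the ObservableList example above is attached):

  ObservableList<Integer> observableList = FXCollections.observableArrayList(1, 2, 3);
  
  // first change: 'Removed [1] at index 0.'
  Integer element = observableList.remove(0);
  // second change: 'Added [1] within [2, 3).'
  observableList.add(2, element);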

When creating ObservableSetMultimap and ObservableMultiset, I tried to follow the contract of ObservableList for the above mentioned reasons. Both notify their listeners through a single atomic change for each method call, which provides details about elementary sub-changes (related to a single element or key), very similar to ListChangeListener#Change. In accordance with the setAll() of ObservableList, I added a replaceAll() operation to both, to offer an atomic operation via which the contents of the collections can be replaced. Change notifications are iterable, as for ObservableList:

  ObservableSetMultimap<Integer, String> observableSetMultimap =
      CollectionUtils.<Integer, String> observableHashMultimap();
  observableSetMultimap.addListener(new SetMultimapChangeListener<Integer, String>() {
    @Override
    public void onChanged(SetMultimapChangeListener.Change<? extends Integer,
                                                           ? extends String> change) {
      while (change.next()) {
        if (change.wasAdded()) {
          // values added for key
          System.out.println("Added " + change.getValuesAdded() 
                           + " for key " + change.getKey() + ".");
        } else if (change.wasRemoved()) {
          // values removed for key
          System.out.println("Removed " + change.getValuesRemoved() + " for key " 
                           + change.getKey() + ".");
        }
      }
    }
  });
  
  // one comprised change: 'Added [1] for key 1.'
  observableSetMultimap.put(1, "1");
  
  // one comprised change: 'Added [2] for key 2.'
  observableSetMultimap.put(2, "2");
  
  // two comprised changes: 'Removed [1] for key 1.', 'Removed [2] for key 2.'
  observableSetMultimap.clear();
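
A corresponding sketch for ObservableMultiset might look as follows. Note that the factory method and the change accessors used below are assumptions, derived by analogy from the SetMultimap API shown above:

  // assumed factory, analogous to observableHashMultimap() above
  ObservableMultiset<String> observableMultiset =
      CollectionUtils.<String> observableHashMultiset();
  observableMultiset.addListener(new MultisetChangeListener<String>() {
    @Override
    public void onChanged(MultisetChangeListener.Change<? extends String> change) {
      while (change.next()) {
        // hypothetical accessors for the affected element and its occurrence delta
        System.out.println("Element " + change.getElement() + ": added "
            + change.getAddCount() + ", removed " + change.getRemoveCount() + ".");
      }
    }
  });
  
  // one comprised change: 'Element a: added 1, removed 0.'
  observableMultiset.add("a");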

I also thought about providing replacement classes for ObservableMap and ObservableSet that rule out the inhomogeneity of the JavaFX collections API, but this would have required extending their respective listener interfaces as well, so I abstained.

Retrieving the "previous" contents of an observable collection

While in principle I like the API of ListChangeListener#Change, what really bothers me is that there is no convenience method to retrieve the old state of an ObservableList before it was changed. It has to be recomputed from the addition, removal, and permutation sub-changes (which will, for instance, be propagated when sorting the list) that the change comprises.

When creating ObservableMultiset and ObservableSetMultimap, I added a getPreviousContents() method to both, so clients can easily access the contents the collection contained before the change was applied. I also added a utility method (within CollectionUtils) that can be used to retrieve the previous contents of an ObservableList:

  ObservableList<Integer> observableList = FXCollections.observableArrayList();
  observableList.addAll(4, 3, 1, 5, 2);
  observableList.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      System.out.println("Previous contents: " + CollectionUtils.getPreviousContents(change));
    }
  });
  
  // Previous contents: [4, 3, 1, 5, 2]
  observableList.set(3, 7);
  // Previous contents: [4, 3, 1, 7, 2]
  observableList.clear();

Obtaining immutable changes from an observable collection

While ObservableList fires atomic changes, its change objects are not immutable (see JDK-8092504). Thus, when a listener manipulates the observed list in reaction to a change notification, the change object it is currently processing will itself be altered. This is a likely pitfall, as client code may not even be aware that it is actually called from within a change notification context. Consider the following snippet:


  ObservableList<Integer> observableList = FXCollections.observableArrayList();
  observableList.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      while (change.next()) {
        System.out.println(change);
      }
      // manipulate the list from within the change notification
      if (!change.getList().contains(2)) {
        ((List<Integer>) change.getList()).add(2);
      }
    }
  });
  // change list by adding element '1'
  observableList.add(1);

It will yield the following invalid change notifications:


  { [1] added at 0 }
  { [1, 2] added at 0 }

Again, I cannot understand why this was designed that way, but the documentation clearly states that the list may not be manipulated from within a change notification context, and the implementation follows this consequently by finalizing the change object only after all listeners have been notified. For a framework like GEF, having non-immutable change objects is a real show stopper. Accordingly, I designed ObservableMultiset and ObservableSetMultimap to use immutable change objects only. I also created a replacement class for com.sun.javafx.collections.ObservableListWrapper (the class that is instantiated when calling FXCollections.observableArrayList()) that produces immutable change objects. It can be created using a respective utility method:

  ObservableList<Integer> observableList = CollectionUtils.observableArrayList();

When using the replacement class in the above quoted scenario, it will yield the following output:

  Added[1] at 0.
  Added[2] at 1.

As ObservableSet and ObservableMap are not capable of comprising several changes into a single atomic one, the immutability problem does not arise there, so the alternative ObservableList implementation is all that is needed.

GEF4 Common (Collection) Properties


In addition to the ObservableMultiset and ObservableSetMultimap collections, I also added respective properties (and related bindings) to GEF4 Common, which can be used to wrap these values: SimpleMultisetProperty, ReadOnlyMultisetProperty, SimpleSetMultimapProperty, and ReadOnlySetMultimapProperty.

It should no longer be surprising that I did not only create these, but also ended up with replacement classes for JavaFX's own collection properties (SimpleSetPropertyEx, ReadOnlySetWrapperEx, SimpleMapPropertyEx, ReadOnlyMapWrapperEx, SimpleListPropertyEx, ReadOnlyListWrapperEx), because a comparable, consistent behavior could otherwise not be guaranteed.
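
For illustration, a replacement property can be used as a drop-in for its JavaFX counterpart. The sketch below assumes that SimpleListPropertyEx mirrors the constructor of SimpleListProperty:

  // assumed drop-in usage; constructor mirrors that of SimpleListProperty
  SimpleListPropertyEx<Integer> listProperty = new SimpleListPropertyEx<>(
      CollectionUtils.<Integer> observableArrayList());
  listProperty.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      // changes arrive atomically and as immutable change objects
    }
  });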

Stopping the noise

As I have elaborated quite intensively already, observable collections notify about element changes, whereas properties notify about (identity) changes of their contained (observable) value. Collection properties in addition forward all collection notifications, so that a replacement of the property's value is transparent to collection listeners.

However, this is not the full story. The observable collection properties offered by JavaFX fire change notifications even if the observed value did not change (JDK-8089169). That is, every collection change will not only lead to the notification of collection-specific change listeners, but also to the notification of all property change listeners:

  SimpleListProperty<Integer> listProperty = new SimpleListProperty<>(
    FXCollections.<Integer> observableArrayList());
  
  listProperty.addListener(new ChangeListener<ObservableList<Integer>>() {
    @Override
    public void changed(ObservableValue<? extends ObservableList<Integer>> observable,
                        ObservableList<Integer> oldValue,
                        ObservableList<Integer> newValue) {
      System.out.println("Observable (collection) value changed.");
    }
  });
  
  listProperty.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> c) {
      System.out.println("Collection changed.");
    }
  });
  
  // will (incorrectly) notify (property) change listeners in addition to
  // list change listeners
  listProperty.add(5);
  
  // will (correctly) notify list change listeners and (property) change
  // listeners
  listProperty.set(FXCollections.<Integer> observableArrayList());

As this leads to a lot of unwanted noise, I have ensured that MultisetProperty and SetMultimapProperty, which are provided as part of GEF4 Common, do not babble likewise. I ensured that the replacement classes we provide for the JavaFX collection properties behave accordingly, too.

Guarding notifications consistently

JavaFX observable collections capture all exceptions that occur during a listener notification. That is, the listener notification is guarded as follows:

  try {
    listener.onChanged(change);
  } catch (Exception e) {
    Thread.currentThread().getUncaughtExceptionHandler()
        .uncaughtException(Thread.currentThread(), e);
  }

Interestingly, JavaFX properties do not behave like that. Thus, all observable collection properties are inconsistent in the sense that collection change listener invocations will be guarded, while (property) change listener notifications won't. Again, I have ensured that the collection properties provided by GEF4 Common show consistent behavior by guarding all their listener notifications, and I have done likewise for the replacement classes we provide.
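
For illustration, here is a minimal sketch (plain JavaFX API) of the guarded side: the exception thrown by the list change listener is handed to the thread's uncaught exception handler instead of propagating to the caller of add():

  Thread.currentThread().setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(Thread t, Throwable e) {
      System.out.println("Guarded: " + e.getMessage());
    }
  });
  
  ObservableList<Integer> observableList = FXCollections.observableArrayList();
  observableList.addListener(new ListChangeListener<Integer>() {
    @Override
    public void onChanged(ListChangeListener.Change<? extends Integer> change) {
      throw new IllegalStateException("listener failure");
    }
  });
  
  // prints 'Guarded: listener failure'; add() itself completes normally
  observableList.add(1);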

Fixing further issues and ensuring JavaSE-1.7 compatibility

Having the replacement classes at hand, I also fixed some further inconsistencies that bothered us quite severely. This includes the recently introduced regression that JavaFX MapProperty removes all attached change listeners when just one is to be removed (JDK-8136465), as well as the fact that read-only properties can currently not be properly bound (JDK-8089557).

As GEF4 still aims at providing support for JavaSE-1.7, I tried to ensure that all collection properties we provide, including the replacement classes, can be used in a JavaSE-1.7 environment as well. This could be achieved, except that the static methods JavaFX offers to create and remove bindings cannot be applied (the binding mechanism was completely revised between JavaSE-1.7 and JavaSE-1.8).

While we make intensive use of GEF4 Common Collections and Properties in our own GEF4 framework, GEF4 Common can be used independently. If you need observable variants of Google Guava's Multiset or SetMultimap, or if you want to get rid of the aforementioned inconsistencies, it may be worth taking a look.

by Alexander Nyßen (noreply@blogger.com) at April 25, 2016 03:20 AM

IAdaptable - GEF4's Interpretation of a Classic

by Alexander Nyßen (noreply@blogger.com) at April 24, 2016 04:07 PM

Adaptable Objects as a Core Pattern of Eclipse Core Runtime

The adaptable objects pattern is probably the most important one used by the Eclipse core runtime. Formalized by the org.eclipse.core.runtime.IAdaptable interface, an adaptable object can easily be queried by clients (in a type-safe manner) for additional functionality that is not included within its general contract.
public interface IAdaptable {
  /**
   * Returns an object which is an instance of the given class
   * associated with this object. Returns <code>null</code> if
   * no such object can be found.
   *
   * @param adapter the adapter class to look up
   * @return an object castable to the given class, 
   *    or <code>null</code> if this object does not
   *    have an adapter for the given class
   */
   public Object getAdapter(Class adapter);
}
From another viewpoint, if an adaptable object properly delegates its getAdapter(Class) implementation to an IAdapterManager (most commonly Platform.getAdapterManager()) or provides a respective proprietary mechanism on its own, it can easily be extended with new functionality (even at runtime), without any need for local changes, and adapter creation can flexibly be handled through a set of IAdapterFactory implementations.
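
As a brief illustration of this mechanism, consider the following sketch (using the raw, pre-generics signatures of IAdapterFactory; MyModel and MyModelPropertySource are hypothetical classes), which contributes an IPropertySource adapter for a model class:

public class MyModelAdapterFactory implements IAdapterFactory {
  @Override
  public Object getAdapter(Object adaptableObject, Class adapterType) {
    if (adapterType == IPropertySource.class && adaptableObject instanceof MyModel) {
      // adapt the (hypothetical) model to the properties view
      return new MyModelPropertySource((MyModel) adaptableObject);
    }
    return null;
  }

  @Override
  public Class[] getAdapterList() {
    return new Class[] { IPropertySource.class };
  }
}

The factory can then be registered with the platform's adapter manager (programmatically as shown below, or declaratively via the org.eclipse.core.runtime.adapters extension point):

Platform.getAdapterManager().registerAdapters(new MyModelAdapterFactory(), MyModel.class);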

Why org.eclipse.core.runtime.IAdaptable is not perfectly suited for GEF4 

As it has proven its usefulness in quite a number of places, I considered the adaptable objects pattern to be quite a good candidate to deal with the configurability and flexibility demands of a graphical editing framework as well. I thus wanted to give it a major role within the next generation API of our model-view-controller framework (GEF4 MVC).

GEF4 MVC is the intended replacement of GEF (MVC) 3.x as the core framework from which to build up graphical editors and views. As it has a high need for flexibility and configurability, it seemed to be the ideal playground for adaptable objects. However, the way the Eclipse core runtime interprets the adaptable objects pattern does not make it a perfect match to fulfill our requirements, because:
  • Only a single adapter of a specific type can be registered at an adaptable. Registering two different Provider implementations at a controller (e.g. one to specify the geometry used to display feedback and one to determine where to lay out interaction handles) is, for instance, not possible.
  • Querying (and registering) adapters for parameterized types is not possible in a type-safe manner. The Class-based signature of getAdapter(Class) does not, for instance, allow differentiating between a Provider<IGeometry> and a Provider<IFXAnchor>.
  • IAdaptable only provides an API for retrieving adapters, not for registering them, so (re-)configuration of adapters at runtime is not easily possible.
  • Direct support for 'binding' an adapter to an adaptable object, i.e. establishing a reference from the adapter back to the adaptable object, is not offered (unless the adapter explicitly provides a proprietary mechanism to establish such a back-reference).

Adaptable Objects as Interpreted by GEF4 Common

I thus created my own interpretation of the adaptable objects pattern, formalized by org.eclipse.gef4.common.adapt.IAdaptable. It is provided by the GEF4 Common component and can thus easily be used standalone, even by applications that have no use for graphical editors or views (GEF4 Common only requires Google Guice and Google Guava to run).

AdapterKey to combine Type with Role

Instead of a simple Class-based type key, adapters may now be registered by means of an AdapterKey, which combines a (Class- or TypeToken-based) type key (to retrieve the adapter in a type-safe manner) with a String-based role.

The combination of a type key with a role makes it possible to retrieve several adapters of the same type under different roles. Two different Provider implementations can for instance now easily be retrieved (to provide independent geometric information for selection feedback and selection handles) through:

  getAdapter(AdapterKey.get(new TypeToken<Provider<IGeometry>>(){}, "selectionFeedbackGeometryProvider"))
  getAdapter(AdapterKey.get(new TypeToken<Provider<IGeometry>>(){}, "selectionHandlesGeometryProvider"))

TypeToken instead of Class

The second significant difference is that a com.google.common.reflect.TypeToken (provided by Google Guava) is used as a more general concept instead of a Class, which enables parameterized adapters to be registered and retrieved in a type-safe manner as well. A geometry provider can for instance now easily be retrieved through getAdapter(new TypeToken<Provider<IGeometry>>(){}), while an anchor provider can alternatively be retrieved through getAdapter(new TypeToken<Provider<IFXAnchor>>(){}). For convenience, retrieving adapters by means of Class-based type keys is also supported (such keys will internally be converted into TypeTokens).

IAdaptable as a local adapter registry

In contrast to the Eclipse core runtime interpretation, an org.eclipse.gef4.common.adapt.IAdaptable has the obligation to provide means to not only retrieve adapters (getAdapter()) but also to register and unregister them (setAdapter(), unsetAdapter()). This way, the 'configuration' of an adaptable can easily be changed at runtime, even without providing an adapter manager or factory.

Of course this comes at the cost that an org.eclipse.gef4.common.adapt.IAdaptable is itself responsible for maintaining the set of registered adapters. This (and the fact that the interface contains a lot of convenience functions) is balanced by the fact that a base implementation (org.eclipse.gef4.common.adapt.AdaptableSupport) can easily be used as a delegate to realize the IAdaptable interface.

IAdaptable.Bound for back-references

If adapters need to be 'aware' of the adaptable they are registered at, they may implement the IAdaptable.Bound interface, which is used to establish a back-reference from the adapter to the adaptable. It is part of the IAdaptable contract that an adapter implementing IAdaptable.Bound will be provided with a back-reference during registration (if an adaptable uses org.eclipse.gef4.common.adapt.AdaptableSupport to internally realize the interface, this contract is of course guaranteed).
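
A minimal sketch of such an adapter is shown below; it assumes that IAdaptable.Bound declares setAdaptable() and getAdaptable() accessors for the back-reference:

  public class BoundAdapter implements IAdaptable.Bound<IAdaptable> {
    private IAdaptable adaptable;

    // called by the adaptable upon registration (and un-registration)
    @Override
    public void setAdaptable(IAdaptable adaptable) {
      this.adaptable = adaptable;
    }

    @Override
    public IAdaptable getAdaptable() {
      return adaptable;
    }
  }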

IAdaptables and Dependency Injection

While the possibility to re-configure the registered adapters at runtime is quite helpful, proper support for creating an initial adapter configuration during instantiation of an adaptable is also important. To support this properly, I integrated the GEF4 Common adaptable objects mechanism with Google Guice.

That is, the adapters that are to be registered at an adaptable can be configured in a Guice module, using a specific AdapterMap binding (which is based on Guice's multibindings). Registering an adapter of type VisualBoundsGeometryProvider at an FXGeometricShapePart adaptable can for instance be performed using the following Guice module configuration:

  protected void configure() {
    // enable adapter map injection support
    install(new AdapterInjectionSupport());
    // obtain map-binder to bind adapters for FXGeometricShapePart instances
    MapBinder<AdapterKey<?>, Object> adapterMapBinder =
        AdapterMaps.getAdapterMapBinder(binder(), FXGeometricShapePart.class);
    // bind geometry provider for selection handles as adapter on FXGeometricShapePart
    adapterMapBinder.addBinding(AdapterKey.role("selectionHandlesGeometryProvider"))
        .to(VisualBoundsGeometryProvider.class);
    ...
  }

It will not only inject a VisualBoundsGeometryProvider instance as an adapter into all direct instances of FXGeometricShapePart, but also into all instances of its sub-types, which may be seen as a sort of 'polymorphic multi-binding'.

Two prerequisites have to be fulfilled in order to make use of adapter injection:
1. Support for adapter injection has to be enabled in your Guice module by installing an org.eclipse.gef4.common.inject.AdapterInjectionSupport module, as outlined in the snippet above.
2. The adaptable (here: FXGeometricShapePart.class) or any of its super-classes has to provide a method that is eligible for adapter injection:

  @InjectAdapters
  public <T> void setAdapter(TypeToken<T> adapterType, T adapter, String role) {
    // TODO: implement (probably by delegating to an AdaptableSupport)
  }

GEF4 MVC makes use of this mechanism quite intensively for the configuration of adapters (and indeed, within the MVC framework, more or less everything is an adapter). However, similar to the support for adaptable objects itself, the related injection mechanism is easily usable in a standalone scenario. Feel free to do so!

by Alexander Nyßen (noreply@blogger.com) at April 24, 2016 04:07 PM

Eclipse Neon: disable the theming

April 23, 2016 10:00 PM

At Devoxx France 2016, Mikaël Barbero gave a great talk about the Eclipse IDE. The talk was well attended (the room was almost full). This is proof that there is still a lot of interest in the Eclipse IDE.

2016-04-24_room

The talk was a great presentation of all the improvements made to the IDE (already implemented with Mars, coming with Neon or Oxygen). It was a big 'New and Noteworthy', organized by the main categories that matter to users, not by Eclipse projects. I really appreciated this approach.

If you understand French, I recommend watching the video of the talk. Otherwise, I am sure you will learn something by just looking at the slides.

Something I have learned: with Neon you can deactivate the theming completely (in the appearance section of the preferences). In that case the CSS styling engine will be deactivated and your Eclipse IDE will have a really raw look. To disable the theming, just uncheck the checkbox highlighted in Figure 2.

Eclipse preferences > General > Appearance (Eclipse Neon)

2016-04-24_preferences_appearance

After a restart your Eclipse will look like this screenshot (Figure 3):

Eclipse IDE with disabled theming

2016-04-24_eclipse_neon_disabled_theming

I hope performance will be better, in particular when the Eclipse IDE is used in remote virtualized environments like Citrix clients. If you want to test it now, download a Milestone release of Neon.


April 23, 2016 10:00 PM