Eclipse Newsletter | Capella: open source MBSE solution

December 14, 2017 03:10 PM

Learn everything about Capella, an open source workbench based on Eclipse technology that allows engineers to design complex systems.

December 14, 2017 03:10 PM

Open Source Community Accelerates Big Data Analytics for Geospatial Solutions

December 14, 2017 01:00 PM

LocationTech announces new project releases that provide core technology for geospatial big data analytic solutions.

December 14, 2017 01:00 PM

Debugger 11: Watch expressions

by Christian Pontesegger (noreply@blogger.com) at December 14, 2017 11:03 AM

Now that we have variables working, we might also want to include watch expressions to dynamically inspect code fragments.

Debug Framework Tutorials

For a list of all debug related tutorials see Debug Framework Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Provide the Watch Expression Delegate

Watch expression delegates are registered via an extension point. So switch to your plugin.xml and add a new extension for org.eclipse.debug.core.watchExpressionDelegates.
The new delegate simply points to our debugModel identifier com.codeandme.debugModelPresentation.textinterpreter and provides a class implementation:
public class TextWatchExpressionDelegate implements IWatchExpressionDelegate {

    @Override
    public void evaluateExpression(String expression, IDebugElement context, IWatchExpressionListener listener) {
        if (context instanceof TextStackFrame)
            ((TextStackFrame) context).getDebugTarget().fireModelEvent(new EvaluateExpressionRequest(expression, listener));
    }
}
Delegates can decide which contexts they operate on. For our interpreter we could evaluate expressions on stack frames, threads or the process, but typically evaluations take place on a dedicated StackFrame.

Step 2: Evaluation

Now we apply the usual pattern: send an event, let the debugger process it, and have it send an event back to the debug target. Once the evaluation is done, we inform the provided listener of the outcome.
public class TextDebugTarget extends TextDebugElement implements IDebugTarget, IEventProcessor {

    @Override
    public void handleEvent(final IDebugEvent event) {

        [...]

        } else if (event instanceof EvaluateExpressionResult) {
            IWatchExpressionListener listener = ((EvaluateExpressionResult) event).getOriginalRequest().getListener();
            TextWatchExpressionResult result = new TextWatchExpressionResult((EvaluateExpressionResult) event, this);
            listener.watchEvaluationFinished(result);
        }
    }
The TextWatchExpressionResult uses a TextValue to represent the evaluation result. As with variables before, we may support nested child variables within the value. In case the evaluation fails, we may provide error messages, which get displayed in the Expressions view.
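For reference, here is a minimal sketch of what such a result class could look like. The names follow the tutorial's model classes, but the exact accessors on EvaluateExpressionResult, the TextDebugElement constructor and the TextValue constructor are assumptions:

public class TextWatchExpressionResult extends TextDebugElement implements IWatchExpressionResult {

    private final EvaluateExpressionResult fResult;

    public TextWatchExpressionResult(EvaluateExpressionResult result, TextDebugTarget target) {
        super(target);
        fResult = result;
    }

    @Override
    public IValue getValue() {
        // wrap the raw evaluation outcome in a TextValue (assumed accessor/constructor)
        return new TextValue(getDebugTarget(), fResult.getResult());
    }

    @Override
    public boolean hasErrors() {
        // assumed accessor: null means the evaluation succeeded
        return fResult.getErrorMessage() != null;
    }

    @Override
    public String[] getErrorMessages() {
        // these messages are what the Expressions view displays on failure
        return hasErrors() ? new String[] { fResult.getErrorMessage() } : new String[0];
    }

    @Override
    public String getExpressionText() {
        // assumed accessor on the original request
        return fResult.getOriginalRequest().getExpression();
    }
}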

by Christian Pontesegger (noreply@blogger.com) at December 14, 2017 11:03 AM

Debugger 10: Editing variables

by Christian Pontesegger (noreply@blogger.com) at December 14, 2017 10:13 AM

In the previous tutorial we introduced variables support for our debugger. Now let's see how we can modify variables dynamically during a debug session.

Debug Framework Tutorials

For a list of all debug related tutorials see Debug Framework Tutorials Overview.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

Step 1: Allow editing and trigger an update

First, variables need to support editing. The variables view will then automatically provide a text input box on the value field once the user clicks it. This is also a limitation: editing variables requires the framework to interpret an input string and process it according to the target language.

The relevant changes for the TextVariable class are shown below:
public class TextVariable extends TextDebugElement implements IVariable {

    @Override
    public void setValue(String expression) {
        getDebugTarget().fireModelEvent(new ChangeVariableRequest(getName(), expression));
    }

    @Override
    public boolean supportsValueModification() {
        return true;
    }

    @Override
    public boolean verifyValue(String expression) throws DebugException {
        return true;
    }
}
verifyValue(String) and setValue(String) are used by the debug framework when a user tries to edit a variable in the UI. We do not need to update the value right away; we simply trigger an event to update the variable in the debugger.
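For our text interpreter accepting everything is fine, but for a language with real syntax, verifyValue(String) would be the natural place to reject bad input before any event is fired. A hypothetical stricter variant (the emptiness check is just an illustration):

@Override
public boolean verifyValue(String expression) throws DebugException {
    // hypothetical validation: reject null/blank input; a real language
    // implementation could parse the expression here instead
    return expression != null && !expression.trim().isEmpty();
}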

Step 2: Variable update & refresh

As our primitive interpreter accepts any kind of text variable, there is nothing that can go wrong here. Instead of sending an update event for the changed variable, we simply reuse the existing VariablesEvent to force a refresh of all variables of the current TextStackFrame:
public class TextDebugger implements IDebugger, IEventProcessor {

    @Override
    public void handleEvent(final IDebugEvent event) {

        [...]

        } else if (event instanceof ChangeVariableRequest) {
            fInterpreter.getVariables().put(((ChangeVariableRequest) event).getName(), ((ChangeVariableRequest) event).getContent());
            fireEvent(new VariablesEvent(fInterpreter.getVariables()));
        }
    }
}


by Christian Pontesegger (noreply@blogger.com) at December 14, 2017 10:13 AM

Papyrus and the Papyrus IC at Euroforum

by tevirselrahc at December 13, 2017 09:31 PM


Yesterday, my minion Maximilian went to the Automotive Software Development Conference (Euroforum) and presented me and my Industry Consortium!
I hope I made a good impression (I’m sure Maximilian did a great job)!

Maybe one day, you will be driving a car with software designed with my help!


Filed under: community, Conference, Papyrus, Papyrus IC, Uncategorized Tagged: automotive, industry

by tevirselrahc at December 13, 2017 09:31 PM

Remote Services between Python and Java

by Scott Lewis (noreply@blogger.com) at December 13, 2017 03:23 PM

ECF's implementation of OSGi Remote Services allows multiple distribution providers, which are responsible for the actual RPC communication required by remote services. Here is a list of the ECF distribution providers we've created.

Using Py4j and Google Protocol Buffers, we've recently enhanced an ECF distribution provider that allows the use of remote services (and Remote Service Admin) between OSGi and Python. Service implementations can be written in either Java or Python, and consumers can be either Java or Python. Protocol Buffers can be used to efficiently serialize arguments and return values.
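On the Java side, exporting a service then boils down to registering it with the standard OSGi Remote Services properties. A hedged sketch (the service interface is a placeholder, and the provider config id shown is an assumption; check the ECF documentation for the exact value):

import java.util.Hashtable;

import org.osgi.framework.BundleContext;

public class TimeServiceExporter {

    // placeholder service interface, just for this example
    public interface ITimeService {
        long getCurrentTime();
    }

    public void export(BundleContext context) {
        Hashtable<String, Object> props = new Hashtable<>();
        // standard OSGi Remote Services properties
        props.put("service.exported.interfaces", "*");
        // select the distribution provider; the config id below is an assumption
        props.put("service.exported.configs", "ecf.py4j.host");
        context.registerService(ITimeService.class, () -> System.currentTimeMillis(), props);
    }
}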

The only dependencies are on OSGi, Py4j, and Google Protocol buffers, so this distribution provider can be used in Eclipse or other OSGi environments like Karaf.

Get the most recent release, with examples and source code at this github repository.



by Scott Lewis (noreply@blogger.com) at December 13, 2017 03:23 PM

Announcing Open IoT Challenge 4.0 Scholars

December 13, 2017 02:45 PM

Congratulations to the Top 12 teams who submitted the best proposals for the fourth Open IoT Challenge!

December 13, 2017 02:45 PM

JBoss Tools 4.5.2.AM2 for Eclipse Oxygen.2

by jeffmaury at December 13, 2017 07:40 AM

Happy to announce the 4.5.2.AM2 (Developer Milestone 2) build for Eclipse Oxygen.2 (built with RC2).

Downloads available at JBoss Tools 4.5.2 AM2.

What is New?

Full info is at this page. Some highlights are below.

Fuse Tooling

Fuse 7 Karaf-based runtime Server adapter

Fuse 7 is cooking, and preliminary versions are already available in the early-access repository. Fuse Tooling is ready to leverage them so that you can try the upcoming major Fuse version.

Fuse 7 Server Adapter

Classical server adapter functionality is available: automatic redeploy, Java debug, and graphical Camel debug through a created JMX connection. Please note:

  • you can't yet retrieve the Fuse 7 runtime directly from Fuse Tooling; you need to download it to your machine and point to it when creating the server adapter.
  • the provided templates require some modifications to work with Fuse 7, mainly adapting the BOM. Please see the related work in this JIRA task and its children.

Display routes defined inside "routeContext" in Camel Graphical Editor (Design tab)

"routeContext" tag is a special tag used in Camel to provide the ability to reuse routes and to split them across different files. This is very useful on large projects. See Camel documentation for more information. Since this version, the Design of the routes defined in "routeContext" tags are now displayed.

Usability improvement: Progress bar when "Changing the Camel version"

Since Fuse Tooling 10.1.0, it is possible to change the Camel version. If the Camel version has not been cached locally yet, or on slow internet connections, this operation can take a while. There is now a progress bar to show the progress.

Switch Camel Version with Progress Bar

Enjoy!

Jeff Maury


by jeffmaury at December 13, 2017 07:40 AM

November Java User Group Tour 2017

by Nikhil Nanivadekar at December 13, 2017 04:39 AM

Cities visited

This year I had the pleasure of visiting multiple Java User Groups in England, Ireland, Northern Ireland and Scotland. I presented on Eclipse Collections, Java 9 and Robots.

Day 1–25 November 2017: London Java Community (LJC) Unconference:

Central London

I had the opportunity to participate in the LJC Unconference. This year it was disorganized in a JCrete-like format. I did an ignite session on How you can support open source projects?, mentioning aspects like starring a project’s repository, using the project, raising issues and bugs, contributing bug fixes and enhancements and, most importantly, documentation. In the last session of the day I did an introduction to the Eclipse Collections Kata, a fun way to learn the Eclipse Collections framework. Then we all headed to a pub close by and continued discussions over a few beers.

Day 2–26 November 2017: Travel to Dublin:

View from Dublin airport

The train to Gatwick was canceled, but that did not deter my excitement and I (barely) made it to my flight from London to Dublin. This was my first time in Dublin; I walked around the city and went to a few places suggested by friends. I met them for a nice dinner at a Chinese restaurant, roamed around the city, had Guinness and ended the night with some live Irish folk music and Irish coffee.

Day 3–27 November 2017: Belfast Java User Group:

Belfast

I took a short, comfortable and scenic bus ride from Dublin to Belfast. I did manage to get some sleep and arrived refreshed near Belfast city center. After a quick lunch, I checked in at my Airbnb and assembled the robots for the presentation in the evening. I roamed around the city, strolled through the Belfast Christmas market and enjoyed a few delicacies. I met one of the organizers before the presentation for a quick pint and he presented me with the Belfast JUG coffee mug. I presented Robots for Kid in Us and API Design of Eclipse Collections. We had good discussions around how we choose to evolve our API and the decisions we make while adding any new API. The night ended with a Guinness at Bittles Bar.

Day 4–28 November 2017: Dublin Java User Group:

Dublin

I took a bus back to Dublin and checked in at my Airbnb close to Temple Bar. I met the organizer of DubJUG and enjoyed some much needed and filling lunch. With about 3 hours left to spend, I wandered to the Jameson Distillery, took a tour and headed over for my presentations on Collections.compare and How to make your project Java 9 compatible. I ended up presenting for more than 3 hours. One unique thing I did by audience demand was explain how we code-generate all the primitive collections in Eclipse Collections. The night ended with Jameson and some Irish folk music.

Day 5–29 November 2017: Edinburgh Java User Group:

Edinburgh

I flew from Dublin to Edinburgh. I had a traditional lunch of haggis and mash with the only Java Champion in Scotland and walked around Edinburgh. I strolled around the city, watched a live march, went whiskey/scotch tasting and got ready for the presentation in the evening. The meet-up was kicked off by one of the organizers of the Edinburgh JUG with a discussion about What’s new with Java. I followed with Collections.compare. We went to a pub right down the street and discussed more about Java and a bit about politics. After that I headed to a whisky bar for some lively Scottish music and ended the night with a Balvenie.

Day 6–30 November 2017: Manchester Java Community (MJC):

Manchester

The day started with a tasty, filling and traditional Scottish breakfast. After a brisk walk to Edinburgh Waverley station I was ready to board the train to Manchester, but the train was canceled! I was put on a different train with a connection at Preston. I reached Manchester Piccadilly station in the late afternoon and enjoyed a delicious lunch at Kabana. I met one of the organizers of the Manchester Java Community; we had met at JCrete in 2016 and have been friends since. In autumn, he convinced me to visit and present at MJC, and thus the organization of the JUG tour began. I presented Collections.compare and How to make your project Java 9 compatible. It was very well received and we continued our discussions at Piccadilly Tap, ending the night with some local ale.

Day 7–1 December 2017: West Midlands Java User Group (WMJUG):

Birmingham

I had lunch at Kabana (again) and headed from Manchester to Birmingham by train. No train delays this time, yay! This was the last JUG I was scheduled to present at on this trip. I stored my bags at the meet-up location and set out for a walk around Birmingham. I love to walk and explore whenever I am visiting a new place. I stumbled across Needless Alley 2; I didn’t call it needless, it was named so! Next time I visit Birmingham I do want to search for Needless Alley 1. The walk around Birmingham was relaxing as I reflected on the JUG tour and was happy with the way it turned out. I headed back to the meet-up location and presented Collections.compare and Robots for Kid in Us. After the presentation, I walked over to Birmingham International station for my train to London. As always, the bad luck with trains continued and the train was delayed by more than an hour. However, I was happy, excited and content with the success of the JUG tour, and train delays could not crush my spirit. Finally, the train arrived and I was London bound.

Day 8,9–2,3 December 2017: Hanging out with friends in London:
If you go to a different place and don’t meet your friends, I don’t call that visiting. I met my friends from London, watched a Premier League match at a local pub, tasted some fresh brew at a micro-brewery, enjoyed a delicious and filling Sunday Roast and of course had fish and chips.

I headed home after a successful JUG tour and will definitely do it again next year. I would like to thank the organizers of London JC (Twitter: ljcjug), Belfast JUG (Twitter: BelfastJUG), Dublin JUG (Twitter: DubJug), Edinburgh JUG (Twitter: edinburghjava), Manchester JC (Twitter: mcrjava) and West Midlands JUG (Twitter: wm_jug). I would also like to thank all my friends for their support, timely feedback, encouragement, tweets, emails and chats. I would like to thank everyone who starred the Eclipse Collections repository and showed their support. Last but not least, a very big thank you to everyone who attended and supported me!

In this short travelogue I tried to describe my experience, but honestly I did not come even close to capturing how awesome it really was. In the future I plan to do JUG tours across different countries.


by Nikhil Nanivadekar at December 13, 2017 04:39 AM

Eclipse Mars - how to switch back to previous Java formatter?

by Mateusz Matela (noreply@blogger.com) at December 12, 2017 10:00 PM

Update: the Luna formatter plugin now lives on GitHub, thanks to Asier Lostalé!

The Java code formatter in Eclipse 4.5 has been completely rewritten. There are far fewer bugs, the behavior is more consistent, and line wrapping is a bit smarter. It also opens the way to easier implementation of new improvements in the future.
While most users will probably be happy with the new formatter, for some the changes may be unwelcome. Probably the most controversial change is a more restrictive approach to the "Never join already wrapped lines" option: a lot of line breaks that used to be tolerated by the old formatter will now be removed if they don't fit the line wrapping settings. Also, some teams just don't want to force everyone to immediately switch to the newest Eclipse, and during the transition it would be problematic if part of the team used a different formatter.
If you also find that the problems related to the changed formatter behavior outweigh the benefits of bug fixes and improvements, you'll be glad to hear that Eclipse Mars has a new extension point for the Java formatter. So it's easy to take the code of the old formatter and wrap it in a plugin. I did just that for your convenience; the plugin can be downloaded here. Although I tested it a bit, the plugin is provided "as is" and I take no responsibility for anything that happens because of it. Just unzip the provided jar into the "plugins" directory of your Eclipse installation, restart, and in the Preferences -> Code Style -> Formatter page select Formatter implementation: Old formatter. Happy formatting!

by Mateusz Matela (noreply@blogger.com) at December 12, 2017 10:00 PM

Put OOMPH product versions in separate files

by vzurczak at December 12, 2017 11:57 AM

Just a quick tip for those who have big setup files for OOMPH products. I recently split one up by putting product versions in separate files. Here is how to proceed.

One big setup file would look like this…

<?xml version="1.0" encoding="UTF-8"?>
<setup:ProductCatalog
    xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:setup="http://www.eclipse.org/oomph/setup/1.0"
    name="my.product"
    label="some label">
    
  <!-- ... -->

  <product name="myproduct" label="Custom Eclipse">
    <annotation
        source="http://www.eclipse.org/oomph/setup/BrandingInfo">
      <detail
          key="folderName">
        <value>eclipse</value>
      </detail>
      <detail
          key="folderName.macosx">
        <value>Eclipse</value>
      </detail>
    </annotation>
    
    <version name="neon"
        label="Latest Neon"
        requiredJavaVersion="1.8">

        <!-- ... -->

    </version>

    <!-- Maybe with several versions. -->

    <description>...</description>
  </product>
</setup:ProductCatalog>

Now, to split it up, just add a reference to another file.

<?xml version="1.0" encoding="UTF-8"?>
<setup:ProductCatalog
    xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:setup="http://www.eclipse.org/oomph/setup/1.0"
    name="my.product"
    label="some label">
    
  <!-- ... -->

  <product name="myproduct" label="Custom Eclipse">
    <annotation
        source="http://www.eclipse.org/oomph/setup/BrandingInfo">
      <detail
          key="folderName">
        <value>eclipse</value>
      </detail>
      <detail
          key="folderName.macosx">
        <value>Eclipse</value>
      </detail>
    </annotation>
    
    <version href="neon/my.products.neon.setup#/" />
    <description>...</description>
  </product>
</setup:ProductCatalog>

The important part is the reference to a sub-model file: <version href="neon/my.products.neon.setup#/" />. And here is its content.

<?xml version="1.0" encoding="UTF-8"?>
<setup:ProductVersion
    xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:setup="http://www.eclipse.org/oomph/setup/1.0"
    name="neon"
    label="Latest Neon"
    requiredJavaVersion="1.8">

      <!-- Force the loading of the parent when we open this file directly. -->
      <annotation source="ProductReference">
            <reference href="../my-other-product.setup#/" />
      </annotation>

      <!-- ... -->
      
</setup:ProductVersion>

The important part here is the ProductReference annotation. It has no meaning for the EMF model itself, but it forces EMF to load the parent. If you drop this annotation and then open this setup file, you will get an error stating that the required feature ‘product’ of ‘Custom Eclipse’ must be set. With it, no matter which setup file you open, everything will be resolved correctly, without an error in the setup editor.

I made this summary after asking on Eclipse’s forums.
Many thanks to Ed Merks for his help.



by vzurczak at December 12, 2017 11:57 AM

Do we need Eclipse Commons?

by Niko Stotz at December 10, 2017 11:12 AM

tl;dr Vote at https://github.com/enikao/eclipse-commons/issues/1 in favor of or against creating an Eclipse Commons (akin to Apache Commons) project.

Rationale

Have you ever done an Eclipse / EMF project without implementing this code?

public static IResource toIResource(URI uri) {
    if (uri.isPlatformResource()) {
        return ResourcesPlugin.getWorkspace().getRoot()
            .findMember(uri.toPlatformString(true));
    }

    return null;
}

I haven’t. And I’m tired of writing this code over and over again, especially as I usually need more than one take to get it right (for example, the version above does not handle URIs pointing to non-existing IResources).
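As an illustration of the kind of hardening meant here, a variant could guard against null input and inaccessible resources (the method name is made up, and whether you want the extra exists() check depends on your use case):

import org.eclipse.core.resources.IResource;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.emf.common.util.URI;

public final class UriUtils {

    public static IResource toExistingIResource(URI uri) {
        if (uri == null || !uri.isPlatformResource()) {
            return null;
        }
        IResource resource = ResourcesPlugin.getWorkspace().getRoot()
            .findMember(uri.toPlatformString(true));
        // findMember() may return null; the exists() check additionally
        // filters resources that are in the tree but not accessible
        return resource != null && resource.exists() ? resource : null;
    }
}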

It has already been implemented several times. But I don’t want to introduce complex dependencies just for reusing these implementations.

There are lots of other commonly reused code snippets, like

  • Convert between java.net.URI and org.eclipse.emf.common.util.URI
  • In a JUnit test, wait for the workspace to be ready
  • Create an IStatus without breaking your fingers (see the sketch below)
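To make the IStatus case concrete, the kind of one-liner factory I have in mind might look like this (the class name, method name and plugin id are all illustrative):

import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;

public final class StatusUtils {

    public static IStatus error(String message, Throwable cause) {
        // "org.example.myplugin" is a placeholder plugin id
        return new Status(IStatus.ERROR, "org.example.myplugin", message, cause);
    }
}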

I therefore propose an Eclipse Commons project. This project would collect small utilities with minimal additional dependencies for common reuse.

Counter-Arguments

Allow me to anticipate some counter-arguments.

This code should be in the original project!

Yes, it should. But it is not. Some reasons might be personal preferences by the maintainers (“This code snippet is too short to be useful”), contextual arguments (“An URI cannot be generically represented as an IResource”), or no actual original project (where would the JUnit extensions go?).
(Please note that these are hypothetical reasons, not based on concrete experience.)

We don’t want to repeat the npm disaster with tiny snippets!

I envision Eclipse Commons to be hosted by the Eclipse Foundation (obviously). Therefore, no code that has been in there can just disappear.

This seems like an arbitrary collection of code. Who decides what is in, what is out of scope?

I propose a two-step approach. First, there is a “pile” repository (better name highly appreciated) where almost anything can be proposed. For each new proposal, we would have a vote. Every proposal that passes a threshold of votes (details tbd) and meets the required quality is accepted in Eclipse Commons.
Deployable artifacts are only created from the Eclipse Commons repository.

We definitely do not want to re-implement functionality that’s available on a similar level, like Apache Commons or Google Guava.

That’s chaos! Who creates order?

Besides some sanitation, I would not enforce any “grand scheme of things”. I’d guess Eclipse Commons would contain a quite diverse code base, therefore we don’t need central coordination. Also, there might not be any one party with enough insight into all parts of Eclipse Commons.

If a sizable chunk of code for a common topic agglomerates, it’s a good sign that the original project is really missing something and should adopt this chunk of code. This implies there is a party capable of bringing order to that chunk.

Also, Eclipse Commons should not be misused as a dump for large code base(s) that really should be their own project.

Thoughts on Implementation

Dependencies

Eclipse Commons should be separated into different plug-ins, guided by having the least possible dependencies. We might have one plug-in org.eclipse.commons.emf only depending on org.eclipse.core.runtime, org.eclipse.core.resources, and org.eclipse.emf.*. Another one might be org.eclipse.commons.junit, depending only on core Eclipse and JUnit plug-ins, etc.

We should have strict separation between UI and non-UI dependent code. Where applicable, we should separate OSGi-dependent code from OSGi-independent implementations (as an example, a class EcoreUtil2 might go to the plug-in org.eclipse.commons.emf.standalone, as EMF can be used without OSGi).

As these plug-ins are meant solely for reuse, they should re-export any dependencies required to use them. We must avoid “class hierarchy incomplete” or “indirectly required” errors for Eclipse Commons users.

Versioning and Evolution

I propose semantic versioning. Regarding version x.y.z, we increase y every time some new proposal is migrated from “pile”. We reset z for every y increment to 0, and increase z for maintenance and bug fixes. x might be increased when code chunks are moved to an original project or one of our dependencies changes (see below). Every JavaDoc must contain @since for fine-grained versioning.

We should be “forever” (i.e. for the foreseeable future) backwards-compatible, so we avoid any issues with upgrading. If code chunks are moved to an original project, this code should still be available within Eclipse Commons, but should be marked as @deprecated. Removing these chunks would require Eclipse Commons users to move to the newest version of the original project, but they might not be able to do this. For the same reason, we cannot delegate from the (now deprecated) Eclipse Commons implementation to the original project.

I’m not sure what to do if a new proposal requires a major change in our dependencies. As an example, the existing plug-in org.eclipse.commons.emf might depend on org.eclipse.emf in version 2.3, but the new proposal requires changes only introduced in EMF v2.6. We might want to go through with such a change, or create a separate plug-in with stricter dependencies.

Required Quality

I think the bar for entering “pile” should be rather low. This allows voting on how useful the addition might be to others, and also allows community effort in reaching the desired quality. As we expect more or less independent utilities, improvements by others than the original authors should be easily possible without much required ramp-up.

On the other hand, code that enters Eclipse Commons must be pretty good. We want to keep this “forever”, and don’t want to spend the next year cleaning up after a half-baked one-off addition. This includes thorough documentation and tests.

We should care for naming, especially symmetry in naming. Having two methods IResource UriUtils.toIResource(URI) and URI IResourceTool.asUri(IResource) is highly undesirable.

Next Steps

I created a (temporary) repository at https://github.com/enikao/eclipse-commons, including a (hopefully) thorough implementation of the aforementioned utility method.

What do you think of Eclipse Commons? Would you use it? Contribute? Help in maintaining it? How many votes should be the “pile” → Eclipse Commons threshold? And what would be a better name than “pile”?
Please leave your votes and comments at github.

If there was sufficient interest, the next step would be an Eclipse Incubation Project proposal.


by Niko Stotz at December 10, 2017 11:12 AM

The Occurrences of Occurrences

by Donald Raab at December 10, 2017 02:24 AM

Keeping counts, and letting you use them for different purposes.

Part of my shot glass collection

I have collected shot glasses for the past three decades. I lost count of the number of shot glasses I have. I think it must be somewhere over 200 by now. I usually buy one when I visit a place for the first time. Friends and family have joined in over the years picking me up shot glasses from cool places they visit. Now if I created a Java class called ShotGlass, I could put all instances of them in a Bag and then I could answer all kinds of questions about them. I could count them by a lot of different attributes.

A Bag keeps counts for you. It can tell you the number of occurrences of something it contains. There are several methods available on Bag that allow you to query or manipulate the number of occurrences of something in a Bag. Here are all the occurrences methods that the MutableBag class in Eclipse Collections 9.1 will have once it is released.

The method collectWithOccurrences is the newest addition in 9.1.

The following Eclipse Collections code will programmatically show you the top occurrences of all of the methods that contain “occurrences” in MutableBag.

@Test
public void topOccurrencesOfOccurrences()
{
    Lists.fixedSize.with(MutableBag.class.getMethods())
        .asLazy()
        .selectWith(this::methodContains, "occurrences")
        .countBy(Method::getName)
        .topOccurrences(4)
        .each(System.out::println);
}

private boolean methodContains(Method method, String string)
{
    return method.getName().toLowerCase().contains(string);
}

This code outputs:

selectByOccurrences:4
topOccurrences:2
bottomOccurrences:2
forEachWithOccurrences:1
setOccurrences:1
addOccurrences:1
occurrencesOf:1
collectWithOccurrences:1
removeOccurrences:1

Here is an example of using the method collectWithOccurrences with a Bag of ShotGlass.

@Test
public void collectWithOccurrences()
{
    Bag<ShotGlass> glasses = Bags.mutable.with(
        new ShotGlass(Size.SMALL, "Orlando", "Florida", "USA", "Disney World"),
        new ShotGlass(Size.SMALL, "Orlando", "Florida", "USA", "Sea World"),
        new ShotGlass(Size.SMALL, "Orlando", "Florida", "USA", "Universal Studios"),
        new ShotGlass(Size.MEDIUM, "Orlando", "Florida", "USA", "Hard Rock Cafe"));

    Bag<String> byCity = glasses.countBy(ShotGlass::getCity);

    Assert.assertEquals(
        Bags.mutable.with(PrimitiveTuples.pair("Orlando", 4)),
        byCity.collectWithOccurrences(PrimitiveTuples::pair, Bags.mutable.empty()));

    Bag<Size> bySize = glasses.countBy(ShotGlass::getSize);

    Assert.assertEquals(
        Bags.mutable.with(
            PrimitiveTuples.pair(Size.SMALL, 3),
            PrimitiveTuples.pair(Size.MEDIUM, 1)),
        bySize.collectWithOccurrences(PrimitiveTuples::pair, Bags.mutable.empty()));
}

The method collectWithOccurrences passes each unique value together with its count to an ObjectIntToObjectFunction. It is up to the developer to determine what to return from the ObjectIntToObjectFunction. In these two cases I opted to simply return an ObjectIntPair by using PrimitiveTuples::pair.
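Since the function decides the result type, nothing forces you to return pairs. Continuing the byCity example from the test above, a hypothetical variant could produce formatted strings instead:

Bag<String> labels = byCity.collectWithOccurrences(
    (city, count) -> city + " x " + count,
    Bags.mutable.empty());

Assert.assertEquals(Bags.mutable.with("Orlando x 4"), labels);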

I have blogged previously about the prepositions By and With and their usefulness in the API of Eclipse Collections. There is the potential for more By and With occurrences methods to be added to the API of Bag. There is an open issue requesting more occurrences methods on Bag: https://github.com/eclipse/eclipse-collections/issues/406. I would support adding both By and With versions of various methods if there are valid needs.

So it is still unknown how many occurrences methods we will wind up adding to our Bag types. How many occurrences of occurrences methods do you need?

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it.


by Donald Raab at December 10, 2017 02:24 AM

Formatting Java method calls in Eclipse

by Lorenzo Bettini at December 09, 2017 03:38 PM

Especially with lambdas, you may end up with a chain of method calls that you’d like to have automatically formatted with each invocation on each line (maybe except for the very first invocation).

You can configure the Eclipse Java formatter in that respect; you just need to reach the right option ("Force split" is necessary to have each invocation on a separate line):

and then you can have method calls formatted automatically like this:
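For instance, a made-up stream pipeline (not taken from the original screenshots) would come out roughly as follows, assuming the wrapping policy with "Force split" enabled:

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ChainWrappingExample {
    public static void main(String[] args) {
        // each invocation in the chain ends up on its own line
        List<String> names = Stream.of("alpha", "beta", "gamma")
                .filter(s -> s.length() > 4)
                .map(String::toUpperCase)
                .sorted()
                .collect(Collectors.toList());
        System.out.println(names); // [ALPHA, GAMMA]
    }
}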

 


by Lorenzo Bettini at December 09, 2017 03:38 PM

Eclipse Handly 0.8 Released

by Vladimir Piskarev at December 08, 2017 10:34 AM

We are pleased to announce the availability of the Eclipse Handly 0.8 release, which attempts to finalize the key parts of the Core API. To that end, it contains many API enhancements to elements, element deltas, the model, and working copy management. It also introduces a new naming convention for model *Impl* interface methods and a separate package for model implementation support. Also, some optimizations have been made in the skeletal implementations to allow Handly-based models to scale even better than before.

New and Noteworthy
Migration Guide
Downloads

Despite its incubation status, the project is known to be successfully used by early adopters in their large-scale commercial products. Recently, we created an experimental fork of the Eclipse Java development tools (JDT) on GitHub as an experiment in adopting Handly within a non-trivial existing model implementation (the Java model); it can also serve as an exemplary implementation in addition to the exemplary implementations that are shipped as part of the project. All of the nearly 8,000 JUnit tests for the Java model, including performance tests, run successfully in the Handly-based fork, with none of the existing public JDT APIs having been affected in any way. The JDT fork and the other exemplary implementations, as well as the project’s getting started tutorial and architectural overview, have been updated to illustrate the enhancements in Handly 0.8.

Broader community feedback and participation would be most welcome.

The Handly Team


by Vladimir Piskarev at December 08, 2017 10:34 AM

Cloud Native IoT Development in Practice

by Benjamin Cabé at December 07, 2017 11:01 PM

With KubeCon happening this week in Austin, it is probably a good time to write an article on the role of containers and having a cloud native strategy for IoT, don’t you think?

Over the past years, Docker and its ecosystem have been instrumental in modernizing our approach to writing and shipping software. Today, more and more applications are becoming cloud native, meaning not only that core functionalities are being isolated as (micro)services, but also that applications are evolving to be first-class citizens in cloud environments (e.g. exposing health metrics, acting as stateless processes, etc.).

In this blog post, we will be looking at how to approach cloud native IoT development. We will be deploying an end-to-end IoT solution for power consumption monitoring on OpenShift. The deployed services include:

  • IoT connectivity layer – getting telemetry data into a backend system is a challenge in itself, and we’ll see how Eclipse Hono can help with IoT connectivity ;
  • Device data simulator – as a way to illustrate how thinking cloud native for IoT can help make your application scale, we will actually have device simulators running on our cluster ;
  • Monitoring dashboards – we’ll see how we can leverage Grafana to visualize the data coming into our cluster, and its overall health ;
  • End-user application – getting IoT data into our backend is one thing, but we’ll also see how to develop a simple web application to visualize our instant power consumption ;
  • Cloud IDE – we will be using Eclipse Che to develop the web application mentioned just before.

So, let’s break this down!

Firing up a single-node OpenShift cluster with Minishift

The best way to get an OpenShift cluster setup is to use Minishift, which helps you deploy a single-node cluster on your local machine.

You can download the latest Minishift releases, and find install instructions on the project’s Github repository.

Once you have the Minishift command installed, firing up the cluster is actually pretty easy. Here’s the command I use on my quad-core Intel i7 MacBook Pro:

minishift start --cpus 4 --memory 12GB --metrics --disk-size 40GB

Obviously, your mileage will vary depending on the number of CPUs, memory, or disk space you want to allocate to your cluster, but no matter what your operating system is, soon enough you should be able to log into the OpenShift web console.

Scalable IoT Messaging with Eclipse Hono

Eclipse Hono enables scalable and secure ingestion of large volumes of sensor data into backend systems.

Eclipse Hono Overview

The different building blocks of Hono (protocol adapters, device registry, …) can be deployed as microservices.

Deploying Hono on OpenShift is really easy: just follow the instructions available in the documentation! In my setup I’ve deployed a Hono environment where the core messaging functionality is taken care of by EnMasse, but you should also be fine with the regular, Qpid Dispatch-based distro.

Feeding data into our system

Now that our IoT connectivity layer is deployed, with Hono running within our cluster, we want to ingest data into our system and consume this data to, e.g., store it in a database.

Jens Reimann put together a nice setup that uses a public dataset of the energy consumption of a residential house to simulate “real” IoT devices. The application essentially deploys two services on our cluster:

  • A data simulator that sends energy consumption information to Hono using MQTT. The producer can be configured to simulate 1, 10, … 10,000 devices. And of course, you can also scale up the number of pods for the simulator to simulate even more devices.
  • A data consumer that taps into Hono’s telemetry API to retrieve data coming from all our virtual houses, and dump it into an InfluxDB time-series database.

If you follow the install instructions provided in Jens’ repo, you should have your simulator and consumer running in your OpenShift cluster, and data will start showing up in your InfluxDB database.

Here’s an example of what my Grafana dashboard looks like:

Running Eclipse Che on OpenShift

So we now have an IoT messaging infrastructure deployed in our OpenShift cluster, as well as an IoT app effectively pumping business data into our backend. Wouldn’t it be cool if we could also have the developer tools needed to write our user-facing application running in the same cluster?

Eclipse Che is a developer workspace server and cloud IDE that we will be deploying in our cluster, and using to write some Javascript code right from our browser. Deploying Eclipse Che on OpenShift is pretty straightforward and you can refer to the instructions on the Eclipse Che website to deploy it into your OpenShift project.

In my case, here’s how I would get the nightly build of Che 5.x deployed into my OpenShift project:

export CHE_IMAGE_TAG="nightly-centos"
export CHE_MULTIUSER="false"
export CHE_OPENSHIFT_PROJECT="hono"    

DEPLOY_SCRIPT_URL=https://raw.githubusercontent.com/eclipse/che/master/dockerfiles/init/modules/openshift/files/scripts/deploy_che.sh
WAIT_SCRIPT_URL=https://raw.githubusercontent.com/eclipse/che/master/dockerfiles/init/modules/openshift/files/scripts/wait_until_che_is_available.sh
STACKS_SCRIPT_URL=https://raw.githubusercontent.com/eclipse/che/master/dockerfiles/init/modules/openshift/files/scripts/replace_stacks.sh

curl -fsSL ${DEPLOY_SCRIPT_URL} -o ./get-che.sh
curl -fsSL ${WAIT_SCRIPT_URL} -o ./wait-che.sh
curl -fsSL ${STACKS_SCRIPT_URL} -o ./stacks-che.sh

bash ./get-che.sh ; bash ./wait-che.sh ; bash ./stacks-che.sh

And that’s it! Depending on your Internet speed, it may take a few minutes for everything to get deployed, but Eclipse Che is now just a click away, accessible through a URL such as http://che-hono.192.168.64.2.nip.io/.

Writing our user-facing ExpressJS app from Eclipse Che

However quick this all was to set up, we’ve essentially worked on the infrastructure of an IoT application: messaging, development environment, …

Arguably, the most interesting part is to actually make use of the data we’ve been collecting! For this, we will be developing a Node.js application that gets the overall electricity consumption metrics from InfluxDB and displays them on a fancy gauge.

The final version of the app is available here on my Github account. It uses Express, the InfluxDB client for Node.js, as well as gaugeJS for the gauge widget.

There are at least two interesting things to note here:

  • Thanks to Eclipse Che, not only can we easily set up a stack for Node.js development in no time, but we really have a full-blown IDE that includes advanced content assist features – not something you get that often when developing Javascript code. I can tell you that, not being an expert in the InfluxDB Javascript API, having code completion available in the IDE has been a pretty useful thing 🙂

  • Since Eclipse Che runs on the very same OpenShift cluster that holds our IoT backend, we can easily test our code against it. From within our Che workspace, all our environment variables are set up, and we can e.g. access Hono, InfluxDB, etc.

Closing the loop

One last thing… We now have a Node.js application built from Che, that lives in its own Github repo. Wouldn’t it be great to have it run in our cluster, alongside the rest of our microservices?

From the OpenShift console, you are just a couple of clicks away from deploying the Node.js app into the cluster. You can use the template for Node.js applications to automatically build a Docker image from the GitHub repository that contains our app. It will automatically detect that the repository contains a Node application, install all its dependencies, build an image, and then deploy it to a pod with a route properly configured to expose our app outside of the cluster.

You could also set up a hook so that whenever there is a new commit in the upstream repository, the image gets rebuilt and redeployed.

Takeaways

Hopefully, this blog post helped you understand the importance of thinking cloud native when it comes to IoT development.

If you use Eclipse Hono for your IoT connectivity layer, for example, you automagically get a piece of infrastructure that is already instrumented to autoscale, should the number of devices connected to your backend require it.

Thanks to Eclipse Che, you can develop your IoT services in a controlled environment that is already part of the same cluster where the rest of your IoT infrastructure and applications are running.

Final words: don’t push it!

Now, I cannot conclude this blog post without a personal observation, and something I hope others have in mind as well.

Many moons ago, I used to teach people how to develop plugins for the Eclipse RCP platform – a truly great, highly extensible, framework. However, the platform being so modular, soon enough, you could end up turning everything into a plugin, just for the sake of having an “elegant” design. And when you think about it, microservices are very similar to Eclipse plugins…

Does it really make sense to isolate really tiny microservices in their own containers? For each microservice, what’s the overhead gonna be to maintain its build system, access rights to the corresponding Git repository, configuration files, …?

You should absolutely have a cloud native strategy when it comes to building your IoT solution, but don’t overthink it! Your microservice architecture will likely emerge over time, and starting with too small a service granularity will just make things unnecessarily complex.

Please use the comments section below to share your thoughts on cloud native and IoT. I think this will be a hot topic for the near future, and I’m interested in hearing your views!


Final note: Shout out to Jens Reimann and Dejan Bosanac from Red Hat who’ve put a lot of work into the Hono part of the demo (running Hono on OpenShift, and putting together the demo app publishing electricity consumption information). Thanks also to Eugene Ivantsov for helping out with getting a proper Eclipse Che stack for JavaScript set up.

The post Cloud Native IoT Development in Practice appeared first on Benjamin Cabé.


by Benjamin Cabé at December 07, 2017 11:01 PM

Theia – VS Code in the Cloud

by Sven Efftinge at December 06, 2017 05:32 PM

… that supports native desktop apps through Electron, too.

VS Code is an awesome development tool. It comes with the right balance of simplicity and feature depth. The quality is really high and it performs very well in all situations. Even die-hard Emacs fans are convinced.

As VS Code is mostly implemented in TypeScript, you would assume that you can run it in browsers, connecting to remote workspaces running in containers. Unfortunately, it wasn’t designed for that. The underlying architecture is really not made for a remote connection between the backend node processes and the frontend (renderer process), as the communication is very chatty and fine-grained.

Providing an awesome IDE for workspaces running in containers is, however, something we needed. And because the available cloud IDEs were disappointing we started Theia.

Besides the support for a browser IDE, we also wanted to allow extension developers to build feature-rich extensions. So unlike VS Code, where only a limited set of functionality is exposed to extension developers, Theia itself is a collection of extensions. In other words, extensions are first-class citizens and they have access to everything the core packages have (as they are extensions, too).

Besides those two design goals, Theia is in many ways similar to VS Code, and it reuses many parts, like the Monaco editor, the language server protocol or the quick open widget (command palette, etc.).

Theia BETA is out

Today we have reached an important milestone and published a new version (v0.3.0) to npmjs.com. Theia offers the following features already:

  • Extension System
  • Navigator
  • Editor (monaco)
  • Terminal (xterm.js)
  • Flexible Layout (through phosphor.js)
  • Full! Language Server Protocol Support (diagnostics, completion, etc. for 50+ languages)
  • Language Extensions for TypeScript, JavaScript, Java, Python, YANG, DSLs etc.
  • Outline View
  • Problem Marker View
  • Git Staging View
  • Command Palette
  • Command and Keybinding Registries
  • Theming
  • Layout Restoration
  • Find File
  • Find Global Symbol
  • many more

It is best to try it out yourself, which you can easily do if you have Docker on your machine:

docker run -it -p 3000:3000 -v "$(pwd):/home/project:cached" theiaide/theia

The Docker Hub organization currently hosts two images, but more configurations will be added over time. And you are invited to contribute your own, just as IBM did last week with the Java image.

Next Steps

We are in the process of finalizing our plans for the next months. While Theia is in good shape to start diving into it, there are some features missing. We will come up with a more concrete plan in the coming days/weeks, but it will include support for debugging, launching tasks and better support for Git, among others.

The activity on Theia is very promising, as you can see on Github.

Besides Ericsson and TypeFox, who are both more committed than ever, we are having conversations with three other large corporations who use Theia for building products and who are willing to join and contribute back. In addition, people show up every other day trying Theia or expressing their excitement.

And we are excited, too! 🙂 During the next months we will focus on adding missing features, fixing bugs, sorting out rough edges and taking care of performance improvements. But we will also be very happy to help you pick up and get into Theia. It’s ready for that!

Get in touch on Github or Gitter (or simply leave a comment below).


by Sven Efftinge at December 06, 2017 05:32 PM

Open IoT Challenge 4.0 Scholars

by Roxanne on IoT at December 06, 2017 01:45 PM

The fourth edition of the Open IoT Challenge was launched in September and as of November 20, the race to build the best open IoT solution has officially begun!

Over 70 teams have submitted their ideas and are now in the running to win the Open IoT Challenge 4.0.

The participants have four months to make their idea a reality and show everyone how an innovative IoT solution can be built with open source and standards. The deadline to submit their final solution report is March 15. After that deadline, it’s in the jury’s hands. They will review each solution and pick the 3 winning teams.

The Jury

The jury has already deliberated to select the best proposals, which was the first Challenge milestone. They reviewed each proposal and chose the Top 12! You heard that right: 12 lucky teams were selected, not just 10 as originally announced.

Open IoT Challenge 4.0 — Jury

Top 12 Proposals

Congratulations to the Top 12 teams, whose proposals were judged the most promising and who have each been awarded a “starter kit”! The kit includes a $150 gift card to buy IoT hardware and a mangOH® Red offered by Sierra Wireless for those who wish to use it to build their solution.

The top teams and the names of the submitters are (in alphabetical order):

To all submitters

You submitted a solution idea for the Challenge, but your name does not appear in this list? That doesn’t mean that you can’t win the Challenge! You’re still in the running for the final prizes, so don’t give up.

We could only select so many teams to be in the top proposals and we actually chose 12 teams instead of 10. Keep working on your solution and show the world how it is done!

Stay tuned

The teams will be sharing their build journey in blog or vlog form, so keep checking the Challenge website to follow their story or follow @EclipseIoT or #OpenIoTChallenge on Twitter to receive updates in your feed.

Thank you to our sponsors for making the Open IoT Challenge 4.0 possible.

Open IoT Challenge 4.0 Sponsor

by Roxanne on IoT at December 06, 2017 01:45 PM

ECF 3.13.8 and etcd discovery for remote services

by Scott Lewis (noreply@blogger.com) at December 05, 2017 03:44 AM


ECF 3.13.8 has been available since September, but there are some new things available:

ECF 3.13.8 changes have been distributed to Maven Central.

There is a new release (1.3.0) of the etcd discovery provider. This provider uses an etcd cluster to publish and discover remote services, allowing complete integration with systems like Kubernetes, which also use etcd for service discovery.


by Scott Lewis (noreply@blogger.com) at December 05, 2017 03:44 AM

Cloud Native Computing Foundation 2 Years Later

by Chris Aniszczyk at December 04, 2017 10:06 PM

A little over two years ago, after five years of service at Twitter, I took the opportunity to build an open source foundation from scratch, using some of the computing techniques we experimented with at Twitter.

I was initially excited about the idea because of my experience with open source foundations in previous lives, from being involved with the Eclipse Foundation, Linux Foundation and Apache Foundation, plus being part of the early discussions around OpenStack governance formation. I viewed this as an opportunity to learn from the lessons of other foundations and do something new and modern in the GitHub era, along with, of course, making our own mistakes. You really don’t get many opportunities to start an open source foundation from scratch that will impact the whole industry.

Stepping back, the original idea behind the Cloud Native Computing Foundation (CNCF) was to promote a method of computing (we call it cloud native) pioneered by internet-scale giants such as Google, Twitter, Facebook and so on, and bring it to the rest of the industry. If you looked inside these companies, you could see they were running services at scale, packaged in containers, orchestrated by some central system.

The first mission was to provide a neutral home for Kubernetes as the seed project of the foundation, but also to provide room for adjacent projects that specialize in specific areas of this new world (think monitoring and tracing, as an example). The second mission was to convince all the major cloud providers to offer Kubernetes as a managed service, so we could essentially have a “POSIX of the cloud”: a set of distributed APIs that would work everywhere (including on premise). Last week, with AWS announcing their managed offering EKS, we accomplished this goal: every major cloud provider now supports Kubernetes natively. Kubernetes is truly the lingua franca of the cloud.

We still have a long way to go within CNCF to truly make cloud native computing ubiquitous across the industry, but I’m excited to see so many companies and individuals come together under CNCF to make this happen, especially as we have our largest annual gathering this week, KubeCon/CloudNativeCon. Personally, I’m nothing but thrilled about what the future holds and truly lucky to be serving our community under the auspices of the foundation.

A special thank you to Craig McLuckie, Sarah Novotony, Todd Moore, Ken Owens, Peixin Hou, Doug Davis, Jeffrey Borek, Jonathan Donaldson, Carl Trieloff, Chris Wright and many other folks that were at that CNCF first board meeting two years ago bootstrapping the foundation.


by Chris Aniszczyk at December 04, 2017 10:06 PM