Introducing our keynote speakers

by Anonymous at September 12, 2018 10:04 AM

Harish Pillay, Global Head, Community Architecture and Leadership at Red Hat - Lessons on Open Source from a 25-Year-Old Company

Tony Walsh, European Space Agency - Flying to Jupiter with OSGi

Amanda Whaley, Director of Developer Experience & Developer Advocacy for Cisco DevNet - Marie Curie, Open Source, Kickstarter, and Women in Tech



Sponsor Testimonial: Renesas

by Anonymous at September 12, 2018 09:50 AM

Renesas has been developing our own Eclipse-based IDE product for over 8 years for Embedded Developers using Renesas hardware. Last year was our first with a booth, and it gave us a great opportunity to show attendees what we are doing and really get to meet members of the community. Be sure to visit our booth this year for a look at our OpenADx autonomous driving simulation technology demonstrator!


How to deploy Eclipse Theia on a Raspberry Pi

by Jonas Helming and Maximilian Koegel at September 12, 2018 09:49 AM

Eclipse Theia is a platform to create IDEs and custom (modeling) tools based on web technologies (TypeScript, CSS, and HTML). Please see this article for details about this new Eclipse project.

One advantage of using web technology, and of Eclipse Theia in particular, is that the tool can be accessed directly in the browser without any installation or client set-up. However, this only works if the tool based on Theia has been deployed somewhere. This could be a cloud server, a Docker container, or, in our case, a Raspberry Pi! You might wonder why you would want to deploy a browser tool on a Raspberry Pi. First of all, a Raspberry Pi is probably the cheapest server you can imagine, so if, for whatever reason, you cannot deploy and access your tool in the cloud, a Raspberry Pi still allows a local client/server deployment. More interestingly, Raspberry Pis are often used to control or orchestrate embedded use cases. That means the Raspberry Pi executes some software which can control devices that are connected to it. In this scenario, having the tooling to develop this software running on the Raspberry Pi provides a very consistent set-up. You could then ship a Raspberry Pi which includes the software and the tooling – all on one device.

Anyway, since when do we need a reason to deploy something on a Raspberry Pi? It is just fun, so let us get going!

How to deploy Eclipse Theia on a Raspberry Pi

At this point we assume that you have already installed the runtime dependencies of Theia, namely Node.js v8 and Yarn.

The main issue we will have to deal with is a mismatch of processor architectures. The Raspberry Pi is powered by an ARM processor, while your development machine is likely x86-based. While it is certainly possible to build Theia directly on the Raspberry Pi, you might want to use your regular computer instead. Doing so will save you the hassle of setting up the full build environment on the Raspberry Pi, and compilation will also be much faster.

However, you cannot simply copy your Theia build from an x86-based machine onto the Raspberry Pi. This might seem odd at first, because after all Theia is JavaScript-based. However, Theia uses certain interfaces to make use of native OS functionality. For example, the terminal feature is based on the node-pty module, which forks native OS processes in the background and redirects their output. These interfaces include some C or C++ code, which must be compiled for the architecture Theia is running on. To build code for the ARM architecture on your regular x86 machine, you need a so-called “cross-compiler”. As the name implies, it allows you to compile code for a target architecture that is different from the architecture of the machine doing the building. Linaro provides a cross-compiler for the Raspberry Pi, which you can find here. Clone this repository as a preparation; you will point the compiler paths to it below.

First, compile Theia as usual using

yarn install

This will build Theia for your current architecture (x86). Now we need to re-compile the native bits using our cross-compiler. To do that we have to set two environment variables, defining the C and C++ compilers to use. You can set them like so:

export CC=/opt/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-gcc
export CXX=/opt/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-g++

Make sure to adapt these locations to your installation.

Afterwards, run the following command in the Theia project root directory to re-compile the native code parts for the ARM architecture:

npm rebuild --target_arch=arm

Finally, unset the cross-compiler using:

unset CC CXX

That’s it! Copy your project directory onto your Raspberry Pi and start it as usual:

yarn start

Note: There is currently a timeout issue when running Theia on a slow machine like a Raspberry Pi. You can avoid this problem by appending --no-cluster to the yarn start command. With this option, everything runs in a single process rather than in separate worker processes. The progress of this issue is being tracked here.



JBoss Tools 4.9.0.AM3 for Eclipse 2018-09 M2

by jeffmaury at September 12, 2018 06:24 AM

Happy to announce 4.9.0.AM3 (Developer Milestone 3) build for Eclipse 2018-09 M2.

Downloads available at JBoss Tools 4.9.0 AM3.

What is New?

Full info is at this page. Some highlights are below.

General

Server Tools

WildFly 14 Server Adapter

A server adapter has been added to work with WildFly 14. It adds support for Java EE 8.

Forge Tools

Forge Runtime updated to 3.9.1.Final

The included Forge runtime is now 3.9.1.Final. Read the official announcement here.

Enjoy!

Jeff Maury



JBoss Tools 4.9.0.AM2 for Eclipse 2018-09 M2

by jeffmaury at September 12, 2018 06:24 AM

Happy to announce 4.9.0.AM2 (Developer Milestone 2) build for Eclipse 2018-09 M2.

Downloads available at JBoss Tools 4.9.0 AM2.

What is New?

Full info is at this page. Some highlights are below.

General

Eclipse 2018-09

JBoss Tools is now targeting Eclipse 2018-09 M2.

Fuse Tooling

WSDL to Camel REST DSL improvements

The version of the library used to generate Camel REST DSL from WSDL files has been updated. It now covers more types of WSDL files. See https://github.com/jboss-fuse/wsdl2rest/milestone/3?closed=1 for the list of improvements.

REST Editor tab improvements

In the last milestone, we began adding editing capabilities to the read-only REST tab that we had added to the route editor in the previous release. Those efforts have continued, and we now have a fully editable REST tab.

Fully Editable REST Editor

You can now:

  • Create and delete REST Configurations

  • Create and delete REST Elements

  • Create and delete REST Operations

  • Edit properties for a selected REST Element in the Properties view

  • Edit properties for a selected REST Operation in the Properties view

In addition, we’ve improved the look and feel by fixing the scrolling capabilities of the REST Element and REST Operations lists.

Enjoy!

Jeff Maury



EC by Example: InjectInto

by Donald Raab at September 12, 2018 12:39 AM

Learn one of the most general, flexible, and least understood iteration patterns in Eclipse Collections.

Grounds for Sculpture, Hamilton N.J.

Continuum Transfunctioner

Like the Continuum Transfunctioner from “Dude, Where’s My Car?”, the method injectInto is a very mysterious and powerful method, and its mystery is only exceeded by its power.

So what does injectInto do?

The method injectInto can be used to do pretty much anything. The method injects an initial value into a two-argument function along with the first element of the collection and calculates a result. That result is then passed to the next invocation of the function as the initial value, along with the next element of the collection, and so on until all elements of the collection have been visited and a final result is returned.

The name injectInto is based on the inject:into: selector from Smalltalk. InjectInto is an alternative name for foldLeft or reduce.
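
For readers who know JavaScript or TypeScript better than Smalltalk, Array.prototype.reduce expresses the same left fold. The sketch below mirrors a few of the Java examples that follow, using only the standard library rather than Eclipse Collections:

```typescript
// reduce threads an accumulator through the elements, exactly like injectInto:
// the initial value is combined with the first element, that result with the
// second element, and so on.
const list = [1, 2, 3, 4, 5];

// sum: inject 0 and combine with +
const sum = list.reduce((result, each) => result + each, 0); // 15

// min: inject Infinity and combine with Math.min
const min = list.reduce((result, each) => Math.min(result, each), Infinity); // 1

// collect/map: inject an empty array and append the transformed element
const upper = ['a', 'b', 'c'].reduce<string[]>(
    (result, each) => (result.push(each.toUpperCase()), result), []); // ['A', 'B', 'C']
```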

I will illustrate ways to implement various algorithms using injectInto to show how mysterious and powerful it is.

Example: Min and Max

@Test
public void injectIntoMinAndMax()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3, 4, 5);

    Integer maxInt = Integer.MAX_VALUE;
    Integer minValue = list.injectInto(maxInt, Math::min);
    Assert.assertEquals(list.min(), minValue);

    Integer minInt = Integer.MIN_VALUE;
    Integer maxValue = list.injectInto(minInt, Math::max);
    Assert.assertEquals(list.max(), maxValue);
}

Example: Sum

@Test
public void injectIntoSum()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3, 4, 5);

    Integer sum = list.injectInto(Integer.valueOf(0), Integer::sum);
    Assert.assertEquals(Integer.valueOf(15), sum);
}

Example: Product

@Test
public void injectIntoProduct()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3, 4, 5);

    Integer product =
        list.injectInto(
            Integer.valueOf(1),
            (result, each) -> result * each);
    Assert.assertEquals(Integer.valueOf(120), product);
}

Example: Collect

@Test
public void injectIntoCollect()
{
    MutableList<String> lowerCase =
        Lists.mutable.with("a", "b", "c", "d", "e");
    MutableList<Object> upperCase =
        lowerCase.injectInto(
            Lists.mutable.empty(),
            (list, each) -> list.with(each.toUpperCase()));
    Assert.assertEquals(
        lowerCase.collect(String::toUpperCase),
        upperCase);
}

Example: GroupBy

@Test
public void injectIntoGroupBy()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3, 4, 5);
    MutableListMultimap<Integer, Integer> grouped =
        Multimaps.mutable.list.empty();
    list.injectInto(grouped, (multimap, each) -> {
        multimap.put(each % 2, each);
        return multimap;
    });
    Assert.assertEquals(list.groupBy(each -> each % 2), grouped);
}

Example: Collectors.groupingBy

@Test
public void injectIntoGroupingBy()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3, 4, 5);
    MutableMap<Integer, List<Integer>> grouped =
        Maps.mutable.empty();
    list.injectInto(grouped, (map, each) -> {
        map.getIfAbsentPut(each % 2, Lists.mutable::empty)
            .add(each);
        return map;
    });
    Assert.assertEquals(
        list.stream().collect(
            Collectors.groupingBy(each -> each % 2)),
        grouped);
}

Example: Detect

@Test
public void injectIntoDetect()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3, 4, 5);
    Integer value = list.injectInto(
        null,
        (result, each) ->
            result == null && each > 2 ? each : result);
    Assert.assertEquals(list.detect(each -> each > 2), value);
}

Example: Select

@Test
public void injectIntoSelect()
{
    MutableList<Integer> list = Lists.mutable.with(1, 2, 3, 4, 5);
    MutableList<Integer> value = list.injectInto(
        Lists.mutable.empty(),
        (result, each) ->
            each % 2 == 0 ? result.with(each) : result);
    Assert.assertEquals(list.select(each -> each % 2 == 0), value);
}

APIs covered in the examples

  1. injectInto — applies a two-argument function to each element of a collection, starting with an initial value and injecting the result of each application of the function into the next iteration.

Check out this presentation to learn more about the origins, design and evolution of the Eclipse Collections API.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.



Mizuho International Joins Eclipse Foundation

September 11, 2018 07:55 PM

Mizuho International, the securities & investment banking arm of the Mizuho Financial Group, has joined the Eclipse Foundation.


Lightening the Release Review Burden

by waynebeaton at September 11, 2018 03:00 PM

The Eclipse Architecture Council is in the process of changing how the Eclipse Development Process (EDP) defines the Reviews that Eclipse open source projects are required to engage in. Foremost on our minds is the nature of Release Reviews, which the EDP currently requires ahead of all major and minor releases (service releases are excused from the requirement).

The current thinking is that we will decouple Releases from Reviews and instead require a regular (probably annual) equivalent to the Release Review.

This is actually not as big a departure from the current process as you might think. The EDP describes the Release Review as an opportunity “to summarize the accomplishments of the release, to verify that the IP Policy has been followed and all approvals have been received, to highlight any remaining quality and/or architectural issues, and to verify that the project is continuing to operate according to the principles and purposes of Eclipse.” That is, it is far less about the current release than it is about ensuring that the process is being followed and that the project is doing the right sorts of things to attract and grow community.

Likewise, the purpose of an IP Log is not to accurately represent the contents of any particular release, but rather to provide a checkpoint to ensure that the project is correctly following the Eclipse IP Due Diligence Process (so project teams can continue to receive contributions or engage in other activities that might change the IP Log between the point in time when it is reviewed and approved, and the project makes a release).

Eclipse committers are required to observe the Eclipse IP Policy and IP Due Diligence Process at all times, and so our open source projects must always be in a correct state with regard to intellectual property management.

For those of you who have made it this far, it would be great if you could weigh in on Bug 534828 which includes an effort to more precisely define “Release”. We’re tracking all of our plans to update the EDP via Bug 484593. Input is welcome.

 



An $8.7 Billion Shared Investment: Sizing the Economic Value of Eclipse Community Collaboration

by Thabang Mashologu at September 10, 2018 09:08 PM

What is the value of the code contributed by the Eclipse community? Inspired by a 2015 Linux Foundation study of the value of open source collaboration, we estimate that the roughly 162 million total physical source lines of code in Eclipse repositories represent an $8.7 billion USD shared technology investment by our community.

Open source has won because no single company can compete with the rate and scale of disruptive innovation delivered by collaborative ecosystems. In this post, I’ll share the details of our analysis of the economic value of open collaboration under the Eclipse governance model. As borne out by our experience, industry collaboration done properly delivers broad benefits to the ecosystem, including increased business agility, margin preservation, and the riskless sharing of intellectual property. 

The Strategic Value of Eclipse Collaboration

Delivering sustainable software innovation at scale requires an economic investment that is unlikely to be shouldered by one company alone. Industry leaders like Bosch, Google, Amazon, and many others have embraced this reality. Increasingly, collaboration is viewed as a basis for competitive advantage. 

The thinking goes like this: working with other industry players — even your fiercest competitors — on technology below the value line frees up scarce organizational resources to focus on delivering differentiating features faster, thereby igniting revenue growth. By pooling the development effort associated with commodity or backend capabilities, open source participants can save on headcount expenses, lower their development costs, and mitigate business risk by accelerating the market adoption of technologies and standards.

Consider mature industries faced with commoditization of traditional differentiators like the automotive and telecommunications sectors, where incumbents are squeezed by declining profits and the entry of digital-native disruptors like Tesla and Google. Collaborative development done right enables margin preservation and IP sharing without the threat of antitrust and regulatory challenges.

There are plenty of real-world examples of fierce industry rivals collaborating on open source. At the Eclipse Foundation’s openMDM Working Group, BMW, Daimler, Volkswagen, and other automotive OEMs and supply chain partners collaborate on open technologies for the management of standardized measurement data. In the cloud space, Alibaba, Amazon, Google, and Microsoft are all members of the Cloud Native Computing Foundation. Commercially friendly OSS foundations provide a level playing field where everyone can frictionlessly work together on sustainable technology while managing risk and extracting value.

The Eclipse Foundation has a proven track record, earned over 15 years, of enabling open collaboration and innovation. The Eclipse Working Groups provide an open and vendor-neutral governance framework for individuals and organizations to engage in collaborative development. Combined with efficient development processes and rigorous intellectual property services, the end result is clean code that is readily built into commercial products. In essence, we help the Eclipse community deliver open source code that works and scales in the real world.

What’s in a Number?

My colleague Benjamin Cabé recently shared the detailed methodology behind the count of 162 million total physical source lines of code contributed to 330 active Eclipse projects in 1,120 Git repositories (as of August 1, 2018). The goal of our economic analysis was to assess the value to the ecosystem delivered by Eclipse projects. 

The creation of the original Eclipse project was announced in a 2001 IBM press release as a contribution of $40 million of software. At the time, the Eclipse community already involved more than 150 leading software tool suppliers and over 1,200 individual developers from 63 countries.

The Eclipse ecosystem is now supported by over 275 members and more than 1,550 committers. By sizing the software development effort required to recreate the R&D of the last 14 years, which is now available to all, we can estimate the value provided to everyone who consumes these projects, including companies shipping this open code in commercial products.

The findings are remarkable:

  • Using Barry W. Boehm’s well-regarded Basic Constructive Cost Model (COCOMO), the total amount of development effort required to rebuild the R&D available to all is an estimated 59,246 person-years.
  • It would take 1,700 developers about 35 years to rebuild the 162 million total physical source lines of code in the Eclipse code repositories.
  • The total economic value of this work is estimated to be $8.7 billion.
  • Coming in at 2.93 million lines of code, the development costs of Eclipse IoT projects total about $128 million.

Breaking Down the Math

Here are the detailed results of our analysis:

Total Physical Source Lines of Code (SLOC): 162,582,158 (within the Eclipse repositories as of August 2018)
Thousands of Source Lines of Code (KSLOC): 162,582 (KSLOC = SLOC / 1,000)
Development Effort Estimate, Person-Months (Person-Years): 710,948 (59,246) (Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months): 35 (418) (Basic COCOMO model, Months = 2.5 * (Person-Months**0.38))
Development Team Size: 1,699 (Team Size = Effort / Schedule)
Total Estimated Cost to Develop: $8,735,111,546 USD (average salary = $102,470 / year, overhead factor = 0.695; Total Estimated Cost = Schedule Estimate in Years * Development Team Size * $102,470 / 0.695)
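
The arithmetic behind these figures is easy to check. A quick sketch of the Basic COCOMO formulas (organic-mode constants, as used in the studies this analysis follows) reproduces the headline numbers to within rounding error:

```typescript
// Basic COCOMO, organic mode: effort and schedule estimated from KSLOC.
const ksloc = 162582;                                      // 162,582,158 SLOC / 1,000
const personMonths = 2.4 * Math.pow(ksloc, 1.05);          // ~710,900 person-months
const personYears = personMonths / 12;                     // ~59,240 person-years
const scheduleMonths = 2.5 * Math.pow(personMonths, 0.38); // ~418 months, i.e. ~35 years
const teamSize = personMonths / scheduleMonths;            // ~1,700 developers
const loadedSalary = 102470 / 0.695;                       // salary divided by overhead factor
const totalCost = personYears * loadedSalary;              // ~$8.7 billion USD
```

The small differences from the table come from rounding the intermediate values.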


Model Assumptions and Limitations:

  • Salary and fully loaded costs used are courtesy of the US Bureau of Labor Statistics (March and August 2018 data). The Linux Foundation’s 2015 study used $95,280 per year for the median software developer salary and a 0.693 fully-loaded overhead factor.
  • The model assumes all the software development would occur in the US. Clearly software development is global, and labor costs would vary accordingly.
  • It is assumed that all lines of code have equal value. This ignores differences in the languages, functionality, and quality of the code. While counting lines of code allows an estimation of the development effort required to recreate the code within these projects, the languages, the problems being solved, and the resulting code within the various Eclipse projects are quite varied. Arguably, better measures of value and derived benefits are the potential or measured impact on the ecosystem and end users.
  • The figures above do not include the value of the Eclipse intellectual property services. Thus the value of IP cleanliness — a major factor for companies building commercial products — is not captured.
  • Certain Eclipse projects, most notably Eclipse Modeling, include code generation capabilities. For instance, the Eclipse Modeling Framework (EMF) project is a modeling framework and code generation facility for building tools and other applications based on a structured data model. Modeling accounts for 27.1 million of the 162 million total lines of code. See the line count by project below for additional context.
  • This analysis focuses on net additions to the code base. Thus, the amount of effort tied to deleting and changing the code (not to mention modifying dependent code) is not captured in the estimate. 
  • For consistency with the studies by David Wheeler and The Linux Foundation respectively, the line counting approach ignores blank lines and comments. The case can be made that this approach undervalues code readability and maintainability.
  • The analysis assumes the same COCOMO complexity factors and development constants used by Wheeler et al. in their studies.
     

Line Count by Project

Top Level Project Lines of Code
Eclipse RT 54,961,728
Eclipse Technology 28,887,621
Eclipse Modeling 27,140,344
Eclipse Tools 14,214,182
Eclipse Web Tools 9,651,900
Eclipse 6,401,518
Eclipse Enterprise for Java (EE4J) 5,809,126
Eclipse Cloud Development 3,114,768
PolarSys 3,105,229
Eclipse IoT 2,930,217
Eclipse Business Intelligence and Reporting Tools (BIRT) 2,235,624
Eclipse Science 1,670,051
Eclipse Data Tools 939,424
Eclipse Mylyn 767,652
Eclipse SOA 752,774
Grand Total 162,582,158

 

Collaboration in the Real World

The massive shared code investment by the Eclipse community is a testament to the power of collaboration. More than 275 organizations and thousands of developers globally have contributed to the creation of billions of dollars of software value for the ecosystem to leverage. The Eclipse Foundation is proud to support these collective efforts.

Many thanks to Benjamin Cabé, Wayne Beaton, Barry W. Boehm, David Wheeler and the authors of the various Linux Foundation studies which inspired this analysis.
 



C++ and TypeScript - The Next Generation is Now

by Doug Schaefer at September 10, 2018 04:00 AM

We’re coming up on the 10th anniversary of one of my favorite EclipseCon memories, Eclipse Summit Europe 2008. Something about staying up until 5 a.m. local time with a bunch of messed-up Canadians and Europeans who just didn’t want to go to bed makes an event legendary. What made that one special was probably the audience with a legend himself, Dave Thomas, who arguably started this all and who stayed right with us.

But it was his keynote, “Next Generation Embedded Software - The Imperative is Agility!”, that I drew a lot of inspiration from. I can barely remember what it was about, but he seemed to be bashing Java a lot, so I had somewhat tuned it out. Until the last slide or so, where he presented the alternative he controversially suggested would be the next generation: C++ and JavaScript. Wait, what?

I was probably the only person in the room who took that seriously. It was the year that Google had introduced the V8 JavaScript virtual machine. I spent a little time after the conference figuring out how you’d use it with a C++ application and it definitely seemed plausible. What was missing was a user interface and that seemed like an enormous amount of work so I left it aside.

Fast forward a few years, and we began to see people try to solve that problem by slapping Node.js and a browser together to create a desktop application framework. First we had NW.js, which started with WebKit but at some point switched to Chromium. Now we also have Electron, which has done the same but with more separation between Node and Chromium to make it easier, in theory, to keep up with releases of both.

Playing with Electron a couple of years ago, I started to get the feeling that this vision was indeed coming soon, especially the first time I got Electron to load a C++ Node Addon. And now I have just used a C++ addon to solve a performance problem I was having in a VS Code extension (Electron incarnated as an IDE). That sealed it for me.

To show what I mean, I have taken the environment I’m using and created a really simple VS Code extension that has everything one would need to get started. It’s dumb, but it implements an asynchronous native function to add two numbers and shows the result in a VS Code webview panel. It includes a bit of messaging framework to allow for type-safe(r) messaging between the webview and the extension. As usual, it’s available on my GitHub to check out.

That was a bit of a long-winded introduction, but I’ll dive into the technical details for the rest of this article. Needless to say, I’m pretty excited about it.

Believe Me, It Builds

Figuring out the cleanest way to build this thing and bundle everything together was the most challenging part, and probably what I’m most happy with. There are three different platforms at work, and I am able to build them all with a single ‘yarn compile’. And I’m able to work incrementally with minimal fuss: a watch for the extension and view, and a simple incremental build for the native code on demand. And thanks to VS Code’s TypeScript support and the clangd C++ language server, I find errors even before I build, as it should be.

CMake

Let’s start with the native side. Everyone who knows me from my work on the Eclipse CDT knows I’m all about the CMake build tool these days. After years of working with Makefiles that model the system file by file, CMake lets you work at a higher level, where you specify executables and libraries and the dependencies between them. It takes care of the file-level dependencies for you with a lot of magic, but good magic.

To build against the Node.js APIs, you also need to download the header files and, for Windows, a library to build against. That’s tricky to do, so I borrowed a few ideas from the interweb and created a NodeModule.cmake file under the CMake folder. It takes care of downloading and extracting the necessary tarball and sets up the dependencies on it. That makes the CMakeLists.txt file as simple as setting the Electron version for the tarball and the output directory, and then defining your module.

set(ELECTRON_VERSION 2.0.5)
set(NODE_MODULE_OUTPUT_DIR ${CMAKE_SOURCE_DIR}/out)

include(NodeModule)
add_node_module(n2wNative native.cpp)

That places your module, n2wNative.node in this case, into a platform specific bin directory in the out directory of the project ready to import into the extension.

Webpack

Webpack seems to be the leading build framework for web applications. I also noticed that with the latest VS Code release they are now using it for the built-in extensions. Previously, I had a two-step extension build, running webpack for the client side and the TypeScript compiler for the extension. They both have file watchers, but I found the TypeScript one a bit flaky. So bringing them together into the same builder makes things a lot cleaner.

Webpack allows you to have multiple configs. I created one for the client side and one for the extension. The only real difference is the options for the TypeScript compiler to deal with the different module systems between the browser and the extension, which runs on Node.js. I also chunked out the node_modules content for the clients, since you are likely to have multiple of those.

The toughest issue I had was the ‘require’ loading of the native module. It seems webpack tries to guess at the location and changes the require to point to where it thinks the module will be. But the whole idea of this module is that it’s platform specific, and it needs to determine which platform it’s on at run time. I finally ran into __non_webpack_require__, which is a hacky way to tell webpack to leave things alone. TypeScript doesn’t know about that magic, so it needs to be declared.

declare function __non_webpack_require__(path: string): any;

export function add(x: number, y: number): Promise<number> {
    const native = __non_webpack_require__(`./${process.platform}/n2wNative`);
    return native.add(x, y);
}

The next issue I had to deal with was source maps. When I first got things running, VS Code with the Chromium debugger couldn’t figure out my breakpoints. I managed to stumble across a magic command in the Debug Output view, .script, which showed a detailed view of the source maps. Webpack uses a magic webpack:// URL in the source maps, and that seems to be causing the confusion. Luckily, I found out you can control that.

output: {
    path: path.resolve(__dirname, 'out'),
    filename: '[name].js',
    libraryTarget: 'commonjs',
    devtoolModuleFilenameTemplate: '[absolute-resource-path]'
},

devtoolModuleFilenameTemplate lets you manage the path used for your source files. I guess webpack is designed more for web apps, where using the absolute path is weird.

Time for a N-API

Node offers two APIs to hook up your C++ Addon: a scary one and a nice one. The scary one actually uses a lot of the V8 APIs, is heavy C++, and tends to tie you to a specific Node release for some reason. The newer, simpler API introduced in version 8.0 is called N-API, and thankfully VS Code finally switched to Node 8 in version 1.26. It’s a C API and is very reminiscent of Java’s JNI, so I found it very easy to work with. It’s marked experimental in Node 8, but I don’t notice any major changes in version 10.

I didn’t originally intend to go so native. I was working on a tool that scanned a gigabyte-sized binary memory-mapped file, extracting data. I started doing that in TypeScript using Node Buffers and found the performance surprisingly good: under 3 seconds to do a complete scan. But then I added an else clause to the main if statement and the time jumped to around 15 seconds. Really? Did I disturb something in the JIT that killed performance? How do you even predict how that’s going to work?

Since I had already written my own little memory-map function, I decided to rewrite the algorithm in C++ and ended up with around 1 second for the scan. Wow. Being a CDT guy, I know C++ and am pretty comfortable with it. And modern C++ makes dealing with pointers pretty safe and collections fairly easy. For me, the performance gains were well worth it.

As I dug more into the features of N-API, I also discovered a hidden treasure. You can actually run your time consuming native algorithm on a worker thread and then use a JavaScript Promise to return the result when it finishes.

Basically, you create a struct to hold the arguments and result of the algorithm, as well as the ‘work’ object the API gives you and a ‘deferred’ object that you use to resolve or reject the promise. That struct gets passed to a ‘work’ function that runs in a worker thread. When the work finishes, a ‘completed’ function runs on the JavaScript main thread, where you convert the result to a JavaScript value and send it out through the promise via the deferred object. Easy peasy, and pretty powerful.

I won't reproduce the whole file here, but check it out at src/native/native.cpp. It's a bit wordy since it's written in C, but I imagine one could wrap it in some C++ classes and templates to make it easier.

Playing it Safe

When you spend most of your life working in typed languages, first C++ then Java, the prospect of writing a serious app in a dynamically typed scripting language like JavaScript is a bit unsettling. Luckily, I'm jumping in just as TypeScript is maturing. It has really helped me, especially as I learn both the VS Code API and the various npm packages I'm using, thanks to the great IDE features you get with typed languages.

But there are two areas where you end up interacting with the raw JavaScript environment that you have to manage. The first is the native module. Most modules that you get off npm come with TypeScript definition files to make the integration easy. You have to provide something like that for the native module.

I already showed the TypeScript code to do this above. The require call returns a JavaScript object that you create in the module's native init function. There are magic ways to give that object a type, but for now I found it easier to just wrap it with a TypeScript function that manages loading the module and making the function call.
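That wrapper pattern can be sketched roughly like this; the module shape and the `add` function are invented for illustration, and the loader is injectable so the sketch does not depend on an actual compiled .node binary.

```typescript
// Untyped shape that require() hands back for the native module.
// The 'add' function is invented for illustration.
type RawNativeModule = { add(x: number, y: number): number };

let cachedModule: RawNativeModule | undefined;

// In the real extension the loader would be something like
// () => require('../build/Release/native.node'); it is injectable
// here so the sketch can run without an actual native binary.
function loadNative(loader: () => RawNativeModule): RawNativeModule {
    if (!cachedModule) {
        cachedModule = loader();
    }
    return cachedModule;
}

// The typed wrapper is all the rest of the TypeScript code ever sees:
// it hides both the lazy loading and the untyped require() result.
function nativeAdd(x: number, y: number, loader: () => RawNativeModule): number {
    return loadNative(loader).add(x, y);
}
```

Callers get full type checking on `nativeAdd` while the untyped require() result stays confined to one file.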

The other area we need to manage is the communication channel between the extension and the code running in the webview. The webview extension API lets you bind an onDidReceiveMessage callback that gives you the message as a JavaScript object. On the webview client side, 'message' events are dispatched on the window object, where you can pick them up with a listener.

To make the messages type safe, I leveraged TypeScript's discriminated unions to declare the message types in the shared messages.ts file. On the client side, I created a ServerPort class with overloaded methods to make posting messages type safe. The server side just has a switch statement where TypeScript narrows the type based on the case.

    async receiveMessage(request: Request) {
        switch (request.command) {
            case 'addRequest':
                this.postMessage({
                    command: 'addResponse',
                    token: request.token,
                    result: await add(request.x, request.y)
                });
                break;
        }
    }

That's the great thing about TypeScript. You can still drop down to plain JavaScript when you need to. But you have to come up with a strategy to keep that surface area as small as possible.
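As a concrete sketch of that strategy, here is roughly what the shared discriminated-union declarations and a synchronous server-side handler might look like. The `addRequest`/`addResponse` shapes follow the snippet above; the type names and the `handle` function are invented for illustration.

```typescript
// Shared message declarations (these would live in messages.ts).
// The 'command' field is the discriminant of the union.
interface AddRequest {
    command: 'addRequest';
    token: number;
    x: number;
    y: number;
}

interface AddResponse {
    command: 'addResponse';
    token: number;
    result: number;
}

type RequestMessage = AddRequest;   // more request shapes would be |'d in here
type ResponseMessage = AddResponse;

// Server side: switching on 'command' narrows the union, so the
// request fields are fully typed inside each case.
function handle(request: RequestMessage): ResponseMessage {
    switch (request.command) {
        case 'addRequest':
            return {
                command: 'addResponse',
                token: request.token,
                result: request.x + request.y,
            };
    }
}
```

Because the union is exhaustive, adding a new message type makes the compiler flag every switch that doesn't yet handle it.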

Challenges Ahead

Now that I have this running in Visual Studio Code, I don’t see any reason why you couldn’t use this architecture in other environments. It is just node.js and a browser in the end.

For example, I could get my tools running directly in Electron as a stand-alone offering for people who don't want to use VS Code as an editor.

Web IDEs could also benefit. The Theia IDE promises to be compatible with VS Code extensions. It’ll be interesting to try since this is probably the most complex extension I can think of. But other IDEs that have node.js as a server could also work this way.

But one thing is clear. I’m excited! Whether other tools developers will be as excited isn’t as clear. Either way, I’ve found my next generation tools platform. It uses C++ and JavaScript/TypeScript. And that next generation is now.


by Doug Schaefer at September 10, 2018 04:00 AM

How many lines of Open Source code are hosted at the Eclipse Foundation?

September 05, 2018 02:30 PM

As of August 1st, there are 330 active open-source projects and 1120 Git repositories, as for lines of code...


The RSS reader tutorial. Step 2.

by Sammers21 at September 05, 2018 12:00 AM

Quick recap

In the previous step, we successfully implemented the first endpoint of the RSS reader app.

The RSS reader example involves implementing three endpoints. This article is dedicated to implementing the GET /user/{user_id}/rss_channels endpoint.

Before completing this step, make sure you are on the step_2 git branch:

git checkout step_2

Implementing the second endpoint

The second endpoint produces an array of RSS channels for a given user_id.

We need to execute the following two queries:

  1. Fetch RSS links for a given user:
    SELECT rss_link FROM rss_by_user WHERE login = GIVEN_USER_ID ;
  2. Fetch RSS channel details for a given link:
    SELECT description, title, site_link, rss_link FROM channel_info_by_rss_link WHERE rss_link = GIVEN_LINK ;

Implementation

The endpoint allows the front-end app to display the list of RSS feeds a user has subscribed to. When the endpoint is accessed, the AppVerticle#getRssChannels method is called. We can implement this method in this way:

private void getRssChannels(RoutingContext ctx) {
    String userId = ctx.request().getParam("user_id");
    if (userId == null) {
        responseWithInvalidRequest(ctx);
    } else {
        Future<List<Row>> future = Future.future();
        client.executeWithFullFetch(selectRssLinksByLogin.bind(userId), future);
        future.compose(rows -> {
            List<String> links = rows.stream()
                    .map(row -> row.getString(0))
                    .collect(Collectors.toList());

            return CompositeFuture.all(
                    links.stream().map(selectChannelInfo::bind).map(statement -> {
                        Future<List<Row>> channelInfoRow = Future.future();
                        client.executeWithFullFetch(statement, channelInfoRow);
                        return channelInfoRow;
                    }).collect(Collectors.toList())
            );
        }).setHandler(h -> {
            if (h.succeeded()) {
                CompositeFuture result = h.result();
                List<List<Row>> results = result.list();
                List<Row> list = results.stream()
                        .flatMap(List::stream)
                        .collect(Collectors.toList());
                JsonObject responseJson = new JsonObject();
                JsonArray channels = new JsonArray();

                list.forEach(eachRow -> channels.add(
                        new JsonObject()
                                .put("description", eachRow.getString(0))
                                .put("title", eachRow.getString(1))
                                .put("link", eachRow.getString(2))
                                .put("rss_link", eachRow.getString(3))
                ));

                responseJson.put("channels", channels);
                ctx.response().end(responseJson.toString());
            } else {
                log.error("failed to get rss channels", h.cause());
                ctx.response().setStatusCode(500).end("Unable to retrieve the info from C*");
            }
        });
    }
}

This method also uses the selectChannelInfo and selectRssLinksByLogin fields, which should be initialized in the AppVerticle#prepareNecessaryQueries method:

private Future<Void> prepareNecessaryQueries() {
    Future<PreparedStatement> selectChannelInfoPrepFuture = Future.future();
    client.prepare("SELECT description, title, site_link, rss_link FROM channel_info_by_rss_link WHERE rss_link = ? ;", selectChannelInfoPrepFuture);

    Future<PreparedStatement> selectRssLinkByLoginPrepFuture = Future.future();
    client.prepare("SELECT rss_link FROM rss_by_user WHERE login = ? ;", selectRssLinkByLoginPrepFuture);

    Future<PreparedStatement> insertNewLinkForUserPrepFuture = Future.future();
    client.prepare("INSERT INTO rss_by_user (login , rss_link ) VALUES ( ?, ?);", insertNewLinkForUserPrepFuture);

    return CompositeFuture.all(
            selectChannelInfoPrepFuture.compose(preparedStatement -> {
                selectChannelInfo = preparedStatement;
                return Future.succeededFuture();
            }),
            selectRssLinkByLoginPrepFuture.compose(preparedStatement -> {
                selectRssLinksByLogin = preparedStatement;
                return Future.succeededFuture();
            }),
            insertNewLinkForUserPrepFuture.compose(preparedStatement -> {
                insertNewLinkForUser = preparedStatement;
                return Future.succeededFuture();
            })
    ).mapEmpty();
}

Conclusion

In this part, we have successfully implemented the second endpoint, which allows the browser app to obtain channel information for a specific user. To ensure that it is working, point your browser to localhost:8080 and click the refresh button. The channel list should appear immediately.

If you have any problems completing this step, you can check out the step_3 branch, which contains all the changes made in this step:

git checkout step_3

Thanks for reading. I hope you enjoyed this article. See you soon on our Gitter channel!



Talk with your team about EclipseCon Europe 2018

September 04, 2018 03:00 PM

Review the program - talks, keynotes and special events - and register by October 1 for the best rates!


Aliens, Go Home! VS Code-style!

by Doug Schaefer at August 31, 2018 06:01 PM

Looks ridiculous, doesn't it? There is a method to my madness though. Stick with me…

I make no secret of how enthusiastic I am about Visual Studio Code as an IDE platform. I have often commented on my desire to start building tools using web front-end technologies (and not necessarily web back ends). I even prototyped an "Eclipse Two" built on Electron directly. In the end, the Microsoft Zurich team, who also happened to be former leaders at Eclipse, created something similar with a huge community and ecosystem of extensions, and I jumped on the bandwagon.

Being a good code editor and debugger is one thing, but in the end there's more to life as a developer than writing code. Often the systems we work with are more complicated than can be simply represented in text. We need graphics that can abstract away some of the gory details and make system behavior and relationships easier to understand. And that covers both ends, from modeling when creating the system, to tracing when trying to see what it's doing once we've built it.

There are a number of ways you can do graphics in a web front end. The first one I considered was SVG. I like it because it's backed by a data model with an API that lets you change properties programmatically, giving an object-oriented approach to graphics. It's not all good, though, since those objects come at a price in memory and setup time. But if you keep the number of elements under a thousand or so, it's pretty quick.
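To illustrate what "backed by a data model" buys you, here is a toy sketch, deliberately not using a real browser DOM: a plain object model with a setAttribute method, mirroring how you would change an SVG element's properties programmatically.

```typescript
// A toy stand-in for an SVG DOM node: a tag name plus a mutable
// attribute map. Real SVG elements expose the same setAttribute API.
class SvgNode {
    constructor(
        public tag: string,
        public attrs: { [name: string]: string } = {}
    ) {}

    // Mirrors element.setAttribute() on a real SVG element.
    setAttribute(name: string, value: string): void {
        this.attrs[name] = value;
    }

    serialize(): string {
        const attrs = Object.keys(this.attrs)
            .map((k) => ` ${k}="${this.attrs[k]}"`)
            .join('');
        return `<${this.tag}${attrs}/>`;
    }
}

// Change a property programmatically; in a browser, the circle
// would simply be re-rendered with the new radius.
const circle = new SvgNode('circle', { cx: '10', cy: '10', r: '5' });
circle.setAttribute('r', '8');
```

In the browser you would call the same setAttribute on the live SVG element and the rendering updates for free; that is the appeal over raw canvas drawing.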

The other advantage of being in the DOM is that React can handle SVG with its virtual DOM. This speeds up rendering from changes to the underlying data model and figures out the necessary updates for you. It's one of the coolest things in React and why we use it for all our web UI work.

So I set off to Google for examples where people had done that, and I quickly ran across the article series Developing Games with React, Redux, and SVG. It's really cool and proves out my main thesis about handling changing data models. Games, of course, are all about changing data models!

To complete the exercise, I hooked up a simple VS Code extension that opened a webview panel and ran the game in it. I rewrote most of it in TypeScript, since I refuse to give in to the typeless world of JavaScript. I also switched over to MobX, which some of my colleagues were using and which is much easier to deal with than Redux. The whole thing was a joy to work with.

If you're interested, I have pushed my results up to my GitHub repositories. It does look ridiculous running in VS Code. But it shows off the power of the platform. Running in a webview means you're running in a separate process from the rest of VS Code, so whatever you do there doesn't impact the rest of the IDE. From there, you can communicate back with your extension, allowing it to interact with your environment, grab data from systems and files, and feed data back for rendering. I can't wait to see where the community (and myself, for that matter) take this architecture.



MetaModelAgent and Photon me

by tevirselrahc at August 30, 2018 01:12 PM

Adocus, a member of my industrial consortium, has released a version of MetaModelAgent, their DSML tool, for my Photon edition!

You can read more about it on their release page!



Few seats available for the Eclipse Insight in Munich on September 3rd

by Jonas Helming and Maximilian Koegel at August 29, 2018 06:10 PM

We are looking forward to hosting the first Eclipse Insight in Munich next week on September 3rd (5:30 pm – 9 pm). A few seats are still available; for registration, please see here.

The topic of the event will be how to build modeling tools based on open source and Eclipse technology. A special focus will be on the long-term protection of investments in modeling tools in times of technology change, especially regarding web technologies.

In the first part, there will be expert talks and demos about building a modeling tool based on the well-proven Eclipse platform and EMF. In the second part, we will give an outlook on how those tools can be migrated or directly implemented based on web technologies (e.g. based on Eclipse Theia). All speakers will be available for detailed questions and discussions afterwards.

Registration is mandatory. As seats are limited, make sure to secure one of the remaining spots and register here. The event is free of charge, including drinks and snacks.

Please feel free to forward this invitation to interested colleagues.



Control flow graphs in N4JS

by Project N4JS (noreply@blogger.com) at August 29, 2018 11:57 AM

The N4JS IDE comes with several tools to get insights into the source code such as the AST and the control flow. In this post we present the control flow graph view.



When learning a programming language, we probably start with a small Hello World! program. From there on, we learn not only language keywords and libraries, but also gain an implicit understanding of the order in which statements and expressions are executed. This order is called control flow. For instance, we learn the effects of if and for statements, which are also called control statements, since they have a big influence on the execution order. The most difficult statement in this respect is probably the try-finally control statement, which is explained later.


Hello World


function f() {
  console.log("Hello World!");
}

Let's first have a look at Hello World! from a control flow perspective. The source code is simple, but it already consists of three important elements:
  • the method call console.log,
  • the nested argument "Hello World!", and
  • the function body of f() which contains the two elements mentioned above.
The function body is also called control flow container.


The image above shows the control flow graph of the Hello World! example. The method call is separated into the receiver console and its property log. The control flow then proceeds to the argument "Hello World!" and finally reaches the method call of log.


Loops


The example above showed the sequential control flow of some code elements. Control flow gets more interesting with statements that introduce branches, such as loops. In the example below, the control flow of a for loop is shown. To mark the start and end of the function, as well as the body of the loop, the function calls start(), loop() and end() are used.

function f() {
  start();
  for (var i=0; i<2; i++) {
    loop();
  }
  end();
}


The control flow graph of the loop example shows branches and merges. After the condition i<2 is evaluated, either the body is entered via the edge named LoopEnter, or the loop is exited. If the body is entered, the control flow first targets the call to loop() and then goes back to the entry of the condition.


Try-Finally


Finally blocks have tricky semantics since they can be entered and exited in two ways: normally and abruptly. Abrupt control flow occurs after a return, continue, break or throw statement. These introduce a jump that targets either the end of the function or the entry of a finally block. In case the control flow jumps to a finally block, that finally block will be executed normally. However, since the block was entered abruptly, it will also exit abruptly. This means that there will be a second jump from the exit of the finally block to either the end of the function or the entry of the next finally block.


The following example shows this behaviour by using some dead code elements, which have a grey background colour in the graph image. These dead code elements are not reachable, since there is no normal control flow path that exits the try block.


function t() {
  "start";
  try {
    2;
    return;
    3;
  } finally {
    "finally";
  }
  "end";
}


Final catch


In case you ever wondered how a thrown exception can be caught without a catch block, have a look at this final example. It is true that once a finally block is entered abruptly, it can only be exited abruptly. However, the kind of abrupt control flow can change. In the example below, it changes from throwing an exception to breaking the loop. Of course, after the loop is exited due to the break statement, the control flow is normal again and the thrown exception remains without effect. Hence, the last statement "end" is executed, which would otherwise have been skipped.



function t() {
  "start";
  do {
    try {
      2;
      throw "exception";
    } finally {
      break;
    }
  } while (4);
  "end";
}

by Marcus Mews


Gitpod: A One-Click Online IDE

by Moritz Eysholdt at August 28, 2018 04:44 PM

We at TypeFox are thrilled to announce the public beta of Gitpod. So far we have not been particularly loud about it, so it’s likely you haven’t heard about our first product.
Gitpod is an online IDE for GitHub and other Git hosting services. With a single click on a GitHub issue, pull request or branch, Gitpod launches a developer workspace for you. The frontend is the open-source Theia IDE and the backend is a Docker container running in the cloud. Of course, you can bring your own Docker images and launch plenty of simultaneous workspaces. As you may have guessed, language servers add excellent support for most popular programming languages.

Please read the blog post "Gitpod — Online IDE For GitHub" for all the details.

We truly believe Gitpod is a valuable service for all developers on GitHub and, in the future, for users of other Git hosting services.

Gitpod makes Theia IDE available to developers

About one and a half years after we started the Theia project together with our friends from Ericsson, we are proud to see Eclipse Che choosing Theia as its future IDE frontend, and Red Hat and Arm investing significantly by contributing to Theia.

So far it has been a bit hard to get your hands on Theia, as you had to clone the repo, then build and run it locally. Not exactly what you would call easy access. With Gitpod, end users can finally work with Theia on a daily basis with just a single click on any GitHub repo. We are looking forward to working on all the useful feedback we receive.

Gitpod can host your custom IDE

Theia makes it easy to create a branded web-based IDE, enhanced with custom languages and diagramming. Support for custom languages can be plugged in via the Language Server Protocol (LSP). See here for a list of already-available language servers. The LSP is also the mechanism of choice for integrating your Xtext language. For diagramming, there is the Sprotty framework. Sprotty integrates not only with Theia but also with Xtext languages on the server side, and covers the ground from read-only visualizations with excellent layout algorithms to highly interactive graphical modeling environments. Locally, custom IDEs are already usable as Electron apps or via Docker containers.

Starting this year, we offer Gitpod as a hosting service for your custom IDEs in the public cloud. We will also make Gitpod available for on-premises installations via Kubernetes. Please contact us if you are interested in discussing the details.

What does this mean for our clients?

Gitpod adds a new division to TypeFox. You may know TypeFox for building custom products, consulting and support based on open-source projects. We will continue these services, and we are looking forward to the synergies mentioned above. It is exciting to have a new building block at our disposal when creating custom IDEs! Additionally, TypeFox will operate a hosted version of Gitpod in a public cloud.

Work with us!

Theia, Sprotty, Gitpod, Xtext, and many customer projects I can't talk about here have been made possible by amazing people in a great team. If this technology excites you as well, contact us and get paid to work full-time on it. Join our team!

I hope you like Gitpod. Any kind of feedback is very welcome.



EC by Example: Detect

by Donald Raab at August 23, 2018 12:08 AM

Learn how to find the first element of a collection that matches a condition in Java using Eclipse Collections.

Tomato <-> To.ma.to.

Detect / DetectWith

Use detect to find the first element of a collection that matches a given Predicate. Use detectWith to find the first element of a collection that matches a given Predicate2 taking an extra parameter. The method detectWith works well with method references. If no matching element is found, null is returned.

MutableList<String> rainbow = Lists.mutable.with(
        "Red", "Orange", "Yellow", "Green",
        "Blue", "Indigo", "Violet");
String red = rainbow.detect(each -> each.startsWith("R"));
Assert.assertEquals("Red", red);

String red2 = rainbow.detectWith(String::startsWith, "R");
Assert.assertEquals("Red", red2);

Assert.assertNull(rainbow.detect(each -> each.startsWith("T")));

Assert.assertNull(rainbow.detectWith(String::startsWith, "T"));

DetectIfNone / DetectWithIfNone

Use detectIfNone to find the first element of a collection that matches a given Predicate. If there is no match, the result of evaluating the given Function0 is returned. The method detectWithIfNone is similar to detectWith, and also works well with method references.

String violetIfNone =
        rainbow.detectIfNone(
                each -> each.startsWith("T"),
                () -> "Violet");
Assert.assertEquals("Violet", violetIfNone);

String violetIfNone2 =
        rainbow.detectWithIfNone(
                String::startsWith,
                "T",
                () -> "Violet");
Assert.assertEquals("Violet", violetIfNone2);

DetectOptional / DetectWithOptional

The method detectOptional is the equivalent of filter().findFirst() on a Java Stream. This method will return an Optional which can be queried to determine if any element has matched the given Predicate. The method detectWithOptional is similar to detectWith, and will work equally well with method references.

Optional<String> violetOptional =
        rainbow.detectOptional(each -> each.startsWith("T"));
Assert.assertEquals("Violet", violetOptional.orElse("Violet"));

Optional<String> violetOptional2 =
        rainbow.detectWithOptional(String::startsWith, "T");
Assert.assertEquals("Violet", violetOptional2.orElse("Violet"));

APIs covered in the examples

  1. detect / detectWith — finds the first element of a collection that matches a given Predicate or Predicate2. If there is no match, then null is returned.
  2. detectIfNone / detectWithIfNone — finds the first element of a collection that matches a given Predicate or Predicate2. If there is no match, then the result of evaluating the given Function0 is returned.
  3. detectOptional / detectWithOptional — finds the first element of a collection that matches a given Predicate or Predicate2. The result of the call will be an Optional, which can be queried to determine if there was a successful match.

Check out this presentation to learn more about the origins, design and evolution of the Eclipse Collections API.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.

