
GSoC 2019 Summary: Dart support for the Eclipse IDE

August 19, 2019 12:00 AM

Summary of my Summer of Code 2019

This post is a summary of my Summer of Code 2019. My project was to bring Dart development support to the Eclipse IDE. To do so, I created a plugin that consumes the Dart analysis server via LSP4E. It also provides syntax highlighting using TM4E, plus many more features listed below.

Features

The following list showcases the most significant features of the plugin.

  • Syntax highlighting of Dart code
  • Running Dart programs directly from Eclipse
    • Standard & error output in a dedicated console
    • Support for multiple Launch configurations
    • Running single files from the context menu
  • Creating new Dart projects and files
    • Stagehand templates are also supported
  • First class Pub support
    • Automatic synchronization of Pub dependencies when changing the pubspec.yaml file
    • Shortcut for running $ pub get manually
  • Dart preference page
    • Set the location for the Dart SDK that should be used for all actions
    • Choose whether to automatically synchronize the dependencies of a project
  • Usage of the Dart logo for files and launch configurations
  • Import existing Dart projects directly into the workspace (+ automatic dependency synchronization)

Upstream Bug Fixes

During development I encountered several issues in the libraries and tools I was using. Since I was already familiar with them, I took the time to fix the issues directly and provide a patch or PR to the corresponding library.

  • A NullPointerException in TM4E's SymbolsModel#getChildren
  • Adjusted the Eclipse Light syntax theme in TM4E to match the classic Eclipse theme better
  • Using the quick access menu in Eclipse resulted in an Exception. This was caused by LSP4E not adhering to the LSP spec.
  • The textDocument/didSave notification is not supported by the Dart analysis server, but LSP4E sent it anyway, which resulted in an error
  • Another NPE in TM4E's ThemeContribution
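The didSave issue above is a good example of why a language client should consult the server's advertised capabilities before sending optional notifications. The sketch below only illustrates that pattern; the record type is a hypothetical stand-in, not the actual LSP4E/LSP4J classes:

```java
import java.util.Optional;

class DidSaveGuard {
    // Hypothetical stand-in for the relevant slice of the server's
    // advertised capabilities (textDocumentSync.save in the LSP spec).
    record ServerCapabilities(Boolean saveNotificationSupported) {}

    // Only send textDocument/didSave when the server declared support for it;
    // an absent (null) capability is treated as unsupported.
    static boolean shouldSendDidSave(ServerCapabilities caps) {
        return Optional.ofNullable(caps.saveNotificationSupported()).orElse(false);
    }
}
```

With a guard like this in place, a server that never announced save support (like the Dart analysis server) would simply never receive the notification.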

Things Left to do

I have completed all of the goals I set in my initial GSoC proposal. However, a few features and ideas have come up during development that I still plan to take care of.

  • Pub console - Currently there is no way to see the output of the pub commands
  • Flutter support - Flutter apps need to be run using the special $ flutter command suite, instead of the default SDK
  • Webdev support - Dart apps that should be run on the browser need to be run using the $ webdev command line tools

There are also a few subtleties in the user experience that need to be taken care of.


A full list of commits and issues can be found on the project's GitHub repository. Installation instructions can also be found there.

Appreciation

Early on, Lakshminarayana Nekkanti joined the project as a committer. He has been extremely helpful ever since, fixing bugs in the Eclipse platform that had been open for years (Bug 513034) and contributing many features and much knowledge to the plugin. Thank you, Lakshminarayana!

I would also like to thank Lars Vogel, who has been my mentor and helped tremendously when I was unsure what to do.



Sponsor Testimonial: Bosch Software Innovations

by Anonymous at August 15, 2019 02:48 PM

...We find EclipseCon Europe to be the ideal gathering ground for the open source community and the perfect place to share and gain insights into new open source developments and trends. This year, we are contributing six sessions to the conference. We share insights into our approach to open source and take a closer look at Eclipse hawkBit, Eclipse Kuksa, OSGi as well as ways of improving efficiency of open source compliance processes. Feel free to also join us at the IoT playground....



Scaffolding a JSON Forms application with Yeoman

by Jonas Helming and Maximilian Koegel at August 14, 2019 10:15 AM

JSON Forms is a framework for efficiently developing form-based UIs based on JSON Schema.  It provides a simple declarative JSON-based language...

The post Scaffolding a JSON Forms application with Yeoman appeared first on EclipseSource.



Two Years and Fifty Blogs

by Donald Raab at August 09, 2019 03:53 AM

Happy Blogiversary

Grounds for Sculpture, Hamilton Township, NJ

Welcome to Medium

Two years ago, I wrote my first blog. It was about Symmetry in API design.

Symmetric Sympathy

Writing this blog was one of the hardest and most rewarding things I have done in my career as a software developer.

I have been blogging at least once a month ever since. In the following sections I will share the most popular and my personal favorite blogs for each of the three calendar years I have been blogging. Enjoy!

2017 — Finding My Voice

I write a lot about the Java programming language and Eclipse Collections. I am the creator of Eclipse Collections and am still an active Project Lead and Committer for the project at the Eclipse Foundation, so that should help explain why I write about it.

My top blog in 2017 was “Nine Features in Eclipse Collections 9.0”.

Nine Features in Eclipse Collections 9.0

My personal favorite blog in 2017 was “Preposition Preference”.

Preposition Preference

2018 — Finding My Way

In 2018 I was nominated and selected as a Java Champion. I was humbled and honored to be selected into this group of amazingly talented and respected Java luminaries.

My top blog in 2018 was “Ten reasons to use Eclipse Collections”.

Ten reasons to use Eclipse Collections

My personal favorite blog in 2018 was “The 4am Jamestown-Scotland ferry and other optimization strategies”.

The 4am Jamestown-Scotland ferry and other optimization strategies

2019 — Continuing to Tell My Story

I hope to share more about myself and my general views on software development in 2019. I hope to write some blogs about the Smalltalk programming language and environment. I will keep blogging about Eclipse Collections and Java as well I'm sure, especially as we look to the future after the 10.0 release. Java has been changing a lot since Java 8 was released. There is a lot of excitement again around this now twenty-four-year-old programming language.

My top blog in 2019 so far is “Eclipse Collections 10.0 Released”.

Eclipse Collections 10.0 Released

My personal favorite blog so far in 2019 is “Graduating from Minimal to Rich Java APIs”.

Graduating from Minimal to Rich Java APIs

On to 2020 — Finding More Bloggers

I’m heading to Oracle CodeOne in a few weeks to give a talk, meet with old friends, hopefully make some new friends, and spend time teaching and learning as much as I can. I have a new sense of community and purpose since I was selected as a Java Champion. I want to help more bloggers find their voices. It is hard to write, and very hard to write regularly, but it is so critically important to leave a bit of what we know to the current and future generations of developers to learn from.

Thank you for taking the time to read my blogs. I hope you enjoy them and learn something useful from them now and again.

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.



Community Day and Community Evening at EclipseCon Europe 2019

August 07, 2019 08:00 PM

Learn how you can help us plan Community Day and the new Community Evening this year at EclipseCon Europe!


Update for Jakarta EE community: August 2019

August 07, 2019 08:00 PM

There's a lot happening in the Jakarta EE ecosystem so if you want to get a richer insight into Jakarta EE, read on.


Update for Jakarta EE community: August 2019

by Tanja Obradovic at August 06, 2019 03:55 PM

We hope you’re enjoying the Jakarta EE monthly email update, which seeks to highlight news from various committee meetings related to this platform. There’s a lot happening in the Jakarta EE ecosystem so if you want to get a richer insight into the work that has been invested in Jakarta EE so far and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in July: 

EclipseCon Europe 2019: Record-high talk submissions

With your help, EclipseCon Europe 2019 reported record-high talk submissions. Thank you all for proposing so many interesting talks! This is not the only record, though: it seems that the track with the biggest number of submissions is Cloud Native Java so if you want to learn how to develop applications and cloud native microservices using Java, EclipseCon Europe 2019 is the place to be. The program will be announced the week of August 5th. 

Speaking of EclipseCon Europe, you don’t want to miss the Community Day happening on October 21; this day is jam-packed with peer-to-peer interaction and community-organized meetings that are ideal for Eclipse Working Groups, Eclipse projects, and similar groups that form the Eclipse community. Plus, there’s also a Community Evening planned for you, where like-minded attendees can share ideas, experiences and have fun! That said, in order to make this event a success, we need your help. What would you like the Community Day & Evening to be all about? Check out this wiki first, then make sure to go over what we did last year. And don’t forget to register for the Community Day and/or Community Evening! 

EclipseCon Europe will take place in Ludwigsburg, Germany on October 21 - 24, 2019. 

JakartaOne Livestream: Registration is open!

Given the huge interest in the Cloud Native Java track at EclipseCon Europe 2019, it's safe to say that JakartaOne Livestream, taking place on September 10, is the fall virtual conference to attend, spanning multiple time zones. Plus, the date coincides with the highly anticipated Jakarta EE 8 release, so make sure to save the date; you're in for a treat! 

We hope you’ll attend this all-day virtual conference as it unfolds; this way, you get the chance to interact with renowned speakers, participate in interesting interactions and have all your questions answered during the interactive sessions. Registration is now open so make sure to secure your spot at JakartaOne Livestream! 

No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more. The program will be published soon. Stay tuned!

Jakarta EE 8 release

On September 10, join us in celebrating the Jakarta EE 8 release at JakartaOne Livestream!  

That being said, head over to GitHub to keep track of all the Eclipse EE4J projects. Noticeable progress has been made on Final Specifications Releases, Jakarta EE 8 TCK jobs, Jakarta Specification Project Names, and Jakarta Specification Scope Statements so make sure to check out the progress and contribute!  

Jakarta EE Trademark guidelines: Updates 

Version 1.1 of the Jakarta EE Trademark Guidelines is out! This document supplements the Eclipse Foundation Guidelines for Eclipse Logos & Trademarks Policy to address the permitted usage of the Jakarta EE Marks, including the following names and/or logos: 

  • Jakarta EE

  • Jakarta EE Working Group

  • Jakarta EE Member 

  • Jakarta EE Compatible 

The full guidelines on the usage of the Jakarta EE Marks are described in the Jakarta EE Brand Usage Handbook.

EFSP: Updates

Version 1.2 of the Eclipse Foundation Specification Process was approved on June 30, 2019. The EFSP leverages and augments the Eclipse Development Process (EDP), which defines important concepts, including the Open Source Rules of Engagement, the organizational framework for open source projects and teams, releases, reviews, and more.

JESP: Updates

Jakarta EE Specification Process v1.2 was approved on July 16, 2019. The JESP has undergone a few modifications, including 

  • changed ballot periods for the progress and release (including service releases) reviews from 30 to 14 days

  • the Jakarta EE Specification Committee now adopts the EFSP v1.2 as the Jakarta EE Specification Process

TCK process finalized 

The TCK process has been finalized. The document sheds light on aspects such as the materials a TCK must possess in order to be considered suitable for delivering portability, the process for challenging tests and how to resolve them and more.     

This document defines:

  • Materials a TCK MUST possess to be considered suitable for delivering portability

  • Process for challenging tests and how these challenges are resolved

  • Means of excluding released TCK tests from certification requirements

  • Policy on improving TCK tests for released specifications

  • Process for self-certification

Jakarta EE Community Update: July video call

The most recent Jakarta EE Community Update meeting took place in mid-July; the conversation included topics such as Jakarta EE 8 release, status on progress and plans, Jakarta EE TCK process update, brief update re. transitioning from javax namespace to the jakarta namespace, as well as details about JakartaOne Livestream and EclipseCon Europe 2019.   

The materials used in the Jakarta EE community update meeting are available here and the recorded Zoom video conversation can be found here.  

Please make sure to join us for the August 14 community call.

Cloud Native Java eBook: Coming soon!

What does cloud native Java really mean to developers? What does the cloud native Java future look like? Where is Jakarta EE headed? Which technologies should be part of your toolkit for developing cloud native Java applications? 

All these questions (and more!) will be answered soon; we’re developing a downloadable eBook on the community's definition and vision for cloud native Java, which will become available shortly before Jakarta EE 8 is released. Stay tuned!

Eclipse Newsletter: Jakarta EE edition  

The Jakarta community has made great progress this year and the culmination of all this hard work is the Jakarta EE 8 release, which will be celebrated on September 10 at JakartaOne Livestream.

In honor of this milestone, the next issue of the Eclipse Newsletter will focus entirely on Jakarta EE 8. If you’re not subscribed to the Eclipse Newsletter, make sure to do that before the Jakarta EE issue is released - on August 22! 

Meet the Jakarta EE Working Group Committee Members 

It takes a village to create a successful project and the Jakarta EE Working Group is no different. We’d like to honor all those who have demonstrated their commitment to Jakarta EE by presenting the members of all the committees that work together toward a common goal: steer Jakarta EE toward its exciting future. As a reminder, Strategic members appoint their representatives, while the representatives for Participant and Committer members were elected in June.

The list of all Committee Members can be found here.

Steering Committee 

Will Lyons (chair) - Oracle, Ed Bratt - alternate

Kenji Kazumura - Fujitsu, Michael DeNicola - alternate

Dan Bandera - IBM, Ian Robinson - alternate

Steve Millidge - Payara, Mike Croft - alternate

Mark Little - Red Hat, Scott Stark - alternate

David Blevins - Tomitribe, Richard Monson-Haefel - alternate

Martijn Verburg - London Java Community - Elected Participant Representative

Ivar Grimstad - Elected Committer Representative

Specifications Committee 

Kenji Kazumura - Fujitsu, Michael DeNicola - alternate

Dan Bandera - IBM, Kevin Sutter - alternate

Bill Shannon - Oracle, Ed Bratt - alternate

Steve Millidge - Payara, Arjan Tijms - alternate

Scott Stark - Red Hat, Mark Little - alternate

David Blevins - Tomitribe, Richard Monson-Haefel - alternate

Ivar Grimstad - PMC Representative

Alex Theedom - London Java Community - Elected Participant Representative

Werner Keil - Elected Committer Representative

Paul Buck - Eclipse Foundation (serves as interim chair, but is not a voting committee member)

Marketing and Brand Committee

Michael DeNicola - Fujitsu, Kenji Kazumura - alternate 

Dan Bandera - IBM, Neil Patterson - alternate

Ed Bratt - Oracle, David Delabassee - alternate

Dominika Tasarz - Payara, Jadon Orglepp - alternate

Cesar Saavedra - Red Hat, Paul Hinz - alternate

David Blevins - Tomitribe, Jonathan Gallimore - alternate

Theresa Nguyen - Microsoft - Elected Participant Representative

VACANT - Elected Committer Representative

Thabang Mashologu - Eclipse Foundation (serves as interim chair, but is not a voting committee member)

Jakarta EE presence at events and conferences: July overview 

Cloud native was the talk of the town in July. Conferences such as JCrete, J4K, and Java Forum Stuttgart, to name a few, were all about open source and cloud native and how to tap into this key approach for IT modernization success. The Eclipse Foundation and the Jakarta EE Working Group members were there to take a pulse of the community to better understand the adoption of cloud native technologies. 

For example, IBM’s Graham Charters and Steve Poole featured Jakarta EE and Eclipse MicroProfile in demonstrations at the IBM Booth at OSCON; Open Source Summit 2019 participants should expect another round of Jakarta EE and Eclipse MicroProfile demonstrations from IBM representatives. 



Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. Don’t forget to follow us on Twitter to get the latest news and updates!

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  

 



Eclipse Collections 10.0 Released

by Donald Raab at August 04, 2019 11:49 PM

The features you want, with the collections you need.

Thank you to the contributors

Eclipse Collections 9.2 was released in May 2018. The 9.x releases were extremely feature rich and had many contributions from the community. The 10.0 release is even more so. There were 18 contributors in the 10.0 release. This is outstanding! Thank you so much to all of the contributors who donated their valuable time to making Eclipse Collections more feature rich and even higher quality. Your efforts are truly appreciated.

Too many features for one blog

There are so many features included in Eclipse Collections 10.0, that it is going to take me a bit longer to write good examples leveraging all of them. So I have decided to break this release blog into a few parts. This part will purely be a summary.

Update: Detailed Feature Blogs

Part 1 — Covers Features 1–10

Part 2 — Covers Features 11–20

Part 3 — Covers Features 21–26

The Feature Summary

  1. Specialized Interfaces for MultiReaderList/Bag/Set
  2. Implement Stream for Primitive Lists
  3. Implement toMap with target Map
  4. Implement MutableMapIterable.removeAllKeys
  5. Implement RichIterable.toBiMap
  6. Implement Multimap.collectKeyMultiValues
  7. Implement fromStream(Stream) on collection factories
  8. Implement LazyIterate.cartesianProduct
  9. Add updateValues to primitive maps
  10. Implement MutableMultimap.getIfAbsentPutAll
  11. Implement Bag.collectWithOccurrences
  12. Add reduce and reduceIfEmpty for primitive iterables
  13. Add <type1><type2>To<type1>Function for primitives
  14. Add ofInitialCapacity to primitive maps
  15. Implement countByEach on RichIterable
  16. Implement UnifiedSetWithHashingStrategy.addOrReplace
  17. Implement UnmodifiableMutableOrderedMap
  18. Implement withAllKeyValues on mutable primitive maps.
  19. Add ability to create PrimitivePrimitive/PrimitiveObject/ObjectPrimitiveMap from Iterable
  20. Implement ofInitialCapacity and withInitialCapacity in HashingStrategySets
  21. Implement getAny on RichIterable
  22. Revamp and standardize resize/rehash for all primitive hash structures
  23. Implement factory methods to convert Iterable<BoxedPrimitive> to PrimitiveStack/Bag/List/Set
  24. Implement ImmutableSortedBagMultimapFactory in Multimaps
  25. Implement a Map factory method that takes a Map parameter.
  26. Wildcard type in MutableMultimap.putAllPairs & add methods

Check out the latest JavaDoc for the new features.
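To give a flavour of the list above, feature 15 (countByEach on RichIterable) counts every key produced when a function maps each element to several keys. The real API lives on RichIterable and returns an Eclipse Collections Bag; the plain-JDK sketch below only illustrates the semantics and is not the library's implementation:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

class CountByEachDemo {
    // Count each key produced by applying keysOf to every element
    // (Bag-like semantics, modelled here as a Map of counts).
    static <T, K> Map<K, Integer> countByEach(Iterable<T> items, Function<T, Iterable<K>> keysOf) {
        Map<K, Integer> counts = new HashMap<>();
        for (T item : items) {
            for (K key : keysOf.apply(item)) {
                counts.merge(key, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        // Each word contributes one count per character it contains.
        Map<Character, Integer> counts = countByEach(
                List.of("abc", "abd"),
                w -> w.chars().mapToObj(c -> (char) c).toList());
        System.out.println(counts); // {a=2, b=2, c=1, d=1}
    }
}
```

The library version additionally gives you the full Bag API (occurrencesOf, topOccurrences, and so on) on the result.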

Other Improvements

  1. Improved Test Coverage
  2. Many build improvements
  3. Remove duplicate code
  4. Removed some deprecated classes
  5. Improved generics
  6. Some new benchmark tests
  7. And much more!

Thank you

From all the contributors and committers… thank you for using Eclipse Collections. We hope you enjoy all of the new features and improvements in the 10.0 release.

I’ll be publishing detailed examples for the new features in the 10.0 release in a few blogs. Stay tuned!

Eclipse Collections is open for contributions. If you like the library, you can let us know by starring it on GitHub.


Eclipse Collections 10.0 Released was originally published in Oracle Developers on Medium.



Dartboard: Pub support, Stagehand templates & better theming

August 03, 2019 12:00 AM

Automatic pub.dev dependency synchronization, stagehand templates and theme improvements for TM4E.

Pub dependency synchronization

pub.dev is the main source of packages for Dart projects. Dependencies are added to a pubspec.yaml file and the $ pub get command is used to download the packages from the registry. Since most projects require at least a few dependencies, this step must be taken before the project compiles without errors.

To ease this process a little, the command is now automatically run when saving the pubspec.yaml file in Eclipse. With this, required dependencies are automatically downloaded when a project is imported or when a new package is added to the pubspec.yaml file.

If you don't want this behavior the feature can be disabled from the preference page. Manually running the synchronization is also supported from the context menu of a project.
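Under the hood, triggering the synchronization amounts to running the pub executable from the configured SDK in the directory that contains the pubspec.yaml. A minimal sketch of that invocation; the SDK layout and executable names here are assumptions for illustration, not Dartboard's actual code:

```java
import java.io.File;
import java.nio.file.Path;
import java.util.List;

class PubSync {
    // Build the command line for `pub get`, resolving the executable
    // inside the configured SDK's bin directory (Windows uses pub.bat).
    static List<String> pubGetCommand(Path sdkLocation) {
        boolean windows = System.getProperty("os.name").toLowerCase().contains("win");
        Path pub = sdkLocation.resolve("bin").resolve(windows ? "pub.bat" : "pub");
        return List.of(pub.toString(), "get");
    }

    // Run the synchronization in the project directory, forwarding output
    // to the parent process so the user can see what pub is doing.
    static Process runPubGet(Path sdkLocation, File projectDir) throws Exception {
        return new ProcessBuilder(pubGetCommand(sdkLocation))
                .directory(projectDir)
                .inheritIO()
                .start();
    }
}
```

In the plugin itself this would run inside an Eclipse Job so the UI stays responsive and progress can be reported.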

Pub context handler

The current progress of the synchronization is reported to the Progress view.

Pub progress

Stagehand

Stagehand is a CLI tool, written in Dart, that generates new projects from a list of templates. There are different project types to choose from, like Flutter, AngularDart or just console applications. After a template is generated, the project contains a pubspec.yaml file with all necessary dependency information and various entry files required by the type of project.

This tool is now supported graphically by the New Dart Project wizard provided by Dartboard. Under the hood, the plugin uses the exact executable from pub.dev, and by automatically updating it we make sure that new templates can be used immediately.

This is how the New Dart Project wizard looks now: Stagehand

When the wizard is first opened (after a fresh IDE start), Stagehand is updated and its templates are fetched. Once that job finishes, the available templates are added to the combo box. If no template should be used, the Use Stagehand template checkbox can be unchecked and an empty project is generated.

Subsequent usages of the wizard use cached versions of the templates list to not strain your network or the pub.dev registry too much.
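The caching behaviour described above can be sketched as a simple memoizing supplier: the first call fetches the template list, later calls reuse the cached result. The fetcher here is a placeholder, not Dartboard's real implementation:

```java
import java.util.List;
import java.util.function.Supplier;

class TemplateCache {
    private final Supplier<List<String>> fetcher;
    private volatile List<String> cached;

    TemplateCache(Supplier<List<String>> fetcher) {
        this.fetcher = fetcher;
    }

    // First call invokes the fetcher (e.g. a network request to pub.dev);
    // subsequent calls return the cached list without touching the network.
    List<String> templates() {
        List<String> result = cached;
        if (result == null) {
            synchronized (this) {
                if (cached == null) {
                    cached = fetcher.get();
                }
                result = cached;
            }
        }
        return result;
    }
}
```

A real implementation would also want an explicit refresh action and perhaps a time-based expiry, but the core idea is the same.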

Project import

Not every Dart project was created from Eclipse, though. To use Dartboard with existing Dart projects, they must be imported into the workspace. For this task we now provide a shortcut to import Dart projects from the Import Project context menu entry.

Currently it simply opens the Projects from Folder or Archive wizard. This wizard, however, allows specifying a ProjectConfigurator that takes care of selecting the folders that should be imported as projects. Our own configurator traverses all children looking for pubspec.yaml files; every folder that contains such a file is considered a separate project.
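The detection rule is simple: any folder that directly contains a pubspec.yaml is treated as a project root. A stand-alone sketch of that traversal using java.nio (illustrative only, not the actual ProjectConfigurator code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

class DartProjectScanner {
    // Every directory under root that directly contains a pubspec.yaml
    // is considered a separate Dart project.
    static List<Path> findProjects(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths
                    .filter(p -> p.getFileName().toString().equals("pubspec.yaml"))
                    .map(Path::getParent)
                    .toList();
        }
    }
}
```

The Eclipse configurator does the equivalent walk over the folder selected in the import wizard and pre-selects each matching folder as a project.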

Eclipse light theme for TM4E

TM4E can be used to apply different syntax highlighting to the editor. We provide a configuration file that specifies how certain words inside the editor should look. It also gives the option to select what theme to use. There are different light and dark themes available, like Solarized Light or the classic Eclipse Java editor theme.

I provided a patch to add some missing styles to the latter to make it look more like the classic theme. This is what it looks like now: Eclipse light theme

See eclipse/tm4e#215.

Wrap up

With two weeks to go until the end of Summer of Code 2019, there is not much left for me to do to fulfill my proposed timeline. One major thing that is still missing is automated tests. I have started work on them now and will continue for the last two weeks.

This will also be my last update post on Summer of Code as the next post will be a summary of my whole summer.



The Wait Is Over!

by Anonymous at August 01, 2019 09:08 PM

The 2019 program has been chosen. Visit this page to see the list of accepted talks and tutorials.

Congratulations to the speakers, and thank you to all who submitted. The program committee can now take a well-earned break.

The detailed schedule will be published soon.



KubeCon Shanghai 2019: Cloud Native at China Scale

by Chris Aniszczyk at July 29, 2019 04:08 PM

Last month, we held KubeCon + CloudNativeCon in Shanghai, one of the largest open source events in China. Recently, we published the conference transparency report detailing how the 3500-person event went, but I'd like to share a couple of takeaways as someone who has been traveling to China quite a bit over the last few years.

Cloud Native at China Scale

The scale of operating a popular service in China can be wild when you serve a billion users. This forces many of these companies to operate in a cloud native fashion, similar to their internet scale peers in Silicon Valley. I strongly believe we will see more interesting open source technology born in China over the next decade as they deal with scaling issues, similar to how a lot of internet scale open source technology was born from Google and other SV companies that had to support a larger user base. I highly recommend this great interview from Kevin Xu, which highlights some of the scale and open source projects coming out of China.

Also, Ant Financial joined CNCF as a Gold End User member, which is indicative of the trend of Chinese companies supporting the open source they depend on and sharing the lessons they have learned. For those that aren't aware, Ant Financial is the largest mobile payments company in the world (Alipay), running on Kubernetes and other CNCF projects. You can read their CNCF case study here, describing how they run tens of thousands of Kubernetes nodes serving nearly a billion Alipay customers.

Giving Back: China: #2 contributor to Kubernetes

For those who aren’t aware, China is the second largest contributor to Kubernetes (and third to CNCF projects overall).

DevStats: https://k8s.devstats.cncf.io/d/50/countries-stats?orgId=1

It’s always great to have consumers contribute code back, as that’s one important way open source stays sustainable in the long run.

On the whole, it was fantastic to spend time with our cloud native community in Shanghai, and we look forward to coming back to China next year; stay tuned for details on KubeCon + CloudNativeCon 2020 China! For now, we look forward to seeing many folks at KubeCon + CloudNativeCon 2019 San Diego, which is gearing up to be one of the largest open source events in the world.



Pimping the status line

by Andrey Loskutov (noreply@blogger.com) at July 28, 2019 04:50 PM

This weekend I tried to write a test for the Eclipse debug hover that required knowing the exact position of the selected text, somewhere in the middle of the editor. If you think this is easy, go and figure out in Eclipse at which offset your cursor is: surprisingly, there is no obvious way to do so!
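The offset such a test needs is just the document-relative character index of the caret. Given the document text and a zero-based (line, column) pair, it can be computed by hand, as in this hypothetical helper (not part of the platform APIs discussed here):

```java
class CaretOffset {
    // Convert a zero-based (line, column) position to an absolute character
    // offset by counting '\n' line terminators in the document text.
    static int toOffset(String document, int line, int column) {
        int offset = 0;
        for (int i = 0; i < line; i++) {
            int nl = document.indexOf('\n', offset);
            if (nl < 0) {
                throw new IllegalArgumentException("line out of range: " + line);
            }
            offset = nl + 1;
        }
        return offset + column;
    }
}
```

For example, in the document "first\nsecond\nthird", line 1 column 3 is offset 9 (the 'o' in "second"); having the status line show this number directly removes the guesswork.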

So I've used some 3rd party editor that was kind enough to provide this information in the status line. Why shouldn't this be offered by Eclipse itself?

So I've created an enhancement request and pushed a patch that adds both features to Eclipse. By default, the status line now shows the cursor position, and if the editor has something selected, the number of characters in the selection (this also works in block selection mode). Both new additions to the status line can be disabled via preferences.


 If there is no selection, cursor offset is shown:


Both new additions to the status line can be disabled via preferences:





Incompatible Eclipse workspaces

by Andrey Loskutov (noreply@blogger.com) at July 28, 2019 04:35 PM


Eclipse has a mechanism to recognize if the workspace to be used was created with an older Eclipse version.
In that case, to be safe, Eclipse shows a dialog like this:




As of today (Eclipse 4.12 M1), if you click on the "Cancel" button, Eclipse will behave differently depending on the use case's "history":

A. If the workbench was not started yet:

  1. If Eclipse was started without "-data" argument and user selects incompatible workspace, Eclipse will show "Older Workspace Version" dialog above and by clicking on "Cancel" it will offer workspace selection dialog.
  2. If Eclipse was started with "-data" argument pointing to the incompatible workspace, Eclipse will show "Older Workspace Version" dialog above and by clicking on "Cancel" it will terminate (instead of offering to select another workspace).

B. If the workbench was started:

  1. If user selects compatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts fine.
  2. If user selects incompatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts, shows the "Older Workspace Version" dialog above and by clicking on "Cancel" it will terminate (instead of offering to select another workspace).

This behavior is inconvenient (to say the least), so we have bug 538830.
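The cases above can be distilled into a small decision function; this is a model of the observed 4.12 M1 behaviour for the "Cancel" click, not actual platform code:

```java
class WorkspaceCancelBehavior {
    enum Action { SHOW_CHOOSER, TERMINATE }

    // What Eclipse 4.12 M1 does when the user cancels the
    // "Older Workspace Version" dialog, per cases A and B above:
    // only case A.1 (no workbench yet, no -data argument) offers
    // the workspace selection dialog; all other paths terminate.
    static Action onCancel(boolean workbenchStarted, boolean startedWithDataArg) {
        if (!workbenchStarted && !startedWithDataArg) {
            return Action.SHOW_CHOOSER; // case A.1
        }
        return Action.TERMINATE;        // cases A.2 and B.2
    }
}
```

Proposal #1 below effectively replaces this function with one that always returns SHOW_CHOOSER.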

Fix Proposal #1

The proposal is that, independently of the way Eclipse was started, if the user clicks the "Cancel" button in the "Older Workspace Version" dialog, we always show the default workspace selection dialog (instead of terminating):



In the dialog above, the user has two choices: launch any workspace, or finally terminate Eclipse via "Cancel".

Proposal #1 Matrix

A1. If the workbench was not started yet:

  1. If Eclipse was started with or without "-data" argument and user selects incompatible workspace, Eclipse will show "Older Workspace Version" dialog above and by clicking on "Cancel" it will offer workspace selection dialog. To terminate Eclipse, user has to click "Cancel" in the workspace selection dialog.

B1. If the workbench was started:

  1. If user selects compatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts fine.
  2. If user selects incompatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts, shows the "Older Workspace Version" dialog above and by clicking on "Cancel" it will offer to select another workspace.

Fix Proposal #2

The proposal is that, depending on the way Eclipse was started, if the user clicks the "Cancel" button in the "Older Workspace Version" dialog, we may or may not show the default workspace selection dialog. So what happens after the "Older Workspace Version" dialog is shown is not predictable just by looking at this dialog; it depends on how the dialog was reached.

Proposal #2 Matrix

A2. If the workbench was not started yet:

  1. If Eclipse was started without the "-data" argument and the user selects an incompatible workspace, Eclipse will show the "Older Workspace Version" dialog above, and clicking on "Cancel" will offer the workspace selection dialog.
  2. If Eclipse was started with a "-data" argument pointing to an incompatible workspace, Eclipse will show the "Older Workspace Version" dialog above, and clicking on "Cancel" it will terminate (instead of offering to select another workspace).

B2. If the workbench was started:

  1. If the user selects a compatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts fine.
  2. If the user selects an incompatible workspace in the "File -> Switch Workspace" dialog, Eclipse restarts, shows the "Older Workspace Version" dialog above, and clicking on "Cancel" will offer to select another workspace.

Similarities

Both proposals fix the second bullet in use case B2.

Differences

We see that Proposal #1 has no second bullet for the A1 case and is always consistent in how the UI behaves after clicking on "Cancel" in the "Older Workspace Version" dialog. Proposal #2 fixes only the B2 use case; the inconsistency in UI behavior for the second bullet of the A2 use case remains.

Technical (biased) notes:

  1. Proposal #1 is implemented and the patch is available, along with the demo video. To test it live, one has to build Eclipse - but here I have SDK binaries with the patch applied. The patch is relatively simple and only affects Platform UI internal code.
  2. Proposal #2 is not implemented yet. I assume it will require more work than patch #1. We would need a new command line argument for Eclipse to differentiate between "don't terminate even if an incompatible -data is supplied, because I'm calling you from the UI" and "please terminate if incompatible data is supplied, because I'm calling you from the command line". A new command line argument for Eclipse means not just a Platform UI internal change; it also requires changes in Equinox and Help, and it is a public interface change.

Question to the masses!

We want to know your opinion - which proposal should be implemented?

Please reply here or on the bug 538830.

Update:

This version was implemented and is available in 4.13 M1:



by Andrey Loskutov (noreply@blogger.com) at July 28, 2019 04:35 PM

Eclipse Vert.x 4 milestone 1 released!

by vietj at July 26, 2019 12:00 AM

We are extremely pleased to announce the first 4.0 milestone release of Eclipse Vert.x.

Vert.x 4 is the evolution of the Vert.x 3.x series that will bring key features to Vert.x.

This release aims to provide a reliable distribution of the current development of Vert.x 4 for people that want to try it and provide feedback.

Core futurisation

Vert.x 4 extends the 3.x callback asynchronous model to a future/callback hybrid model.

public interface NetClient {

  // Since 3.0
  void connect(int port, String host, Handler<AsyncResult<NetSocket>> handler);

  // New in 4.0
  Future<NetSocket> connect(int port, String host);
}

In this first milestone, only the Vert.x Core library contains the hybrid model. More Vert.x components will be futurized soon and you will be able to try them in the next milestones.
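To illustrate the callback/future duality outside of Vert.x, here is a minimal sketch using plain JDK types (CompletableFuture stands in for the Vert.x Future; the class and method bodies are hypothetical, not the actual Vert.x implementation):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class HybridClient {
    // 3.x style: the result is delivered to a callback
    public void connect(int port, String host, Consumer<String> handler) {
        handler.accept("connected to " + host + ":" + port);
    }

    // 4.0 style: the result is exposed as a future the caller can compose
    public CompletableFuture<String> connect(int port, String host) {
        CompletableFuture<String> future = new CompletableFuture<>();
        connect(port, host, future::complete); // reuse the callback variant
        return future;
    }

    public static void main(String[] args) {
        HybridClient client = new HybridClient();
        client.connect(80, "example.com", result -> System.out.println(result));
        client.connect(80, "example.com").thenAccept(System.out::println);
    }
}
```

Both calls produce the same result; the future-returning variant simply layers a composable view on top of the callback, which is the essence of the hybrid model.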

Tracing

Instrumenting an asynchronous application for distributed tracing is quite challenging because most tracing libraries rely on thread-local storage. While this works reasonably well in a blocking application, it does not work for an asynchronous application.

This assumes that the application control flow (i.e. threads) is what matters, although what really matters is the application request flow (e.g. the incoming HTTP request).
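The following self-contained Java snippet (an illustration, not Vert.x code) shows why thread-local storage breaks down: a trace id stored by the "request" thread is invisible to the pool thread that runs the asynchronous continuation.

```java
import java.util.concurrent.CompletableFuture;

public class TraceLoss {
    // Tracing libraries typically stash the current trace/span id here
    static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    public static void main(String[] args) {
        TRACE_ID.set("trace-42"); // set on the "request" thread

        // The continuation runs on a different (pool) thread...
        String seen = CompletableFuture
                .supplyAsync(() -> String.valueOf(TRACE_ID.get()))
                .join();

        System.out.println("on request thread: " + TRACE_ID.get()); // trace-42
        System.out.println("in async callback: " + seen);           // null
    }
}
```

Reifying the request flow means propagating this context explicitly along the chain of handlers instead of relying on the identity of the current thread.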

We improved Vert.x 4 to reify the request flow, making it possible to integrate popular tracing tools such as Zipkin or OpenTracing. Vert.x performance is legendary, and we made sure that this has no overhead out of the box (tracing is disabled by default).

We provide support for these two popular libraries under the Vert.x Tracing umbrella.

Other changes

  • Groovy has been simplified in Vert.x 4 to remove code generation that was not really needed in practice
  • The original Redis client, deprecated in 3.7, has been removed and replaced by the new Redis client
  • The following components have reached their end of life and have been pruned:
    • MySQL / PostgreSQL async client replaced by the Vert.x SQL Client (since 3.8)
    • AMQP bridge replaced by the Vert.x AMQP Client (since 3.7)

Ramping up to Vert.x 4

Instead of developing all new features exclusively in Vert.x 4, we introduce some of these features in the 3.x branch so the community can benefit from them. The Vert.x 4 development focuses on more fundamental changes that cannot be done in the 3.x series.


This is the first milestone of Vert.x 4; we aim to release Vert.x 4 by the end of this year, and you can of course expect more milestones to outline the progress of the effort.

Finally

The deprecations and breaking changes can be found on the wiki.

For this release there are no Docker images.

The release artifacts have been deployed to Maven Central and you can get the distribution on Maven Central.

Most importantly the documentation has been deployed on this preview web-site https://vertx-ci.github.io/vertx-4-preview/docs/

That’s it! Happy coding and see you soon on our user or dev channels.



The State of Java in Flathub

by Mat Booth at July 22, 2019 03:00 PM

What's the deal with Java in Flathub?



Eclipse Vert.x 3.8.0 released!

by vietj at July 19, 2019 12:00 AM

We are extremely pleased to announce that the Eclipse Vert.x version 3.8.0 has been released.

This is an important release that introduces a few changes ramping up to Vert.x 4 expected by the end of this year.

The Reactive SQL Client

The client is the evolution of the legendary Reactive PostgreSQL Client and provides:

  • The Reactive PostgreSQL Client aka Vert.x PostgreSQL Client
  • The Reactive MySQL Client aka Vert.x MySQL Client

These implementations provide truly high-performance, non-blocking access to PostgreSQL and MySQL.

To use these new modules, add the following to the dependencies section of your Maven POM file:

<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-pg-client</artifactId>
  <version>3.8.0</version>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-mysql-client</artifactId>
  <version>3.8.0</version>
</dependency>

Or, if you use Gradle:

compile 'io.vertx:vertx-pg-client:3.8.0'
compile 'io.vertx:vertx-mysql-client:3.8.0'

Then you are good to go!

// Connect options
PgConnectOptions connectOptions = new PgConnectOptions()
  .setPort(5432)
  .setHost("the-host")
  .setDatabase("the-db")
  .setUser("user")
  .setPassword("secret");

PgPool client = PgPool.pool(connectOptions, new PoolOptions().setMaxSize(5));

client.query("SELECT * FROM users WHERE id='julien'", ar -> {
  if (ar.succeeded()) {
    RowSet result = ar.result();
    System.out.println("Got " + result.size() + " rows ");
  } else {
    System.out.println("Failure: " + ar.cause().getMessage());
  }
});

Likewise you can achieve the same for MySQL:

MySQLConnectOptions connectOptions = new MySQLConnectOptions()
  .setPort(3306)
  .setHost("the-host")
  .setDatabase("the-db")
  .setUser("user")
  .setPassword("secret");

MySQLPool client = MySQLPool.pool(connectOptions, new PoolOptions().setMaxSize(5));

client.query("SELECT * FROM users WHERE id='julien'", ar -> {
  if (ar.succeeded()) {
    RowSet result = ar.result();
    System.out.println("Got " + result.size() + " rows ");
  } else {
    System.out.println("Failure: " + ar.cause().getMessage());
  }
});

The Reactive SQL Client brings you next-generation database access; it is certainly the most exciting thing happening in the reactive database access space.

Future API improvements

In this release we updated the Vert.x Future interface to expose completion methods in a new Promise interface.

This improves the design of the Future API by making Promise the write side of an asynchronous result and Future its read side.

While there is little use for this in Vert.x 3.x, this has an impact on Vert.x 4.

Consequently, some method signatures have been changed to use Promise instead of Future, as explained on this page.
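The read/write split can be mimicked with plain JDK types (an analogy only, not the Vert.x API): CompletableFuture plays the Promise role because it exposes completion methods, while handing out only a CompletionStage gives consumers a read-oriented, Future-like view.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class AsyncOperation {
    // The producer keeps the writable side (the "promise")...
    private final CompletableFuture<String> promise = new CompletableFuture<>();

    // ...and hands consumers only the readable side (the "future")
    public CompletionStage<String> future() {
        return promise;
    }

    public void succeed(String value) {
        promise.complete(value);
    }

    public static void main(String[] args) {
        AsyncOperation op = new AsyncOperation();
        op.future().thenAccept(v -> System.out.println("got: " + v));
        op.succeed("done"); // prints: got: done
    }
}
```

The class name AsyncOperation is made up for this sketch; the point is only that separating the completing side from the observing side keeps the consumer API free of write methods.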

Upgrading to 3.8

Upgrading to 3.8.0 might require a few changes in your application; you can read this page to understand the impact of the 3.8 release on your application upgrade.

And more…

Here are some other important improvements you can find in this release:

  • Cassandra Client gets out of tech preview
  • Jackson upgrade to 2.9.9 and databind 2.9.9.1
  • And obviously we have the usual bug fixes!

Finally

The 3.8.0 release notes can be found on the wiki, as well as the list of deprecations and breaking changes

Docker images are available on Docker Hub.

The Vert.x distribution can be downloaded on the website but is also available from SDKMan and HomeBrew.

The event bus client using the SockJS bridge is available from:

The release artifacts have been deployed to Maven Central and you can get the distribution on Bintray.

That’s it! Happy coding and see you soon on our user or dev channels.



React App Development in N4JS (Chess Game Part 1)

by n4js dev (noreply@blogger.com) at July 16, 2019 09:47 AM

React is a popular JavaScript library created by Facebook widely used for developing web user interfaces. In this post we introduce a tutorial on how to develop a chess game based on React, JSX and N4JS. The full tutorial is available (and playable) at eclipse.org/n4js and the sources can be found at github.com/Eclipse/n4js-tutorials.


Chess game implemented in N4JS with React

The chess game app implements the following requirements:
  • When the chess application is started, a chess board of 8x8 squares shall be shown containing 16 white pieces and 16 black pieces in their initial positions.
  • A player in turn shall be able to use the mouse to pick one of the pieces that she/he wants to move. A picked piece shall be clearly recognisable. Moreover, to aid players, especially beginners, whenever a piece is picked, all possible valid destination squares shall be visually highlighted as well.
  • In addition to the game board, there shall be a game information area that shows which player is in turn. Moreover, the game information area shall show a complete history of the game, recording each move made by the players. As a bonus, jumping back to a previous state of the history shall be possible.

In the tutorial you will learn how to use npm, webpack and React to develop a web application with N4JS and the N4JS IDE. Most of the tutorial will elaborate on specific parts of the implementation and explain for example the graphical representation of the chess board and chess pieces, and how to use React to model the UI. Also, it will explain the game logic, i.e. how possible moves for the different piece types are computed, how the turn history is maintained, and how the end of the game (i.e. a win situation) is detected. In the end, the tutorial will make suggestions on how to improve the chess game e.g. by adding support for the en passant move.
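To give a flavour of the move computation discussed in the tutorial, here is a hypothetical sketch in Java (the tutorial itself uses N4JS): a sliding piece such as the rook collects destination squares by walking each direction until it leaves the board or hits another piece (capture handling omitted for brevity).

```java
import java.util.ArrayList;
import java.util.List;

public class RookMoves {
    // Destination squares for a rook on (row, col) of an 8x8 board;
    // occupied[r][c] marks squares blocked by any piece.
    public static List<int[]> rookMoves(int row, int col, boolean[][] occupied) {
        List<int[]> moves = new ArrayList<>();
        int[][] directions = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        for (int[] d : directions) {
            int r = row + d[0], c = col + d[1];
            // slide until leaving the board or hitting a piece
            while (r >= 0 && r < 8 && c >= 0 && c < 8 && !occupied[r][c]) {
                moves.add(new int[]{r, c});
                r += d[0];
                c += d[1];
            }
        }
        return moves;
    }

    public static void main(String[] args) {
        // A rook on an empty board always reaches exactly 14 squares
        System.out.println(rookMoves(3, 3, new boolean[8][8]).size()); // 14
    }
}
```

The same sliding loop works for bishops and queens with different direction sets, which is roughly the kind of factoring the tutorial's game logic discussion covers.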

Have fun with implementing this game!

by Minh Quang Tran


Update for Jakarta EE community: July 2019

by Tanja Obradovic at July 15, 2019 04:19 PM

Two months ago, we launched a monthly email update for the Jakarta EE community which seeks to highlight news from various committee meetings related to this platform. There are a few ways to get richer insight into the work that has been invested in Jakarta EE so far, so if you’d like to learn more about Jakarta EE-related plans and get involved in shaping the future of Cloud Native Java, read on. 

Without further ado, let’s have a look at what happened in June: 

JakartaOne LiveStream: All eyes on Cloud Native Java

Are you interested in the current state and future of Jakarta EE? Would you like to explore other related technologies that should be part of your toolkit for developing Cloud Native Java applications? Then JakartaOne Livestream is for you! No matter if you’re a developer or a technical business leader, this virtual conference promises to satisfy your thirst for knowledge with a balance of technical talks, user experiences, use cases and more.  

You should join the JakartaOne Livestream speaker lineup if you want to 

  • Show the world how you and/or your organization are using Jakarta EE technologies to develop cutting-edge solutions. 

  • Demonstrate how Jakarta EE and Java EE features can be used today to develop cloud native solutions. 

This one-day virtual conference, which takes place September 10, 2019, is currently accepting submissions from speakers so if you have an idea for a talk that will educate and inspire the Jakarta community, now’s the time to submit your pitch!  The deadline for submissions is today, July 15, 2019. 

Note: All the JakartaOne Livestream sessions and keynotes are chosen by an independent program committee made up of volunteers from the Jakarta EE and Cloud Native Java community: Reza Rahman, who is also the program chair, Adam Bien, Arun Gupta, Ivar Grimstad, Josh Juneau, and Tanja Obradovic.

*As this inaugural event is a one-day event only, the number of accepted sessions is limited. Submit your talk now!  

Even though all the talks will be recorded and made available later on the Jakarta EE website, make sure to attend the virtual conference in order to directly interact with the speakers. We do hope you will attend “live”, as it will lead to more questions and more interactive sessions. 


Jakarta EE 8 release and progress

Are you keeping track of Eclipse EE4J projects on GitHub? Have you noticed that the Jakarta EE Platform Specifications are now available in GitHub? If not, please do! Also, please check out the creation and progress of the specification projects, which will be used to follow the process of converting the "Eclipse Project for ..." projects into specification projects, setting them up for specification work as defined by the Eclipse Foundation Specification Process, and the Specification Document Names.

Noticeable progress has been made on Jakarta EE 8 TCK jobs, Jakarta Specification Project Names, and Jakarta Specification Scope Statements so head over to GitHub to discover all the improvements and all the bits and pieces that have already been resolved.  

Work on the TCK process is in progress, with Scott Stark, Vice President of Architecture at Red Hat, leading the effort. The TCK process document v 1.0 is expected to be completed in the very near future. The document will shed light on aspects such as the materials a TCK must possess in order to be considered suitable for delivering portability, the process for challenging tests and how to resolve them and more. 

Jakarta EE 8 is expected to be released on September 10, 2019, just in time for JakartaOne Livestream.  

Javax package namespace discussions

The specification committee has put out two approaches regarding restrictions on javax package namespace use for the community to consider, namely Big Bang and Incremental. 

Based on the input we got from the community and discussions within the Working Group, the specification committee has not yet reached consensus on the approach to be taken until the work on binary compatibility is further explored. With that in mind, the Working Group members will invest time in the technical approach for binary compatibility and then propose/decide on the option that is best for customers, vendors, and developers.

Please refer to David Blevins’ presentation from the Jakarta EE Update call on June 12th, 2019.

If you want to dive deeper into this topic, David Blevins has written a helpful analysis of the javax package namespace matter, in which he answers questions like "If we rename javax.servlet, what else has to be renamed?" 

JCP Copyright Licensing request: Your assistance in this matter is greatly appreciated

As part of Java EE’s transfer to the Eclipse Foundation under the Jakarta EE name, it is essential to ensure that the Foundation has the necessary rights so that the specifications can be evolved under the new Jakarta EE Specification Process. For this, we need your help!

We are currently requesting copyright licenses from all past contributors to Java EE specifications under the JCP; we are reaching out to all companies and individuals who made contributions to Java EE in the past to help out, execute the agreements and return them back to the Eclipse Foundation. As the advancement of the specifications and the technology is at stake, we greatly appreciate your prompt response. Oracle, Red Hat, IBM, and many others in the community have already signed an agreement to license their contributions to Java EE specifications to the Eclipse Foundation. We are also counting on the JCP community to be supportive of this request.

For more information about this topic, read Tanja Obradovic’s blog. If you have questions regarding the request for copyright licenses from all past contributors, please contact mariateresa.delgado@eclipse-foundation.org.

Election results for Jakarta EE working group committees

The nomination period for elections to the Jakarta EE committees is now closed. 

Almost all positions have been filled, with the exception of the Committer representative on the Marketing Committee, due to lack of nominees.   

The representatives for 2019-20 on the committees, starting July 1, 2019, are: 

Participant Representative:

STEERING COMMITTEE - Martijn Verburg (London Java Community)

SPECIFICATIONS COMMITTEE - Alex Theedom (London Java Community)

MARKETING COMMITTEE - Theresa Nguyen (Microsoft)

Committer Representative:

STEERING COMMITTEE - Ivar Grimstad

SPECIFICATIONS COMMITTEE - Werner Keil

MARKETING COMMITTEE - Vacant

Jakarta EE Community Update: June video call

The most recent Jakarta EE Community Update meeting took place in June; the conversation included topics such as Jakarta EE 8 progress and plans, headway with specification name changes/ specification scope definitions, TCK process update, copyright license agreements, PMC/ Projects update, and more. 

The materials used in the Jakarta EE community update meeting are available here, and the recorded Zoom video conversation can be found here.

Please make sure to join us for the July 17th call.

EclipseCon Europe 2019: Call for Papers open until July 15

You can still submit your proposals to be part of EclipseCon Europe 2019’s speaker lineup. The Call for Papers (CFP) is closing soon so if you have an idea for a talk that will educate and inspire the Eclipse community, now’s the time to submit your talk! The final submission deadline is July 15. 

The conference takes place in Ludwigsburg, Germany on October 21 - 24, 2019. 


Jakarta EE presence at events and conferences: June overview


Eclipse DemoCamp Florence 2019

Tomitribe: presence at JNation in Portugal 

 

Thank you for your interest in Jakarta EE. Help steer Jakarta EE toward its exciting future by subscribing to the jakarta.ee-wg@eclipse.org mailing list and by joining the Jakarta EE Working Group. 

To learn more about the collaborative efforts to build tomorrow’s enterprise Java platform for the cloud, check out the Jakarta Blogs and participate in the monthly Jakarta Tech Talks. Don’t forget to subscribe to the Eclipse newsletter!  


 

 



Commercial-Grade Collaboration at the Eclipse Foundation

by Thabang Mashologu at July 15, 2019 03:04 PM

When it comes to the digital economy, business runs on open source. That’s because open source is the best way to deliver large-scale business innovation and value at the pace customers expect. We’ve just released a Business of Open Source eBook that is essential reading for leaders in the age of digital disruption who are considering how to maximize their returns from open source participation.

 

For the last 15 years, companies ranging from startups to industry leaders the likes of Bosch, Broadcom, Fujitsu, Google, IBM, Microsoft, Oracle, Red Hat, SAP, and more have collaborated under the Eclipse governance model to advance open source projects and create value for their stakeholders. In this latest publication, we explore the role of open source as a pillar for transformation initiatives and the unique position of the Eclipse Foundation as the home of community-driven, code-first, and commercial-ready open source technologies.

 

Featuring interviews from leading open source industry experts, this eBook sheds light on how hundreds of organizations have leveraged the Foundation’s clear, vendor-neutral rules for intellectual property sharing and decision-making, business-friendly licensing, and ecosystem development and marketing services to accelerate market adoption, mitigate business risk, and harness open source for business growth while giving back to the developer community.

 

Deborah Bryant, Senior Director, Open Source Program Office at Red Hat, puts it this way in the eBook: “The Eclipse Foundation has a rich history of being an industry disrupter...It distinguishes itself in its long history and deep roots with large industry players. The Foundation has really been driven by engineers for engineers, but also as an honest broker of discussions with the business of these big companies that are doing very large-scale projects.”

 

Are you leveraging all that open source has to offer? Do you understand the value of participating in open source to develop customer-centric products and services faster? Do you recognize the scalability of open source, the ability to innovate on business models, and the ability to collaborate with a global developer community? Congratulations, you’re what we like to call an entr<open>eur. To get the most out of your stake in open source, it's time to consider joining a commercially-friendly open source foundation like ours.

 

To learn more about the business value of open collaboration at the Eclipse Foundation, visit entropeneur.org. In addition to the eBook, you’ll find video success stories from the Eclipse community, an infographic summarizing the role and benefits of participating in an open source foundation, and an informative slide deck that you can use to make the case for joining the Eclipse Foundation. Many thanks to Deborah Bryant, Todd Moore, and Tyler Jewell for contributing their expertise and insights to the eBook.

 

Let us know what you think and be sure to join the entrepreneurial open source conversation on Twitter @EclipseFdn and share your open source success story using #entropeneur.



Industrial-Scale Collaboration for the Business Win

by Mike Milinkovich at July 15, 2019 02:40 PM

Marc Andreessen once famously said, “Software is eating the world.” He was right, software gobbled up industry sectors as varied as financial services, automotive, mining, healthcare, and entertainment. Companies of all sizes have leveraged software to improve their business processes and adapt products to a digital economy. And then a funny thing happened: open source ate software.

From startups to the world’s large corporations, commercial software is built on and with open source. In fact, open source now comprises 80 to 90 percent of the code in a typical software application. Today, most companies ship commercial products based on open source. If software is the engine of industrial-scale digital transformation, open source is the rocket fuel.

The fact is, no single company can compete with the rate and scale of disruptive innovation delivered by diverse open source communities. Not only has open source proven to be the most viable way of delivering complex platform software, but open source tenets like transparency, community-focus, inclusion, and collaboration have been adopted by organizations for building customer-centric strategies and cultures. According to research from Harvard Business School, firms contributing to open source see as much as a 100 percent productivity boost.

Nowadays, organizations collaborate at open source foundations to gain a competitive edge. Industry leaders leverage participation in open source foundations to accelerate the market adoption of technologies, improve time to market, and achieve superior interoperability. At the Eclipse Foundation over the last 15 years, industry leaders like Bosch, Broadcom, Fujitsu, Google, IBM, Microsoft, Oracle, Red Hat, SAP, and hundreds more have collaborated under the Eclipse governance model to drive shared innovation and create value within a sustainable ecosystem.

Today, we are thrilled to release the Business of Open Source eBook focused on how successful entrepreneurs are leveraging all that open source has to offer to drive digital disruption within business-friendly open source foundations like the Eclipse Foundation. We call this class of innovators entr<open>eurs.

Entr<open>eurs understand the value of open source participation to develop products faster, mitigate risk, and recruit talent to gain a competitive edge. They fundamentally recognize the role of vendor-neutral, community-driven, and commercially-friendly open source foundations like ours to foster industry-scale collaboration, anti-trust compliance, IP cleanliness, and ecosystem development and sustainability.

As Todd Moore, IBM’s Vice President of Open Technology, explains in the eBook, “being a disruptor generally means that you have to move very quickly. You don’t develop all of the technologies that you’re employing. You’ve got enough mastery over them to quickly be able to assemble them. You’re using automation and deployment strategies that allow you to rapidly cycle through the code. What you start with and what you end up with at the end of the string can radically change.”

Download the Business of Open Source eBook today to learn how to innovate with confidence by giving your mission-critical projects a proper home at the Eclipse Foundation. Thank you to Deborah Bryant, Todd Moore, and Tyler Jewell for contributing their insights and expertise to the eBook. Let us know what you think and be sure to join the entrepreneurial open source conversation on Twitter @EclipseFdn and share your open source success story using #entropeneur.

To learn more about the business value of open collaboration at the Eclipse Foundation, visit entropeneur.org to explore our other commercial open source resources, including video success stories featuring Eclipse community members. We’ve also developed an infographic summarizing the benefits and advantages of participating in an open source foundation, and slide deck that you can use to make the case for joining the Eclipse Foundation.



EMF Forms 1.21.0 Feature: Multi Edit for Tables and Trees

by Jonas Helming and Maximilian Koegel at July 15, 2019 10:29 AM

EMF Forms makes it easy to create forms that are able to edit your data based on an EMF model. To...

The post EMF Forms 1.21.0 Feature: Multi Edit for Tables and Trees appeared first on EclipseSource.



Papyrus SysML 1.6 available from the Eclipse Marketplace.

by tevirselrahc at July 12, 2019 02:03 PM

I should have mentioned yesterday that Papyrus SysML 1.6 is available from the Eclipse Marketplace at https://marketplace.eclipse.org/content/papyrus-sysml-16



Papyrus 4.4 is available

by tevirselrahc at July 12, 2019 07:15 AM

I’m a bit late with this posting…better late than never!

A new version of Papyrus, 4.4, is available:

SysML 1.6 (a forum topic will be sent when the Marketplace entry is available)

  • SysML 1.6 profile done
  • The SysML requirement diagram shall be implemented
  • The SysML Parametric diagram shall be implemented
  • The SysML BDD shall be implemented
  • The SysML IBD shall be implemented
  • The SysML requirement table shall be implemented
  • The SysML Graphical element type shall be implemented
  • The SysML AF shall be implemented
  • The SysML allocation Matrix shall be implemented
  • The element types of SysML 1.6 shall be implemented
  • Make SysML 1.6 open source
  • The SysML model explorer customization shall be implemented
  • Add written OCL constraints
  • Implement E3 of SysML 1.6
  • Update SysML 1.6 diagram of profile
  • Add Icon for conjugated Interface block
  • Add compartment of Conjugated Interfaceblock inside BDD
  • The SysML Junit Test shall be implemented
  • Papyrus shall support the migration from SysML 1.4 to 1.6

Papyrus toolsmith

Validation of plugins:

  • You have created your profile to customize Papyrus, but forgot the extension point, build.xml, or dependencies. We have done work to validate not only the profile but also the plugin that contains the profile.
  • The same work has also been done for plugins that contain elementTypes models.

Improved developer experience when using the plugin org.eclipse.papyrus.infra.core.sasheditor

  • Decreased the usage of internal Eclipse code.
  • Papyrus developed, at the beginning, a new kind of editor component, the sasheditor. To make it more stable, we have asked Eclipse to open its API in order to improve integration with Eclipse.
  • Dedicated APIs have been created from use cases in order to help developers access this graphical composite: add a new editor inside Papyrus, get the active editor…
  • These use cases will be published inside a plugin developer doc: it will be like a Javadoc that lists the use cases and references the APIs that implement them.

Model2Doc

  • Papyrus will provide a documentation generator targeting LibreOffice files (odt).
  • This generator will allow the user to describe how to traverse the UML model to create the document.
  • This generator will allow the user to define a document template to use for the generation.
  • This generator will support image and table insertion.

Go try it and send me your comments!

HAVE FUN!



Announcing Eclipse Ditto Release 0.9.0

July 10, 2019 04:00 AM

Today the Eclipse Ditto team proudly presents its second release 0.9.0.

The topics of this release in a nutshell were:

  • Memory improvements for huge amounts (multi million) of digital twins which are held in memory
  • Adding metrics and logging around the connectivity feature in order to enable operating connections to foreign systems/brokers via APIs
  • Enhancing Ditto’s connectivity feature by additionally being able to connect to Apache Kafka
  • Performance improvements of Ditto’s search functionality
  • Stabilization of cluster bootstrapping
  • Refactoring of how the services configurations are determined
  • Addition of a Helm template in order to simplify Kubernetes based deployments
  • Contributions from Microsoft in order to ease operating Eclipse Ditto on Microsoft Azure

Please have a look at the 0.9.0 release notes for more detailed information on the release.

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Docker images have been pushed to Docker Hub:





The Eclipse Ditto team



Building UIs with EASE

by Christian Pontesegger (noreply@blogger.com) at July 08, 2019 06:54 PM

You probably used EASE before to automate daily tasks in your IDE or to augment toolbars and menus with custom functionality. But so far scripts could not be used to build UIs. This changed with the recent contribution of the UI Builder module.

What it is all about
The UI Builder module allows you to create views and dialogs with pure script code in your IDE. It is great for custom views that developers do not want to put into their products, for rapid prototyping, and even for mocking.

The aim of EASE is to hide layout complexity as much as possible and provide a simple, yet flexible way to implement typical UI tasks.

Example 1: Input Form
We will start by creating a simple input form for address data.

loadModule("/System/UI Builder");
createView("Create Contact");

setColumnCount(2);
createLabel("First Name:");
var txtFirstName = createText();
createLabel("Last Name:");
var txtLastName = createText();
This snippet will create a dynamic view as shown below:
The renderer used will apply a GridLayout. By setting the columnCount to 2 we may simply add our elements without providing any additional layout information - a simple way to create basic layouts.

If needed EASE provides more control by providing layout information when creating components:

createView("Create Contact");
createLabel("First Name:", "1/1 >x");
var txtFirstName = createText("2-4/1 o!");
createLabel("Last Name:", "1/2 >x");
var txtLastName = createText("2-4/2 o!");
This creates the same view as above, but now with detailed layout information.
As an example, "1/2 >x" means: first column, second row, horizontal alignment right, vertical alignment middle. Full documentation of the syntax is provided in the module documentation (hover over the UI Builder module in the Modules Explorer view).

Now let's create a combo viewer to select a country for the address:
cmbCountry = createComboViewer(["Austria", "Germany", "India", "USA"])
Simple, isn't it?

So far we did not need to react to any of our UI elements. The next step is to create a button, which needs some kind of callback action:
createButton("Save 1", 'print("you hit the save button")')
createButton("Save 2", saveAddress)

function saveAddress() {
    print("This is the save method");
}
The first version of the button adds the callback code as a string argument. When the button gets pressed, the callback code will be executed. You may call any script code that the engine is capable of interpreting.

The second version looks a bit cleaner, as it defines a function saveAddress() which is called on a button click. Note that we provide a function reference to createButton().

View the full example of this script on our script repository. In addition to some more layouting it also contains a working implementation of the save action to store addresses as JSON data files.

Interacting with SWT controls

The saveAddress() method needs to read data from the input fields of our form. This could be done using
var firstName = txtFirstName.getText();
Unfortunately SWT Controls can only be queried in the UI thread, while the script engine is executed in its own thread. To move code execution to the UI thread, the UI module provides a function executeUI(). By default this functionality is disabled as a bad script executed in the UI thread might stall your Eclipse IDE. To enable it you need to set a checkbox in Preferences/Scripting. The full call then looks like this:
loadModule("/System/UI")
var firstName = executeUI('txtFirstName.getText();');

Example 2: A viewer for our phone numbers

Now that we are able to create some sample data, we also need a viewer for our phone numbers. Assuming we are able to load all our addresses into an array, the only thing we need is a table viewer to visualize our entries. The following 2 lines will do the job:
var addresses = readAddresses();
var tableViewer = createTableViewer(addresses)
Where readAddresses() collects our *.address files and stores them into an array.

The viewer now works; however, we still need to define how our columns shall be rendered.
createViewerColumn(tableViewer, "Name", createLabelProvider("getProviderElement().firstName + ' ' + getProviderElement().lastName"))
createViewerColumn(tableViewer, "Phone", createLabelProvider("getProviderElement().phone"))
Whenever a callback needs a viewer element, getProviderElement() holds the actual element.
We are done! 3 lines of code for a TableViewer does not sound too bad, right? Again a full example is available on our script repository. It automatically loads *.address files from your workspace and displays them in the view.

Example 3: A workspace viewer

We had a TableViewer before, now let's try a TreeViewer. As a tree needs structure, we need to provide a callback to calculate child elements from a given parent:
var viewer = createTreeViewer(getWorkspace().getProjects(), getChildren);

function getChildren() {
    if (getProviderElement() instanceof org.eclipse.core.resources.IContainer)
        return getProviderElement().members();

    return null;
}
So simple! The full example looks like this:
Example 4: Math function viewer

The last example demonstrates how to add a custom Control to a view.
For plotting we use the Charting module that is shipped with EASE. The source code should be pretty much self-explanatory.

Some Tips & Tricks

  • Layouting is dynamic.
    Unlike the Java GridLayout, you do not need to fill all cells of your layout. The EASE renderer automatically fills empty cells with placeholders.
  • Elements can be replaced.
    If you use coordinates when creating controls, you may easily replace a given control with another one. This simplifies the process of layouting (e.g. if you experiment with alignments) and even allows a view to dynamically change its components depending on some external data/events.
  • Full control.
    While some SWT methods do not have a corresponding script function, all SWT calls may still be used, as the create* methods expose the underlying SWT instances.
  • Layout help.
    To simplify layouting, use the showGrid() function. It displays cell borders that help you see where rows and columns start and end.




Eclipse Milo 0.3, updated examples

by Jens Reimann at July 06, 2019 08:22 PM

A while back I wrote a blog post about OPC UA using Milo and added a bunch of examples to get you started. Time passed, and now Milo 0.3.x is released with a changed API, so those examples no longer work. Not too much has changed, but the experience of running into compile errors isn’t a good one. Finally I found some time to update the examples.

This blog post will focus on the changes compared to the old blog post. As the old blog post is still valid, I thought it might make sense to keep it and introduce the changes in Milo here. The examples repository, however, has been updated to show the new APIs on the master branch.

Making contact

This is the first situation where you run into the changed API: getting the endpoints. Although the new code is not much different, the old code will no longer work:

List<EndpointDescription> endpoints =
  DiscoveryClient.getEndpoints("opc.tcp://localhost:4840")
    .get();

When you compare that to the old code, you will notice that a list is now used instead of an array, and the class name changed. Not too bad.

Also, the way you create a new client instance with Milo 0.3.x is a bit different now:

OpcUaClientConfigBuilder cfg = new OpcUaClientConfigBuilder();
cfg.setEndpoint(endpoints.get(0)); // please do better, and not only pick the first entry

OpcUaClient client = OpcUaClient.create(cfg.build());
client.connect().get();

Using the static create method instead of the constructor allows for a bit more processing before the class instance is actually created. This new method may also throw an exception now. Handling this in an async way isn’t too hard when you are using Java 9+:

public static CompletableFuture<OpcUaClient> createClient(String uri) {
  return DiscoveryClient
    .getEndpoints(uri) // look up endpoints from remote
    .thenCompose(endpoints -> {
      try {
        return CompletableFuture.completedFuture(
            OpcUaClient.create(buildConfiguration(endpoints)) // "buildConfiguration" should pick an endpoint
        );
      } catch (final UaException e) {
        return CompletableFuture.failedFuture(e);
      }
    });
}

That’s it? That’s it!

Well, pretty much. However, we have only been looking at the client side of Milo. Implementing your own server requires using the server-side API, and that changed much more. But to be fair, the changes improve the situation a lot and make things much easier to use.

Milo examples repository

As mentioned, the examples in the repository ctron/milo-ece2017 have been updated as well. They also contain the changed server side, which changed a lot more than the client side.

When you compare the two branches master and milo-0.2.x, you can see the changes I made for updating to the new version.

I hope this helps a bit in getting started with Milo 0.3.x. And please be sure to read the original post, giving a more detailed introduction, as well.

The post Eclipse Milo 0.3, updated examples appeared first on ctron's blog.



Short-Circuit Evaluation in N4JS

by n4js dev (noreply@blogger.com) at July 04, 2019 01:01 PM

Short-circuit evaluation is a popular feature of many programming languages and also part of N4JS. In this post, we show how the control-flow analysis of the N4JS-IDE deals with short-circuit evaluation, since it can have a substantial effect on the data flow and execution of a program.



Short circuit evaluation is a means to improve runtime performance when evaluating boolean expressions. This improvement is a result of skipping code execution. The example above shows an if-statement whose condition consists of two boolean expressions that combine the values of 1, 2 and 3, and its control flow graph. Note that the number literals are placeholders for more meaningful subexpressions.

First the logical and, then the logical or gets evaluated: (1 && 2) || 3. In case the expression 1 && 2 evaluates to true, the evaluation of the subclause 3 will be skipped and the evaluation of the entire condition results in true. This skipping of nested boolean expressions is called short circuit evaluation.

However, instead of skipping expression 3, expression 2 might be skipped. In case condition 1 does not hold, the control flow will continue with condition 3 right away. This control flow takes place completely within the if-condition, whereas the former short circuit targets the then block.

The reasoning behind short circuit evaluation is that the skipped code does not affect the result of the whole boolean expression. If the left hand side of the logical or expression evaluates to true, the whole or expression also does. Only if the left hand side is false will the right hand side be evaluated. Conversely, the right hand side of a logical and expression is skipped in case the left hand side evaluates to false.
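
These rules can be observed directly in plain JavaScript (the semantics are the same in N4JS); the probe helper below is only an illustration device, not part of the original example:

```javascript
const seen = [];
// probe records which subexpression was actually evaluated
const probe = (name, value) => { seen.push(name); return value; };

// (1 && 2) || 3: with "1" and "2" truthy, "3" is skipped
let result = (probe("1", true) && probe("2", true)) || probe("3", true);
console.log(seen.join(" "), "->", result); // 1 2 -> true

// with "1" falsy, "2" is skipped instead and "3" decides the result
seen.length = 0;
result = (probe("1", false) && probe("2", true)) || probe("3", true);
console.log(seen.join(" "), "->", result); // 1 3 -> true
```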


Side Effects


Risks of short circuit evaluation might arise in case a subexpression has side effects: These side effects will not occur if the subexpression is skipped. However, a program that relies on side effects of expressions inside an if-condition can be called fragile (or adventurous). In any case it is recommended to write side-effect free conditions.


Have a look at the example above. In case variable i has a value of zero, the right hand side expression i++ is executed; otherwise, it is skipped. The side effect here is the post-increment of the value of i. If the value of i is other than zero, this value will be printed out. Otherwise, the value will be incremented but not printed. The control flow shows this behavior with the edge starting at i and targeting the symbol console.
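
Rendered as plain code (JavaScript syntax; the condition is deliberately fragile to illustrate the point), the behavior looks like this:

```javascript
// A condition that relies on a side effect (i++) in its right-hand side:
function report(i) {
  const printed = [];
  if (i || i++) {        // i++ is only evaluated when i is 0 (falsy)
    printed.push(i);     // reached only when the whole condition is truthy
  }
  return { i, printed };
}

// i !== 0: the right-hand side is skipped and the value is "printed"
console.log(report(5));  // i stays 5, printed contains 5
// i === 0: i++ runs (post-increment yields 0), nothing is "printed"
console.log(report(0));  // i ends up as 1, printed is empty
```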


Loops


Loop conditions also benefit from short circuit evaluation. This is important to know when reasoning about all possible control flow paths through the loop: each short circuit will introduce another path. Combining all of them makes the data flow in loops difficult to understand in case of side effects in the subconditions.
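
A small sketch of such an extra path in a loop condition (the guard function here is my own stand-in for a subcondition with side effects):

```javascript
// guard() stands in for a subcondition with side effects
let guardCalls = 0;
const guard = () => { guardCalls++; return true; };

let n = 0;
while (n < 3 && guard()) {  // guard() is skipped as soon as n < 3 fails
  n++;
}
// guard() ran once per successful check of n < 3, i.e. three times;
// the fourth check short-circuits before ever reaching guard()
console.log(n, guardCalls); // 3 3
```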


Creative use of short circuit evaluation


Misusing short circuit evaluation, expressions can mimic if-statements without using the language feature of conditional expressions (i.e. condition() ? then() : else()). This can be useful in places where only expressions are allowed, e.g. when passing arguments to method calls, or when computing the update part of for-loops.





The picture above shows the two versions: the first uses an if-statement and the second uses an expression statement. These two statements call the functions condition, then and end. Depending on the return value of condition, the function then is executed or not. Consequently, the printouts are either "condition then end" or "condition end", depending on the control flow.

The corresponding control flows are depicted on the right: the upper three lines refer to the if-statement, and the lower three lines to the expression statement. They reveal that the expression statement behaves similarly to the if-statement. Note that the control flow edge in the last line that skips the nodes end and end() is never traversed, since the logical or expression always evaluates to true.
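
Since the pictures are not reproduced here, a plausible JavaScript reconstruction of the two versions might look as follows (the function bodies and the exact shape of the expression are my own assumptions; the original printed the names instead of collecting them):

```javascript
const out = [];
const condition = (v) => { out.push("condition"); return v; };
const then = () => { out.push("then"); return true; };
const end = () => { out.push("end"); return true; };

// Version 1: plain if-statement
if (condition(true)) { then(); }
end();
console.log(out.join(" ")); // condition then end

// Version 2: a single expression statement with the same behavior;
// "|| true" makes the left operand of the final && always true,
// so end() is always reached (its skip edge is never traversed)
out.length = 0;
(condition(false) && then() || true) && end();
console.log(out.join(" ")); // condition end
```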

The interested reader will find more details about the N4JS flow graphs and their implementation in the N4JS Design Document, Chapter: Flow Graphs.


by Marcus Mews



Current State of C/C++ Language Servers

by Doug Schaefer at June 28, 2019 07:59 PM

A Bit of History

When I joined the Eclipse CDT project back in 2002 (yeah, it’s been a long time), I was working on modeling tools for “real time”, or more accurately, embedded reactive systems. Communicating state machines. I wrote code generators that generated C and C++ from ROOM models and then eventually UML-RT. ROOM was way better by the way and easier to generate for because it was more semantically complete and well defined. That objective is key later in this story.

We had the vision to integrate our modeling tools more closely with Integrated Development Environments. We started looking at Visual Studio but Eclipse was the young up and comer. That and IBM bought us, Rational by that point, and had already bought OTI who built Eclipse so it was a natural fit. And we were all in Ottawa. And by chance, Ottawa-based QNX had already written a C/C++ IDE based on Eclipse and were open sourcing it and it was perfect for our customers as well. It’s amazing how that all happened and led to my life as CDT Doug.

Our first order of business was to help the CDT become an industry class C/C++ IDE and become a foundation for integrating our modeling tools. Since we wanted to be able to generate model elements from code, it required that we have accurate C and C++ parsers and indexers. No one figured we could do it, but we were able to put together a somewhat decent system written in Java in the org.eclipse.cdt.core plug-in.

Scaling is Hard

However, as the community started to try it out on real projects, especially ones of a significant size, we started to run into pretty massive performance problems with the indexer. We were essentially doing full builds of the user’s projects and storing the results in a string table. On large projects, builds take a long time. But users expect that and put up with it because they really need the binaries the build produces. They don’t have the same patience for their IDEs building indexes they don’t really see, and we paid a pretty high price for that.

As a solution, I wondered if we could store the symbol information that we were gathering in a way that we could load it up from disk as we were parsing other files and plug the symbol info into the AST the same way we do symbols normally. This would allow us to parse header files once and reuse the results, similar to how precompiled headers work. The price you pay is in accuracy since some systems parse header files multiple times with different macro settings. But my guess was that it wouldn’t be that bad.

It was hard to convince my team at IBM Rational to take this road. Accuracy was king for our modeling tools. But when I moved to join QNX, I decided to forgo that requirement and give this “fast indexer” strategy a go. And the rest is history. Performance on large projects was an order of magnitude faster. Incremental indexing of files as they were saved isn’t even noticeable. It was a huge success and my proudest contribution to the CDT. And it was even better when other community members lent us their expertise to make the accuracy better and better, so you barely notice that at all either.

C++ Rises from the “Dead”

Move the clock ahead a decade and we started running into a problem. The C++ standards community has new life and is adding a tonne of new features at a three-year cadence. The CDT community has long since lost most of the experts that built the original parsers. Lucky for us, a new crop of contributors has come along and is doing heroic work to keep up. But it’s getting harder and harder. One thing we benefit from is how slow embedded developers, the majority of users of CDT, are to adopt the new standards. It gives us time, but not forever. We need to find a better way.

Then along came the Language Server Protocol and a small handful of language servers that do C/C++. I’ve investigated four of them. Three of them are based on llvm and clang. One of them is in tree with llvm and clang in clang-tools-extra, i.e., clangd. The other two are projects that use libclang with parts of the tree, i.e., cquery and ccls. Those two projects are what I call “one person projects” and with cquery at least, that person found something else to do last November. Beware of the one person project.

clangd

I’ve spent a lot of time with clangd when experimenting with Visual Studio Code. For what it does, clangd is very accurate and really fast. It uses compile_commands.json files to find out what source files are built and what compiler and command lines they use. I’ve had to fork the tree to add in support for discovering compilers it doesn’t know about, but that was pretty easy to put together. It gives great content assist and you get the benefit of clang’s awesome compilation error diagnostics as you type. It shows a lot of promise.
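
For reference, a compile_commands.json file is a plain JSON array with one entry per translation unit (the “JSON Compilation Database” format used by clang tooling); the paths and flags below are invented for illustration:

```json
[
  {
    "directory": "/home/user/project/build",
    "command": "arm-none-eabi-gcc -Iinclude -O2 -c ../src/main.c -o main.o",
    "file": "../src/main.c"
  }
]
```

Build systems such as CMake can emit this file (e.g. with -DCMAKE_EXPORT_COMPILE_COMMANDS=ON), which is how clangd learns the exact command line used for each source file.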

However, clangd for the longest time lacked an indexer. When you search for references it only finds them in files you have opened previously. The thought as I understand it is that you use another process to build the index, and that is usually done at build time. However, not all users have such an environment set up, so having an index created by the IDE is a mandatory feature. Now, clangd did eventually get an indexer, but it does what the old CDT indexer did and completely parses the source tree. That predictably takes forever on large projects, and I don’t think users have the appetite to take a huge step backwards like that.

IntelliSense

While waiting for the right solution to arrive for clangd, I thought I’d give the Microsoft C/C++ Tools for VS Code a try. My initial experience was quite surprising. It actually worked well with a gnu tools cross compiler project I used for testing. You have to teach it how to parse your code using a magic JSON file, which fits right in with the rest of VS Code. It’s able to pick out the default include path when you point it at your compiler. It has MI support for debugging, though no built-in support for remote debugging, but that was hackable. It seemed like a reasonable alternative, at least for VS Code.

However, when I tried it with one of our production projects it quickly fell apart. It does a great job trying to figure out include paths, similar to the heuristics we use in CDT. That includes things like treating all the folders in your workspace as a potential include path entry. But it tended to make mistakes. It even has support for compile_commands.json files, so I could tell it the command lines that were used. It did better but still tried to do too much and gave incorrect results.

That and it doesn’t have an index yet either. One is coming soon, but if it can’t figure out how to parse my files correctly, it’s not going to be a great experience. Still a lot of work to do there.

Where do we go from here?

As it stands today, at least from a CDT perspective, there really isn’t a language server solution that comes near what we have in CDT. Yes, some things are better. Both these language servers are using real parsers to parse the code. (or at least clangd is. Microsoft’s, of course, is closed source so I can only assume). They give really good content assist and error diagnostics and open declaration works. But without a usable indexer, you don’t get accurate symbol references. And I haven’t even mentioned refactoring which CDT has and which is not even suggested in the language server protocol.

So if all you’re doing is typing in code, the new language servers are great. But if you need to do some code mining to understand the code before you change it, you’re out of luck. The good news is that we are continuing to see investment in them, so who knows. But then, maybe the CDT parsers catch up with the language standards before these other language servers grow great indexers, making the whole thing moot. I wouldn’t bet against that right now.



Graphical Editing Framework (GEF) 5.1.0 Release

by Tamas Miklossy (miklossy@itemis.de) at June 25, 2019 08:00 AM

The Eclipse GEF team is happy to announce that version 5.1.0 of the Eclipse Graphical Editing Framework is part of the Eclipse 2019-06 simultaneous release:

 


 

The project team has worked hard since the Eclipse GEF 5.0.0 release two years ago. The new release fixes issues on the GEF MVC, GEF Zest, and GEF DOT components.

We would like to thank all contributors who made this release possible:


 

Your feedback regarding the new release is highly appreciated. If you have any questions or suggestions, please let us know via the Eclipse GEF forum or create an issue on Eclipse Bugzilla.

For further information, we recommend taking a look at the Eclipse GEF blog articles, watching the Eclipse GEF session from EclipseCon Europe 2018, and trying out the Getting started with Eclipse GEF online tutorial.



Bringing IoT to Red Hat AMQ Online

by Jens Reimann at June 24, 2019 07:47 AM

Red Hat AMQ Online 1.1 was recently announced, and I am excited about it because it contains a tech preview of our Internet of Things (IoT) support. AMQ Online is the “messaging as a service” solution from Red Hat AMQ. Leveraging the work we did on Eclipse Hono allows us to integrate a scalable, cloud-native IoT personality into this general-purpose messaging layer. And the whole reason why you need an IoT messaging layer is so you can focus on connecting your cloud-side application with the millions of devices that you have out there.

This post was originally published on Red Hat Developers, the community to learn, code, and share faster. To read the original post, click here.

What is Eclipse Hono™?

Eclipse Hono is an IoT abstraction layer. It defines APIs in order to build an IoT stack in the cloud, taking care of things like device credentials, protocols, and scalability. For some of those APIs, it comes with a ready-to-run implementation, such as the MQTT protocol adapter. For others, such as the device registry, it only defines the necessary API. The actual implementation must be provided to the system.

Eclipse Hono architecture overview

A key feature of Hono is that it normalizes the different IoT-specific protocols on AMQP 1.0. This protocol is common on the data center side, and it is quite capable of handling the requirements on throughput and back-pressure. However, on the IoT devices side, other protocols might have more benefits for certain use cases. MQTT is a favorite for many people, as is plain HTTP due to its simplicity. LoRaWAN, CoAP, Sigfox, etc. all have their pros and cons. If you want to play in the world of IoT, you simply have to support them all. Even when it comes to custom protocols, Hono provides a software stack to easily implement your custom protocol.

AMQ Online

Hono requires an AMQP 1.0 messaging backend. It requires a broker and a component called “router” (which doesn’t own messages but only forwards them to the correct receiver). Of course, it expects the AMQP layer to be as scalable as Hono itself. AMQ Online is a “self-service” messaging solution for the cloud. So it makes sense to allow Hono to run on top of it. We had this deployment model for a while in Hono, allowing the use of EnMasse (the upstream project of AMQ Online).

Self-service IoT

In a world of Kubernetes and operators, the thing that you are actually looking for is more like this:

kind: IoTProject
apiVersion: iot.enmasse.io/v1alpha1
metadata:
  name: iot
  namespace: myapp
spec:
  downstreamStrategy:
    managedStrategy:
      addressSpace:
        name: iot
        plan: standard-unlimited
      addresses:
        telemetry:
          plan: standard-small-anycast
        event:
          plan: standard-small-queue
        command:
          plan: standard-small-anycast

You simply define your IoT project, by creating a new custom resource using kubectl create -f and you are done. If you have the IoT operator of AMQ Online 1.1 deployed, then it will create the necessary address space for you, and set up the required addresses.

The IoT project will also automatically act as a Hono tenant. In this example, the Hono tenant would be myapp.iot, and so the full authentication ID of e.g. sensor1 would be sensor1@myapp.iot. The IoT project also holds all the optional tenant configuration under the section .spec.configuration.

With the Hono admin tool, you can quickly register a new device with your installation (the documentation will also tell you how to achieve the same with curl):

# register the new context once with 'hat'
hat context create myapp1 --default-tenant myapp.iot https://$(oc -n messaging-infra get routes device-registry --template='{{ .spec.host }}')

# register a new device and set credentials
hat reg create 4711
hat cred set-password sensor1 sha-512 hono-secret --device 4711

With that, you can simply use Hono as always. First, start the consumer:

# from the hono/cli directory
export MESSAGING_HOST=$(oc -n myapp get addressspace iot -o jsonpath={.status.endpointStatuses[?(@.name==\'messaging\')].externalHost})
export MESSAGING_PORT=443

mvn spring-boot:run -Drun.arguments=--hono.client.host=$MESSAGING_HOST,--hono.client.port=$MESSAGING_PORT,--hono.client.username=consumer,--hono.client.password=foobar,--tenant.id=myapp.iot,--hono.client.trustStorePath=target/config/hono-demo-certs-jar/tls.crt,--message.type=telemetry

And then publish some data to the telemetry channel:

curl -X POST -i -u sensor1@myapp.iot:hono-secret -H 'Content-Type: application/json' --data-binary '{"temp": 5}' https://$(oc -n enmasse-infra get routes iot-http-adapter --template='{{ .spec.host }}')/telemetry

For more detailed instructions, see: Getting Started with Internet of Things (IoT) on AMQ Online.

IoT integration

As mentioned before, you don’t do IoT just for the fun of it (well, maybe at home, with a Raspberry Pi, Node.js, OpenHAB, and mosquitto). But when you want to connect millions of devices with your cloud backend, you want to start working with that data. Using Hono gives you a pretty simple start. All you need is AMQP 1.0 connectivity. Assuming you use Apache Camel, pushing telemetry data towards a Kafka cluster is as easy as (also see ctron/hono-example-bridge):

<route id="store">
  <from uri="amqp:telemetry/myapp.iot" />

  <setHeader id="setKafkaKey" headerName="kafka.KEY">
    <simple>${header[device_id]}</simple>
  </setHeader>

  <to uri="kafka:telemetry?brokers={{kafka.brokers}}" />
</route>

Bringing together solutions like Red Hat Fuse, AMQ and Decision Manager makes it a lot easier to give your custom logic in the data center (your value add‑on) access to the Internet of Things.

What’s next?

AMQ Online 1.1 is the first version to feature IoT as a tech preview. So, give it a try, play with it, but also keep in mind that it is a tech preview.

In the upstream project EnMasse, we are currently working on creating a scalable, general purpose device registry based on Infinispan. Hono itself doesn’t bring a device registry, it only defines the APIs it requires. However, we think it makes sense to provide a scalable device registry, out of the box, to get you started. In AMQ Online, that would then be supported by using Red Hat Data Grid.

In the next months, we hope to also see the release of Eclipse Hono 1.0 and graduate the project from the incubation phase. This is a big step for a project at Eclipse but also the right thing to do. Eclipse Hono is ready, and graduating the project means that we will pay even closer attention to APIs and stability. Still, new features like LoRaWAN, maybe Sigfox, and a proper HTTP API definition for the device registry, are already under development.

So, there are lots of new features and enhancements that we hope to bring into AMQ Online 1.2.

The post Bringing IoT to Red Hat AMQ Online appeared first on ctron's blog.



Eclipse ioFog: Evolving Toward Native Kubernetes Orchestration at the Edge

by Mike Milinkovich at June 23, 2019 08:46 PM

With the proliferation of AI, autonomous vehicles, 5G, IoT, and other industrial use cases that require lightning-fast data processing, edge computing has emerged over the past few years as a way to address the limitations of existing cloud architectures to process information and deliver services at the “IoT edge”. Instead of backhauling data to the centralized cloud, edge computing brings computational power closer to data sources to support near real-time insights and local actions while reducing network bandwidth and storage requirements.

According to Gartner, 75% of enterprise-generated data will be created and processed outside a traditional centralized data center or cloud by 2025. While the problems at the IoT edge — connectivity, manageability, scalability, reliability, security — are being solved as point solutions by enterprises and ecosystem players, there is a need for a foundational industry-wide standard for managing distributed IoT workloads. Time and again, open source has been proven to be the best way to deliver complex platform software with industrial scale collaboration.

Enter Kubernetes, the de facto standard for orchestrating containers and the applications running inside them. Kubernetes has massive potential for handling IoT workloads on the edge by providing a common control plane across hybrid cloud and edge environments to simplify management and operations. Within the Kubernetes IoT Edge Working Group, members of the Kubernetes and Eclipse communities are collaborating with leading technology innovators to extend Kubernetes to support dynamically, securely, and remotely managing edge nodes.

A great example of this collaboration is Eclipse ioFog, a universal Edge Compute Platform which offers a standardized way to develop and remotely deploy secure microservices to edge computing devices. ioFog can be installed on any hardware running Linux and provides a universal runtime for microservices to dynamically run on the edge. Companies in different vertical markets such as retail, automotive, oil and gas, telco, and healthcare are using ioFog to turn any compute device into an edge software platform.

The Eclipse Foundation is excited to support today’s announcement of the initial availability of ioFog features that make any Kubernetes distribution edge-aware. With this latest release, developers are able to extend Kubernetes to easily deploy, secure, and manage edge computing networks supporting applications such as advanced AI and machine learning algorithms.

Farah Papaioannou, co-founder and president of Edgeworx, explains the significance of the release this way:

“ioFog is a proven platform at the edge. With this release, we build on native Kubernetes, seamlessly extending it to the edge…We do this based on things that actually matter at the edge, such as latency, location or resources. We are delivering today a full cloud-to-edge solution, that’s 100-percent open source and works with any Kubernetes flavors and distros.”

These native Kubernetes enhancements are in the process of being contributed to the Eclipse ioFog open source project. We are proud to support the collective efforts of the Eclipse community and the Kubernetes ecosystem to help developers deploy, manage, and orchestrate applications and microservices from cloud to edge in a simple and secure manner.

For more information about ioFog, get started by using this link to install and set up your production ioFog environment. If you have questions or want to connect with other people involved in this platform, join the ioFog community and the mailing list.


by Mike Milinkovich at June 23, 2019 08:46 PM

Eclipse Handly 1.2 Released

by Vladimir Piskarev at June 19, 2019 06:50 PM

Eclipse Handly 1.2 has just been released. This release is focused on further enhancements to UI components. Particularly, it provides a framework for creating a full-featured Call Hierarchy view.

New and Noteworthy
Migration Guide
Downloads



WTP 3.14 Released!

June 19, 2019 03:14 PM

The Eclipse Web Tools Platform 3.14 has been released! Installation and updates can be performed using the Eclipse IDE 2019-06 Update Site or through the Eclipse Marketplace. Release 3.14 is included in the 2019-06 Eclipse IDE for Enterprise Java Developers, with selected portions also included in 9 other packages. Adopters can download the R3.14 update site itself directly and combine it with the necessary dependencies.

More news



Xtext 2.18 – released!

by Sebastian Zarnekow (sebastian.zarnekow@itemis.de) at June 18, 2019 10:31 AM

The team around project lead Christian Dietrich has released version 2.18 of Xtext and Xtend. This version is also the one that joins the Eclipse release train 2019-06, departing on time on June 19th.

More than 40 helping hands have made this Xtext release possible over the last quarter. A big shout-out especially to the first-time contributors! We are thankful for every reported issue, comment in a discussion, and proposed patch.

Even though the main focus was on stability and robustness, a couple of new features have been made available, too.

Playing catch-up with Java

Xtend eventually learned to understand the ternary expression and got support for try-with-resources. While the former is largely a matter of taste, being able to deal efficiently with closeable resources is a big win. Forgetting to close a stream or a connection is a little harder when you can benefit from the compiler auto-closing it for you. This is also available in legacy scenarios where code is still required to run on ancient Java versions.
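
In plain Java, the two features look like this. This is a rough sketch, not Xtend syntax; the `firstLine` helper and its input are made up for illustration:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class TryWithResources {
    // The reader is closed automatically when the try block exits,
    // even if an exception is thrown.
    static String firstLine(String text) throws IOException {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            String line = reader.readLine();
            // Ternary expression: return an empty string when there is no line.
            return line != null ? line : "";
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(firstLine("hello\nworld"));
    }
}
```

The try-with-resources block closes the reader automatically, and the ternary expression keeps the null check compact.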

Both features were implemented by Eva Poell, intern for 6 weeks at our Berlin office. We are thankful for her great work!

Improved “find references” view

When exploring a code base, software engineers navigate back and forth through the source, especially between references and declarations. If you want to take notes on the usage of a DSL concept, you can copy it directly from the “search results” page. Also, if you only want to deal with a subset of the results, it’s now possible to remove matches from the view.


Improved Find References View

New quick fixes for the Xtext grammar

The Xtext grammar editor received some love in this release cycle, too. The validation problems around invalid empty keywords can be automatically fixed from now on. Either a reasonable default is inserted, or the keyword is removed – it’s your call.

Xtext Grammar Editor

Rename refactoring

Long-running rename operations can now be cancelled by the user. Accidentally triggered renames are no longer a painful experience: instead of waiting for the computation to finish and then reverting the outcome, you can simply cancel it.

Rename class quick fix

Talking about renames: Even though Xtend allows you to define as many public types per file as you want, it is a common usage pattern to have a single top-level class per file. If the names get out of sync because of a change of mind, a quick fix is offered to align the two. If you decide to rename the type, a proper rename operation is triggered, and all the references are updated along with the declaration.

Language server fixes

The Xtext support for the Language Server Protocol has been improved in different areas. Extension implementations now have access to the indexed resources, and the reporting of build results was fixed. Formerly, some notifications could get lost, so the editors did not receive any events about completed builds. The communication in the other direction was fixed, too: sometimes the server missed a couple of changed resources, so its index information diverged from the code on disk.

Eclipse robustness

A few rarely occurring issues around the startup of Eclipse and the processing of deleted projects have been fixed. They did not happen often, but if you were bitten by one of these bugs, it was rather annoying.

Your turn

The team is proud of the achievements in this release cycle. But we are also eager to hear your thoughts. If you are missing a certain feature or have suggestions on how to improve Xtext or Xtend, just drop us a note and we will be happy to discuss your ideas.


Eclipse Introduces New IDE-Agnostic Tools for Building and Deploying Cloud-Native Applications

by Dustin Schultz at June 17, 2019 05:35 AM

Eclipse Codewind is a new developer-centric project from the Eclipse Foundation that aims to assist developers by providing ways to quickly and consistently accomplish tasks that are common to cloud-native application development.



My new book on TDD, Build Automation and Continuous Integration

by Lorenzo Bettini at June 12, 2019 04:59 PM

I haven’t been blogging for some time now. I’m getting back to blogging by announcing my new book on TDD (Test-Driven Development), Build Automation and Continuous Integration.

The title is, indeed, “Test-Driven Development, Build Automation, Continuous Integration (with Java, Eclipse and friends)” and it can be bought from https://leanpub.com/tdd-buildautomation-ci.

The main goal of the book is to get you started with Test-Driven Development (write tests before the code), Build Automation (make the overall process of compilation and testing automatic with Maven) and Continuous Integration (commit changes and a server will perform the whole build of your code). Using Java, Eclipse and their ecosystems.

The main subject of this book is software testing. The main premise is that testing is a crucial part of software development. You need to make sure that the software you write behaves correctly. You can test your software manually, but manual tests require lots of manual work and are error prone.

On the contrary, this book focuses on automated tests, which can be done at several levels. In the book we will see a few types of tests, starting from those that test a single component in isolation to those that test the entire application. We will also deal with tests in the presence of a database and with tests that verify the correct behavior of the graphical user interface.

In particular, we will describe and apply the Test-Driven Development methodology, writing tests before the actual code.

Throughout the book we will use Java as the main programming language. We use Eclipse as the IDE. Both Java and Eclipse have a huge ecosystem of “friends”, that is, frameworks, tools and plugins. Many of them are related to automated tests and perfectly fit the goals of the book. We will use JUnit throughout the book as the main Java testing framework.
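
As a flavor of the test-first idea, here is a minimal sketch in plain Java; in a real project this would be a JUnit `@Test` method using `assertEquals`, and the `Calculator` class is hypothetical:

```java
public class CalculatorTest {
    // The class under test, written only after the test below was defined.
    static class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

    // In JUnit this would be an @Test method using assertEquals;
    // here the check is inlined so the sketch is self-contained.
    static void testAddReturnsSum() {
        Calculator calculator = new Calculator();
        int result = calculator.add(2, 3);
        if (result != 5) {
            throw new AssertionError("expected 5 but was " + result);
        }
        System.out.println("testAddReturnsSum passed");
    }

    public static void main(String[] args) {
        testAddReturnsSum();
    }
}
```

The test is written first and fails until `add` is implemented; making it pass is the "green" step of the TDD cycle.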

It is also important to be able to completely automate the build process. In fact, another relevant subject of the book is Build Automation. We will use one of the mainstream tools for build automation in the Java world, Maven.

We will use Git as the Version Control System and GitHub as the hosting service for our Git repositories. We will then connect our code hosted on GitHub with a cloud platform for Continuous Integration. In particular, we will use Travis CI. With the Continuous Integration process, we will implement a workflow where each time we commit a change in our Git repository, the CI server will automatically run the automated build process, compiling all the code, running all the tests and possibly create additional reports concerning the quality of our code and of our tests.

The code quality of tests can be measured in terms of a few metrics using code coverage and mutation testing. Other metrics are based on static analysis mechanisms, inspecting the code in search of bugs, code smells and vulnerabilities. For such a static analysis we will use SonarQube and its free cloud version SonarCloud.

When we need our application to connect to a service like a database, we will use Docker, a container-based virtualization program that is much more lightweight than standard virtual machines. Docker will allow us to configure the needed services in advance, once and for all, so that the services running in the containers take part in the reproducibility of the whole build infrastructure. The same configuration of the services will be used in our development environment, during build automation and in the CI server.

Most of the chapters have a “tutorial” nature. Besides a few general explanations of the main concepts, the chapters will show lots of code. It should be straightforward to follow the chapters and write the code to reproduce the examples. All the sources of the examples are available on GitHub.

The main goal of the book is to give the basic concepts of the techniques and tools for testing, build automation and continuous integration. Of course, the descriptions of these concepts you find in this book are far from being exhaustive. However, you should get enough information to get started with all the presented techniques and tools.

I hope you enjoy the book 🙂



Eclipse Tycho: Disable p2 dependency resolution with tycho.mode=maven

by kthoms at June 12, 2019 01:31 AM

In Eclipse Tycho based builds, the first step is always the computation of the target platform and dependency resolution. This takes quite some time, and in certain use cases it is not necessary. Typical use cases are updating versions with the tycho-versions-plugin, or displaying the effective pom with help:effective-pom.

The p2 target platform & dependency resolution can be skipped by setting the tycho.mode system property:

mvn -Dtycho.mode=maven <goals>

This useful feature is a bit hidden and only mentioned in a few posts, e.g. https://www.eclipse.org/lists/tycho-user/msg06439.html.



Welcome Shabnam!

by Thabang Mashologu at June 05, 2019 09:13 AM

I am happy to announce that Shabnam Mayel has joined the Eclipse Foundation as a Senior Marketing Lead, Cloud Native Java. 

Shabnam brings with her several years of diverse marketing, business development, and technical sales experience. Most recently, she led marketing and brand management initiatives for tech startups in Southeast Asia. She holds an electronics engineering degree, an MBA and a Ph.D. in management. She is based in Toronto.

Shabnam will be responsible for developing and implementing global marketing programs to grow the awareness of and participation in Jakarta EE, Eclipse MicroProfile, and other Foundation projects in the cloud native application space.

Please join me in welcoming Shabnam to the Eclipse community!



Inline Display of Error / Warning / Info Annotations in Eclipse

by Niko Stotz at May 26, 2019 10:11 PM

tl;dr: A prototype implementation shows all error, warning, and info annotations (“bubbles” in the left ruler) in the Eclipse Java editor as inline text. Thus, we don’t have to use the mouse to view the error message. The error messages update live with changes in the editor.

Screencast showing the live effect

I’m an avid keyboard user. If I have to touch the mouse, something is wrong. Eclipse has tons of shortcuts to ease your life, and I use and enjoy them every day.

However, if I had an error message in e.g. my Java file, and I couldn’t anticipate the error, I had several bad choices:

  • Opening the Problems view and navigating to the current error (entries in the Problems view are called “markers” by Eclipse)
  • Moving the mouse over the annotation in the left ruler (“annotation” in Eclipse lingo)
  • Guessing

Not so long ago, Angelo Zerr and others implemented code mining in Eclipse. This feature displays additional information within a text file without changing the actual contents of the file. Sounds like a natural fit for my problem!

I first tried to implement the error code mining based on markers (Bug 540443). This works in general. However, markers are bound to the persisted state of a file, i.e. how a file is saved to disk. So they are only updated on saving.

Most editors in Eclipse are more interactive than that: They update their error information based on the dirty state of the editor, i.e. the text that’s currently in the editor. This feels way more natural, so I tried to rewrite my error code mining based on annotations. The current prototype is shown in the screencast above.

The code is attached to Bug 547665. The prototype looks quite promising.

As the screencast above shows, I have at least one serious issue to resolve: When the editor is saved, all code minings briefly duplicate. Thankfully, they get back to normal quickly.



Going Straight to clang for WebAssembly

by Doug Schaefer at May 24, 2019 08:39 PM

A few years ago at EclipseCon I gave a demo of a C++ app using libSDL2 and showed how you build it with CDT and launch it for multiple platforms, my desktop, a BeagleBone running QNX, and finally in a web browser using Emscripten. I used CMake for the build system and that worked fine for the first two, but Emscripten really fought the idea of something else driving the build. I finally figured it out but it left the impression that there had to be a simpler way to build WebAssembly apps.

Recently, with version 8 of clang, the wasm target became a first-class citizen available with the standard distribution. I thought I’d take a look and found at least one example on GitHub that showed how. Here’s a quick summary on how to get started. Be warned, though: one of the arguments is nostdlib, which means this is a very barebones example. But that’s another area where I think Emscripten has gone a little too far. More on that later.

To start, this example is a pretty basic Fibonacci calculator, pretty standard for WebAssembly. Here’s the C++ file.

#include "wasm.h"
WASM_IMPORT void log(int i);
WASM_EXPORT int fib(int i) {
    int res = i <= 1 ? i : fib(i - 1) + fib(i - 2);
    log(res);
    return res;
}

I wanted to show C++ calling back into JavaScript so there’s a very contrived log method we import. The fib function itself is pretty basic. I’ve created a couple of macros in the wasm.h file to manage marking functions as import or export.

#define WASM_EXPORT __attribute__((visibility("default"))) \
    extern "C"
#define WASM_IMPORT extern "C"

Since I’m writing C++ I want to make sure the compiler doesn’t mangle the names, so I declare them as extern "C". As you can see, the export also turns on the visibility of the symbol, which is hidden by default by the -fvisibility=hidden flag in the Makefile.

I’m running this with node.js, which has had WebAssembly support since at least version 8, the version I have on my Linux box. The idea is to do some of the more computationally expensive tasks in my node server using wasm. Here’s my js file.

const fs = require('fs')
async function run() {
    const buf = fs.readFileSync('./fib.wasm')
    return await WebAssembly.instantiate(buf, {
        'env': {
            'log': function(i) { console.log(`log: ${i}`) }
        }
    })
}
run().then(res => {
    const { fib } = res.instance.exports
    console.log(fib(10))
})

It simply loads up the wasm file and instantiates it, passing in the log function. When that’s complete, I extract my fib function from the exports and run it. You should see the output of the log function (more times than I was expecting, at least), then the result, 55.

As with most things C++, the magic is actually in the Makefile.

CXX = $(HOME)/wasm/clang-8/bin/clang
CXXFLAGS = \
-Wall \
--target=wasm32 \
-Os \
-flto \
-nostdlib \
-fvisibility=hidden \
-std=c++14 \
-ffunction-sections \
-fdata-sections
LD = $(HOME)/wasm/clang-8/bin/wasm-ld
LDFLAGS = \
--no-entry \
--strip-all \
--export-dynamic \
--initial-memory=131072 \
-error-limit=0 \
--lto-O3 \
-O3 \
--gc-sections
fib.wasm: fib.o
	$(LD) $(LDFLAGS) -o $@ $<

There are lots of magic flags here, and I have to thank the author of the example I linked above for getting me started. I’ll have to play with them to see what’s actually necessary. The key here is that it isn’t Emscripten but straight clang 8 that I downloaded from llvm.org. There’s no standard library, so don’t go and try to do a printf. You’re a bit on your own for now.

But that’s somewhat the conclusion I reached. Emscripten allows C++ developers to easily port their apps to run on the web. It doesn’t make the C++ developer think like a web developer. What would be interesting to me is what it would look like if you weren’t handed those fancy libraries you get with Emscripten and really just wanted to build a web app, like a game, using the standard JavaScript APIs you get with node or the browser. I think you’d end up writing programs like an Arduino developer, where you don’t have printf either…



Apache Camel development on Eclipse Che 7

by Aurélien Pupier at May 21, 2019 07:00 AM

Apache Camel development is improving on Eclipse Che 7 compared to Che 6. On Che 6, it is limited to the XML DSL, without classical XSD-based XML support. With Che 7, the Camel Java DSL is available, and XSD-based XML support works nicely alongside the Camel XML DSL support. Please note that Che 7 is still in beta.

Camel language features available

Inside the same editor, there is access to classic XML tooling and Camel XML DSL support.

Classic XML tooling completion based on XSD:
XML tag completion based on Camel xsd

Camel XML DSL tooling completion:
Camel URI completion with Camel XML DSL

Classic XML tooling validation:
Validation based on Camel XML xsd

Camel XML DSL tooling validation:
Camel XML DSL validation of Camel URI

Inside the same editor, there is access to classic Java tooling and Camel Java DSL support.

Classic Java tooling completion:
Classic Java completion

Camel Java DSL completion:
Camel URI completion with Camel Java DSL

Classic Java tooling validation:
Classic Java validation

Camel Java DSL tooling validation:
Camel URI validation with Java DSL

How to configure on che.openshift.io

Currently, some advanced steps are needed to have all extensions working together on a resource-limited Che environment, which is the default for che.openshift.io. Let’s see how to activate it.

  • Go to che.openshift.io (you will have to register if you’ve not done so already).
  • Create a workspace based on Che 7.

Create Che 7 Workspace

  • Wait until workspace creation is finished.
  • Import the Camel/Fuse project that you want.
  • Go back to workspace configuration by using the top-left yellow arrow and clicking on Workspaces.

Go to workspace config

  • Click on the running workspace.
  • Click stop at the top right.
  • Go to Plugins tab.
  • Enable Language Support for Apache Camel, Language Support for Java and XML.

Enable Camel, Java and XML plugins

  • Go to config tab.
  • Search for “attributes” and add memory limits for each of the plugins; you should end up with something like:
    "attributes": {
    "sidecar.redhat/java.memory_limit": "1280Mi",
    "sidecar.camel-tooling/vscode-apache-camel.memory_limit": "128Mi",
    "sidecar.redhat/vscode-xml.memory_limit": "128Mi",
    "sidecar.eclipse/che-theia.memory_limit": "512Mi",
    "editor": "eclipse/che-theia/next",
    "plugins": "eclipse/che-machine-exec-plugin/0.0.1,redhat/java/0.43.0,camel-tooling/vscode-apache-camel/0.0.14,redhat/vscode-xml/0.5.1"
    }
  • Click on Open button on top right.
  • Open a Java file and wait until the Java Language Server has started (it can take several minutes).
  • Enjoy!

What’s next?

As you’ve noticed, the installation is currently a bit cumbersome, as it requires you to touch the YAML config file. Don’t worry; there is work in progress to improve the installation experience, such as providing a specific Camel stack. This will allow you to create a preconfigured workspace, which means doing only the first three steps instead of the 11 steps of the configuration. Several other features are in the works by incorporating existing VS Code extensions inside Che 7. Stay tuned.


The post Apache Camel development on Eclipse Che 7 appeared first on Red Hat Developer Blog.



I am an Incrementalist: Jakarta EE and package renaming

by BJ Hargrave (noreply@blogger.com) at May 17, 2019 05:11 PM


Eclipse Jakarta EE has been placed in the position that it may not evolve the enterprise APIs under their existing package names. That is, the package names starting with java or javax. See Update on Jakarta EE Rights to Java Trademarks for the background on how we arrived at this state.

So this means that after Jakarta EE 8 (which is API identical to Java EE 8 from which it descends), whenever an API in Jakarta EE is to be updated for a new specification version, the package names used by the API must be renamed away from java or javax. (Note: some other things will also need to be renamed such as system property names, property file names, and XML schema namespaces if those things start with java or javax. For example, the property file META-INF/services/javax.persistence.PersistenceProvider.) But this also means that if an API does not need to be changed, then it is free to remain in its current package names. Only a change to the signature of a package, that is, adding or removing types in the package or adding or removing members in the existing types in the package, will require a name change to the package.
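
To make the service-file example concrete, here is a small sketch of how the resource name depends on the package; the nested `PersistenceProvider` interface is a hypothetical stand-in for the real `javax.persistence` type:

```java
import java.util.ServiceLoader;

public class ServiceFileName {
    // Hypothetical provider interface standing in for
    // javax.persistence.PersistenceProvider.
    interface PersistenceProvider {
    }

    public static void main(String[] args) {
        // ServiceLoader looks up implementations under
        // META-INF/services/<fully qualified interface name>,
        // so renaming the interface's package also renames this resource.
        String resource = "META-INF/services/" + PersistenceProvider.class.getName();
        System.out.println(resource);
        // No providers are registered in this sketch, so the loader is empty.
        for (PersistenceProvider p : ServiceLoader.load(PersistenceProvider.class)) {
            System.out.println(p);
        }
    }
}
```

This is why a package rename ripples beyond compiled code: any artifact shipping such a service file must also rename the file itself.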

There has been much discussion on the Jakarta EE mail lists and in blogs about what to do given the above constraint and David Blevins has kindly summed up the two main choices being discussed by the Jakarta EE Specification Committee: https://www.eclipse.org/lists/jakartaee-platform-dev/msg00029.html.

In a nutshell, the two main choices are (1) “Big Bang” and (2) Incremental. Big Bang says: Let’s rename all the packages in all the Jakarta EE specifications all at once for the Jakarta EE release after Jakarta EE 8. Incremental says: Let’s rename packages only when necessary such as when, in the normal course of specification innovation, a Jakarta EE specification project wants to update its API.

I would like to argue that Jakarta EE should choose the Incremental option.

Big Bang has no technical value and large, up-front community costs.

The names of the packages are of little technical value in and of themselves. They just need to be unique and descriptive to programmers. In source code, developers almost never see the package names. They are generally in import statements at the top of the source file and most IDEs kindly collapse the view of the import statements so they are not “in the way” of the developer. So, a developer will generally not really know or care if the Jakarta EE API being used in the source code is a mix of package names starting with java or javax, unchanged since Jakarta EE 8, and updated API with package names starting with jakarta. That is, there is little mental cost to such a mixture. The Jakarta EE 8 API are already spread across many, many package names and developers can easily deal with this. That some will start with java or javax and some with jakarta is largely irrelevant to a developer. The developer mostly works with type and member names which are not subject to the package rename problem.

But once source code is compiled into class files, packaged into artifacts, and distributed to repositories, the package names are baked into the artifacts and play an important role in interoperation between artifacts: binary compatibility. Modern Java applications generally include many third-party open source artifacts from public repositories such as Maven Central, and there are many such artifacts in Maven Central which use the current package names. If Jakarta EE 9 were to rename all packages, then the corpus of existing artifacts would no longer be usable in Jakarta EE 9 and later. At least not without some technical “magic” in builds, deployments, and/or runtimes to attempt to rename package references on-the-fly. Such magic may be incomplete, will break jar signatures, and will complicate builds and tool chains. It will not be transparent.

Jakarta EE must minimize the inflection point/blast radius on the Java community caused by the undesired constraint to rename packages if they are changed. The larger the inflection point, the more reason you give to developers to consider alternatives to Jakarta EE and to Java in general. The Incremental approach minimizes the inflection point providing an evolutionary approach to the package naming changes rather than the revolutionary approach of the Big Bang.

Some Jakarta EE specifications may never be updated. They have long been stable in the Java EE world and will likely remain so in Jakarta EE. So why rename their packages? The Big Bang proposal even recognizes this by indicating that some specifications will be “frozen” in their current package names. But, of course, there is the possibility that one day Jakarta EE will want to update a frozen specification. And then the package names will need to be changed. The Incremental approach takes this approach for all Jakarta EE specifications: only rename packages when absolutely necessary, to minimize the impact on the Java community.

Renaming packages incrementally, as needed, does not reduce the freedom of action for Jakarta EE to innovate. It is just a necessary part of the first innovation of a Jakarta EE specification.

A Big Bang approach does not remove the need to run existing applications on earlier platform versions. It increases the burden on customers, since they must update all parts of their application for the complete package renaming when they need to access a new innovation in a single updated Jakarta EE specification, even when none of the other Jakarta EE specifications they use have any new innovations. Just package renames for no technical reason. It also puts a large burden on all application server vendors. Rather than having to update parts of their implementations to support the package name changes of a Jakarta EE specification when the specification is updated for some new innovation, they must spend a lot of resources to support both old and new package names for the implementations of all Jakarta EE specifications.

There are some arguments in favor of a Big Bang approach. It “gets the job done” once and for all and for new specifications and implementations the old java or javax package names will fade from collective memories. In addition, the requirement to use a certified Java SE implementation licensed by Oracle to claim compliance with Eclipse Jakarta EE evaporates once there are no longer any java or javax package names in a Jakarta EE specification. However, these arguments do not seem sufficient motivation to disrupt the ability of all existing applications to run on a future Jakarta EE 9 platform.

In general, lazy evaluation is a good strategy in programming. Don’t do a thing until the thing needs to be done. We should apply that strategy in Jakarta EE to package renaming and take the Incremental approach. Finally, I am reminded of Æsop’s fable, The Tortoise & the Hare. “The race is not always to the swift.”



Create your first Quarkus project with Eclipse IDE (Red Hat CodeReady Studio)

by Jeff Maury at May 09, 2019 10:45 AM

You’ve probably heard about Quarkus, the Supersonic Subatomic Java framework tailored for Kubernetes and containers. In this article, I will show how easy it is to create and set up a Quarkus project in an Eclipse IDE based environment.

Please note that even if we use Red Hat CodeReady Studio in this article, any Eclipse IDE can be used assuming it has the tooling for Java-based development. So, you can also use the Eclipse IDE for Java Developers package or the Eclipse IDE for Enterprise Java Developers package.

Install IDE

If you don’t already have an IDE on your workstation, you must download and install one. You can use Red Hat CodeReady Studio or one of the Java packages from the Eclipse Foundation.

Once the IDE is installed, launch it and open a new workspace or reuse an existing workspace based on your preferences.

Create your first Quarkus project

Although it is possible to create a Maven-based project from within the Eclipse-based IDE using the new Maven project wizard, we will not use this path. That wizard is based on the concept of Maven archetypes, and the Quarkus project does not provide a Maven archetype to bootstrap a new project, but rather a Maven plugin to create one.

So, we will follow the Quarkus Getting Started Guide recommendation on how to bootstrap a Quarkus project.

Using a terminal, go into a folder where you want your first Quarkus project to be stored and type the following command:

mvn io.quarkus:quarkus-maven-plugin:create

You will be asked for the groupId for your project:

Set the project groupId [org.acme.quarkus.sample]:

Press the ENTER key to accept the default value.

You will be asked for the artifactId for your project:

Set the project artifactId [my-quarkus-project]:

Press the ENTER key to accept the default value.

You will be asked for the version for your project:

Set the project version [1.0-SNAPSHOT]:

Press the ENTER key to accept the default value.

Then, you will be asked if you want to add a REST endpoint in your application.

Do you want to create a REST resource? (y/n) [no]:

Enter yes and press the ENTER key.

Then, you will be asked for the class name of the REST endpoint.

Set the resource classname [org.acme.quarkus.sample.HelloResource]:

Press the ENTER key to accept the default value.

Then, you will be asked for the path of the REST endpoint.

Set the resource path [/hello]:

Press the ENTER key to accept the default value.

At this point, your first Quarkus project has been created on your local workstation; let’s import it into our IDE.

Import the first Quarkus project into IDE

From the IDE main window, open the File -> Import -> Existing Maven Projects menu:

Using the Browse button, select the folder where your first Quarkus project has been generated:

Press the Finish button; you should now see a new project in the Project Explorer window (please note that it may take a while, as Maven will download some Quarkus dependencies if this is the first time you have built a Quarkus project on your workstation):

Launch your first Quarkus application

From the Project Explorer window, select your Quarkus project (my-quarkus-project), right-click it, and select the Run As -> Maven build… menu:

In the Goals field, enter compile quarkus:dev:

Press the Run button. Your Quarkus application will start, and you should see the following output in the Console window:

At this point, your Quarkus application is started, and you should be able to access it from the following URL: http://localhost:8080/hello

Debug your first Quarkus application

Although Quarkus has nice hot reload capabilities for developers, debugging is a key tool. Let’s see how to set up debugging on our Quarkus application and then start a debugging session.

As you have probably noticed, we started the Quarkus application in dev mode, which means that any changes in the application source code will be hot reloaded the next time your Quarkus application processes a request.

Another nice thing about dev mode in Quarkus is that the Java virtual machine running the Quarkus application has been launched in debug mode. So, to debug our Quarkus application, we only need to connect a debugger.

If you’re familiar with the Java development tools in Eclipse, you know that you can easily launch a Java debugger against a running JVM that has been started in debug mode, assuming you know the debug port the JVM is listening on.

If you look at your Quarkus application output, you will notice a message generated by the JVM when running in debug mode; it tells us that the debug port used is 5005.
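Because the JVM is listening via the standard JDWP protocol, any Java debugger can attach to that port, not just Eclipse. For example, you could attach the JDK's command-line debugger from a terminal (a sketch, assuming a default local setup):

```shell
# Attach the JDK command-line debugger to the Quarkus JVM
# listening for a debugger on the default port 5005:
jdb -attach localhost:5005
```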

Now you can create a remote Java debugger. Even better, because the message has been recognized by the Eclipse Java development tools, you just need to click on the hyperlink associated with the message and Eclipse will magically create a remote Java debugger and connect it to your Quarkus application!

After clicking on the hyperlink, you will see nothing, because the remote Java debugger has been launched in the background and has connected to your Quarkus application. However, you can check it if you switch to the Debug perspective. To do that, open the Window -> Perspective -> Open Perspective -> Debug menu and select the Debug view. You should see something similar to this:

As you can see, the debugger is connected to the Quarkus application. That means you can add breakpoints in your application source code and, the next time you send a request, the Quarkus JVM will stop and the Eclipse IDE will jump to the code in which you set a breakpoint.


The post Create your first Quarkus project with Eclipse IDE (Red Hat CodeReady Studio) appeared first on Red Hat Developer Blog.


by Jeff Maury at May 09, 2019 10:45 AM

Eclipse Contributor Agreement 3.0

by waynebeaton at May 07, 2019 09:03 PM

The Eclipse Contributor Agreement (ECA) is an agreement made by contributors certifying the work they are contributing was authored by them and/or they have the legal authority to contribute as open source under the terms of the project license.

The Eclipse Foundation’s IP Team has been working hard to get the various agreements that we maintain between the Eclipse Foundation and the community updated. Our first milestone targeted the ECA, and we’re happy to report that a very significant number of our community members have successfully updated theirs. Today, we retired all of the rest of them. Specifically, we’ve revoked all ECAs that predate the ECA version 3.0.

We’re confident that we’ve managed to connect and update the ECA for everybody who still wants to be a contributor, so there should be no interruption for anybody who is actively contributing. If we missed you, you’ll be asked to sign the new ECA the next time you try to contribute. Or you can just re-sign it now.

We’ve made some changes with the new agreements that make contributing easier (but explaining harder). Committers who have signed the Individual Committer Agreement (ICA) version 4.0 or work for a company that has signed the Member Committer and Contributor Agreement do not require an ECA.

Contact emo_records@eclipse.org if you’re having trouble with an agreement.



Ways your company can support and sustain open source

by Chris Aniszczyk at April 30, 2019 01:49 PM

Note: this article was originally posted at https://opensource.com/article/19/4/ways-support-sustain-open-source

To make sure open source continues to thrive, we all need to find ways to sustain the communities and projects we depend on.

The success of open source continues to grow; surveys show that the majority of companies use some form of open source, 99% of enterprises see open source as important, and almost half of developers are contributing back. It’s important to note that companies aren’t contributing to open source for purely altruistic reasons. Recent research from Harvard shows that open source-contributing companies capture up to 100% more productive value from open source than companies that do not contribute back. Another research study concluded that countries adopting modern open source practices saw:

“a 0.6%–5.4% yearly increase in companies that use OSS, a 9%–18% yearly increase in the number of IT-related startups, a 6.6%–14% yearly increase in the number of individuals employed in IT related jobs, and a 5%–16% yearly decrease in software-related patents. All of these outcomes help to increase productivity and competitiveness at the national level. In aggregate, these results show that changes in government technology policy that favor OSS can have a positive impact on both global social value and domestic national competitiveness.”

In the end, there are many ways for a company or organization to sustain open source. It could be as simple as training your organization to contribute to open source projects you depend on or hiring engineers to work on open source projects. Here are eight ways your organization can contribute back to open source, based on examples in the industry.

Hire open source maintainers to work on open source

Companies with strategies to leverage open source often find the highest returns from hiring a maintainer of the projects they depend on most. It’s no surprise, if you look at the Who Writes the Linux Kernel report, that the top contributors are all employed by companies like ARM, Google, Facebook, Intel, Red Hat, Samsung, and more.

Having a maintainer (full time or part time) on your staff can help your organization learn how to work within the project community and enable prioritization of upstream contributions based on understanding of what the community is focused on. Hiring the maintainers also means that the project will have people with enough time to focus on the details and the rigor that’s needed for a project to be useful; think security reviews, bug cleanup, release management, and more. A more predictable and reliable upstream project can benefit many in your organization while also improving the overall project community. As a bonus, maintainers can also become advocates for your organization and help with recruiting too!

Develop an open source award program or peer bonus fund

It is common for companies to have internal employee recognition programs that recognize individuals who go above and beyond. As an example, Red Hat has a community award program through Opensource.com. Some other companies have expanded their recognition programs to include open source contributors. For example, Google has an open source peer bonus program that recognizes external people who have made exceptional contributions to open source.

Start an open source program office

Many internet-scale companies, including Amazon, Google, Facebook, Twitter and more, have established formal open source programs (colloquially called OSPOs) within their organizations to manage open source strategy along with the consumption and contribution of open source.

If you want to increase your contributions to open source, research has shown that companies with formal open source programs are more likely to contribute back. If you want to learn from organizations with formal open source programs, I recommend you read the TODO Group Open Source Program Guides.

Launch an open source fund

Some organizations contribute fiscally to the open source projects that are important to them. For example, Comcast’s Open Source Development Grants “are intended to fund new or continued development of open source software in areas of interest to Comcast or of benefit to the Internet and broadband industries.” This isn’t just for big companies; small companies have open source funds, too. For example, CarGurus launched an open source fund and Eventbot is supporting open source with a small percentage of its company revenue. Another interesting approach is what Indeed has done by democratizing the open source funding process with its employees.

Contribute a portion of your company equity to open source

Consider donating a portion of your organization’s equity to an open source project you depend on. For example, Citus Data recently donated one percent of its equity to the PostgreSQL community. This worked out nicely; Citus Data was acquired by Microsoft recently, so the PostgreSQL community will benefit from that acquisition, too.

Support and join open source foundations

There are many open source foundations that house open source projects your organization depends on, including the Apache Foundation, Eclipse Foundation, Cloud Native Computing Foundation (home of Kubernetes), GraphQL Foundation, Let’s Encrypt, Linux Foundation, Open Source Initiative (OSI), OpenStack Foundation, NodeJS Foundation, and more.


Fund and participate in open source internships or retreats

There are many open source internship programs that you can participate in and help fund. Google Summer of Code (GSoC) is the largest, and it requires mentorship from employees who work on open source projects as part of the program. Or you can sponsor internships for underrepresented minorities in open source through Outreachy and CommunityBridge.

Another approach is to host an open source retreat at your company. For example, Stripe hosts open source retreats to contribute to open source projects it depends on.

Include open source in your corporate philanthropy initiatives

If your organization has a corporate sustainability or philanthropic arm, consider working with that team to include open source as a part of its work. For example, Bloomberg has a software philanthropy budget for projects it depends on, from Git to Eclipse to Python and more. In the future, I hope to see more corporate sustainability and philanthropy efforts—like Pledge 1%—that focus on funding critical open source infrastructure.

Conclusion

In conclusion, giving back to open source is not only the right thing to do; according to research, it’s also good for your business. To make sure open source continues to thrive and is sustainable in the long run, we all need to ensure that companies find ways to sustain the open source communities they depend on.



Announcing Ditto Milestone 0.9.0-M2

April 29, 2019 04:00 AM

The second milestone of the upcoming release 0.9.0 was released today.

Have a look at the Milestone 0.9.0-M2 release notes for what changed in detail.

The main changes and new features since the last milestone 0.9.0-M1 are

  • rewrite of Ditto’s “search” service in order to use the same index and have the same query performance for API v1 and v2
  • several contributions in order to operate Eclipse Ditto on Microsoft Azure

Artifacts

The new Java artifacts have been published at the Eclipse Maven repository as well as Maven central.

The Docker images have been pushed to Docker Hub.


The Eclipse Ditto team



Specification Scope in Jakarta EE

by waynebeaton at April 08, 2019 02:56 PM

With the Eclipse Foundation Specification Process (EFSP), a single open source specification project has a dedicated project team of committers to create and maintain one or more specifications. The cycle of creation and maintenance extends across multiple versions of the specification, and so while individual members may come and go, the team remains, and it is that team that is responsible for every version of that specification that is created.

The first step in managing how intellectual property rights flow through a specification is to define the range of the work encompassed by the specification. Per the Eclipse Intellectual Property Policy, this range of work (referred to as the scope) needs to be well-defined and captured. Once defined, the scope is effectively locked down: changes to the scope are possible but rare, must be carefully managed, and require approval from the Jakarta EE Working Group’s Specification Committee.

Regarding scope, the EFSP states:

Among other things, the Scope of a Specification Project is intended to inform companies and individuals so they can determine whether or not to contribute to the Specification. Since a change in Scope may change the nature of the contribution to the project, a change to a Specification Project’s Scope must be approved by a Super-majority of the Specification Committee.

As a general rule, a scope statement should not be too precise. Rather, it should describe the intention of the specification in broad terms. Think of the scope statement as an executive summary or “elevator pitch”.

Elevator pitch: You have fifteen seconds before the elevator doors open on your floor; tell me about the problem your specification addresses.

The scope statement must answer the question: what does an implementation of this specification do? The scope statement must be aspirational rather than attempt to capture any particular state at any particular point-in-time. A scope statement must not focus on the work planned for any particular version of the specification, but rather, define the problem space that the specification is intended to address.

For example:

Jakarta Batch describes a means for executing and managing batch processes in Jakarta EE applications.

and:

Jakarta Message Service describes a means for Jakarta EE applications to create, send, and receive messages via loosely coupled, reliable asynchronous communication services.

For the scope statement, you can assume that the reader has a rudimentary understanding of the field. It’s reasonable, for example, to expect the reader to understand what “batch processing” means.

I should note that the two examples presented above are just examples of form. I’m pretty sure that they make sense, but defer to the project teams to work with their communities to sort out the final form.

The scope is “sticky” for the entire lifetime of the specification: it spans versions. The plan for any particular development cycle must describe work that is in scope; and at the checkpoint (progress and release) reviews, the project team must be prepared to demonstrate that the behavior described by the specifications (and tested by the corresponding TCK) cleanly falls within the scope (note that the development life cycle of specification project is described in Eclipse Foundation Specification Process Step-by-Step).

In addition to the specification scope required by the Eclipse Intellectual Property Policy and the EFSP, the specification project that owns and maintains the specification needs a project scope. The project scope is, I think, pretty straightforward: a particular specification project defines and maintains a specification.

For example:

The Jakarta Batch project defines and maintains the Jakarta Batch specification and related artifacts.

Like the specification scope, the project scope should be aspirational. In this regard, the specification project is responsible for the particular specification in perpetuity. Further, related artifacts, like APIs and TCKs, can be in scope without actually being managed by the project right now.

Today, for example, most of the TCKs for the Jakarta EE specifications are rolled into the Jakarta EE TCK project. But, over time, this single monster TCK may be broken up and individual TCKs moved to corresponding specification projects. Or not. The point is that regardless of where the technical artifacts are currently maintained, they may one day be part of the specification project, so they are in scope.

I should back up a bit and say that our intention right now is to turn the “Eclipse Project for …” projects that we have managing artifacts related to various specifications into actual specification projects. As part of this effort, we’ll add Git repositories to these projects to provide a home for the specification documents (more on this later). A handful of these proto-specification projects currently include artifacts related to multiple specifications, so we’ll have to sort out what we’re going to do about those project scope statements.

We might consider, for example, changing the project scope of the Jakarta EE Stable APIs (note that I’m guessing a future new project name) to something simple like:

Jakarta EE Stable APIs provides a home for stable (legacy) Jakarta EE specifications and related artifacts which are no longer actively developed.

But, all that talk about specification projects aside, our initial focus needs to be on describing the scope of the specifications themselves. With that in mind, the EE4J PMC has created a project board with issues to track this work and we’re going to ask the project teams to start working with their communities to put these scope statements together. If you have thoughts regarding the scope statements for a particular specification, please weigh in.

Note that we’re in a bit of a weird state right now. As we engage in a parallel effort to rename the specifications (and corresponding specification projects), it’s not entirely clear what we should call things. You’ll notice that the issues that have been created all use the names that we guess we’re going to end up using (there’s more information about that in Renaming Java EE Specifications for Jakarta EE).



Eclipse Scout 9 release: out now!

April 05, 2019 05:07 AM

The official Scout version 9 has been released as part of the Eclipse simultaneous release 2019-03 and is now publicly available. In this article we highlight some of the new features such as improved responsiveness, support for OpenJDK and more.

With the Eclipse simultaneous release 2019-03, the new Scout version 9.0 has been released. As usual, it contains a lot of changes. We are happy to share some of the highlights with you. The complete release notes can be found here.

Support for OpenJDK and newer Java Versions

Long requested and finally here: Scout now supports running on OpenJDK, and on Java versions up to 11. Note that this requires a bit of work on the side of developers, and Red Hat's OpenJDK version is not compatible out of the box due to missing elliptic curves. For more details, see the Java 11 section in the release notes and the migration guide.

Dark theme

You enjoy the dark theme of Eclipse, and want your Scout application users to enjoy the eye-friendliness of a dark theme too? Good news: A dark theme is now included with Scout and the widgets have been adjusted to blend in properly.

Improved Usability

To improve responsiveness when the window becomes narrow, group boxes can reduce their width by automatically moving their field labels to the top. Scrollbar handles should be easier to catch, while trees (treeboxes, navigation) scroll to show a better view of your data when you expand or collapse an entry. Don't forget to check out the improved options for menu bars and how you can control what happens if there isn't enough space for all the menus.

Denser layout option

Sometimes screen space is scarce, and the generously spaced elements of Scout will show only a small amount of data in these instances. If you need to display more data at once, you can switch to the "Dense" layout, which reduces the amount of whitespace and especially increases the number of table rows that are visible at the same time. Below you can see an example from our Contacts demo application.

New widgets: Mode Selector and Popup

Many widgets got small but awesome improvements – and for sure we have some new widgets too! For instance, check out the mode selector and the popup widget.

Widget 1: Mode Selector

Similar to a Radio Button Group, the new Mode Selector allows you to switch between predefined options, but with a "regular button"-like interface that is quite common on smartphones.

Widget 2: Popup

With the new "Popup" (also known as popover on some platforms), you can display additional information in an overlay. You have many options to embed widgets here - we can't wait to see what you do with it!

The Popup has the following features:

  • Take any widget you like and open it in a Popup by using the WidgetPopup.
  • Use any widget you like as an anchor and align the Popup around it.
  • Decide whether you want to point the Popup to the anchor by using the property withArrow.
  • Control the behavior of what should happen if there is not enough space to display the whole Popup using various properties.
  • Choose how the popup should react when the user clicks on the outside or on the anchor.

Try out all the widgets in our widget app!

Changed property lookup order

In many technologies such as Docker or Kubernetes, changing the configuration without having to create a new deployment is essential. To support this in Scout, the lookup order for Scout properties has been adjusted: It now allows overriding properties in the configuration file by using environment variables.
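A minimal sketch of such an env-var-first lookup order follows. Note this is a hypothetical helper, not Scout's actual API, and the key-to-variable-name mapping (dots to underscores, upper-cased) is an assumption for illustration:

```java
import java.util.Properties;

/** Hypothetical sketch of an env-var-first property lookup (not Scout's actual API). */
public class PropertyLookup {

    /**
     * Looks up an environment variable derived from the key (dots become
     * underscores, upper-cased — an assumed convention); if it is absent,
     * falls back to the value from the configuration file.
     */
    public static String get(String key, Properties configFile) {
        String env = System.getenv(key.toUpperCase().replace('.', '_'));
        return env != null ? env : configFile.getProperty(key);
    }
}
```

With this order, a Docker or Kubernetes deployment can override any configured value by setting an environment variable, without rebuilding the configuration file into a new image.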

Want to get to know Scout?

Visit our project page, make your first steps with Scout using the comprehensive documentation, and check out the Scout forum if you have questions around a particular topic!

Scout survey

Do you have two minutes to make Scout even better?




New Release: Python<->Java Remote Services

by Scott Lewis (noreply@blogger.com) at April 04, 2019 05:42 AM

There is a new release (2.9.0) of the ECF distribution provider for OSGi R7 Remote Services between Java and Python.

This release has:

  • An upgraded version of Py4j
  • An upgraded version of Google Protocol Buffers
  • Enhancements to the distribution provider based upon the improved Py4j and Protobuf libs

In this previous blog posting there are links to tutorials and examples showing how to use remote services between Python<->Java.

Python<->Java remote services can be consumed or implemented in either Java or Python.


Announcing Orion 20

by Mike Rennie at March 29, 2019 07:25 PM

We are pleased to announce the twentieth release of Orion, “Your IDE in the Cloud”. You can run it now at OrionHub, from NPM or download the server to run your own instance locally.

Once again, thank you to all committers and contributors for your hard work this release.

This release was focussed entirely on accessibility.



LiClipse 5.2.2 released

by Fabio Zadrozny (noreply@blogger.com) at March 27, 2019 01:46 PM

This version updates the main dependencies of LiClipse.

It's now based on:

  • Eclipse 4.11 (also named 2019-03)
  • PyDev (5.2.2) 
  • EGit (5.3.0)
As a note, I haven't updated this blog as often as I should (I end up putting more effort into announcing the PyDev updates), but LiClipse is always updated whenever PyDev is updated.

Enjoy!



JFace TableViewer sorting via Drag and Drop

by Christian Pontesegger (noreply@blogger.com) at March 25, 2019 12:31 PM

Recently I wanted to sort elements in a TableViewer via drag and drop and was astonished that I could not find existing helper classes or a tutorial for this fairly trivial use case. So here is one for you, in case you have the same use case.

Source code for this tutorial is available on github as a single zip archive, as a Team Project Set or you can browse the files online.

If you are just interested in the helper class, have a look at DnDSortingSupport.

Prerequisites:

To have something to work on I will start with a TableViewer containing some data stored in a java.util.List. It is a default TableViewer and therefore I expect you have something similar ready for your experiments.

Step 1: Add drag support

Drag and Drop support for SWT is implemented via DragSource and DropTarget instances. To define that we can drag data, we need to bind a DragSource to a Control.
	DragSource dragSource = new DragSource(tableViewer.getControl(), DND.DROP_MOVE);
	dragSource.setTransfer(LocalSelectionTransfer.getTransfer());
	dragSource.addDragListener(new DragSourceAdapter() {

		@Override
		public void dragStart(DragSourceEvent event) {
			event.doit = !tableViewer.getStructuredSelection().isEmpty();
		}

		@Override
		public void dragSetData(DragSourceEvent event) {
			if (LocalSelectionTransfer.getTransfer().isSupportedType(event.dataType)) {
				LocalSelectionTransfer.getTransfer().setSelection(tableViewer.getStructuredSelection());
				LocalSelectionTransfer.getTransfer().setSelectionSetTime(event.time & 0xFFFF);
			}
		}

		@Override
		public void dragFinished(DragSourceEvent event) {
			LocalSelectionTransfer.getTransfer().setSelection(null);
			LocalSelectionTransfer.getTransfer().setSelectionSetTime(0);
		}
	});

In line 1 we create the DragSource and define the allowed DnD operations. As we want to sort elements, we only allow DND.DROP_MOVE operations. Then we define the way data gets transferred from the DragSource to the DropTarget. As we stay within the same Eclipse application, we may use a LocalSelectionTransfer.

The first thing that happens on a drag is dragStart(). Technically the selection cannot be empty as we have to select something before we start the operation, so this implementation is merely to understand how we could deny the operation right from the start.

After the drop operation got accepted in the DropTarget (see below) we get asked to dragSetData() and define what data we are moving. setSelectionSetTime() is not needed by our DropTarget, so again this is for completeness only.

Finally we need to clean up after the operation is done.

Step 2: Add drop support

Implementation is similar to before, just now we need a DropTarget. Instead of writing our own DropTargetListener we may use a ViewerDropAdapter which covers most of the required work already.
	DropTarget dropTarget = new DropTarget(tableViewer.getControl(), DND.DROP_MOVE);
	dropTarget.setTransfer(LocalSelectionTransfer.getTransfer());
	dropTarget.addDropListener(new ViewerDropAdapter(tableViewer) {

		@Override
		public void dragEnter(DropTargetEvent event) {
			// make sure drag was triggered from current tableViewer
			if (event.widget instanceof DropTarget) {
				boolean isSameViewer = tableViewer.getControl().equals(((DropTarget) event.widget).getControl());
				if (isSameViewer) {
					event.detail = DND.DROP_MOVE;
					setSelectionFeedbackEnabled(false);
					super.dragEnter(event);
				} else
					event.detail = DND.DROP_NONE;
			} else
				event.detail = DND.DROP_NONE;
		}

		@Override
		public boolean validateDrop(Object target, int operation, TransferData transferType) {
			return true;
		}

		@Override
		public boolean performDrop(Object target) {
			int location = determineLocation(getCurrentEvent());
			if (location == LOCATION_BEFORE) {
				if (modelManipulator.insertBefore(getSelectedElement(), getCurrentTarget())) {
					tableViewer.refresh();
					return true;
				}
			} else if (location == LOCATION_AFTER) {
				if (modelManipulator.insertAfter(getSelectedElement(), getCurrentTarget())) {
					tableViewer.refresh();
					return true;
				}
			}

			return false;
		}

		private Object getSelectedElement() {
			return ((IStructuredSelection) LocalSelectionTransfer.getTransfer().getSelection()).getFirstElement();
		}
	});

dragEnter() is the first thing that happens on the drop side of DnD. The default implementation is already fine; our implementation additionally checks that the drag source is our current TableViewer. Further, we disable the selection feedback. The feedback visually shows the user whether we drop before an element, on the element, or after it. The ViewerDropAdapter already supports these kinds of feedback. Until bug 545733 gets fixed, the helper class contains a small patch to provide before/after feedback only. It does not make sense to drop on another element when we do sorting, right?

validateDrop() will be queried multiple times. We might check that we do not drop the table element on itself, but we skipped this check for the current example.

performDrop() finally implements the drop operation. To keep the helper class generic I used an interface that allows inserting elements before or after another element. An implementation of it needs to be passed to the helper class.

	public interface IModelManipulator {
		boolean insertBefore(Object source, Object target);

		boolean insertAfter(Object source, Object target);
	}

The helper class comes with an implementation for java.util.List, which you may reuse.
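Such a list-backed implementation might look roughly like this. This is a sketch with hypothetical naming, not necessarily the class shipped with the helper; the key point is that moving an element is a remove followed by an index-based re-insert:

```java
import java.util.List;

/** Sketch of a list-backed model manipulator mirroring IModelManipulator (hypothetical). */
class ListModelManipulator {
	private final List<Object> model;

	ListModelManipulator(List<Object> model) {
		this.model = model;
	}

	/** Moves source directly before target; returns false if the move is not possible. */
	public boolean insertBefore(Object source, Object target) {
		return move(source, target, 0);
	}

	/** Moves source directly after target; returns false if the move is not possible. */
	public boolean insertAfter(Object source, Object target) {
		return move(source, target, 1);
	}

	private boolean move(Object source, Object target, int offset) {
		// refuse no-op moves and unknown sources
		if (source == target || !model.remove(source))
			return false;
		int index = model.indexOf(target);
		if (index < 0)
			return false;
		model.add(index + offset, source);
		return true;
	}
}
```

After a successful move, performDrop() only needs to refresh the viewer, as the model list already reflects the new order.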



WTP 3.13 Released!

March 20, 2019 11:35 PM

The Eclipse Web Tools Platform 3.13 has been released! Installation and updates can be performed using the Eclipse IDE 2019-03 Update Site or through the Eclipse Marketplace. Release 3.13 is included in the 2019-03 Eclipse IDE for Enterprise Java Developers and Eclipse IDE for JavaScript and Web Developers, with selected portions also included in 8 other packages. Adopters can download the R3.13 update site itself directly and combine it with the necessary dependencies.

More news



RESTful OSGi R7 Remote Services with Jersey 2.28 or Apache CXF 3.3

by Scott Lewis (noreply@blogger.com) at February 26, 2019 07:39 AM

For some time, ECF has had remote service distribution providers that use the Jersey or the CXF implementation of the standard Java API for RESTful Web Services (JaxRS).

These distribution providers allow OSGi R7 Remote Services to be defined via JaxRS annotations and implemented by either Jersey 2.28 or CXF 3.3.

OSGi R7 Remote Services provides support for remote service discovery, dynamics, versioning, configuration and extension of the distribution providers, and asynchronous remote calls, as well as other features of the OSGi R7 Remote Services and Remote Service Admin specs.

This tutorial shows the use of OSGi Remote Services with these JaxRS distribution providers on Apache Karaf.

There is also a new version of the ECF Bndtools workspace template with example Bndtools projects showing the use of these distribution providers to define, configure, run and deploy RESTful OSGi R7 Remote Services with Bndtools 4.2+.


Eclipse Foundation Contributor Validation Service

February 25, 2019 11:20 PM

In an effort to provide a more robust solution to our Contributor Validation Service on GitHub, we created the Eclipse ECA Validation Github App that can be installed on any GitHub account, organization or repository.

The goal of this new GitHub App is to make sure that every contributor is covered by the necessary legal agreements in order to contribute to all Eclipse Foundation Projects including specification projects.

For example, all contributors must be covered by the Eclipse Foundation Contributor Agreement (ECA) and they must include a “Signed-off-by” footer in commit messages. When contributing to an Eclipse Foundation Specification Project, contributors must be covered with version 3.0.0 or greater of the ECA.
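The "Signed-off-by" footer does not have to be typed by hand: git's standard -s/--signoff option appends it for you, independent of the validation service. A quick demonstration in a throwaway repository:

```shell
# Demo in a throwaway repository: -s/--signoff makes git append a
# "Signed-off-by: Name <email>" footer taken from user.name/user.email.
cd "$(mktemp -d)"
git init -q .
git -c user.name="Jane Doe" -c user.email="jane@example.org" \
    commit -q --allow-empty -s -m "Fix typo in README"
git log -1 --format=%B
```

In day-to-day work you would simply run `git commit -s` in your clone; the validation then only has to check that the footer matches the GitHub account making the pull request.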

We created a GitHub App to improve the following problems:

  1. Reduce our maintenance burden by simplifying the installation process.
  2. Increase our API rate limit.
  3. Create a better experience for users by allowing the App to be installed on non-Eclipse project repositories such as the Eclipse IoT website and the Jakarta EE website.

Finally, we made some improvements to our “details” page. We added a “revalidate” button that allows Eclipse users to trigger a revalidation without pushing new changes to the pull request, and we added some useful links to allow users to return to GitHub or to sign the ECA.

We are planning to install our new Eclipse ECA Validation GitHub App on all our Eclipse Projects on GitHub this week, and I am hoping that these changes will improve the way our users contribute via GitHub.

If you are using our new GitHub App and you wish to contribute feedback, please do so on Bug 540694 - Github IP validation needs to be more robust.



OpenDaylight has Oomph!

by Michael Vorburger.ch (noreply@blogger.com) at February 05, 2019 09:11 PM

When I started to dig into the Open Daylight (ODL) code base in 2016, I almost got a severe case of irreversible stomach flu when I read on its Getting Started with Eclipse Wiki page that: "Eclipse is no longer able to compile OpenDaylight. The reason is three Maven plugins which are used by OpenDaylight but are not integrated into Eclipse: maven-plugin-plugin, karaf-maven-plugin and maven-antrun-plugin. This means you will always have Eclipse compile errors in the project (this could go to up to 100000 errors). You can use Eclipse for editing easily but to compile the project you need to open a terminal window and do the compilation according to the instructions from." (I may have removed this sentence from that Wiki page by the time you read this.)

The certainly well-meant advice from a new work colleague at Red Hat that "we all ended up just using JetBrains IntelliJ IDEA" didn't quite do it for me, particularly for contributing to the leading OPEN future-of-networking kind of community, so... what to do?! As any real hacker would presumably agree, there really was only one solution for me: take a pause from reading books and watching videos to come up to speed on Software Defined Networking (as fascinating as those IP packets in Wireshark can be from the inside... or are they?), and get down to fixing whatever was preventing the OpenDaylight code base from being worked on neatly within Eclipse!

In 2016, the right way to go about this was to use the Eclipse.org Oomph project, which also powers the new Eclipse Installer now used by millions via eclipse.org/downloads, and to create a project catalog with Oomph setup models for the roughly 80 individual sub-projects which are part of OpenDaylight.

It took me a good amount of (great fun!) time to figure out how to change a number of things in various pom.xml files in ODL (including some M2E lifecycle mappings; initially in the pom.xml itself, but then I realized that when using Oomph, M2E lifecycle mappings are probably best kept in workspace preferences), to sort out a weird Maven/Eclipse Checkstyle LICENSE header check issue (contributing a clearer log message about it), to fix the build of another project, and to work around a small but blocking M2E corner case, but ultimately it now all works rather nicely!

This setup model pre-installs a number of nice-to-have and useful Eclipse plugins which you may find of interest for your own projects too, if you haven't heard of them yet; these include M2E Code Quality, used to automatically configure the Checkstyle, FindBugs & PMD Eclipse plugins in line with their respective Maven settings.

See https://github.com/vorburger/opendaylight-eclipse-setup/ for the setup model and accompanying videos.


Is Mognyan half price at 50% off? Is there also a ¥100 trial sample? Introducing a trick so you never miss a campaign

by Sato at December 28, 2018 10:50 AM

A half-price campaign for Mognyan!? And there was a ¥100 trial, too? I didn't know and bought Mognyan at the regular price, so I was a little surprised... Admin: Well, my cat eats it with relish and her stools have improved, so ...

Copyright © 2019 モグニャンの口コミ解説 All Rights Reserved.



[101 reviews selected] A thorough evaluation based only on the real reputation and criticism voiced by Mognyan users

by Sato at December 28, 2018 10:50 AM

100% additive-free and grain-free: "Mognyan" is a cat food that pursues feline health and palatability through the abundant animal protein of whitefish, which makes up 63% of its ingredients. It is rated highly across all sorts of sites, but in reality ...




Looking back on the first two simultaneous releases and forward to a full roadmap for 2019

December 21, 2018 01:38 PM

Since the last major release, Scout 8 with Eclipse Photon, we have polished quite a few things in Scout, and have successfully shipped two releases since the switch to the Eclipse Simultaneous Release Train. What has changed in these releases? What are the Maven archetypes for Scout, and where will Scout go in 2019? Keywords are "Dark Theme", "OpenJDK" and "ECMAScript 6".

Simultaneous Releases September and December

With the September release, the first simultaneous release of the Eclipse platform, and the latest release in December, we mainly worked on the presentation and performance of Scout. With support for virtual scrolling, tile grids are now only rendered by the browser when their elements come into the visible area. This results in a significant performance gain in certain browsers, e.g. Internet Explorer.
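
The core idea behind virtual scrolling is a simple calculation, independent of Scout's actual implementation: from the scroll offset, viewport height, and element height, derive the small window of rows that needs to be rendered at all. A minimal sketch in plain Java, where all names and numbers are illustrative and assume fixed-height rows:

```java
public class VirtualScrollSketch {

    // Index of the first row intersecting the viewport.
    static int firstVisibleRow(int scrollTop, int rowHeight) {
        return scrollTop / rowHeight;
    }

    // Index of the last row intersecting the viewport, clamped
    // to the total row count.
    static int lastVisibleRow(int scrollTop, int viewportHeight,
                              int rowHeight, int rowCount) {
        int last = (scrollTop + viewportHeight - 1) / rowHeight;
        return Math.min(last, rowCount - 1);
    }

    public static void main(String[] args) {
        // 10000 rows of 25px each, a 400px viewport, scrolled to 2500px:
        // only rows 100..115 need to exist in the DOM.
        System.out.println(firstVisibleRow(2500, 25));            // 100
        System.out.println(lastVisibleRow(2500, 400, 25, 10000)); // 115
    }
}
```

Rendering only this window (typically plus a small overscan buffer) keeps the DOM size constant no matter how many rows the grid holds, which is where the performance gain in slower browsers comes from.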

Updated release notes

Maven Archetypes for Scout

Two Maven archetypes have been available for some time now. They provide a complete application architecture, from persistence to the user interface, as a Maven project. We used jOOQ for the persistence layer; other persistence frameworks can be integrated analogously. These application blueprints show how a complete end-to-end application works with Scout. Furthermore, we have added some additional functionality, such as favorites and user administration, to the archetype for the classic Scout framework.
You can find the ScoutJS archetype on Maven-Central: scout-hellojs-app
The archetype for Scout classic is available on GitHub: ScoutJooq

The Maven Archetype for Scout makes it quick and easy to build complete business applications.

Outlook for 2019

We have set ourselves many goals for 2019. Scout 9 and 10 are the next major development steps. Various usability improvements will be implemented, some new widgets will be added for both the classic Scout and ScoutJS. A highlight is the dark theme, which can also be activated via a configuration attribute.

The new dark theme in Scout 9

Technically, Scout 9 will be compatible with Java 11 and will now also support OpenJDK. In the course of 2019, the Scout JavaScript client code will not only be put on a new technological basis with ECMAScript 6; we are also examining new tooling support, a possible conversion to TypeScript, and much more.


Want to get to know Scout?

Visit our project page, make your first steps with Scout using the comprehensive documentation, and check out the Scout forum if you have questions around a particular topic!

Scout Survey

Do you have two minutes to make Scout even better?

Complete the survey (2 minutes)



Slides from JavaFX-Days Zürich on e(fx)clipse APIs

by Tom Schindl at December 05, 2018 08:57 AM

If you could not attend my talk at the JavaFX-Days Zürich yesterday, or you did and want to recap what was presented, here are the slides.

I enjoyed the conference and I hope you did as well. See you next year!


