Yakindu Statechart Tools 2.4.0 released!

by Andreas Mülder (noreply@blogger.com) at October 31, 2014 02:54 PM

We released Yakindu Statechart Tools 2.4.0 today! It contains a lot of bugfixes and improvements as well as new features.

Installation
This release requires Eclipse Luna. You can install SCT 2.4.0 from our update site:
http://updates.yakindu.org/sct/luna/releases/ or you can download a full Eclipse zip package from our download page.

New and Noteworthy

Here is a summary of the improvements worth mentioning:

1. Toggle documentation toolbar entry added
We added a menu item to the editor's toolbar to toggle between documentation and the formal expression language. This allows switching modes for the whole diagram.

2. C/C ++ generator feature for inner class function visibility added
innerFunctionVisibility (String, optional): This parameter changes the visibility of inner functions and variables. By default, "private" visibility is used. It can be changed to "protected" to allow function overriding in a class that inherits from the generated state machine base class.
Example configuration:

feature GeneratorOptions {
innerFunctionVisibility = "protected"
}

3. C/C++ generator feature to change operation callback generation behavior added
staticOperationCallback (Boolean, optional): If set to "true", the callback functions for statechart operations are declared static and are called statically by the state machine code.
Example configuration:
 
feature GeneratorOptions {
staticOperationCallback = true
}

4. Model and Diagram compare feature
This release contains the beta version of a diff/merge viewer for statechart diagrams. It integrates seamlessly with the Eclipse Team API to allow diffing and merging of different revisions as well as comparing models with the local history.

compare_example

Note that this feature is still beta. There are currently two known problems (this and this) with the editor that will be fixed for the next release.

Bugfixes

We also fixed a bunch of bugs reported via our User Group:

Toggle subregion alignment does not work
Transition into substate does not recognize parent's history context
Model cannot be simulated if operations in named interfaces are used
C++ generator always generates actions with a lower-case letter in source file
CoreFunction methods for long types incomplete
Operations of other Statecharts (in the same folder) are in scope
Class cast exception if java provided custom operation is not executable
Simulating Operations With Custom Java Code is not working as intended
Diagram corrupted when moving transition label

Thanks to all bug reporters!

by Andreas Mülder (noreply@blogger.com) at October 31, 2014 02:54 PM

Brand new BrowserSim / CordovaSim features

by ibuziuk at October 31, 2014 12:49 PM

In this article, I'm happy to introduce the new BrowserSim / CordovaSim features that are available in the new JBoss Developer Studio 8.0.0.GA. Basically, I want to focus on:

  • JavaFX web engine

  • Eclipse console logging

  • Dev Tools Debugger

JavaFX web engine

BrowserSim and CordovaSim now offer a JavaFX web engine as an alternative to SWT WebKit. Originally there was only one web engine, SWT WebKit, which unfortunately has several drawbacks. For example, using SWT WebKit on Windows requires an Apple Safari installation (it provides the WebKit engine), and Safari for Windows is pretty much obsolete by now - its last update dates back to May 9, 2012. Moreover, SWT WebKit doesn't support the Debugger API. Due to these limitations it was decided to add JavaFX web engine support. The web engine can be changed in Menu → Preferences → Settings Tab → Browser Engine.

JavaFX web engine
If you want to use the JavaFX web engine, you need to run BrowserSim / CordovaSim against Oracle JDK version 7 or higher (version 8 is recommended).

Eclipse console logging

Eclipse console logging is available for both the SWT WebKit and JavaFX web engines. The output of the main JavaScript console functions (console.log, console.info, console.warn, console.error) is now displayed in the Eclipse console.

Eclipse Console Logging

Dev Tools Debugger

Dev Tools Debugger is available only for the JavaFX web engine. One can connect the debugger to BrowserSim / CordovaSim (Right click → Debug → Dev Tools…) and step through the code, introspect variables and so forth.

Dev tools Debugger

Demo

Here is a short demo video with the new features:

All these features are also available for CordovaSim.

BrowserSim standalone

For those who don't use Eclipse / JBoss Developer Studio, there is a standalone mode of BrowserSim. However, only the SWT WebKit web engine is supported there (we are planning to add JavaFX support in the next releases - JBIDE-18703). More details about standalone BrowserSim can be found in the following blog post.

BrowserSim FAQ

The BrowserSim FAQ can be found here. If you weren't able to find the answer, just post your question in the comments to this blog post.

Known issues

  • Dev Tools Debugger doesn't work properly with Oracle JDK 8u20. I do hope it will be fixed in the upcoming JDK releases - RT-38918, JBIDE-18526

  • The JavaFX shipped with Oracle JDK 7 has no localStorage support. Fortunately, this is fixed in JDK 8 - RT-29584

  • The JavaFX shipped with Oracle JDK 7 has no WebSocket support, which is vital for LiveReload functionality, so LiveReload doesn't work with Oracle JDK 7 and the JavaFX web engine. Fortunately, this is fixed in JDK 8 - RT-14947

  • JavaFX HTML5 date and time inputs do not function properly - RT-34974, JBIDE-17054

Conclusion

We are trying our best to make our tools as good as possible, and user feedback is what we are seeking now. We look forward to hearing your comments, remarks and proposals. Please comment below about the features you would like to see in upcoming releases!
Have fun!

Ilya Buziuk
@ilyabuziuk


by ibuziuk at October 31, 2014 12:49 PM

W-JAX 2014 in Munich

by ekkescorner at October 31, 2014 09:24 AM

In the first week of November the next W-JAX starts in Munich. As always, I developed the BlackBerry 10 conference app

Conference2Go JAX

You can download this APP for FREE from BlackBerry World.

download_c2g-jax

There's a 10.2 release running on all devices and a 10.3 release running on 10.3 devices: BlackBerry Passport, soon BlackBerry Classic, then followed by an update for all devices.

Here are some screenshots from Passport.

w01_c2g

List of Sessions:

w02_sessions

Session Details:

w03_session_details

List of Speakers:

w04_speakerlist

Speaker Details:

w05_speaker_details

List of Tracks. Tap on a Track to see the Sessions.

w06_tracks

Special Days:

w07_special_days

IoT Day – Sessions on Wednesday:

w08_sessions_iotday_mi

Overview by Time (vertical) and Rooms (horizontal):

w09_overview

Session selected from Overview and 'peeking back':

w10_over_details_peek

See where the Room is located:

w11_rooms

Some conference Infos:

w12_conference_infos

Find the Hotel on BlackBerry Map:

w13_map

Some Infos about me ;-)

w14_ekkes_corner

This is only a short snapshot. There’s also

  • Calendar integrated (get notified before next session starts)
  • Notebook integrated (BlackBerry Remember, Evernote, Office365)
  • Twitter integrated
  • Foursquare integrated to check-in
  • and more

I have many ideas for what else could be done – but I'm developing my Conference2Go apps only in my spare time.

BlackBerry 10 Development

Want to learn more about BlackBerry 10 Cascades Development? Take a look here.

MobileTechCon (MTC) 2015

At MTC (MobileTechCon) in March 2015 I’ll speak about Secure Workspaces and Bluetooth in Business Apps.

ekke_mtc_15

See You at W-JAX 2014 in Munich

See you there or next week at W-JAX.

Send me a message @ekkescorner if you want to discuss BlackBerry 10 Development or take a look at the Passport.

 


Filed under: BB10, Blackberry, Cascades, mobile

by ekkescorner at October 31, 2014 09:24 AM

EclipseCon Europe and book codes

October 31, 2014 08:00 AM

Unfortunately I wasn't able to make EclipseCon Europe this year in Ludwigsburg. It sounds like it was a great conference, with the announcement of Eclipse Cloud Development and the new release of Orion.

To celebrate, I have managed to arrange a deal with my publishers for 25% off the retail price of Eclipse Plug-in Development for Beginners and Mastering Eclipse Plug-ins. Head to one of the two URLs and use the code to get it:

Codes are valid until 2nd November 2014.


October 31, 2014 08:00 AM

Announcing Orion 7.0

by John Arthorne at October 30, 2014 08:44 PM

Orion 7.0 is now available! Check it out right now on OrionHub, or download your own server. This release brings some significant changes, including rewriting several parts of the UI and adding capabilities in others. Here is a quick overview of what’s new:

A new Git UI

The evolution of our new Git UI that we began in Orion 6.0 is now complete. All major Git capabilities are consolidated on one page, with a two-pane layout similar to the Orion editor. The left hand side shows a history timeline, and has operations for manipulating branches such as fetch, merge, squash, push, etc. A new Sync button has been added which performs all the most typical operations to get your local clone synchronized with the chosen reference point (merge or rebase, then push). Fetching occurs automatically every time you visit the Git page. The right hand side is used to display commit details, create new commits, and perform operations on commits such as reset, tag, cherry-pick, etc. Overall, this new layout makes much better use of available space, is more touch friendly, and simplifies the most common Git workflow. The underlying implementation has also had some significant performance work, making many operations much snappier than the previous release.

r7-git-ui

A new help system

The 1990s called and wanted their help system back, so we implemented a brand new one using modern web technologies. The new help system is a simple HTML5 and JavaScript page with overview and detail panes. Help content is authored in Markdown (naturally, using the Orion Markdown editor) and rendered into HTML for display. Ditching our last use of JSP technology has also made the Orion Java server lighter and easier to consume.

r7-help-ui

A new global search UI

The separate global search page has been replaced with a fly-out that appears directly in the editor. This reduces context switching and allows you to more seamlessly integrate searching into your editing workflow. All the same search and replace capabilities are available in this new UI. Read more about it here.

r7-search

Language tooling enhancements

Many improvements have been made to CSS and JavaScript language tooling in this release. Read all the details here.

Editor hover help

The Orion editor has initial support for hover help to aid in code exploration. If you hover over a function, the rendered documentation for that function is shown. In Orion 8.0 we will hook this more deeply into the static analysis engine to provide hover help in many more places.

r7-hover-help

New authentication types

Support has been added for authenticating with Google+ and GitHub. Read more about it here.

Cloud foundry tooling

This release has many new features for developing and deploying to Cloud Foundry. Orion now has a Cloud Foundry manifest editor with syntax highlighting, error and warning reporting, and content assist. The deploy dialog now supports launching applications with missing or incomplete manifests, with the option to persist the missing manifest content during deployment.

r7-cf-editor

Multi-instance server

The Orion Java server has been architected to support multiple concurrent server instances behind a reverse proxy. This architectural change enables scenarios such as failover, load balancing, and zero-downtime upgrades of the server. This work involved more comprehensive use of file locking to avoid contention between server instances, and making sure only one server performs background tasks such as search indexing to avoid duplicated work. The server also now ensures separation of instance-private disk state from content that needs to be shared between instances. Stay tuned for more documentation on how to configure and deploy multi-instance Orion clusters.

It has been a busy four months! Our next release, Orion 8.0, will be coming at the end of February, 2015. Until then, enjoy the cloud coding!


by John Arthorne at October 30, 2014 08:44 PM

e(fx)clipse 1.1 – New features – Improved internationalization support

by Tom Schindl at October 30, 2014 03:48 PM

e(fx)clipse 1.1.0 will be released in less than a week, so I'll once more go through the enhancements and features we developed in the 1.1 timeframe.

Improved internationalization support

As part of this release we've added a first set of runtime APIs making it easier to develop localized applications. The following APIs are now available to you:

  • AbstractTextRegistry: allows you to connect localization receivers (most likely your UI controls) to the localization provider so that they update automatically when switching the language
  • Formatter: a set of formatters for numbers, java.util.Date and java.time.TemporalAccessor that automatically update to the current application locale
  • MessageFormatter: allows you to do things similar to java.text.MessageFormat but is a bit more powerful ;-)

AbstractTextRegistry

Let's first look into AbstractTextRegistry and its usage – I won't go into the technical details of how we arrived at this API because that has already been discussed in another blog post.

The first thing you need is a class holding your translation texts:

public class MyMessages {
   public String mySimpleMessage;
}

And a properties file named MyMessages.properties (plus another one for each language you want to support, e.g. MyMessages_de.properties, …)

mySimpleMessage = This is a simple message

And another class which is a subclass of AbstractTextRegistry

@Creatable
public class MyMessagesRegistry extends AbstractTextRegistry<MyMessages> {
  public String mySimpleMessage() {
     return getMessages().mySimpleMessage;
  }

  @Inject
  public void updateMessages(MyMessages messages) {
    super.updateMessages(messages);
  }
}

And then the final usage in your UI is as simple as:

public class MyUIPart {
   @Inject
   MyMessagesRegistry r;

   @PostConstruct
   public void createUI(BorderPane p) {
     Label l = new Label();
     r.register(l::setText, r::mySimpleMessage);
   }
}

Formatter

Another thing you often need in your application is formatters, e.g. to format numbers, dates, … so we've created a basic interface:

public interface Formatter<T> {
  public String format(T object, String format);
}

and added some useful basic implementations – DateFormatter, NumberFormatter and TemporalAccessorFormatter – which you can access through dependency injection:

public class MyUIPart {
   @Inject
   NumberFormatter numberFormatter;

   @PostConstruct
   public void createUI(BorderPane p) {
     Label l = new Label();
     l.setText(numberFormatter.format(20_000, "#,##0.00"));
   }
}
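
A DateFormatter can be injected and used the same way. Here is a minimal sketch, assuming DateFormatter accepts a java.text.SimpleDateFormat-style pattern (the pattern syntax is not spelled out above):

public class MyDateUIPart {
   @Inject
   DateFormatter dateFormatter;

   @PostConstruct
   public void createUI(BorderPane p) {
     Label l = new Label();
     // assumption: the format string mirrors SimpleDateFormat patterns
     l.setText(dateFormatter.format(new java.util.Date(), "dd.MM.yyyy"));
   }
}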

MessageFormatter

While the above formatters allow you to format single objects like dates, numbers, … this one allows you to format messages similar to what you are used to from java.text.MessageFormat, but there are some differences that make it a lot more powerful.

public class MyUIPart {
   @Inject
   NumberFormatter numberFormatter;

   @PostConstruct
   public void createUI(BorderPane p) {
     Label l = new Label();
     String message = "The final amount is ${amount,number,#,##0.00}";
     
     Map<String,Object> data = 
       Collections.singletonMap("amount", 20_000);
     Map<String,Formatter<?>> formatters = 
       Collections.singletonMap("number",numberFormatter);

     l.setText(
       MessageFormatter.create( 
         data::get, formatters::get ).apply(message) );
   }
}

So you'll notice that we are not using indices like in MessageFormat but a key, and the other difference is that you are free to add formatters as you need them to do more complex things.
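
For comparison, here is what the index-based equivalent would look like with the plain JDK MessageFormat; the output shown assumes an English locale:

import java.text.MessageFormat;

public class JdkMessageFormatExample {
  public static void main(String[] args) {
    // The argument is referenced by index (0) instead of a key like "amount",
    // and the number sub-format is hard-wired into the pattern.
    String message = MessageFormat.format(
        "The final amount is {0,number,#,##0.00}", 20_000);
    System.out.println(message); // "The final amount is 20,000.00" in an English locale
  }
}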

Let the 3 APIs work together for the common good

So while the APIs alone already provide some benefits over the lower-level JDK APIs, their real power can be seen when you let them work together.

If we come back to our initial AbstractTextRegistry example, we often have something like this stored in our translation texts:

# ...
myAmountMessage = The final amount is ${amount,number,#,##0.00}

which results in another field in MyMessages

public class MyMessages {
  // ...
  public String myAmountMessage;
}

and three (!) more methods in MyMessagesRegistry:

@Creatable
public class MyMessagesRegistry extends AbstractTextRegistry<MyMessages> {
  // ...
  public String myAmountMessage() {
    return getMessages().myAmountMessage;
  }

  @Inject
  NumberFormatter numberFormatter;

  public String myAmountMessage(Number amount) {
     Map<String,Object> data = 
       Collections.singletonMap("amount", amount);
     Map<String,Formatter<?>> formatters = 
       Collections.singletonMap("number",numberFormatter);
     return MessageFormatter.create( 
         data::get, formatters::get ).apply(myAmountMessage());
  }

  public Supplier<String> myAmountMessage_supplier(Number amount) {
    return () -> myAmountMessage(amount);
  }
}

which results in your UI code looking like this:

public class MyUIPart {
   @Inject
   MyMessagesRegistry r;
   
   @PostConstruct
   public void createUI(BorderPane p) {
     Label l = new Label();
     r.register(l::setText, r.myAmountMessage_supplier(20_000));
   }
}

That's it for the first runtime feature. The next blog post will show you how you can make Eclipse generate all the localization artifacts for you from a single resource instead of writing them by hand.



by Tom Schindl at October 30, 2014 03:48 PM

Standalone BrowserSim is back!

by kmarmaliykov at October 30, 2014 11:12 AM

In this article I’m happy to say that standalone BrowserSim is back.

Standalone BrowserSim allows you to use BrowserSim without firing up Eclipse. Unfortunately, only the SWT.WEBKIT engine is available in standalone BrowserSim, so it requires Safari to be installed on Windows or WebKitGTK 1.2.0 on Linux. Nevertheless, all BrowserSim features are available there.

You can read about BrowserSim features here. For more information about BrowserSim see the BrowserSim FAQ.

standalone-bs

How can I try it?

Standalone BrowserSim is available on the artifacts tab of the downloads page. You can try the stable build or, if you want the latest and greatest, nightly builds are available too.

You can also build your own standalone BrowserSim from source. To do so:

  • ensure you have Java (1.6+), Ant (1.5+) and Maven (3.1+) installed.

  • execute the following commands:

    $ git clone https://github.com/jbosstools/jbosstools-browsersim
    $ cd jbosstools-browsersim/products
    $ mvn clean package
    $ cd browsersim-standalone/target/application
You can run browsersim.jar using the following command:

  • Windows, Linux:

    java -jar browsersim.jar [$start_page]
  • Mac OS:

    java -XstartOnFirstThread -jar browsersim.jar [$start_page]
To run standalone BrowserSim on Linux with a specific GTK version, add SWT_GTK3=1 (GTK 3) or SWT_GTK3=0 (GTK 2) before the run command.

by kmarmaliykov at October 30, 2014 11:12 AM

The Internet of Things Will be Built on Open Source

by Mike Milinkovich at October 28, 2014 07:50 AM

This post was originally published on the Bosch Connected World Blog.

The Internet of Things is poised to become the next wave of technology to fundamentally change how humanity works, plays, and interacts with their environment. It is expected to transform everything from manufacturing to care for the elderly. The internet itself has — in twenty short years — dramatically transformed society. This scale of change and progress is about to be repeated, in perhaps even larger and more rapid ways. New ventures will emerge, existing businesses will be disrupted, and everywhere the incumbents will be challenged with new technologies, processes, and insight.

It is important to recognize that the internet is successful because it is one of the most radically open technology platforms in history. The fundamental protocols of the internet were invented in the 1970s and put in the public domain in the late 1980s. The world-wide web was invented at the European Organization for Nuclear Research (CERN), which made it free for everyone. In subsequent years, open source technologies such as Linux, the Apache web server and the Netscape / Firefox browser ensured that the basic infrastructure for the web is based on open source. The technology behemoths of our day such as Google, Amazon, Facebook and Twitter are only able to scale their infrastructure and their business models by relying on open source. In short: our modern digital world is built on open source software.

The Internet of Things will be implemented using open source software platforms. There is utterly no alternative to this outcome. Anyone who says otherwise is fooling themselves.

There are four reasons why this is true.

  1. Scale: Depending on which analyst you prefer, the next decade will see between 50 and 70 billion sensors being deployed on Earth. This will require tens, if not hundreds of millions of routers, gateways, and data servers. There is simply no way to achieve those levels of scale without relying on open source software to drive the vast majority of that infrastructure. Any other approach will simply be unaffordable, and will be out-competed by the economies of scale achievable by the open source alternatives.
  2. Freedom to Innovate: Open source software allows permission-less innovation. In particular, open source allows innovation by integration, where developers create new and novel systems by combining freely available open source components. This approach is somewhere between difficult and impossible for proprietary software stacks, where the vendor has to drive all of the invention.
  3. Interoperability: I am a big believer in open standards, and firmly believe that they will be an integral part of the IoT. However, it has been proven time and again that the best possible way to have a new technology achieve rapid adoption is by combining open standards with a robust open source implementation. OSS implementations provide an easy adoption path and near-perfect interoperability with others, and reduce the cost of entering the market. In a world where developers are becoming one of the most precious of commodities, it makes no sense to waste them on implementing a standard. They should be focused on building software which provides the firm with product-differentiating features that customers value.
  4. Developers: Lastly, recruiting and enabling developers is a key, and often overlooked, part of any IoT strategy. By the end of this decade the number of IoT developers needs to grow from a few hundred thousand to over four million. Today's developers demand open source solutions and tools. Even a decade ago, technology acquisition was largely a top-down process. Now technology choices are largely made bottom-up, by developers experimenting with open source components and integrating them into a solution.

For these reasons, IoT is rapidly becoming a strategic area of focus for the Eclipse community. From three projects two years ago, the Eclipse IoT community has grown to seventeen projects implementing protocols, device gateway frameworks, vertical frameworks, and tools for the needs of IoT developers.

Bosch has been an active member of the Eclipse Foundation since March 2010. Its initial focus was on the Automotive Working Group, which has been working on tools and methods for automotive embedded systems. Its subsidiary Bosch Software Innovations (BoschSI) is one of the world's thought leaders in driving open source platforms for the Internet of Things. They have recognized its importance, and with contributions such as the Eclipse Vorto project are helping to make it a reality. The Eclipse Foundation values the partnership that we have with the Bosch development teams, and we look forward to a long and fruitful collaboration.

The digital world we have today is built on open source technologies. The Internet of Things will be too. Come join the Eclipse IoT community to help make that happen.


Filed under: Foundation, Open Source, Strategy

by Mike Milinkovich at October 28, 2014 07:50 AM

New Tasktop Data product launched with Tasktop 4.0, unlocks Agile, ALM and DevOps

by Mik Kersten at October 28, 2014 06:46 AM

From Galileo’s telescope to the scanning electron microscope, scientific progress has been punctuated by the technology that enabled new forms of measurement. Yet in the discipline of software delivery, robust measurement has been elusive. When I set out on a mission to double developer productivity, I ended up spending a good portion of my PhD first coming up with a new developer productivity metric, and then even more time implementing a tool for measuring it (now a core part of Eclipse Mylyn). Over the past few years, while working with the largest software delivery organizations in the world, I’ve noticed almost every one of them going through a similar struggle. All are looking for the best ways to scale or improve their software delivery via enterprise Agile frameworks and tools, DevOps automation technologies, and end-to-end ALM deployments. The problem is that nobody is able to reliably measure the overall success of those efforts because we are missing the technology infrastructure that allows for measurement across software delivery disciplines, methods and tools.

With the launch of Tasktop Data we have a single goal: to unlock the data flowing through the software lifecycle. New measurement ideas have recently arrived on the market, ranging from the metrics backing the Scaled Agile Framework (SAFe) to methods for tracking cycle time through the DevOps pipeline originating from Sam Guckenheimer. There's also no shortage of tools out there to allow you to visualize such data, ranging from generic Business Intelligence (BI) tools to innovative new DevOps-specific reporting such as the IBM Jazz Reporting Service. The problem that's plaguing any large-scale software delivery organization is that there's simply no way to get at the end-to-end data to drive those metrics and reporting tools. Database-driven approaches such as ETL no longer work because databases do not contain the complex business logic of modern Agile/ALM/DevOps tools, and are additionally inaccessible for SaaS solutions. Single-tool approaches, such as Scrum or CI metrics, only work for one stage of the software lifecycle and cannot deliver end-to-end analytics such as cycle time. We need a new measurement technology in order to take the next step in improving how software is built. That new technology is Tasktop Data.

Tasktop has created two key innovations that make Tasktop Data possible. The first is our semantically rich data model of the end-to-end software lifecycle. This is at the core of the Tasktop products and allows us to map and synchronize artifacts across the various tools and levels of granularity that define software delivery. The second is the massive "integration factory" that allows us to test all of the versions of all the leading Agile, ALM and DevOps tools that we support. With Tasktop Data, we are leveraging this common model and all our integrations, allowing organizations to stream the data that defines their software lifecycle to the database & reporting solution of their choice. What makes this new technology even more profound is that we are exposing the models within the Tasktop platform, enabling software lifecycle architects to author the models that will drive their reports. The end result is clean lifecycle data flowing in real time into your reporting tool of choice. Running Enterprise Agile DevOps analytics and metrics that were previously impossible is now easy. Check out the demo below for a start-to-finish setup of Tasktop Data that connects Rally and HP ALM to Tableau in minutes. Then imagine this working for your entire tool chain, with your reporting solution of choice.

Tasktop 4.0 Connectors

Tasktop Data is being released as part of Tasktop 4.0, which includes significant updates across our entire product portfolio. The most notable is the fact that we're releasing 6 new Sync connectors (BMC Remedy, GitHub, IBM Bluemix, Polarion ALM, Serena Dimensions RM and Tricentis Tosca) in addition to bringing Tasktop Dev up to speed with the latest developer tools (e.g., Eclipse/Mylyn Luna, Jenkins and Gerrit, as well as commercial tools that leverage Dev such as HP Agile Manager).

We’re thrilled that the past 7 years of creating the de facto integration layer for software delivery is now materializing in a whole new way of measuring and improving how software is built. This is just the start of a new journey, as the most interesting aspects of data will arise from the way that our customers and partners leverage it in order to create unique and valuable insights in the software delivery process. For more information on how you can become a part of that journey:


by Mik Kersten at October 28, 2014 06:46 AM

Mozilla pushes - September 2014

by Kim Moir (noreply@blogger.com) at October 27, 2014 09:11 PM

Here's September 2014's monthly analysis of the pushes to our Mozilla development trees.
You can load the data as an HTML page or as a json file.


Trends
Surprise! No records were broken this month.

Highlights
12267 pushes
409 pushes/day (average)
Highest number of pushes/day: 646 pushes on September 10, 2014
22.6 pushes/hour (average)

General Remarks
Try has around 36% of the pushes and Gaia-Try comprises about 32%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 22% of all the pushes.

Records
August 2014 was the month with the most pushes (13,090 pushes)
August 2014 has the highest pushes/day average with 620 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
August 20, 2014 had the highest number of pushes in one day with 690 pushes






by Kim Moir (noreply@blogger.com) at October 27, 2014 09:11 PM

ECF Remote Services Architecture

by Scott Lewis (noreply@blogger.com) at October 27, 2014 08:01 PM

With the many updates and improvements in ECF's Remote Services releases, along with the tutorials and documentation added recently, it was time to update the Remote Services Architecture diagram.

by Scott Lewis (noreply@blogger.com) at October 27, 2014 08:01 PM

Let's Practice Sirius: BOF Session at EclipseCon

by Fred Madiot (noreply@blogger.com) at October 27, 2014 02:19 PM

Like last year at EclipseCon, the Obeo team will organize a BOF session about Sirius.
https://www.eclipsecon.org/europe2014/bof-session/lets-practice-sirius

It will take place Wednesday the 29th in Silchersaal.


(A BOF (Birds of a Feather) is an informal discussion group for members interested in a particular topic.)

If you already know Sirius, you can come with your first Sirius modeler. But if you just want to discover this technology, you can also come with your own work (an Ecore metamodel, an Xtext editor, etc). Finally, you can also come empty-handed to start from scratch with an example provided by the Obeo team.

During the BOF session, 10 USB keys will be provided containing Sirius bundles (Linux, Mac and Windows) and the resources corresponding to a basic tutorial.

If you plan to attend this BOF and don't want to lose time installing Sirius, you can already find these bundles and resources here: https://filetransfer.obeo.fr/browse.php?parent=38896


by Fred Madiot (noreply@blogger.com) at October 27, 2014 02:19 PM

Release Engineering in the classroom

by Kim Moir (noreply@blogger.com) at October 27, 2014 01:34 PM

The second week of October, I had the pleasure of presenting lectures on release engineering to university students in Montreal as part of the PLOW lectures at École Polytechnique de Montréal.    Most of the students were MSc or PhD students in computer science, with a handful of postdocs and professors in the class as well. The students came from Montreal area universities and many were international students. The PLOW lectures consisted of several invited speakers from various universities and industry spread over three days.

View looking down from the university

Université de Montréal administration building

École Polytechnique building. Each floor is painted a different colour to represent a different layer of the earth. So the ground floor is red, the next orange and finally green.

The first day, Jack Jiang from York University gave a talk about software performance engineering.
The second day, I gave a lecture on release engineering in the morning. The rest of the day we did a lot of labs to configure a Jenkins server to build and run tests on an open source project. Earlier that morning, I had set up m3.large instances on Amazon that the students could ssh into to conduct their labs. Along the way, I talked about some release engineering concepts. It was really interesting and I learned a lot from their feedback. Many of the students had not been exposed to release engineering concepts, so it was fun to share the information.

Several students came up to me during the breaks and said "So, I'm doing my PhD in release engineering, and I have several questions for you", which was fun. Also, some of the students were making extensive use of the code bases of Mozilla or other open source projects, so that was interesting to learn more about. For instance, one research project was looking at the evolution of multi-threading in the Mozilla code base, and another student was conducting Bugzilla comment sentiment analysis. Are angry bug comments correlated with fewer bug fixes? Looking forward to the results of this research!

I ended the day by providing two challenge exercises to the students that they could submit answers to. One exercise was to set up a build pipeline in Jenkins for another open source project. The other challenge was to use the Jenkins REST API to query the Apache projects' Jenkins server and present some statistics on their build history. The results were pretty impressive!
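
As a rough illustration of that second challenge, a minimal Java sketch might simply fetch the job list from a Jenkins server's JSON API; the server URL here is an assumption, and Jenkins' /api/json endpoint accepts a tree parameter to limit the returned fields:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class JenkinsApiSketch {
  public static void main(String[] args) throws Exception {
    // Assumed server URL; /api/json?tree=jobs[name,color] returns job names and statuses
    URL url = new URL("https://builds.apache.org/api/json?tree=jobs[name,color]");
    StringBuilder json = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream(), "UTF-8"))) {
      String line;
      while ((line = in.readLine()) != null) {
        json.append(line);
      }
    }
    // A real solution would parse the JSON with a library and aggregate
    // build-history statistics; here we only print the raw payload.
    System.out.println(json);
  }
}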

My slides are on GitHub and the readme file describes how I set up the Amazon instances so that Jenkins and some other required packages were installed beforehand. Please use them and distribute them if you are interested in teaching release engineering in your classroom.

Lessons I learned from this experience:
  • Computer science classes focus on writing software, but not necessarily on building it in a team environment. So complex branching strategies are not necessarily a familiar concept to some students. Of course, this depends on the previous work experience of the students and the curriculum at the school they attend. One of the students said to me "This is cool. We write code, but we don't build software".
  • Concepts such as building a pipeline for compilation, correctness/performance/regression testing, packaging and deployment can also be unfamiliar. As I said in the class, the work of the release engineer starts when the rest of the development team thinks they are done :-)
  • When I was giving the lecture and people pointed out typos or asked for clarification, I'd always update the repository and ask the students to pull a new version. I really liked this because my slides were in reveal.js and I didn't have to export a new PDF and redistribute it. Instant bug fixes!
  • Add bonus labs to the material so students who are quick to complete the exercises have more to do while the other students complete the original material.  Your classroom will have people with wildly different experience levels.
The third day there was a lecture by Michel Dagenais of Polytechnique Montréal on tracing heterogeneous cloud instances using a tracing framework for Linux. The Eclipse Trace Compass project also made an appearance in the talk; I always like to see Eclipse projects highlighted. One of his interesting points was that none of the companies that collaborate on this project wanted to sign a bunch of IP agreements so they could collaborate behind closed doors. They all wanted to collaborate via an open source community and source code repository. Another thing he emphasized was that students should make their work available on the web, via GitHub or other repositories, so they have a portfolio of work available. It was fantastic to see him promote the idea of students being involved in open source as a way to help their job prospects when they graduate!

Thank you Foutse and Bram for the opportunity to lecture at your university! It was a great experience! Also, thanks Mozilla for the opportunity to do this sort of outreach to our larger community on company time!

Also, I have a renewed respect for teachers and professors. Writing these slides took so much time: many long nights for me, especially in the days leading up to the class. Kudos to all of you who teach every day.

References
The slides are on GitHub and the readme file describes how I set up the Amazon instances for the labs.

by Kim Moir (noreply@blogger.com) at October 27, 2014 01:34 PM

Beyond the Code 2014: a recap

by Kim Moir (noreply@blogger.com) at October 27, 2014 01:33 PM

I started this blog post about a month ago and didn't finish it because, well, life is busy.

I attended Beyond the Code last September 19. I heard about it several months ago on Twitter. A one-day conference celebrating women in computing, in my home town, with a fantastic speaker line-up? I signed up immediately. In the opening remarks, we were asked for a show of hands to indicate how many of us were developers, in design or product management, or students, and there was good representation from all those categories. I was especially impressed by the number of students in the audience; it was nice to see so many of them taking time out of their busy schedules to attend.

View of the Parliament Buildings and Chateau Laurier from the MacKenzie street bridge over the Rideau Canal
Ottawa Conference Centre, location of Beyond the Code
 
There were seven speakers, three workshop organizers, a lunchtime activity, and a panel at the end. The speakers were all women. The speakers were not all white women or all heterosexual women. There were many young women, not all industry veterans :-) like me. To see this level of diversity at a tech conference filled me with joy. Almost every conference I go to is very homogeneous in the makeup of the speakers and the audience. To see ~200 tech women at a conference, and 10% men (thank you for attending :-), was quite a role reversal.

I was completely impressed by the caliber of the speakers. They were simply exceptional.

The conference started out with Kronda Adair giving a talk on Expanding Your Empathy. One of the things that struck me from this talk was how she described that everyone lives in a bubble and doesn't see things that others do because of privilege. She gave the example of how privilege is like a browser, and colours how we see the world. For a straight white guy, a web page looks great when he's running the latest Chrome on Mac OS X. For a middle-class black lesbian, the web page doesn't look as great because it's like she's running IE7; there is less inherent privilege. For a "differently abled trans person of color" the world is like running IE6 in quirks mode. This was a great example. She also gave a shout-out to the Ascend Project which she and Lukas Blakk are running in the Mozilla Portland office. Such an amazing initiative.

The next speaker was Bridget Kromhout, who gave a talk about Platform Ops in the Public Cloud. I was really interested in this talk because we do a lot of scaling of our build infrastructure in AWS and I wanted to see if she had faced similar challenges. She works at DramaFever, which she described as Netflix for Asian soap operas. The most interesting thing to me was that they used all AWS regions to host their instances, because they wanted their users to be able to download from a region as geographically close to them as possible. At Mozilla, we only use a couple of AWS regions, but more instances than DramaFever, so this was an interesting contrast in the services used. In addition, the monitoring infrastructure they use is quite complex. Her slides are here.

I was going to summarize the rest of the speakers but Melissa Jean Clark did an exceptional job on her blog.  You should read it!

Thank you Shopify for organizing this conference. It was great to meet so many brilliant women in the tech industry! I hope there is an event next year too!


by Kim Moir (noreply@blogger.com) at October 27, 2014 01:33 PM

Eclipse Cloud Development: The FAQ

by Mike Milinkovich at October 27, 2014 12:00 PM

This morning, the Eclipse Foundation announced a new industry initiative focused on building tooling platforms for the cloud. The team prepared an FAQ, but I thought it might be helpful to publish it here as well.

What is being announced?

Eclipse is announcing the formation of a new Top Level Project, “Eclipse Cloud Development” to create the technologies, platforms, and tools necessary to enable the delivery of highly integrated cloud development and cloud developer environments. The ECD charter is available here. This TLP initially combines Eclipse Orion, Eclipse Che, and Eclipse Flux with SAP signaling that Dirigible will become part of the initiative. The Eclipse board of directors voted to approve this new project on September 17th, 2014, and a new project management committee has been formed. Additionally, Codenvy is announcing the creation of project Che, which contains their IP that makes up the Codenvy SDK, Codenvy IDE, and 50 plug-ins that provide programming language, source code control, deployment, and build / debugger support for cloud development. Codenvy will be contributing 30 full time resources to the ongoing development of Eclipse projects, the development of the community around cloud development, and the promotion of the Ecosystem that makes up the ECD. Codenvy has become a strategic developer member of Eclipse, and taken a board of directors seat with the foundation.

Why is Eclipse creating a top level project dedicated to cloud development?

There are over 22 million professional developers, and more than 99% of all development is still done on the desktop. The cloud has proven benefits to eliminate configuration overhead and improve visibility and control for organizations. The transition to cloud development away from the desktop has begun. Over the past 5 years, there have been 100s of global initiatives to work on development tools or underlying infrastructure necessary to enable development entirely in the cloud. With three projects at Eclipse already working on cloud development, and more coming soon, the time has come to focus the industry’s efforts on enabling this transition. By creating a Top Level Project, the combined projects are better able to concentrate their resources, more easily align on technological and market objectives, and create a streamlined path for onboarding additional ecosystem projects, developers, and committers to cloud development.

Which projects will be part of the Cloud Development Platform top level project?

Eclipse Orion, Che, and Flux. SAP Dirigible is also planned for submission and will become part of the TLP.

What is the Che project?

You can read the Eclipse Che project proposal here.

What is the difference between Orion and Che?

Orion & Che:

  • Provide a runtime for hosting, managing and scaling developer environments as Web apps.
  • Provide an SDK for packaging, loading, and running tooling plug-ins.
  • Provide a default set of plug-ins related to programming languages, source code, deployment, and other elements that are part of the developer workflow.
  • Provide a default cloud IDE that combines a set of plug-ins, the SDK, and runtime together to offer a hosted developer experience.
  • Have ways (or plans for ways) to connect desktop IDEs (like Eclipse and IntelliJ) directly into the hosted services

This commonality is also what makes them different. Orion has been authored with Node.js and JavaScript, to create a system optimized for creating, testing, and deploying interpreted language systems entirely using the latest Web technologies. To achieve this goal, it has adopted a unique architecture that is optimized around the types of applications that are meant to be built with it. With this, Orion provides many advancements around JavaScript and other interpreted languages. Che has been authored in Java and follows many of the architectural principles used by the Eclipse RCP and Java Development Tools projects. Che provides an architecture and design optimized for compiled languages along with in-depth extensions related to the Java ecosystem like maven, Java debugging, ant, and so on. The Che architecture was created to minimize the effort required to port Eclipse plug-ins to work within Che for a Web experience. While plug-ins must be rewritten in Che interfaces, the plug-in lifecycle and tooling support for Che plug-ins is designed to make the transition tax as low as possible. Additionally, the Che runtime model supports developing non-Web applications including mobile, desktop, console, and API-oriented applications that do not have a native HTML output. Additionally, the Che project also has the scope to build out infrastructure supporting a developer environment PAAS, for running developer environments with large numbers of concurrent developers, builds, runs, and projects on a unified set of hardware, while providing enterprise behavioral and access controls. The PAAS infrastructure that is part of the Che project will be designed to support any type of editor, cloud IDE, or development runtime, enabling scale out of those services. So, Orion can work within it as well.

What is Dirigible and how does that relate to Che and Orion?

Dirigible extends the concepts promoted by Orion and Che to deliver a rapid application development framework, fully hosted in the cloud. Dirigible's abstractions make developing Web services, and the clients that consume them, more structured (through scaffolding), more rapid, and easier to maintain. Rapid development frameworks have commonly been deployed to support database and packaged application development, and Dirigible brings those concepts to cloud developer environments. All three projects provide hosted developer environments, but our commonality and agreement on a common vision means that there will be nice reuse and alignment between the projects. All of the projects agree on core principles relating to providing developer services as atomic microservices, decoupling the clients (IDEs) that consume those services from the services themselves, supporting a broad range of clients (whether our browser IDEs or desktop IDEs connected over a bus), and providing a consistent way to provision, share and scale hosted developer environments together.

When should a developer user choose Orion and when should they choose Che?

They should choose both :). But more seriously, while a developer should try both products for all kinds of applications, if a developer is doing a JavaScript project, they should experience Orion. If their project has Java language extensions, Eclipse plug-ins, mobile development, or other non-Web application development, then they should give Che a try. Both Orion and Che will be working on a variety of common infrastructure projects that will make it easy for projects to migrate between Che and Orion, along with the Che / Orion IDEs operating on a common set of enterprise infrastructure.

How will Orion, Che, Flux, and Dirigible collaborate together?

We have identified a number of initiatives that the projects will align on. These include:

  • Finding ways to have Che and Orion plug-ins work within each other's systems.
  • Standardizing on the synchronous REST API model that allows browser clients to communicate with server-side systems. A sample set of APIs can be seen at docs.codenvy.com.
  • Using Flux to standardize on the asynchronous communication model between various cloud systems, along with enabling browser and desktop clients to have decoupled access to cloud systems.
  • Collaborating on a model that allows for on-demand, authentication-less environment creation through URLs, similar to the effect seen with Codenvy Factory, as documented at docs.codenvy.com/user. It will be possible to generate temporary environments on any system supporting the format.
  • Aligning on common underlying enterprise infrastructure that will seamlessly take a single-server cloud IDE package and allow it to be deployed within an enterprise system that provides multi-tenancy, elasticity, and security.
  • Incorporating the Dirigible configuration-stored-as-a-model approach to create an abstraction to support a variety of rapid development approaches and frameworks.

What is the future of cloud development, and your roadmap here?

Cloud development's benefits are significant to individual developers, development teams, and enterprise organizations. There are huge configuration taxes that exist today due to environment creation, environmental technical debt, environmental tribal knowledge, hardware interoperability issues, and the general interoperability issues that come with making an ecosystem of tools and plug-ins work together. All of these problems can be dramatically reduced with a cloud development platform that automates the full lifecycle of the developer environment and its supported tooling.

To achieve this vision, much more than a cloud IDE is required. A cloud development platform (CDP) takes a cloud IDE and turns it into an orchestration system for developer environments. The power of the CDP is that it can automate many of the workflows that make up the tasks carried out by developers, QA, product managers, documentation specialists, and devops professionals. Why should each developer only have a single developer environment that lives indefinitely (statically) on their desktop, when – with full automation – every developer and system can have a unique developer environment for each task they carry out during the day? Whether it's fixing a bug, investigating a legacy branch, working on a new feature, or exploring a new technology library, a cloud development platform can auto-provision a specialized environment for each task, on demand, with no downloads or configuration required by the developer. The CDP can then provide numerous services to make the developer's workflow shorter and less error prone. These include advanced editors, dependency analysis, automated unit testing, debugging, building, packaging, and source code management integration. The CDP can take these operations at scale and make them run faster than they would normally run on an individual desktop, while also requiring less hardware for a large organization than performing these functions on desktops, since the cloud can be operated on a dense hardware cluster. Finally, with development centralized, devops and development leaders can better support a population of developers by incorporating best practices, behavioral access controls, and monitoring tools to ensure IP compliance and maximum productivity of individuals. Imagine being able to direct an extra 20GB of RAM to a developer that urgently needs it for compilation, or creating a special set of sandboxes to support white-room development of secure IP, or allowing an instructor to create a programming exam accessible to his students for exactly one hour with controls to detect any plagiarism or cheating. With a CDP, all of these scenarios are simply configurable. Net, net: faster development, cheaper development, and development done with compliance and security.

Our roadmap to support this vision includes:

  • Advancing each TLP project through its normal roadmap and evolution processes.
  • Recruiting new projects that provide critical developer services in the cloud into the ECD TLP.
  • Aligning core plug-in, editor, and API / interface models between the projects.
  • Combined ecosystem, evangelism, and business development efforts to bring more developers, projects, and plug-ins to the ECD model, including special efforts and attention to migrating existing Eclipse plug-ins.
  • An additional project, created by Codenvy, that focuses on the enterprise-scale elements of an ECD; Flux / Orion / Che will work to standardize deployment of each system within the enterprise infrastructure.
  • An effort to study analytics, events, and the BI of development workflows.

Why is Codenvy donating their core IP?

Codenvy believes in openness, transparency, ecosystem, and Eclipse. The cloud development problem is massive and cannot be solved individually. By working with the Eclipse Foundation, Codenvy is able to concentrate its resources with others that have a similar value system and vision.

When will the Che code be available? For which “parts” of the platform?

The initial Che code is available under EPL at github.com/codenvy/sdk. We are working within the Eclipse process to get through Incubation now.

What kinds of products, applications can we expect to see as a result of this collaboration?

You can expect to see new IDEs, new plug-ins, new enterprise technologies to support executing these systems at scale, and you can expect to see integration bridges that bind ECD into other core development platforms like Jenkins and Jira.

How can I contribute to and participate in the Che project?

Get started by cloning the source and getting active at Eclipse. https://github.com/codenvy/sdk


Filed under: Foundation, Open Source

by Mike Milinkovich at October 27, 2014 12:00 PM

EclipseCon Europe 2014

by ekkescorner at October 27, 2014 09:52 AM

This week EclipseCon Europe 2014 takes place again in Ludwigsburg. And here's the native BlackBerry 10 Conference2Go app. You can download the app for FREE from BlackBerry World.

econ_bbworld

Some Screenshots:

Econ02

Econ03

Econ04

Econ05

Econ06

Econ07

Econ08

Econ09

Econ10

Econ11

Want to know more about the new OS 10.3 and how to support the BlackBerry Passport?

Here’s a blog series.

Have fun at EclipseCon Europe 2014!


Filed under: BB10, Blackberry, Cascades, Eclipse, EclipseCon

by ekkescorner at October 27, 2014 09:52 AM

Eclipse Announces Cloud Development Industry Initiative

October 27, 2014 08:00 AM

Codenvy, IBM, Pivotal and SAP Lead New Eclipse Cloud Development Top-level Project

October 27, 2014 08:00 AM

Enabling Spring in Scout applications

by kthoms at October 27, 2014 07:48 AM

Today I am attending the first Scout User Day 2014 in Ludwigsburg, which is aligned with EclipseCon Europe 2014 starting tomorrow. Yesterday we had a pre-event dinner with some attendees and the organizers at the Rossknecht restaurant. I got into a chat with Nejc Gasper, who will give a talk titled "Build a Scout backend with Spring" today. I was a bit surprised when he told me he had not managed to get Spring's classpath scanning working yet. Since we are doing this in our application, I think it is worth writing down now what we had to do to get it working. The goal in our application is primarily to use Spring as a dependency injection container, since the customer uses Spring in all their other Java-based applications, too, and wanted us to do so as well.

Spring Configuration

The Spring configuration files are located in the folder META-INF/spring of the *.client, *.shared and *.server projects. In these configuration files, we mainly activate classpath scanning:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:beans="http://www.springframework.org/schema/beans" xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd">

    <context:annotation-config />
    <context:component-scan base-package="com.rhenus.fl" />
    <!--  http://docs.spring.io/spring/docs/4.0.x/spring-framework-reference/html/validation.html#core-convert-Spring-config -->
    <beans:bean id="conversionService" class="org.springframework.core.convert.support.DefaultConversionService" />
</beans:beans>
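With classpath scanning enabled, every class below the configured base package that carries a Spring stereotype annotation is registered as a bean automatically, without any XML bean definition. A minimal sketch of such a class (the package and class name are hypothetical, chosen to sit below the scanned base package):

package com.rhenus.fl.example;

import org.springframework.stereotype.Component;

// Found by <context:component-scan base-package="com.rhenus.fl" /> because it is
// annotated with @Component and lives below the scanned base package.
@Component
public class ClientSettings {

  public String defaultLocale() {
    return "de_DE";
  }
}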

The next important thing is to copy the files spring.handlers, spring.schemas and spring.tooling into the META-INF folder. The files can be found in the META-INF directory of the bundle org.springframework.context. These files map the Spring XML namespaces and schema locations to the classes that handle them; Spring resolves them through the ClassLoader it is given, and in an OSGi environment it typically cannot see the copies shipped inside the Spring bundles. Without copying them, you will get errors like this while loading the Spring configuration:

Caused by: 
org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/context]|Offending resource: URL [bundleresource://9.fwk1993775065:1/META-INF/spring/fl_client.xml]|
	at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85)
	at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80)
	at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:316)
	at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1421)
	at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1414)
	at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:187)
	at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:141)
	at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:110)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:391)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:335)
	at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:303)
	at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:180)
	at org.springframework.context.support.GenericXmlApplicationContext.load(GenericXmlApplicationContext.java:116)

Bundle Activator

The Spring configuration files are loaded in the bundle Activator classes of the three “main” Scout projects (client/shared/server). The Activator can then also be used to access the ApplicationContext. We use a GenericXmlApplicationContext to initialize the context from the XML configuration above. One important detail is that this context must use the ClassLoader of the Activator; otherwise you will again get the error mentioned in the section above. The Activator class then looks as follows:

import java.net.URL;

import org.eclipse.core.runtime.Plugin;
import org.osgi.framework.BundleContext;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.GenericXmlApplicationContext;
import org.springframework.core.io.UrlResource;

public class Activator extends Plugin {

  // The plug-in ID
  public static final String PLUGIN_ID = "com.rhenus.fl.application.client";
  public static final String SPRING_CONFIG_FILE = "META-INF/spring/fl_client.xml";

  // The shared instance
  private static Activator plugin;
  private ApplicationContext ctx;

  @Override
  public void start(BundleContext context) throws Exception {
    super.start(context);
    plugin = this;
    init(context);
  }

  @Override
  public void stop(BundleContext context) throws Exception {
    plugin = null;
    super.stop(context);
  }

  public static Activator getDefault() {
    return plugin;
  }

  private void init(BundleContext context) {
    URL url = getClass().getClassLoader().getResource(SPRING_CONFIG_FILE);
    UrlResource resource = new UrlResource(url);

    // Force the context to use this bundle's ClassLoader so that the
    // spring.handlers/spring.schemas files copied into META-INF are found.
    GenericXmlApplicationContext xmlContext = new GenericXmlApplicationContext() {
      @Override
      public ClassLoader getClassLoader() {
        return Activator.class.getClassLoader();
      }
    };
    xmlContext.load(resource);
    xmlContext.refresh();
    ctx = xmlContext;
  }

  public ApplicationContext getContext() {
    return ctx;
  }
}
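
Since the Activator exposes the ApplicationContext, code that is not itself Spring-managed can still look up beans from it. A minimal sketch, reusing the hypothetical ClientSettings component from above:

import org.springframework.context.ApplicationContext;

public final class SpringLookupExample {

  private SpringLookupExample() {
  }

  // Fetch a Spring-managed bean through the context held by the bundle Activator.
  public static ClientSettings clientSettings() {
    ApplicationContext ctx = Activator.getDefault().getContext();
    return ctx.getBean(ClientSettings.class);
  }
}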

Service Factory

In order to use dependency injection in Scout services, the services themselves must be instantiated through the Spring ApplicationContext. The default implementation is of course not aware of Spring, so we need to customize it. Unfortunately, we have to copy the class org.eclipse.scout.rt.server.services.ServerServiceFactory: we only need to exchange a single line in the method updateInstanceCache(), where the service is instantiated, but this method is private in Scout. The line

m_service = m_serviceClass.newInstance();

is replaced by

m_service = getContext().getBean(m_serviceClass);

Since we have to provide different ApplicationContexts in the different plugins, we put this into the abstract class AbstractSpringAwareServerServiceFactory (full code):

public abstract class AbstractSpringAwareServerServiceFactory implements IServiceFactory {

  // m_serviceLock, m_service, m_serviceClass and LOG are the fields copied
  // unchanged from org.eclipse.scout.rt.server.services.ServerServiceFactory
  // (see the linked full code); only updateInstanceCache() is shown here.

  private void updateInstanceCache(ServiceRegistration registration) {
    synchronized (m_serviceLock) {
      if (m_service == null) {
        try {
          // CUSTOMIZING BEGIN
          // m_service = m_serviceClass.newInstance();
          m_service = getContext().getBean(m_serviceClass);
          // CUSTOMIZING END
          if (m_service instanceof IService2) {
            ((IService2) m_service).initializeService(registration);
          } else if (m_service instanceof IService) {
            ((IService) m_service).initializeService(registration);
          }
        } catch (Throwable t) {
          LOG.error("Failed creating instance of " + m_serviceClass, t);
        }
      }
    }
  }

  // CUSTOMIZING BEGIN
  protected abstract ApplicationContext getContext();
  // CUSTOMIZING END

}

The concrete classes implement the method getContext() by delegating to the bundle Activator:

public class ServerServiceFactory extends AbstractSpringAwareServerServiceFactory {

  /**
   * @param serviceClass
   */
  public ServerServiceFactory(Class<?> serviceClass) {
    super(serviceClass);
  }

  @Override
  protected ApplicationContext getContext() {
    return Activator.getDefault().getContext();
  }

}

plugin.xml

The service factory implemented above must now be used to create the services. This is done in the plugin.xml file:

<service
 factory="com.rhenus.fl.application.server.services.ServerServiceFactory"
 class="com.rhenus.fl.tmi.server.tmirln010.TMIRLN010Service"
 session="com.rhenus.fl.application.server.ServerSession">
</service>

Use Dependency Injection

Now we are finally able to use dependency injection via javax.inject.Inject in Scout services.

import org.springframework.stereotype.Component;
import javax.inject.Inject;
...

@Component
@InputValidation(IValidationStrategy.PROCESS.class)
public class TMIRLN010Service extends AbstractTMIRLN010Service {
  @Inject
  protected ConversionService conversionService;
  ...
}
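
The injected bean can then be used inside the service methods. The following method is purely a hypothetical illustration (it is not part of the original service); it uses the DefaultConversionService bean defined in the XML configuration above:

@Component
@InputValidation(IValidationStrategy.PROCESS.class)
public class TMIRLN010Service extends AbstractTMIRLN010Service {
  @Inject
  protected ConversionService conversionService;

  // Hypothetical example method: Long -> String conversion is supported
  // out of the box by DefaultConversionService.
  public String formatOrderNumber(Long orderNumber) {
    return conversionService.convert(orderNumber, String.class);
  }
}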

Go!

If everything is correct, you will now see the following lines in the console when starting the Scout application:

Okt 27, 2014 8:45:28 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from URL [bundleresource://9.fwk1993775065:1/META-INF/spring/fl_client.xml]
Okt 27, 2014 8:45:29 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions
INFO: Loading XML bean definitions from URL [bundleresource://10.fwk1993775065:1/META-INF/spring/fl_shared.xml]
Okt 27, 2014 8:45:29 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing com.rhenus.fl.application.shared.Activator$1@b40d694: startup date [Mon Oct 27 08:45:29 CET 2014]; root of context hierarchy
Okt 27, 2014 8:45:29 AM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor 
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
Okt 27, 2014 8:45:29 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing com.rhenus.fl.application.client.Activator$1@292f062b: startup date [Mon Oct 27 08:45:29 CET 2014]; root of context hierarchy
Okt 27, 2014 8:45:29 AM org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor 
INFO: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring


by kthoms at October 27, 2014 07:48 AM

Sirius Gallery: 20 cool and noteworthy examples

by Fred Madiot (noreply@blogger.com) at October 24, 2014 02:11 PM

We have added a new Gallery page today on the Sirius web site:

http://eclipse.org/sirius/gallery.html

This page presents 20 cool and noteworthy modelers already created with Sirius in various domains: Systems Engineering, Software Development, Business Configuration, etc.


Configuration of plastic products manufacturing

Capella Systems Engineering workbench

Modeling of Android mobile applications

Configuration of home automation systems


The list is not complete, since we don't know about (or cannot publish) everything that has been done with Sirius, but it will soon be extended with several other examples we have in store.

So, if you have also built something that could be published on this page, you can submit a short description and some nice screenshots here:

https://bugs.eclipse.org/bugs/show_bug.cgi?id=448492

by Fred Madiot (noreply@blogger.com) at October 24, 2014 02:11 PM

@Flight Conference 2014

by Chris Aniszczyk at October 24, 2014 12:08 PM

I had a great time at @Flight, our first mobile developer conference at Twitter, where we announced Fabric. As part of the conference, I helped organize a small run in the morning to start things off; it was nice to see about 20 people show up to run a 5K (OK, it was really more like an 8K with hills).

At the conference, I had the opportunity to talk briefly in the Lightning Theater about some of the open source technology behind tweets, in the context of what happens behind the scenes of a typical API call.

I hope the audience left with some new knowledge and appreciation of what helps power those tweets they see every day. I posted the slides on Slideshare if anyone is interested. I look forward to us doing this next year; it's about time that we do more developer-focused events at Twitter.


by Chris Aniszczyk at October 24, 2014 12:08 PM