November 16, 2018
After a week in Shanghai all I can say is wow, I’m truly humbled by the open source and cloud native community in China that showed up to support our first conference in the country. I first want to thank the amazing CNCF events team and most importantly Janet Kuo and Liz Rice, who acted as tireless program chairs for this first-time event:
— Liz Rice @ KubeCon CloudNativeCon (@lizrice) November 15, 2018
I’ve had the fortunate/unfortunate experience of traveling to China 6 times in the last 12 months, and it’s been an experience learning about the open source community here. It’s also been hilarious learning all the new tools like WeChat, DiDi, Ofo and so on to navigate life in China, but I can save that for another time (DiDi jail is the worst). The CNCF has grown from a few members in China to about 40, which represents a little more than 10% of the CNCF’s total membership. China is the third largest contributor to CNCF projects (in terms of contributors and committers) after the U.S. and Germany:
— Chris Aniszczyk (@cra) November 16, 2018
Huawei and PingCAP lead the way for Chinese companies with 34,000+ and 32,000+ contributions respectively, and are the fifth and sixth largest contributors overall. We also now host three CNCF projects that were effectively born in China: Dragonfly (Alibaba), Harbor (VMware China) and TiKV (PingCAP). I’m proud of the work the CNCF has done to facilitate project learnings across the world, as China-scale open source is a trend that will continue to grow (I plan on writing more about this soon as I finalize my thoughts).
I’m also proud to award JD.com our first End User Award in China for their cultivation of cloud native technology; they run one of the largest bare-metal Kubernetes and Vitess deployments in the world and have been very forthcoming in sharing the lessons from that experience:
— Jeffrey Borek (@jeffborek) November 15, 2018
Here are some of my other favorite tweets and moments from the conference:
— Phil Estes (@estesp) November 15, 2018
It's also my first bilingual conference and the experience has been really cool! pic.twitter.com/sHVG3Axr5R
— Vicki Cheung (@vmcheung) November 15, 2018
— Brad Topol (@bradtopol) November 15, 2018
Listening to the first #KubeCon + #CloudNativeCon keynote of the day: PhD Julia Han and Xin Zhang talking about how they're helping the State Grid of China solve difficult problems with #MachineLearning using @kubeflow pic.twitter.com/q95h82BnSw
— Kubernetes Finland (@KubernetesFin) November 15, 2018
I love seeing many VMware coworkers within the group of Harbor contributors being recognized in the #KubeCon China keynote. Harbor is the first China originated CNCF project, just advanced from sandbox to incubating status pic.twitter.com/QwLqiPtahZ
— Steve Wong (@cantbewong) November 14, 2018
Had an awesome conversation about binary authorization projects in-toto [https://t.co/959vCU6XOE] and Grafeas & Kritis [https://t.co/lC5X9ScGK4] at #KubeCon + #CloudNativeCon China – nowhere else can you have these kinds of conversations–this is why I attend!
— Christopher Hanson (@CloudNativChris) November 14, 2018
Also, massive kudos to the translators at #KubeCon + #CloudNativeCon who have been ON POINT. The translator for the Service Mesh panel was capturing and relaying a boatload of technical details in a staggeringly fast amount of time.
— George Miranda (@gmiranda23) November 15, 2018
Tencent originally had something like Borg in 2009, migrated to Docker and k8s in 2013-14, Docker on Yarn in 2015, 2016-now is Tencent k8s engine. #KubeCon
— Justin Warren (@jpwarren) November 15, 2018
— Dan Kohn @ #KubeCon + #CloudNativeCon Shanghai (@dankohn1) November 15, 2018
— Janet Kuo @ KubeCon CloudNativeCon (@janet_kuo) November 14, 2018
— Chris Aniszczyk (@cra) November 14, 2018
— Zach Corleissen (@zachorsarah) November 14, 2018
— Zach Corleissen (@zachorsarah) November 14, 2018
— Vijay Dhama @Kubecon Shanghai (@vjdhama) November 14, 2018
Anyway, thank you so much to everyone who attended and took a chance on our first event in China. I’m exhausted and heading for a long vacation, but truly proud of the CNCF team and community for putting on an amazing event.
November 14, 2018
In preparation for the upcoming first release 0.8.0 of Eclipse Ditto, this milestone is a last checkpoint to ensure that the release will go smoothly. It therefore primarily focuses on stabilization.
Have a look at the Milestone 0.8.0-M3 release notes for what changed in detail.
The main changes and new features are:
- speed up of search index creation
- applying enforcement of messages received via connections (e.g. from Eclipse Hono)
- copying already existing policies when creating things
The Docker images have been pushed to Docker Hub:
The Eclipse Ditto team
November 11, 2018
It has been quite some time since the last update of the Timekeeper for Eclipse plug-in. Having used it myself for a while I soon realized that the tool was promising, but there were some essential bits that had to be improved. So all this time I’ve been
mucking about carefully planning, developing and testing improvements.
After I started using the excellent Eclipse Installer my workflow has changed: For projects that I don’t work on very often, I typically create a new installation whenever I need it and scrap it when I’m done. Since the timekeeper data were stored in the Mylyn Tasks metadata they would be lost once the installation and workspace got wiped. Sometimes I even have the same projects open in different workspaces, so the timekeeping data would be stored in different places.
Some tasks are not easily resolved and I can spend a little time over several days on these, so I also wanted a way to track these activities and make short notes for each.
The last bit I felt was missing was the ability to customize reports and switch between, for example, different plain text or HTML summaries. Looking forward, it may even be possible to export to various other tools.
In addition to fixing a few bugs, I’ve addressed all of the above concerns in the upcoming release. This blog post summarizes the most important changes.
As always with hobby projects, there is an opportunity to learn. So when implementing these new features and improvements, I wanted to make use of some APIs and technologies that I believe I should know more about. As a result, the data is now stored in an H2 SQL database, mapped to POJOs using the Java Persistence API with EclipseLink. Establishing the baseline and migrating to new versions of the database is handled using Flyway, and finally, reports are generated using Apache FreeMarker.
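To give an idea of how the Flyway part of this stack works: Flyway applies versioned SQL migration scripts in order and records each one it runs in a schema history table. A baseline migration for a schema along the lines described here might look roughly like this (the table and column names are my own illustration, not the plugin’s actual schema):

```sql
-- V1__baseline.sql: hypothetical baseline migration for a schema
-- holding tasks and their recorded activities.
CREATE TABLE TASK (
    ID BIGINT AUTO_INCREMENT PRIMARY KEY,
    REPOSITORY_URL VARCHAR(255),
    TASK_ID VARCHAR(64)
);

CREATE TABLE ACTIVITY (
    ID BIGINT AUTO_INCREMENT PRIMARY KEY,
    TASK BIGINT REFERENCES TASK(ID),
    START_TIME TIMESTAMP,
    END_TIME TIMESTAMP,
    SUMMARY VARCHAR(255)
);
```

Upgrading to a new version of the database is then a matter of adding a V2 script; Flyway detects and applies only the migrations it has not yet run.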
The Database configuration page in preferences (Timekeeper > Database) allows you to configure where the database for the current Eclipse instance should be kept. The default is to place it in the shared location, under .timekeeper in your home folder. But you can also use a workspace-relative path, or even an H2 server if you have one running.
Multiple instances of the Timekeeper can share the database, as it utilizes an H2 feature called mixed mode. This automatically starts a server instance on port 9090 when more connections are needed.
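For reference, H2’s mixed mode is switched on through the JDBC URL rather than through code. A connection string along these lines (the property key and file path here are illustrative) opens the database embedded in the first process and lets further processes connect through an automatically started server:

```properties
# AUTO_SERVER=TRUE enables H2 mixed mode; AUTO_SERVER_PORT pins the
# automatically started server to a fixed port.
jdbc.url=jdbc:h2:~/.timekeeper/h2db;AUTO_SERVER=TRUE;AUTO_SERVER_PORT=9090
```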
The Export and Import buttons are used for exactly that. CSV files, one for each table, are created once a destination folder has been selected. Note that when importing, the data is merged with what’s already in the database, so if you at some point want to start with a clean sheet, you will have to delete the database files while no Timekeeper instance is running.
Recording task activity
As you might notice from the screenshot below, there are now activities associated with each task. Each activity has a start time and a stop time that can be manually edited; before, you could only assign a number of hours to each task.
Local task repositories are just that. So each task is assigned a number in sequence. That won’t work by itself when storing local task information in the database, so for each workspace an identifier is created to keep track of activities related to a local task. If you wipe the workspace these are lost, however they still exist in the database so you could retrieve them if you really wanted to.
The report templates already in place are quite simple. There are two HTML versions, one using Font Awesome for some eye candy and the other without. Both basically replicate the “workweek” view.
Configuring Report templates
The report templates have their own configuration page in the preferences (Timekeeper > Report Templates). Here you can add your own templates or modify the existing ones. The source editor has very basic support for FreeMarker syntax highlighting.
Currently there is support for three different content types: HTML, plain text and Rich Text Format.
The default template is the one used when simply pressing the report toolbar button in the workweek view. Activating the pulldown menu will show all templates, allowing you to select the one you want.
There is currently no documentation on how to use the various functions and data structures in the report engine. Please examine the existing templates for now if attempting to do modifications or create your own.
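For a rough idea of what such a template can look like, here is a minimal plain-text FreeMarker sketch. The model variable names (tasks, title, hours, total) are invented for illustration and will not match the report engine’s actual data structures:

```ftl
Workweek summary
<#list tasks as task>
- ${task.title}: ${task.hours} h
</#list>
Total: ${total} h
```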
I think the new features added to the Timekeeper plug-in for Eclipse make it much more usable; now it’s only a matter of ironing out the bugs. It appears fairly stable, but there may still be issues that I have not noticed even after weeks of use. Such a large rewrite is a bit scary. In any case, I’m releasing a beta version now so that those of you interested can give it a spin. You can point your Eclipse plug-in installer UI to https://resheim.net/p2/eclipse-timekeeper_beta/ and take it from there.
If you want more details, the project code is found at https://github.com/turesheim/eclipse-timekeeper.
November 08, 2018
November 07, 2018
The Eclipse Foundation recently made available a new policy to make sure that our projects and hosted services are compliant with the General Data Protection Regulation (GDPR).
The Eclipse Foundation Hosted Services Privacy and Acceptable Usage Policy will provide guidance to folks who operate a virtual server or a website hosted either directly by the Eclipse Foundation or provided via the Eclipse Foundation’s funding in support of an Eclipse Foundation open source project.
We want to ensure that all such services meet the highest standards of privacy and transparency, and to ensure that any collected data is used strictly in support of the activities of its open source projects.
There are two changes that we would like to highlight. First, we have updated our position on Google Analytics (GA). Projects will now be allowed to create their own GA property, provided they agree to the conditions listed in our new policy.
Second, we are better defining the responsibilities of the projects or committers responsible for hosting a service or website with the Eclipse Foundation.
Hosted services and Eclipse Projects can adopt this new policy by creating an issue on Eclipse Bugzilla under Community > Hosted Services Privacy and Acceptable Usage Policy where they acknowledge reading and understanding the policy.
Those who wish to store Personally Identifiable Information (PII) must create and include a Data Protection Impact Assessment (DPIA) document. The DPIA must describe what kinds of PII data will be collected and for what purpose. As your services evolve, the DPIA will have to be updated by uploading a new version to Eclipse Bugzilla.
If a service wishes to retain PII for longer than one year, it must produce a Data Retention Policy (DRP) that indicates how long it plans to keep each piece of PII data and why it needs to be kept for that long.
For the complete list of requirements & conditions, please make sure to read the Eclipse Foundation Hosted Services Privacy and Acceptable Usage Policy.
Please let us know if you have any questions or concerns regarding this new policy by sending your questions to email@example.com.
The Eclipse Foundation Specification Process (EFSP) was authored as an extension to the Eclipse Development Process (EDP). With this in mind, before we can discuss the EFSP, we’ll start with a quick EDP primer.
At a high (and very simplified) level, the EDP looks a little something like this:
All open source projects at the Eclipse Foundation start life as a proposal. A proposal literally proposes the creation of a new open source project: the proposal document suggests a name for the new project, and defines many things, including a description and scope of work. The proposal also serves as the nomination and election of all project committers and project leads.
The proposal is posted for community feedback for a minimum of two weeks; during that time, the Eclipse Foundation staff works behind the scenes to ensure that the project’s name can be claimed as a trademark, a mentor has been identified, the licensing scheme works, and more. The community feedback period ends with a creation review which lasts for a minimum of one week. The creation review is the last opportunity for the community and the members of the Eclipse Foundation to provide feedback and express concerns regarding the project.
After successful completion of the creation review, and once the project resources have been provisioned by the Eclipse Webmaster team, the project team engages in development. Project committers push code into the project’s source code repositories, and produce and disseminate milestone (snapshot) builds to solicit feedback as part of an iterative development process.
When the time comes to deliver a formal release, the project team produces release candidates and engages in a release review. A release review provides an opportunity for the project team to demonstrate to their Project Management Committee (PMC) that their content is ready for release, work with the Eclipse Intellectual Property Team to ensure that all of the required IP due diligence has been completed successfully, and give the community and membership a final opportunity to provide feedback and express concerns. Following a successful release review, the project team will push out their final (GA) build and announce the official release to their community via established channels.
The proposal serves as the first plan for the new open source project. Subsequent releases start with the creation of some sort of plan before reengaging in the development (release) cycle. The level of formality in the planning process varies by project. For many projects, the plan is little more than an acknowledgement that further development is needed. But for some projects, planning is a well-defined open process by which the committers work with their communities to identify themes and issues that will be addressed by the release.
In my next post, I’ll discuss how this process is extended by the EFSP. Then, I’ll start digging into the details.
You can find the community draft of the Eclipse Foundation Specification Process here.
November 05, 2018
Note: to see redline versions of the changes to the documents discussed below, please visit this contribution and committer agreements page.
Over my almost 15 years of sharing updates about what’s going on at Eclipse, some blogs are more important than others. This one is important as it requires action by our members, committers, and contributors! There is a lot of ground to cover explaining what’s going on and why we’re changing things, so please forgive me for a longer than normal post.
tl;dr. The Eclipse Foundation is starting to develop specifications. First for Jakarta EE, but soon for other areas as well. We want to make it clear that contributions to our open source projects may someday be used to create a specification, because we believe in code-first innovation. We also believe that if you’re contributing to open source, you want your contributions to be used for open purposes, including specs.
We are updating our standard contributor and committer agreements, and we will be requiring all our committers and contributors, as well as those members who have member committer agreements, to re-sign their agreement with us.
To make this happen, we will be reaching out to everyone who needs to re-sign. You don’t have to do anything yet – just be aware the change is coming, and please act when we do make contact with you.
First, a bit of background. All contributions and commits made to any Eclipse Foundation project are covered by one of three distinct agreements – the Member Committer Agreement, the Individual Committer Agreement, or the Eclipse Contributor Agreement.
These agreements basically say that if you contribute to an Eclipse project, your contributions are being made under the license of the project. That license is usually the Eclipse Public License, but about 20% of our projects use additional or alternate licenses such as the Apache License, BSD, or MIT. It is important to note that the way things work at the Eclipse Foundation, the Foundation itself does not acquire any rights to the contributions. This is very different from other organizations like the FSF, OpenJDK, or the Apache Software Foundation. Eclipse uses a licensing model sometimes referred to as symmetrical inbound/outbound licensing, where contributors license their code directly to the users (recipients) of their contributions. Our approach requires us to ensure that all of our contribution agreements provide all necessary grants, because we at the EF don’t have any rights to re-license contributions.
As most are aware, Eclipse is now about to start hosting specifications as open source projects. This is very exciting for us, and we think it represents a new opportunity for creating innovative specifications using a vendor neutral process. The first specification projects will be a part of the Jakarta EE initiative, but we expect other specification projects to follow shortly.
Everyone expected to re-sign one of these is encouraged to ensure they understand the details of the agreements and to seek their own legal advice. However, the change we have made is basically to ensure the copyrights in contributions to Eclipse projects may be used in specifications as well. (For the lawyers in the crowd, please note that these additional grants do not include patents.) We certainly expect that our committers and contributors are fine with this concept. In fact, I assume that most folks would have expected that this was already obvious when they contributed to an open source project. To that, all I can say is….ahhhh…the lawyers made us do it.
The new agreements are already posted, so they are in effect immediately for new contributors and committers. Since we need to overhaul our contribution agreements, we are also taking this opportunity to fix a few things. In particular, our committers will know that up until now they’ve been required to be covered by both a committer agreement and the ECA. We’re going to fix that, so if you sign an Individual Committer Agreement, or are covered by your employer’s Member Committer Agreement, you will no longer have to personally sign an ECA. We are also going to implement electronic signatures for ICAs using HelloSign. So going forward there is going to be a little less paper involved in being a committer. Yay!
We’re sensitive that asking our contributors and committers to ‘update their paperwork’, especially if they’re not working on a specification, is – well, a pain in the backside. But we’re hoping everyone will be supportive and understanding, and recognize that we take IP very seriously, and it’s one of the real value propositions of working with Eclipse.
Contributors who have an ECA will see them revoked over the coming months, and will be asked to re-sign the new one. We will be starting first with the contributors to the EE4J projects, since they are the ones who are most likely to have contributions flowing into Jakarta EE specifications.
Executing this change represents a massive effort for our team, as it literally means updating hundreds of committer agreements. Our staff will individually email each person and member company needing to update their agreement with us, but we will spread this over the next few months. So don’t be surprised if you don’t get an email for a while – we will get to everyone as soon as we can.
Stay tuned for emails on this subject that will be sent to our various mailing lists with more details. If you have questions, feel free to reach out to us at firstname.lastname@example.org and we’ll do our best to provide answers.
I thank our entire community in advance for accommodating this significant change. We are excited about the Eclipse Foundation hosting an even more vibrant collection of projects, and believe hosting open source specification projects is a great step forward in our evolution!
For the solutions, I used flatCollect, toBag, topOccurrences, collect, toSet, reduce and intersect.
public void topCandy()
{
    MutableList<Bag<Candy>> bagsOfCandy = this.collectBagsOfCandy();

    // Hint: Flatten the Bags of Candy into a single Bag
    Bag<Candy> bigBagOfCandy =
            bagsOfCandy.flatCollect(bag -> bag).toBag();

    // Hint: Find the top occurrence in the bag and convert that
    // to a set.
    MutableSet<Candy> mostCommon =
            bigBagOfCandy.topOccurrences(1)
                    .collect(ObjectIntPair::getOne)
                    .toSet();

    // Hint: Find the top 10 occurrences of Candy in each of the
    // bags and intersect them.
    MutableSet<Candy> commonInTop10 =
            bagsOfCandy.collect(
                    bag -> bag.topOccurrences(10)
                            .collect(ObjectIntPair::getOne)
                            .toSet())
                    .reduce(MutableSet::intersect)
                    .get();
}
APIs covered in the Kata
- flatCollect — flattens a nested collection of collections based on some attribute specified in a Function.
- toBag — converts a collection to a Bag.
- topOccurrences — finds the top occurrences of items in a Bag based on their counts. The List returned will be larger than the requested count if there are any ties.
- collect — transforms a collection from one type to another using a specified Function.
- toSet — converts a collection to a Set.
- reduce — applies a BinaryOperator to all elements of the collection, in this case a call to intersect two sets.
- intersect — returns the result of intersecting two sets.
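The same pipeline of flatCollect, topOccurrences, collect, toSet, reduce and intersect can be approximated with plain Java Streams. Here is a rough stdlib-only sketch; it uses strings instead of the Candy enum (the candy names are made up for the example) and, unlike topOccurrences, it breaks ties arbitrarily rather than keeping all of them:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class TopCandyStreams
{
    // Flatten bags of candy into one "bag": a map of candy -> count.
    // Mirrors flatCollect(bag -> bag).toBag().
    static Map<String, Long> flatten(List<List<String>> bags)
    {
        return bags.stream()
                .flatMap(List::stream)
                .collect(Collectors.groupingBy(
                        Function.identity(), Collectors.counting()));
    }

    // Top n candies by count. Unlike topOccurrences, ties at the
    // cutoff are broken arbitrarily instead of all being kept.
    static List<String> top(Map<String, Long> counts, int n)
    {
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args)
    {
        List<List<String>> bags = List.of(
                List.of("REESES_PIECES", "CANDY_CORN", "REESES_PIECES"),
                List.of("REESES_PIECES", "TWIZZLERS"));

        // Most common candy across all bags.
        System.out.println(top(flatten(bags), 1));

        // Candies in every bag's top 2: per-bag top lists reduced
        // with set intersection, as reduce(MutableSet::intersect) does.
        Set<String> common = bags.stream()
                .map(bag -> new HashSet<>(top(flatten(List.of(bag)), 2)))
                .reduce((a, b) -> { a.retainAll(b); return a; })
                .orElse(new HashSet<>());
        System.out.println(common);
    }
}
```

The tie-breaking difference matters in practice: with equal counts the stream version keeps whichever entries the sort happened to place first, whereas topOccurrences returns all tied items.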
November 04, 2018
November 02, 2018
We are excited to launch the 2018 edition of our brand survey. The survey will run until December 14, 2018. Please let us know what YOU think of the Eclipse Foundation and share your ideas for making it represent the Eclipse community better.
Go to http://bit.ly/2018EFSurvey to take the survey today!
November 01, 2018
Learn how to use Eclipse Collections APIs in a fun Java code kata.
At Oracle CodeOne last week I co-presented a talk titled “Invest in your Java Katalogue”. I’ve co-presented the talk previously at QCon New York. There is a video of the talk from QCon NY available here. In the talk, I encourage developers to create their own katas to teach themselves new programming skills and then to share those katas with other developers. I often say that we learn best by doing, and the best way to learn is to teach.
So here I am now, practicing what I preach. The rest of this blog will include code for a Halloween Kata using Java 8 with Eclipse Collections. I just developed the kata this evening. There’s perhaps no better way to see how sweet the APIs of Eclipse Collections are than with a cup or Bag of candy. You might also get to see how sweet some of the Java Time APIs are along the way.
You can find Maven coordinates to get the Eclipse Collections binaries here if you want to set up a simple Maven project to work in. To make things even easier, you can also just download the Eclipse Collections Katas from GitHub, import it into your favorite IDE as a Maven project and add a class in the test folder under the Pet Kata. This is exactly what I did. I called my class HalloweenKata. I’ve included the imports I used below to be helpful.
public class HalloweenKata
Trick or Treat
Here’s an enum of Candy. You can include this as an inner class in HalloweenKata. Do you see all of your Halloween favorites? I left my favorite candy bar out — Whatchamacallit? Too tempting.
Time to send the kids out trick or treating
We usually see several rounds of kids come to our house looking for candy on Halloween. I’ve grouped them into three educational groupings used in the United States to keep things simple.
private MutableList<Bag<Candy>> collectBagsOfCandy()
{
    LocalDate halloween =
            LocalDate.of(2018, Month.OCTOBER, 31);
    // Start times for each group (illustrative values)
    LocalTime elementarySchoolStart = LocalTime.of(18, 0);
    LocalTime middleSchoolStart = LocalTime.of(19, 0);
    LocalTime highSchoolStart = LocalTime.of(20, 0);
    long candyCount = 250L;
    Bag<Candy> elementarySchoolBag = this.trickOrTreat(
            halloween.atTime(elementarySchoolStart), candyCount);
    Bag<Candy> middleSchoolBag = this.trickOrTreat(
            halloween.atTime(middleSchoolStart), candyCount);
    Bag<Candy> highSchoolBag = this.trickOrTreat(
            halloween.atTime(highSchoolStart), candyCount);
    return Lists.mutable.with(
            elementarySchoolBag, middleSchoolBag, highSchoolBag);
}
When each group goes trick or treating, they get a random collection of candy in their bags, seeded by their start time.
public Bag<Candy> trickOrTreat(LocalDateTime time, long candyCount)
{
    ZoneId newYork = ZoneId.of("America/New_York");
    IntStream limit = new Random(
            time.atZone(newYork).toInstant().toEpochMilli())
            .ints(0, Candy.values().length - 1)
            .limit(candyCount);
    Bag<Candy> bagOfCandy = limit
            .mapToObj(i -> Candy.values()[i])
            .collect(Collectors2.toBag());
    return bagOfCandy;
}
A “fix the test” style kata
This test is missing some code. Your job is to fill in the missing code with code that will compile and pass the test. This is where you get to try things out and experiment as you look to learn some unfamiliar or even practice familiar APIs in Eclipse Collections.
public void topCandy()
{
    MutableList<Bag<Candy>> bagsOfCandy = this.collectBagsOfCandy();

    // Hint: Flatten the Bags of Candy into a single Bag
    Bag<Candy> bigBagOfCandy = null;

    // Hint: Find the top occurrence in the bag and convert that
    // to a set of Candy.
    MutableSet<Candy> mostCommon = null;

    // Hint: Find the top 10 occurrences of Candy in each of the
    // bags and intersect them to see which are the common ones
    // between all of the bags.
    MutableSet<Candy> commonInTop10 = null;
}
Note: I have tried running these tests on a MacBook Pro and a Windows 10 machine with Java 8. The results are consistent between runs, but I have not verified whether they are consistent on other platforms and Java versions.
Kata to learn, Kata to teach
I put this kata together quickly today to show how you can explore different APIs in a programming language and library by building a kata. You could try this same kata in different languages or with different collections libraries or using Java Streams. It’s really up to you to decide what you want to learn and what you want to teach others. This kata focused on learning and teaching several APIs available on Eclipse Collections types. I built a simple use case to demonstrate these APIs that I thought many developers might find fun and inviting.
I have posted my solutions to the kata.
October 29, 2018
EclipseCon for me is many things. It’s a chance to meet face to face with my fellow CDT contributors. It’s an opportunity to run things by one another that may feel awkward over the mailing list or conference calls. It’s a chance to get a good feel for what’s happening in the rest of the Eclipse IDE and the rest of the Eclipse ecosystem. And it’s a chance to hang out with my brothers and sisters in the community and have a few laughs over a few beers going too late into the night but ready to get to work the next morning. It’s the best.
This year was special for another reason. The Eclipse IDE is changing. The world of IDEs is changing. A new generation is upon us. And, no, it’s not any particular IDE. Nor is it my fictional Eclipse Two IDE :). And believe it or not, it does involve and give a new lease on life to the old workhorse most of us simply call Eclipse. It’s a new architecture for all IDEs and the Eclipse community is taking a leadership role in adopting that architecture. Talks on the topic were everywhere at EclipseCon.
Of course I’m talking about the Language Server Protocol and the Debug Adapter Protocol. They were introduced by Microsoft for Visual Studio Code but are also open for adoption by any IDE. They allow users to choose the front end that gives them the best user experience while still getting the language and debug features they expect from all IDEs. They allow IDE builders to work together on these features, and they allow platform vendors to not only help with that but also give their users and customers choice.
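To make this concrete: LSP is JSON-RPC exchanged over stdio or a socket. An editor front end sends requests like the one below (the file URI and position here are made up), and any conforming language server, clangd or cquery included, replies with the location of the symbol’s definition:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "textDocument/definition",
  "params": {
    "textDocument": { "uri": "file:///home/user/project/main.cpp" },
    "position": { "line": 12, "character": 8 }
  }
}
```

Because the editor only ever sees messages in this shape, the same server can back Visual Studio Code, Eclipse, or any other front end that speaks the protocol.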
For the CDT, we’ve been monitoring the clang/LLVM based language servers closely. clangd has industry momentum but is missing some key features. cquery has a ton of features including extensions to the LSP but has a relatively small community. CDT is working on support for both by leveraging the common Eclipse LSP4E plugins and the Generic editor. We have a long way to go before these services reach parity with the current CDT, but working with this larger community, I’m confident we’ll get there. And it will solve our problem of keeping up with the ever evolving C++ language standard thanks to the great work that goes into clang.
The biggest benefit of this new component architecture is to allow users choice. For CDT, we’re going to turn that on its ear a bit. For us, it’s also about sharing our expertise with other IDEs. Our first step down that road will be to produce a set of Visual Studio Code extensions, starting with our debug adapter, to ensure a seamless experience on par with CDT. Depending on what happens on the language server side, we may also produce one for LSP to help integrate clangd, which may need to be forked to properly handle gcc-based environments or add features the clangd community isn’t interested in.
Our commitment has always been to provide the best open tooling for C/C++ developers. For many, many years, that was Eclipse. This new architecture opens the door for alternatives, and as the C/C++ community spreads their wings into this new world, we, the CDT contributors, will be there for them.
What an M&A surprise in the tech world yesterday with IBM picking up Red Hat, the jokes on Twitter were of course on point:
Nobody got fired for buying Kubernetes
— Alexis Richardson (@monadic) October 28, 2018
To expand on Alexis Richardson’s joke above, the cloud wars are no joke amongst the hyperscale clouds of the world, and the war continues to escalate. Microsoft recently closed its $7.5B acquisition of GitHub, only to have IBM buy Red Hat for $34B. I can’t wait to see what Google, Oracle and the other large cloud providers pick up in the coming months.
I’ve had the privilege of working at both IBM and Red Hat earlier in my career, so I’m familiar with the culture of both companies; it’s going to be interesting to see how the acquisition plays out over time. IBM is a gigantic company known for its bureaucracy that has been around for over 100 years and has successfully reinvented itself multiple times to survive (see the book Who Says Elephants Can’t Dance by former IBM CEO Lou Gerstner for a case study on this). Red Hat is an early open source pioneer with a fantastic and unique engineering culture that supported remote work before it was cool and pioneered the concept of an “open source conflict of interest” clause (which it will be interesting to see if IBM adopts):
“Participation in an open source community project, whether maintained by the Company or by another commercial or non-commercial entity or organization, does not constitute a conflict of interest even where you may make a determination in the interest of the project that is adverse to the Company’s interests”
There has been some FUD going around that IBM doesn’t fund open source or participate much in open source:
This isn't true, though. IBM is a major contributor to Linux, the CNCF, Eclipse and Apache Foundations, Java itself, Docker, and a ton of other things. They helped create Istio and Knative. They were doing this long before MS was. https://t.co/Oddn1pBHLw
— Karl Matthias (@relistan) October 29, 2018
This FUD is absolutely crazy and needs to stop; IBM has arguably done more for open source than any other company to get us to where we are today, with open source prevalent in almost every industry and vertical:
we probably shouldn't forget that IBM has arguably done more than any company on the planet to make open source an enterprise play. it paved the way for all of us.
— Ruthless Netpromoter (@monkchips) October 29, 2018
IBM spent $1B on Linux before open source (and even Linux) was cool. Hell, I spent my early career working on open source at IBM, which had one of the first Open Source Program Offices (OSPOs), hacking full time on Eclipse, another open source project IBM helped start that disrupted the whole commercial tooling industrial complex. You can read more about IBM’s commitment to open source here; I think it provides a great timeline of the various open source projects they were involved in before open source was cool.
Anyways, to my Red Hat colleagues, my advice would be to give this a chance for a while, as IBM has a lot of strengths that Red Hat could take advantage of; it is a truly global company with a solid sales channel embedded all over the world.
To my IBM colleagues: don’t “bluewash” this company; treat this almost as a reverse merger and embrace the culture from Red Hat. You should honestly consider making Jim Whitehurst CEO of IBM and Chris Wright CTO of IBM. As Lou Gerstner said, “culture isn’t just one aspect of the game, it is the game,” and this is one area where Red Hat can greatly help IBM as it navigates towards the cloud.
Here are a couple of other good takes on the acquisition that I enjoyed:
— Tyler Jewell (@TylerJewell) October 28, 2018
IBM's Old Playbook https://t.co/DW3GAD3FTs
IBM has bought Red Hat in an attempt to recreate its success in the 90s; it's not clear, though, that the company or the market is the same.
— Stratechery (@stratechery) October 29, 2018
Finally, I’m really looking forward to seeing what IBM and Red Hat do together; they have both been kindred spirits in making early bets on open source, and I hope they bring that same zeal to the cloud. It at least makes my job running the Cloud Native Computing Foundation (CNCF) more entertaining.
October 24, 2018
One of the most fundamental features of the e(fx)clipse runtime is to integrate JavaFX into the Equinox OSGi-Container and even a running Eclipse IDE.
We currently support the following setups:
- JavaFX 8
- JavaFX 9/10
- JavaFX 11
and the integration for each of those versions is a bit different. I don’t want to go into details, but starting with JavaFX 11 we need to spin up a new Java Module System layer at runtime, because we cannot assume that JavaFX is part of the JRE running your OSGi container (Eclipse IDE).
Since JavaFX 9 we have spun up a dynamic layer to implement the JavaFX-SWT integration, and we adapted that logic for JavaFX 11 to load all of the JavaFX 11 modules.
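A dynamic layer of this kind can be sketched with the plain `ModuleLayer` API. This is not the actual e(fx)clipse code; the library directory and the set of root modules are illustrative assumptions:

```java
import java.lang.module.Configuration;
import java.lang.module.ModuleFinder;
import java.nio.file.Path;
import java.util.Set;

public class JavaFxLayerSketch {

    // Resolve the JavaFX modules found in 'javafxLibDir' against the boot
    // layer and spin up a new module layer containing them at runtime.
    // The directory and root-module names are assumptions for illustration.
    static ModuleLayer createJavaFxLayer(Path javafxLibDir) {
        ModuleFinder finder = ModuleFinder.of(javafxLibDir);
        ModuleLayer boot = ModuleLayer.boot();
        Configuration cf = boot.configuration().resolve(
                finder, ModuleFinder.of(),
                Set.of("javafx.base", "javafx.graphics", "javafx.controls"));
        return boot.defineModulesWithOneLoader(cf, ClassLoader.getSystemClassLoader());
    }

    public static void main(String[] args) {
        // Without the real JavaFX jars on disk we can only exercise the API
        // shape: resolving an empty finder yields an empty child layer.
        Configuration cf = ModuleLayer.boot().configuration()
                .resolve(ModuleFinder.of(), ModuleFinder.of(), Set.of());
        ModuleLayer layer = ModuleLayer.boot()
                .defineModulesWithOneLoader(cf, ClassLoader.getSystemClassLoader());
        System.out.println(layer.modules().size()); // prints 0
    }
}
```

Classes from the new layer can then be loaded through the layer’s class loader, which is how the JavaFX types become visible to code running in the OSGi container.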
The code we have works perfectly fine until something like ControlsFX comes along that does not play by the rules and tries to load classes from unexported packages like com.sun.javafx.runtime.VersionInfo.
The standard answer from ControlsFX to temporarily fix that problem is to force the module system to export the package using --add-exports=javafx.base/com.sun.javafx.runtime=ALL-UNNAMED.
Unfortunately this workaround does not work in our case, because the command-line flag can only modify modules of the boot layer, not those created in dynamic layers like the ones we construct inside our JavaFX-OSGi integration.
I was investigating yesterday how one could fix this problem, but could not come up with a good solution (one that does not call into internals of the module system) until I tweeted about it and Tom Watson (one of the maintainers of Equinox) pointed me in the right direction.
— Tom Schindl (@tomsontom) October 23, 2018
So the solution is
and now I have to think about how we expose that in our OSGi integration.
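For context, a sketch of what such a fix can look like with the standard API: the static `ModuleLayer.defineModulesWithOneLoader` variant hands back a `ModuleLayer.Controller`, which can add exports to modules in the freshly created layer at runtime, without any command-line flags. The JavaFX source module and ControlsFX target named in the comment are hypothetical, not the actual e(fx)clipse code:

```java
import java.lang.module.Configuration;
import java.lang.module.ModuleFinder;
import java.util.List;
import java.util.Set;

public class ControllerExportSketch {

    public static void main(String[] args) {
        // The static defineModulesWithOneLoader variant returns a Controller
        // for the new layer instead of just the layer itself.
        Configuration cf = ModuleLayer.boot().configuration()
                .resolve(ModuleFinder.of(), ModuleFinder.of(), Set.of());
        ModuleLayer.Controller controller = ModuleLayer.defineModulesWithOneLoader(
                cf, List.of(ModuleLayer.boot()), ClassLoader.getSystemClassLoader());

        // With a layer that actually contains javafx.base, the unexported
        // package could then be opened up to ControlsFX (hypothetical names):
        // controller.addExports(javafxBase, "com.sun.javafx.runtime", controlsFx);

        System.out.println(controller.layer().modules().isEmpty()); // prints true
    }
}
```

The restriction is that `Controller.addExports` only works on modules inside the layer that the controller governs, which is exactly the dynamic-layer case described above.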
I am very excited to say that Eclipse GlassFish 5.1-RC1 is now released! The version we are working on, Eclipse GlassFish 5.1.0, will be Java EE 8 certified once it is fully released. However, RC1 gives the community an opportunity to test the code and provide feedback. We are making nightly builds available as well.
Huge progress has been made in the Jakarta EE world over the last couple of months. A big thank you to everyone involved!
Let’s recap all the successes and take a moment to celebrate our little victories!
- GlassFish and Oracle Java EE API contributions to Jakarta EE are now complete.
- Java EE TCKs are open sourced and hosted at the Eclipse Foundation.
- Nightly builds for Eclipse GlassFish are available on the Foundation’s Jenkins-based Common Build Infrastructure here.
- The work on ensuring that Eclipse GlassFish is Java EE 8 compatible and can be branded as Java EE 8 compatible is well on its way.
- A test infrastructure at the Eclipse Foundation is now ready for testing Eclipse GlassFish against the Java EE 8 TCKs.
- The Eclipse Foundation has signed the Oracle Java EE TCK agreement, which will allow us to proceed with the testing.
- We expect that Eclipse GlassFish sources will become the basis for an implementation of the Jakarta EE specifications.
To find out more about the Eclipse GlassFish 5.1-RC1 please refer to this great blog from Dmitry Kornilov. Once again a big shout out to all the project teams working so hard to meet the target milestones. Please join me in celebrating another Jakarta EE milestone and spread the good news!
October 23, 2018
The primary role of the Eclipse IP Team is to reduce the risks associated with adopting open source software. In broad terms, they ensure that the licenses on content are compatible, that provenance is clear, and that content is otherwise unencumbered from a legal point of view (strictly speaking, the team does all of this only for Type B requests). In other words, they do the sorts of things that every software project really needs to do (especially projects that care about wide-scale adoption), but that software developers hate doing.
It’s impossible to remove all risk. The IP Due Diligence process is all about risk mitigation.
Project committers do play an important role in this work. The Eclipse IP Team does the heavy investigative work, but it is the committers who must bring intellectual property matters to the IP Team for review. This takes the form of creating a contribution questionnaire (CQ) and then, where necessary, assisting our analyst in investigating, identifying, and resolving issues.
Experience has demonstrated that service releases of third party content are very low risk. By their nature, service releases include bug fixes only, and so don’t tend to include a lot of new intellectual property. Our experience is that bug fix releases generally change or add a few lines of code here and there.
Based on this experience, the Eclipse IP Due Diligence Process gives service releases of third party content a pass: project committers do not need to create a CQ or otherwise engage with the Eclipse IP Due Diligence Process for any service release of third party content that has already been approved.
That is, if a version of some bit of third party content has been approved by the IP Team, then service releases based on that approved version do not require any review. Just drop ’em into your build and have at it (e.g. if version 3.2 has been approved for use, a project can just use version 3.2.n without formal review).
Of course, if you suspect shenanigans or otherwise lack confidence in the status of the content, you can bring the service release to the IP Team in the usual manner. In fact, if you suspect that something labeled as a service release isn’t actually a service release, please do engage the IP Team.
This and many other topics are covered by the Eclipse Project Handbook.
I’m at EclipseCon Europe. If I’m not in a session, I’ll be in the registration area. Ask me questions!