Busy times, but let’s talk Cloud Foundry!

Users of the existing beta Cloud Foundry hosted service cloudfoundry.com were sent emails this week explaining that we are almost ready to launch version 2 of the service. If you’re a current user, or if you have signed up in the past, dig through your inbox filters for the email (mine ended up under the “Promotions” label thanks to Gmail’s auto-filing magic).

Cloud Foundry v2, sometimes known as “next-gen” or ng, is a big set of updates. I wrote about some of them in my last blog post, and also noted there that we are going to run the new version on AWS.

Some of the things worth getting excited about are:

  • custom domains (the number one thing I’ve been asked for after every talk!)
  • buildpacks – the ability to use “any” language, framework or runtime that has a buildpack, not just Java, Ruby, or Node.js (Matt and Brian seem to be competing to find interesting ones!). By the way, you should totally be trying your Spring and Groovy apps on v2! 🙂
  • organisations and spaces – the ability to share apps with a team and collaborate
  • a web management console for your apps
  • a Marketplace, which we will be expanding over time, allowing you to bind third-party services into your applications (see the sketch below).
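
To make those last two concrete, here’s the kind of session the new platform enables. This is a sketch only – the command names follow the later cf CLI rather than the initial v2 tooling, and the service, plan, org, and domain names are invented for illustration:

    # Browse the Marketplace and bind a service to an app
    cf marketplace
    cf create-service redis small my-redis   # hypothetical service and plan names
    cf bind-service my-app my-redis

    # Map a custom domain to the same app
    cf create-domain my-org example.com      # hypothetical org and domain
    cf map-route my-app example.com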

These are all big changes, and there are many more under the hood (Warden, a new staging process, a new router… it’s a very long list).

My colleague Nima posted a nice slide deck giving a more technical overview of some of the internal changes.

In addition, our demo ninja Dekel has shared a great video of some of the things you can expect from version 2.

Over the past 24 hours or so I’ve been doing my best to respond to questions on Twitter and elsewhere. The existing v1 service will go away on June 30th, so if you are using it now you’ll want to look at migrating your apps in the next couple of weeks – we’ll share more on that soon. The new version will have pricing attached, with a free trial period too. Of course, the code is always available on GitHub and you’re free to spin up your own CF instance running on AWS, OpenStack or vSphere.

I know folks will have many more questions about the specifics over the next week or so, and we will be looking out for them.

Supercharging the community

As we have grown closer to the v2 release, there has been ever-increasing activity on the vcap-dev mailing list and around the community. We’ve had more and more code contributions (so much so that I recently wrote a blog post about how you can contribute to the CF core projects). Projects like the cf-vagrant-installer and cf-nise-installer are helping people get local environments running very quickly. Our friends from PistonCloud released their turtles project. Best of all, the super Dr Nic Williams recently set up a cloudfoundry-community organisation on GitHub to act as an umbrella for many of these community contributions (info on how to join is here).

Let’s talk! (in London)

Over the past year or so I’ve spent a lot of time out in the developer community in London, and it has become apparent that a lot of folks are interested, already contributing to the community, or in some cases, already running their own CF instances in production 🙂

So, I thought it would be a good idea to do some bridge building and bring folks together to get to know one another. A brief, unscientific Twitter poll suggested that other people liked the idea, so we’ve stuck a stake in the ground (the evening of July 3rd) and I’ve set up an Eventbrite page for a meet up. If you’d like to chat with people about Cloud Foundry over a drink, do sign up and come along. I’ll sort out a venue in the next couple of weeks, but I imagine it will be “around Shoreditch” or possibly over towards Waterloo, for purely selfish reasons! Totally informal – this is just a community meet up, so I’m not planning to do slides and talks and stuff. Just come and share ideas or ask questions!

 

Cloud Foundry has gone Pivotal – so what’s new?

A few weeks ago I was privileged to be at the launch of Pivotal – a new organisation formed by VMware and EMC, with investment from GE. You can read all about our new company at GoPivotal.com.

I am Pivotal

What does that mean for me, and for my role on the Cloud Foundry team? What is happening with Cloud Foundry right now? What about the Cloud Foundry community?

Well, as my über-boss James Watters recently wrote – we are a central part of the Pivotal business.

Our mission is to become the most popular platform for new applications and the services used to build them across public and private clouds.

That’s a pretty compelling mission statement, and I’d personally even add that we want to be the “best” platform, as much as “most popular”. One of the main reasons I wanted to spend a couple of weeks at the Pivotal office in San Francisco was really to immerse myself in the team and in the culture of Pivotal Labs, as well as to be at the launch event, and to get a strong handle on what is happening with Cloud Foundry, version 2…

Wait, what? Version 2?

In the middle of last year, the Cloud Foundry team started some major work to improve many of the features offered by the platform. Back then, it was written about on the Cloud Foundry blog. We initially started to refer to “ng” components like the Cloud Controller (“cc-ng”), and that’s what we now mean when we refer to “v2”. At the start of the year we published a roadmap which laid out in much more detail what is coming. There’s some really great stuff in there – many bugs squashed; a new, high-performance router; support for developers to collaborate on apps, via the concepts of organisations and spaces; new containerisation via Warden; custom domains (yes, finally!); and most importantly, support for buildpacks. Buildpacks will bring a major change to our platform, replacing the former concepts of runtimes and frameworks (say, Java with Spring) with the ability to drop in whatever runtime or container you may choose, instantly making the platform more customisable. We’re pleased that the folks over at Heroku have allowed us to inherit the buildpack concept, and having played with the new platform, I believe this gives us a really cool and solid way to support apps.
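
To give a flavour of what buildpack support means in practice, deploying a Python app (a runtime the old platform never offered) could be as simple as pointing a push at a buildpack. A sketch only – the flag syntax here follows the later cf CLI, and the URL is illustrative:

    # Push an app, telling the platform which buildpack to stage it with
    cf push my-python-app -b https://github.com/heroku/heroku-buildpack-python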

Deploying Cloud Foundry v2 on Amazon

While I was in San Francisco, I used BOSH to deploy my own new Cloud Foundry v2 instance to Amazon EC2 (and also attended the AWS Summit, which was a bonus!). Right now the team is working on migrating our hosted cloudfoundry.com platform to EC2, and when we officially boot up v2 for the public, it will be running right there. This is not new news – both James and our CEO Paul Maritz have repeatedly spoken about AWS. The point of Cloud Foundry has always been that it is a platform that is Infrastructure-as-a-Service agnostic, even when it was started by VMware, and I’m seeing increasing interest from folks who want to run it on OpenStack, AWS, and other infrastructures as well as vSphere (by the way, did you read about how Comic Relief 2013 ran on Cloud Foundry on vSphere and AWS? So cool!). There is no lock-in here – write once, then deploy to cloudfoundry.com, to a partner running a compatible Cloud Foundry-based instance, or to your own private cloud on your own infrastructure, as you wish. The Open Source nature of the project is exactly why I jumped on board with the team a little over a year ago.
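
If you fancy trying the same thing, the overall shape of a BOSH deployment looks something like this (a sketch using the BOSH CLI of the time; the director address, stemcell, release, and manifest names are all placeholders):

    bosh target https://my-director.example.com:25555   # point at your BOSH director
    bosh upload stemcell bosh-stemcell-aws.tgz          # base VM image for the IaaS
    bosh upload release cf-release.tgz                  # the Cloud Foundry release
    bosh deployment cf-aws.yml                          # manifest describing the deployment
    bosh deploy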

Talking of the update to cloudfoundry.com: it is also worth mentioning that when the beta period comes to a close we will have pricing plans, a nice web console for user, organisation and application management, and the start of a marketplace for partners to plug in their own services for developers. I can’t give more details in this post – watch the official channels for news!

I felt very strongly that I wanted to write about version 2. It is a very big step in evolving the Cloud Foundry architecture, and I believe it is important for the broader community to understand that it is a significant change. If you are running an app on cloudfoundry.com today, we’ll shortly contact you with information about migrating to the new platform: some changes will be needed, as runtimes and frameworks are now buildpacks, the way services work is changing, and you will need our new ‘cf’ gem to deploy to the new service. We have already “paused” new signups on the current platform. If you look at the new documentation, you will find that it now focuses on version 2 – we apologise for any confusion during the transition process.
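
Tooling-wise, the switch itself is pleasantly small – something like the following, assuming a working Ruby environment (a sketch; do check the official migration notes when they arrive):

    gem install cf    # replaces the v1-era 'vmc' client
    cf push           # deploy from your application directory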

We’ve been talking with ecosystem partners about version 2 as well. For instance, our friends at Tier 3 recently blogged about their Iron Foundry plans, and I had the pleasure of meeting with the Stackato folks in person in San Francisco recently. If you are working with your own Cloud Foundry instance privately (we know that many organisations are!), I strongly urge you to talk to us via the vcap-dev mailing list to learn how you can start to take advantage of what the new platform brings.

What else does Pivotal mean for Cloud Foundry? Well – we are more open than ever, and keen to work with the community on pull requests to add features via GitHub. I’ve just written a post for the Cloud Foundry blog about how to participate in the Open Source project. In fact, I’ll be talking more about this at the Cloud East conference in Cambridge next Friday, May 24. We’re always happy to talk more about how to collaborate.

These are exciting times!

 

M2M, IoT and MQTT at EclipseCon 2013

EclipseCon 2013 is here, and I’m in Boston with the great folks from around the community this week.

Koneki, Paho, Mihini

There’s a LOT of content around the machine-to-machine space this year, and growing interest in how to use instrumented devices that pair an embedded runtime with lightweight messaging. If you’ve not been following the progress of the M2M community at Eclipse, we now have an M2M portal, along with nice pages for each of the three associated projects: Koneki, Mihini, and Paho.

M2M hardware kits

Almost the first thing I saw when I walked in yesterday was my buddy Benjamin Cabé assembling a bunch of electronics kits (Raspberry Pis and Arduino Unos) for today’s M2M tutorial, which will use Eclipse Koneki and Mihini. This will be the first opportunity for many folks to play with the new Mihini runtime. Later this evening, we’ll have the chance to run a hackathon with things like Raspberry Pi, Orion and other parts as an extended Birds of a Feather.

What are some of the other M2M sessions to look out for?

There’s also the first meeting of the OASIS TC for MQTT due this week, and a meeting of the Eclipse M2M Industry Working Group scheduled as well. Exciting times!

The corridor conversations and late-night beer sessions are, as always, invaluable, and many of the other project folks and I will be around – I’m always happy to talk about Paho in particular. At Paho we now have updated Java and C MQTT clients in Git (NB check the ‘develop’ branch for the latest Java updates), along with the Lua client, and proposed contributions of Objective-C, JavaScript and Python clients are at various stages of review, looking to join the project.
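
If you want a feel for the Java client, publishing a message takes only a few lines. Here’s a minimal sketch against the Paho mqttv3 API – the broker URL, client ID, and topic are placeholders:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class PahoPublishExample {
        public static void main(String[] args) throws MqttException {
            // Connect to a local broker with a unique client identifier
            MqttClient client = new MqttClient("tcp://localhost:1883", "demo-publisher");
            client.connect();

            // QoS 1 means the message is delivered at least once
            MqttMessage message = new MqttMessage("21.5".getBytes());
            message.setQos(1);
            client.publish("sensors/temperature", message);

            client.disconnect();
        }
    }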

Oh, and if you are interested in MQTT, come and find me for some MQTT Inside stickers that you can use with your own hardware projects 🙂

MQTT goes free – a personal Q&A

There has been a lot of coverage over the past couple of days of some exciting announcements that I’ve been involved with at work. I’ve spent the past three days at EclipseCon Europe 2011, which doubled as the 10th birthday celebration for the Eclipse initiative. It was a funny feeling, because Eclipse started just a few weeks after I first joined IBM, and although I’ve used it and watched it “grow up”, I’ve never done EclipseCon before. The reason I’ve been out there for three days this time (as a WebSphere Messaging guy, rather than a Rational tooling or build person, for example) was to get involved with activities around these announcements.

It’s all about machine-to-machine (or M2M) communications, Smarter Planet, and the Internet of Things.

Before I dive in to this, a few clarifications. First, I’m being described in a couple of news stories as “an IBM distinguished engineer”, and whilst I wish that was true, I’ve yet to ascend to those heights! Also, there are various numbers being quoted – note that the figures in the press release were not invented by IBM, the headline number of an expected 50 billion connected devices by 2020 comes from a recent study conducted by Ericsson AB. Oh, and this isn’t about a “new” protocol – MQTT has been in use since 1999.

The other clarification is that some articles seem to suggest that IBM is out to create some kind of new, alternative, Web – that’s not what has been announced, and I’m certainly not aware of any such plan! It’s about connecting “things” – sensors, mobile devices, embedded systems, even small appliances or medical devices for example – to the Web and the associated platform and ecosystem of technologies, not about reinventing or recreating them. I’m personally a huge fan of the Web as a platform 🙂

Oh, and of course, the obligatory “all opinions expressed are my own” – this is my understanding of where things are going, although of course I’m talking about events I’m directly involved in!

So what is this all about?

Two things.

1. On Nov 2, IBM, Eurotech, Sierra Wireless and Eclipse formed a new M2M Industry Working Group at Eclipse. Sierra had already started the “Koneki” project at Eclipse to work on M2M tools, and the Working Group will look at a range of topics together, such as M2M tooling, software components, open communication and messaging protocols, data formats, and APIs.

2. On Nov 3, IBM and Eurotech announced the donation of their C and Java clients for MQTT to a new Eclipse project called “Paho” which is under proposal in the incubator – with code expected to hit the repository within the next couple of months. MQTT is being given to Eclipse to live within the M2M ecosystem that is emerging there, and to provide an avenue for adoption of the protocol as a more pervasive standard for connected devices.

How is that news? Isn’t MQTT already open / free?

Technically… kinda, sorta 🙂

The MQTT specification has been published under a royalty-free license for some time, and that has led to a fantastic community contributing a range of different projects. IBM and Eurotech took this approach from early on, because it wouldn’t have been possible to compile and support code on every embedded platform that might come along – far simpler to set the protocol free.

Initially the specification was hidden away in the WebSphere Message Broker documentation, but last year it was republished, moved to a new home on developerWorks, and the license was clarified.

In August, IBM and Eurotech announced their intention to take MQTT to a standards organisation. The specific organisation has not yet been finalised, but this is also an important step in ensuring that MQTT is not “just” an IBM protocol, but something of general use which the community can feel comfortable with. If you’d like to join that discussion then there’s a Get Involved page on the mqtt.org community site.

The missing piece was code – a reference implementation, if you like. That’s one reason why the Eclipse Paho announcement is significant.

Why else is this significant?

Well, here are some of my musings on that one:

  • it shows IBM is serious, by committing code and open sourcing it (as with the original Eclipse donation in 2001);
  • the M2M Industry Working Group exists to foster the discussion in this space;
  • it makes high-quality reference Java and C client implementations freely available in source form – a good Java implementation is something that has been particularly lacking;
  • it creates an opportunity for Eclipse projects to use MQTT, and to develop tools on top of it.

The press release and Paho project proposals aren’t clear (to me) – what exactly is being donated?

IBM is seeding Eclipse Paho with C and Java client implementations of MQTT. Eurotech is donating a framework and sample applications which device and client developers can use when integrating and testing messaging components.

Why C and Java clients (aren’t they “dying” languages?) Where’s my Perl and Ruby code?!

IBM had previously made some C and Java code available in some SupportPacs, but those are outdated and the license for reuse was never clear.

It’s important to realise that this stuff came from the embedded world of 10 (and more) years ago, and continues to be applied in that industrial space. That category of device typically runs some kind of realtime Java-based OS, or a Linux-based or other runtime with a GCC toolchain for the CPU in question. C and Java are genuinely the most useful implementations to get out there. Oh, and on that “those old languages” thing – I think you’ll find they are very widely used (Android and iOS run variants of sorts, and most non-web app development is likely to be in one or the other).

We’re very fortunate that client libraries for a wide range of languages already exist thanks to the MQTT community – see the list at mqtt.org!

Hold on… don’t we need a broker / server / gateway?

Yes. But, one step at a time! 🙂

There are brokers available for free today, either as precompiled binaries or as full Open Source implementations, so this is not a dead end from day one.
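
For instance, you can experiment today with the community’s mosquitto broker and its command-line tools – assuming mosquitto is installed and running on your machine, something like:

    mosquitto_sub -h localhost -t "demo/#" -v &      # watch everything under demo/
    mosquitto_pub -h localhost -t "demo/greeting" -m "hello MQTT"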

The Paho project scope outlines the intention to add a broker to the project in the future, and to host an M2M sandbox for developers as well. That is where we are today, and this position will evolve over time.

Why Eclipse?

10 years of Eclipse

The Eclipse Foundation has been a fantastic success story (oh, and Happy 10th Birthday, Eclipse!). As the scope of its mission has broadened beyond an IDE to the web, build environments, and all kinds of other tools, it was a good place for Sierra Wireless to kick off the Eclipse Koneki M2M tools project, and is now a natural place for this primarily-M2M protocol to be hosted, under Paho. As James Governor notes in his write-up of the news:

… the Eclipse Public License is designed to support derivative works and embedding, while the Eclipse Foundation can provide the stewardship of same. One of the main reasons Eclipse has been so successful is that rather than separate software from specification it brings them together – in freely available open source code – while still allowing for proprietary extensions which vendors can sell.

How quickly will the code donation happen?

The Paho proposal tentatively includes dates in November and December 2011 – there will need to be various approvals as code is accepted into Eclipse, so that may “flex” a little, but it is all in the pipeline.

OK… Why MQTT? Why not HTTP/XMPP/AMQP/PubSubHubbub/WebSockets/etcetcetc?

To answer this one adequately I’d probably end up addressing each individual difference between protocols in turn, and if you’ve heard me speak about MQTT I’ve covered some of this before – so I’ll keep this answer relatively brief. I will admit that I’ve been asked about all of these by journalists in the past couple of days.

There is space for a range of protocols to coexist, because they address different areas. In the messaging space, we’ve found over time that whilst efforts to create a single protocol have been made, the result has often ended up focused on a particular set of qualities of service, and not optimised to cover the whole range of them.

For example, if we look at IBM’s own messaging protocols – there are several. There’s WebSphere MQ, which is all about being reliable, transactional, solid, clusterable and enterprise-grade, with JMS and other APIs, and so on. WMQ itself isn’t ideal for very high-speed in-memory or multicast scenarios, so there is also WMQ Low Latency (interoperable with the new multicast feature in WMQ 7.1, but a separate protocol). Neither WMQ LLM nor WMQ scales down to unreliable device networks and embedded systems, so there is WMQ Telemetry (aka MQTT), which was specifically designed for constrained devices and networks, and which can interoperate with the main queue manager, too. Oh, and sometimes you want to deal with files (WMQ File Transfer Edition), or access message data via HTTP (WMQ HTTP Bridge). You need to address a range of requirements in a messaging story.

So why not those others? In this case, IBM believes that MQTT is ideally suited to the Smarter Planet Instrumented->Interconnected layer – it’s tiny; it isn’t synchronous and brittle; and it isn’t specific to the web, since it is all about data rather than documents, XML and the like. In these scenarios, REST principles may add an overhead. Oh, and it has been around for over 10 years, and has been proven across a range of industries and in a range of extreme conditions. IBM’s commercial implementation is known to scale to hundreds of thousands of connected devices, and we know that is the direction in which this space is heading.
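
“Tiny” is not an exaggeration, by the way. A complete MQTT 3.1 PUBLISH can fit in a handful of bytes – here is a worked example laid out as a Java byte array (QoS 0, with a one-character topic and payload chosen purely for illustration):

    // A complete 6-byte MQTT 3.1 PUBLISH packet: topic "t", payload "1"
    byte[] publishPacket = {
        0x30,       // fixed header: message type 3 (PUBLISH), QoS 0, no flags
        0x04,       // remaining length: 4 bytes follow
        0x00, 0x01, // topic name length (2-byte big-endian): 1
        't',        // topic name
        '1'         // payload: everything after the topic is application data
    };

Compare that with the headers on even the smallest HTTP request, and the appeal on constrained networks is obvious.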

Congratulations! / Thank you!

Thanks, but don’t congratulate or thank me! I’m familiar with this stuff, I’ve coded with this stuff, but I didn’t invent it and I didn’t write it. There are some amazing folks at both IBM and Eurotech (and some who have moved on) who started this all off in 1999, and who have helped to implement solutions using this protocol since then, and who have of course developed it. Several of them are on Twitter if you want to say hi! And huge thanks again to the community of folks that formed around mqtt.org and contributed client and server implementations – that absolutely helped to move things forward to this point.

HERE ENDS TODAY’S Q&A!

That hopefully helps to clarify a few things, and answers some of the questions I’ve seen via Twitter, forums, and mailing lists over the past few days. It has been something of a blur, to be honest, but a lot of fun. I’m looking forward to the next stage – working with the community more, working with our friends at Eurotech, Sierra Wireless and elsewhere, and making the M2M space much more real.

For more, here are a bunch of stories I’ve seen in the past couple of days… no particular order, just my cut-and-paste list!

TransferSummit 2011

One of the benefits of having attended OggCamp a few weeks ago was that I became aware of another event. Steve Lee, one of the speakers at OggCamp, is also involved with TransferSummit, and he was good enough to point it out to me. I’m grateful that he did.

TransferSummit bills itself as

… a forum for business executives and members of the academic and research community to discuss requirements, challenges, and opportunities in the use, development, licensing, and future innovation in Open Source technology.

Unlike a *Camp event, this wasn’t a self-organising unconference, and was much more business-focused. The sense I had was that it was far more about “getting down to work” than the more fun Open Source-oriented events that I otherwise attend. There were a range of fantastic speakers, and with my good friend James Governor giving the opening keynote it really didn’t take me long to decide that it was something that I should get to.

Not only that, but the event was held at my alma mater, Oxford, in the rather lovely surroundings of Keble College – which I don’t remember ever having visited whilst I was at university. It was red brick, comparatively far up Parks Road away from my college in the “science area”, and as a History student I simply never had much occasion to go up there! I have to say that I was very impressed by the college, the accommodation, and the service from students and staff. Fantastic.

You can explore more of my TransferSummit 2011 photos on Picasa.

I really enjoyed a number of elements of TransferSummit. Firstly, whilst there were a few folks I knew from my other networks, it was largely a group of people I’d not come across before directly, so it was a great opportunity to meet some new people in this space. It wasn’t too much of an echo chamber, and as Ross Gardler said during his introduction, it wasn’t a crowd of folks who already “get it” in terms of Open Source usage and adoption – there were a fair few organisations on the edge of making choices, and I felt that the talks were more about how to go about making sensible ones, putting the right governance practices in place, and learning from the successes and mistakes of others.

I couldn’t cover all three tracks of the agenda in detail, but a few of the sessions I did listen to were particularly interesting (again, there’s more complete coverage on Lanyrd).

Another nice element of the event was the “gadget playtime” Open Source (and not-so-open) Hardware area, where I spent a lot of time talking to the folks from OSHUG and other projects.

One of the things that was negatively commented on via Twitter and other discussions was that Microsoft was the Platinum sponsor of the event. I found that very interesting, particularly where the commenters weren’t present at TransferSummit itself. To reassure those who may have stayed away or otherwise expressed concerns, I’ll just say that there was very definitely no “Microsoft agenda” being pushed, and that my friend Steve Lamb was there very much in “listen, learn and interact” mode – although others who attended, and who I greatly respect, did express other views about some elements of Microsoft’s participation (and I imagine it’s not hard to track those opinions down via hashtags etc.). Either way, having been involved with various conferences now, I fully support the idea that a wide range of sponsors willing to help fund a professional conference and make it successful is important, so I thank Microsoft, HP and all of the sponsors (and in particular the folks from OpenDirective) for enabling it to happen.

Definitely a worthwhile way to spend a couple of days of time – a well-run, informative event with great experiences shared, and some good contacts that I look forward to maintaining. My tip: look out for similar events and make an effort to mingle with the business, academic and government communities on Open Source. You might just learn something.

Disclosure: I was (unexpectedly) generously comp’d a ticket enabling me to attend, thanks to the organisers. My employer had no involvement and I attended on my own time.