Latest interviews, and upcoming speaking schedule

Just a quick note to say that I’ve added links to two recent interviews to the media section of my Speaking and Media page:

I’ve been at QCon London this week and did a couple more interviews, again one on each of these topics – those may take slightly longer to emerge, though. Let me know if you have any feedback on these – it’s always interesting to hear!

It may be hard to believe – and looking back at my schedule it hardly seems it – but I’ve been avoiding taking on too many speaking gigs since an annoying incident back in December and a much-needed break! Coming up in the next few months, apart from the job change (!), I’m back on the road and I’ll be speaking at the following events:

#newjob

In late 2011, I was contacted by a very charming, smart and persuasive French gentleman who spoke of clouds, platform-as-a-service, and polyglot programming. It took him and his team a couple of months to get me thinking seriously about a career change, after 10 great years at IBM. I’d spent that period with “Big Blue” coding in Java and C, and primarily focused on enterprise application servers, message queueing, and integration – and yet the lure of how easy vmc push[1] made it for me to deploy and scale an app was astounding! Should I make the transition to a crazy new world? Over Christmas that year, I decided it would be a good thing to get in on this hot new technology and join VMware as Developer Advocate on the Cloud Foundry team. I joined the team early in 2012.

The Cloud Foundry adventure has been amazing. The day after I joined the team, the project celebrated its first anniversary, and we announced the BOSH continuous deployment tool; I spent much of that first year with the team on a whirlwind of events and speaking engagements, growing the community. The Developer Relations team that Patrick Chanezon and Adam Fitzgerald put together was super talented, and it was brilliant to be part of that group. Peter, Chris, Josh, Monica, Raja, Rajdeep, Alvaro, Eric, Frank, Tamao, Danny, Chloe, D, Giorgio, friends in that extended team… it was an honour.

A year after I joined, VMware spun out Cloud Foundry, SpringSource and other technologies into a new company, Pivotal – headed up by Paul Maritz. I’ve been privileged to work under him, Rob Mee at Pivotal Labs, and most closely, my good friend James Watters on the Cloud Foundry team. I’ve seen the opening of our new London offices on Old Street, welcomed our partners and customers into that unique collaborative and pairing environment, and observed an explosion of activity and innovation in this space. We launched an amazing product. James Bayer heads up a remarkable group of technologists working full-time on Cloud Foundry, and it has been a pleasure to get to know him and his team. Most recently, I’ve loved every minute working with Cornelia, Ferdy, Matt, Sabha and Scott (aka the Platform Engineering team), another talented group of individuals from whom I’ve learned much.

Over the course of the last two years I’ve seen the Platform-as-a-Service space grow, establish itself, and develop – culminating most recently in my talk at BCS Oxfordshire:

Last week, we announced the forthcoming Cloud Foundry Foundation – and one could argue that, as a community and Open Source kinda guy, this is the direction in which I’ve helped to move things over the past two years, although I can claim no credit at all for the Foundation announcement itself. I’ve certainly enjoyed hosting occasional London Cloud Foundry Community meetups and drinks events (note: the next London PaaS User Group event has 2 CF talks!), and I’ve made some great friends locally and internationally through the ongoing growth of the project. I’m proud of the Platform event we put on last year, I think the upcoming Cloud Foundry Summit will be just as exciting, and I’m happy to have been a part of establishing and growing the CF community here in Europe.

Cloud Foundry is THE de facto Open Source PaaS standard, the ecosystem is strong and innovative, and that has been achieved in a transparent, collaborative way – respectful to the community, and good-natured in the face of competition. Rest assured that I’ll continue to watch the project and use PaaSes which implement it (I upgraded to a paid Pivotal Web Services account just this past week, I’ve tried BlueMix, and I’m an ongoing fan of the Anynines team).

There are many missing shout-outs here… you folks know who you are, and should also know that I’ve deeply enjoyed learning from you and working with you. Thank you, Pivotal team! I do not intend to be a stranger to the Bay Area! In my opinion, Pivotal is positioned brilliantly in offering an end-to-end mobile, agile development, cloud platform and big data story for the enterprise. I look forward to continuing the conversations around that in the next couple of weeks.

[…]

What happens after “the next couple of weeks”? Well, this is as good a time as any (!) to close that chapter, difficult though it is to leave behind a team I’ve loved working with, on a product and project that is undoubtedly going to continue to be fantastically successful this year and beyond. So, it is time to announce my next steps, which may or may not be clear from the title of this post… 🙂

Joining Twitter!

I joined Twitter as a user on Feb 21 2007. On the same day, seven years later, I accepted a job offer to go and work with the Twitter team as a Developer Advocate, based in London.

If you’ve been a long-term follower of mine, either here on this blog, or on Twitter, or elsewhere, you’ll know that Twitter is one of my favourite tools online. It has been transformational in my life and career, and it has changed many of my interactions. True story: between leaving IBM and joining VMware I presented at Digital Bristol about social technologies, and was asked which one I would miss the most if it went away tomorrow; the answer was simple: Twitter. As an Open Source guy, too, I’ve always been impressed with Twitter’s contributions to the broader community.

I couldn’t be more #excited to get started with the Twitter Developer Relations team in April!

Follow me on Twitter – @andypiper – to learn more about my next adventure…

[1] vmc is dead, long live cf!

Getting inside Cloud Foundry for debug (and profit?)

I’ve recently started to play with some more of the internals of Cloud Foundry than I’ve been used to. This has been made much easier by the advent of bosh-lite, a system for deploying all of Cloud Foundry’s components into a single virtual machine using the bosh continuous deployment and configuration tool. bosh-lite achieves this by using containers (Cloud Foundry’s own Warden container technology) to “emulate” the individual VMs where jobs would run in a full distributed topology.

bosh-lite has actually been around for a number of months now, but I’ve not had much of a chance to play with it until recently. This is partly due to other activities, and partly because my earlier attempts to get an environment up-and-running were hampered by a lack of memory. It should be possible to run bosh-lite with a Cloud Foundry deployment in 8GB of RAM, but given my laptop’s configuration and the amount of other stuff I’m usually running, that was never comfortable – now that I’m rocking 16GB in a MacBook Pro, things are running more smoothly.
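If memory is tight, it can also help to adjust how much RAM the VM gets, via the provider settings in bosh-lite’s Vagrantfile – a minimal sketch, assuming the Fusion provider (the exact block depends on your Vagrantfile and plugin version):

# inside the Vagrant.configure block
config.vm.provider :vmware_fusion do |v|
  v.vmx["memsize"] = "8192"  # RAM (in MB) to allocate to the bosh-lite VM
end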

I don’t intend to spend this post documenting how to install bosh-lite and get a running single-node Cloud Foundry system. I followed the instructions in the README and things went well on this occasion. One suggestion I would make is, if you can, to use VMware Fusion (assuming, like me, you’re on OS X) and the Vagrant provider for Fusion – it seems quite a lot better than VirtualBox. If you do, don’t forget to pass the --provider=vmware_fusion flag when you bring your Vagrant image up (that’s something I usually forget). One other little thing to mention: after I started the bosh deployment, the bosh CLI gem timed out and returned a REST error – but the deployment process itself continued without any issues, and I was able to use bosh tasks to check in on the progress. If you are interested, I used cf-release-157 this time around.
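For reference, that start-up sequence is roughly the following (a sketch – 192.168.50.4 was bosh-lite’s documented default director address at the time, so check your own setup):

$ vagrant up --provider=vmware_fusion
$ bosh target 192.168.50.4
$ bosh tasks   # check on a running deployment, e.g. if the CLI times out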

Once I had my minty-fresh Cloud Foundry running, I deployed Matt Stine’s handy, simple Ruby scale demo app and pushed up the number of instances.
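That amounts to just a couple of commands – sketched here with a hypothetical app name, and note that the exact flags vary between cf CLI versions:

$ cf push scale-demo
$ cf scale scale-demo --instances 10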

So what’s the point of this post? I want to mention two things…

Note: this is not about debugging applications on Cloud Foundry in general – a PaaS is an opinionated system and you generally shouldn’t need to poke around inside it like this. This is for debugging the Cloud Foundry runtime itself, or aspects that might run inside a container. Oh, and I’m sorry about the formatting of some of the shell output examples below!

Peeking at NATS traffic

NATS is the internal, lightweight message bus that Cloud Foundry components use to talk to one another. I’d read blog posts from Cornelia and from Dr Nic about digging into this before.

First of all, I used bosh ssh to access the NATS host:

$ bosh ssh
1. ha_proxy_z1/0
2. nats_z1/0
3. postgres_z1/0
4. uaa_z1/0
5. login_z1/0
6. api_z1/0
7. clock_global/0
8. api_worker_z1/0
9. etcd_leader_z1/0
10. hm9000_z1/0
11. runner_z1/0
12. loggregator_z1/0
13. loggregator_trafficcontroller_z1/0
14. router_z1/0
Choose an instance: 2
Enter password (use it to sudo on remote host): ***
Target deployment is `cf-warden'

Setting up ssh artifacts

Director task 9

Task 9 done
Starting interactive shell on job nats_z1/0

So now I’m on the NATS host – now what? Well, strictly speaking I didn’t need to log in to that host/container at all: as a messaging system, the other hosts can connect to it over the network anyway. The reason I wanted to log in was to find out how NATS was configured.

$ ps -ef | grep nats
root 1470 1 0 12:09 ? 00:00:12 /var/vcap/packages/gnatsd/bin/gnatsd -V -D -c /var/vcap/jobs/nats/config/nats.conf

$ more /var/vcap/jobs/nats/config/nats.conf

net: "10.244.0.6"
port: 4222

pid_file: "/var/vcap/sys/run/nats/nats.pid"
log_file: "/var/vcap/sys/log/nats/nats.log"

authorization {
  user: "nats"
  password: "nats"
  timeout: 15
}

cluster {
  host: "10.244.0.6"
  port: 4223

  authorization {
    user: "nats"
    password: "nats"
    timeout: 15
  }

  routes = [

  ]
}

From this, I can see that NATS is listening on IP 10.244.0.6, port 4222 (the NATS default), and that it is configured for username/password authentication. Handy to know!

I borrowed a little script from Dr Nic, but needed to modify it slightly to talk to authenticated NATS (his original script assumed there was no auth in place):


#!/usr/bin/env ruby
# Subscribe to every subject on the bus ('>' is the NATS full wildcard)
# and print each message as it arrives.
require "nats/client"

NATS.start(:uri => "nats://nats:nats@10.244.0.6:4222") do
  NATS.subscribe('>') { |msg, reply, sub| puts "Msg received on [#{sub}] : '#{msg}'" }
end

[update – Dr Nic has provided a more convenient method to do this, in the comments below – check out nats-sub – but this works, as well]

$ ./nats-all.sh
Msg received on [router.register] : '{"host":"10.244.0.134","port":8080,"uris":["login.10.244.0.34.xip.io"],"tags":{"component":"login"},"index":0,"private_instance_id":"e6194fe8-4910-4cb1-9f7c-d5ee7ff3f36b"}'
Msg received on [router.register] : '{"host":"10.244.0.130","port":8080,"uris":["uaa.10.244.0.34.xip.io"],"tags":{"component":"uaa"},"index":0,"private_instance_id":"7713dd5b-3613-41a6-9c67-c48f22a769b4"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61021,"tags":{"component":"dea-0"},"private_instance_id":"b52dfd91d68144cabb14b6c7bae77daae8b493acf1354c99941d49772a1f61fb"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61025,"tags":{"component":"dea-0"},"private_instance_id":"090f5c5aeee94fdfb4a4e0f0afde2553480dcd97c018431db37b4dffdc80fde4"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61028,"tags":{"component":"dea-0"},"private_instance_id":"92e10af77b274836a3f54373c9b7feee025c5b72f41a4c4982bde97d241ebd5b"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61039,"tags":{"component":"dea-0"},"private_instance_id":"86edf0c0a7f84f04b52693b489ad93b7f857f77271b84d568d8f5600b34f7054"}'
Msg received on [router.register] : '{"host":"10.244.0.26","port":34567,"uris":["8b24c0a7d28f4e03aa028a3dc89fb8c3.10.244.0.34.xip.io"],"tags":{"component":"directory-server-0"}}'
Msg received on [dea.advertise] : '{"id":"0-1ba3459ea4cd406db833c1d188a78c02","stacks":["lucid64"],"available_memory":23296,"available_disk":22528,"app_id_to_count":{"b8550851-37a0-4bd5-bdce-1d787b087887":10},"placement_properties":{"zone":"default"}}'
Msg received on [staging.advertise] : '{"id":"0-1ba3459ea4cd406db833c1d188a78c02","stacks":["lucid64"],"available_memory":23296}'
Msg received on [dea.heartbeat] : '{"droplets":[{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"d92d3c0c43ce4b6981e443e5c2064580","index":0,"state":"RUNNING","state_timestamp":1392639135.9526377},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"898e632697e246de9cf6b7330444227c","index":1,"state":"RUNNING","state_timestamp":1392639136.3117783},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"56d023e374aa49d88720daabac58e862","index":2,"state":"RUNNING","state_timestamp":1392639135.2225387},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"f11d86a7f4ad47f1ad554ae1b087d5f6","index":3,"state":"RUNNING","state_timestamp":1392639136.1042},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"c9e6de77f0484e6cae47f73ad6ca778a","index":4,"state":"RUNNING","state_timestamp":1392639135.9426212},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"924c387fc33444289b2db2762eefac42","index":5,"state":"RUNNING","state_timestamp":1392639135.940636},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"69866b260b1a49a09c03e178c4add2c5","index":6,"state":"RUNNING","state_timestamp":1392639135.944143},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"94bc605505d94dc1832e55bf2f671a99","index":7,"state":"RUNNING","state_timestamp":1392639135.4456258},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"8420df9bbe64456385dfa91285641ba4","index":8,"state":"RUNNING","state_timestamp":1392639135.9456131},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"ed9ad14f6599494c96f90296c59e6041","index":9,"state":"RUNNING","state_timestamp":1392639135.938359}],"dea":"0-1ba3459ea4cd406db833c1d188a78c02"}'
Msg received on [router.register] : '{"host":"10.244.0.10","port":8080,"uris":["loggregator.10.244.0.34.xip.io"]}'
Msg received on [router.register] : '{"host":"10.244.0.138","port":9022,"uris":["api.10.244.0.34.xip.io"],"tags":{"component":"CloudController"},"index":0,"private_instance_id":null}'
Msg received on [router.register] : '{"host":"10.244.0.134","port":8080,"uris":["login.10.244.0.34.xip.io"],"tags":{"component":"login"},"index":0,"private_instance_id":"e6194fe8-4910-4cb1-9f7c-d5ee7ff3f36b"}'

Warden containers and shells

Cloud Foundry’s native container technology is called Warden. When an application is deployed, Cloud Foundry starts up a Warden container based on the assigned resource limits (memory and so on), and the application instances run inside it. How can you get “inside” the container to see what is going on?

Well, there are a couple of techniques. Cloud Foundry Loggregator provides streaming access to the standard application logs (stdout/stderr) via the cf logs command. Another option is James Bayer’s cool websocket-based method for getting access to the container. Yet another option is Warden’s own shell, wsh. This does assume you can access the DEA machine with ssh, however.

wsh doesn’t seem to be very well documented, although I knew Cornelia had played around with it – see her excellent blog post on troubleshooting CF and applications, including a great flowchart / graphic suggesting different techniques.

Here’s the secret sauce:

1. Login to the DEA VM (called “runner_z1/0” in the list provided by bosh ssh).

2. Identify your Warden container… there are a lot showing below, but I happen to know that these are several instances of the same app. The important part is the instance-17ij46hadt2 – the second part of that value maps to the location of the container’s private space on disk.

$ ps -ef | grep warden
root        49    42  1 11:41 ?        00:00:41 /var/vcap/bosh/bin/ruby /var/vcap/bosh/bin/bosh_agent -c -I warden -P ubuntu
root      5390 32634  0 12:12 ?        00:00:00 /var/vcap/data/packages/warden/38.1/warden/src/oom/oom /tmp/warden/cgroup/memory/instance-17ij46hadss
root      5503 32634  0 12:12 ?        00:00:00 /var/vcap/data/packages/warden/38.1/warden/src/oom/oom /tmp/warden/cgroup/memory/instance-17ij46hadsu
root      5697 32634  0 12:12 ?        00:00:00 /var/vcap/data/packages/warden/38.1/warden/src/oom/oom /tmp/warden/cgroup/memory/instance-17ij46hadt3
root      6779 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadsu/bin/iomux-spawn /var/vcap/data/warden/depot/17ij46hadsu/jobs/58 /var/vcap/data/warden/depot/17ij46hadsu/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadsu/run/wshd.sock --user vcap /bin/bash
root      6780  6779  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadsu/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadsu/run/wshd.sock --user vcap /bin/bash
root      6784 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadsu/bin/iomux-link -w /var/vcap/data/warden/depot/17ij46hadsu/jobs/58/cursors /var/vcap/data/warden/depot/17ij46hadsu/jobs/58
root      6930 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadss/bin/iomux-spawn /var/vcap/data/warden/depot/17ij46hadss/jobs/59 /var/vcap/data/warden/depot/17ij46hadss/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadss/run/wshd.sock --user vcap /bin/bash
root      6931  6930  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadss/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadss/run/wshd.sock --user vcap /bin/bash
root      6934 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadss/bin/iomux-link -w /var/vcap/data/warden/depot/17ij46hadss/jobs/59/cursors /var/vcap/data/warden/depot/17ij46hadss/jobs/59
root      6950 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadt3/bin/iomux-spawn /var/vcap/data/warden/depot/17ij46hadt3/jobs/60 /var/vcap/data/warden/depot/17ij46hadt3/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadt3/run/wshd.sock --user vcap /bin/bash
root      6955  6950  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadt3/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadt3/run/wshd.sock --user vcap /bin/bash
root      6960 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadt3/bin/iomux-link -w /var/vcap/data/warden/depot/17ij46hadt3/jobs/60/cursors /var/vcap/data/warden/depot/17ij46hadt3/jobs/60
vcap     23713 16807  0 12:26 pts/0    00:00:00 grep --color=auto warden
root     32634     1  0 11:52 ?        00:00:09 ruby /var/vcap/data/packages/warden/38.1/warden/vendor/bundle/ruby/1.9.1/bin/rake warden:start[/var/vcap/jobs/dea_next/config/warden.yml]

3. Head over to the directory for your chosen Warden instance:

$ cd /var/vcap/data/warden/depot/17ij46hadt2

4. Notice that the Warden containers are running as root. If you run wsh now as an unprivileged user, you’ll get a connect: Permission denied error. Time to switch to root, and then run wsh, specifying the command to run inside the container as a parameter:

$ sudo su -
# cd /var/vcap/data/warden/depot/17ij46hadt2
# bin/wsh /bin/bash

5. At this point, we’re inside the Warden container with a bash shell, and all commands are scoped inside it. So, let’s take a look at what is running:

root@17ij46hadt2:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 12:12 ?        00:00:00 wshd: 17ij46hadt2
vcap        29     1  0 12:12 ?        00:00:00 /bin/bash
vcap        31    29  0 12:12 ?        00:00:00 ruby /home/vcap/app/vendor/bundle/ruby/1.9.1/bin/rackup config.ru -p 61031
vcap        32    31  0 12:12 ?        00:00:00 /bin/bash
vcap        33    31  0 12:12 ?        00:00:00 /bin/bash
vcap        34    32  0 12:12 ?        00:00:00 tee /home/vcap/logs/stdout.log
vcap        35    33  0 12:12 ?        00:00:00 tee /home/vcap/logs/stderr.log
root        39     1  0 12:27 pts/0    00:00:00 /bin/bash
root        52    39  0 12:27 pts/0    00:00:00 ps -ef

This is our Ruby app, running on port 61031, and we can see the logs being written as well.
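Being inside the container also means you can poke at the app and its logs directly – for example (assuming curl is available in the container’s rootfs; the port is the one we spotted in the process list):

root@17ij46hadt2:~# curl -s http://localhost:61031/
root@17ij46hadt2:~# tail /home/vcap/logs/stdout.log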

Hopefully this is useful information for folks wanting to dig around inside bosh-lite and a running Cloud Foundry system!

Pivotal CF – the enterprise platform for software development

My boss and mentor, James Watters, just blogged about the launch of what we’ve been working on since before Pivotal was formed earlier this year – Pivotal One, powered by Pivotal CF (based on Cloud Foundry).

As I wrote back in April:

Pivotal is bringing together a number of key technology assets – our Open Source cloud platform (Cloud Foundry), agile development frameworks like Spring, Groovy and Grails, a messaging fabric (RabbitMQ), and big, fast data assets like Pivotal HD.

What we’re announcing today delivers on that promise and our vision – the consumer-grade enterprise, enabling organisations to create new applications with unprecedented speed. The cloud – infrastructure clouds, IaaS like Amazon EC2, VMware vSphere, OpenStack, CloudStack, etc – can be thought of as the new hardware. It’s like buying a beige server box back in the 90s – the IaaS layer gives you a bunch of CPU, network, and storage resources, and for your application to use them, you need a layer in between – an operating system, if you like. We’ve spoken of our ambition for Cloud Foundry as “the Linux of the Cloud”, and it already runs on all of those infrastructures I’ve listed above – in the future, hopefully more.

Why is that important? Why should developers care about this Platform (PaaS) layer? A development team shouldn’t have to go through an 18 month delivery cycle to deliver an app! We’re putting an end to the whole cycle of calling up the infrastructure team, having new servers commissioned, operating systems installed, databases configured etc etc just to get an application deployed and running. When you first push an application to Cloud Foundry, and can then bind data services and scale out with simple individual commands, it really is a liberating experience compared to what traditionally has been required to get your application running. We’re making it quicker and easier to get going – a friction-free, turnkey experience. You should just be able to write your code and make something amazing.
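To make that concrete, the whole deploy-bind-scale cycle is on the order of three commands – a sketch with hypothetical names (exact flags vary between cf CLI versions):

$ cf push myapp
$ cf bind-service myapp my-postgres
$ cf scale myapp --instances 4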

We’re also delivering choice – of runtimes and languages, data services, and also importantly, a choice of “virtual hardware”. When Comic Relief ran in the UK this year, in order to avoid any risk of hardware failure (we all know there’s a risk that Amazon might go down), the applications were deployed on Cloud Foundry running both on Amazon EC2 with geographical redundancy, and on VMware vSphere – no lock-in to any cloud provider, and the developers didn’t have to learn all of the differences of operating different infrastructures; they just pushed their code. We’re happy to know that it was a very successful year for the Comic Relief charity, and that Cloud Foundry helped.

Pivotal One also includes some amazing data technologies – Pivotal HD (a simple-to-manage Hadoop distribution) and Pivotal AX (analytics for the enterprise). We recognise that as well as building applications, you need to store and analyse the data, so rather than just shipping a Cloud Foundry product, we roll up the elastic scalable runtime, cutting-edge technologies like Spring.io, and our big data offerings. That’s different from many of the others in the same market. We’ve been running our own hosted cloud, now available at run.pivotal.io, on AWS for over a year, so we’ve learned a lot about running systems at scale, and Pivotal One can do just that.

Above all, I wanted to say just how excited I am to be part of this amazing team. It is an honour to work with some incredibly talented engineers and leaders. I’m also personally excited that our commercial and our open source ecosystems continue to grow, including large organisations like IBM, SAP, Piston … it’s a long list. We took out an ad in the Wall Street Journal to thank them. I also want to thank our community of individual contributors (the Colins, Matts, Davids, Dr Nics, Yudais… etc etc!) many of whom, coincidentally for me, are in the UK – check out the very cool Github community where some of their projects are shared.

I’m convinced that this Platform is the way forward. It’s going to be an even more exciting year ahead.

A small selection of other coverage, plenty more to read around the web:

Running tinytinyRSS on Cloud Foundry

Google Reader is going away in a week or so, and my friends have been asking me where I’m migrating all of my feed reading activities to. The answer for me is a combination of Flipboard and Feedly (both of which I recommend), but for those who prefer a more traditional Reader-style UI and want to retain ownership of their data, running tinytinyRSS is a possible alternative. I’d heard about it, but was tipped off to it again by my friend Dave Neary over at Red Hat 🙂

tinytinyRSS is a PHP application and needs a MySQL or PostgreSQL database. It offers the ability to import an OPML file (basically an XML format for listing RSS subscriptions), as well as various other capabilities and plugins.

Since we launched Cloud Foundry Hosted Developer Edition (aka CF v2) last week, I thought I’d find out how much effort it would be to install and run ttRSS on our new platform. It should “just work” – with buildpack support, you can now bring your own runtime to the platform… and we currently have free Marketplace SQL offerings from ElephantSQL and clearDB. Checks all the boxes!

Here’s what happened when I set up ttRSS on run.pivotal.io (the new URL where Cloud Foundry Hosted Developer Edition from Pivotal runs, replacing the old cloudfoundry.com beta hosted service).

First, I read the installation guide and downloaded the latest release tarball (linked at the bottom of the main wiki page). Then I unpacked the tarball on my Mac.

Once inside the release directory, I decided to just “push” the app to Cloud Foundry. I knew I’d need a PHP runtime, so my first thought was to point at the Heroku PHP buildpack (CF v2 is compatible with many Heroku buildpacks). I grabbed the URL and entered the following:

Tiny-Tiny-RSS-1.8  cf push --buildpack=https://github.com/heroku/heroku-buildpack-php
Name> tinytiny

Instances> 1

1: 64M
2: 128M
3: 256M
Memory Limit> 256M

Creating tinytiny... OK

1: tinytiny
2: none
Subdomain> tinytiny

1: cfapps.io
2: mqttbridge.com
3: none
Domain> 1

Creating route tinytiny.cfapps.io... OK
Binding tinytiny.cfapps.io to tinytiny... OK

Create services for application?> y

1: blazemeter n/a, via blazemeter
2: cleardb n/a, via cleardb
3: cloudamqp n/a, via cloudamqp
4: elephantsql n/a, via elephantsql
5: mongolab n/a, via mongolab
6: rediscloud n/a, via garantiadata
7: treasuredata n/a, via treasuredata
What kind?> 4

Name?> elephantsql-53b67

1: turtle: Tiny Turtle
Which plan?> 1

Creating service elephantsql-53b67... OK
Binding elephantsql-53b67 to tinytiny... OK
Create another service?> n

Bind other services to application?> n

Save configuration?> y

Saving to manifest.yml... OK
Uploading tinytiny... OK
Starting tinytiny... OK
-----> Downloaded app package (3.1M)
Initialized empty Git repository in /tmp/buildpacks/heroku-buildpack-php/.git/
Installing heroku-buildpack-php.
-----> Bundling Apache version 2.2.22
-----> Bundling PHP version 5.3.10
-----> Uploading staged droplet (12M)
-----> Uploaded droplet
Checking tinytiny...
Staging in progress...
Staging in progress...
  0/1 instances: 1 starting
  1/1 instances: 1 running
OK

Hurrah! The app is deployed! Note that while I was running through the steps here, I also chose to provision an ElephantSQL instance and bind it to my app. I could also have done that via the Marketplace in the Web Console before pushing the app. The tinytinyRSS wiki suggested that it performs better with Postgres than it does with MySQL, so I chose to use that.

The next step in the regular installation is to visit the URL (in this case http://tinytiny.cfapps.io) and check that things are working OK. When I got there, I found a form asking me to fill in the database credentials.

That’s a small issue – right now, there is no autoconfiguration for PHP apps with databases on Cloud Foundry, and I hadn’t modified the application code to grab the information from anywhere in the environment. Fortunately, there is a way to find out what the settings should be – via the env.log file in the application container. Running cf logs got me back the contents of the file. VCAP_SERVICES is where I needed to look.

VCAP_SERVICES={"elephantsql-n/a":[{"name":"elephantsql-53b67","label":"elephantsql-n/a","plan":"turtle","credentials":{"uri":"postgres://xxxxkjkj:lbI7r3Bh@babar.elephantsql.com:5432/xxxxkjkj"}}]}
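That blob is just JSON in the environment, so rather than copying values over by hand, you could pull them out programmatically. Here’s a minimal Ruby sketch of the idea (ttRSS itself is PHP, where getenv plus json_decode achieves the same thing):

#!/usr/bin/env ruby
# Sketch: extract the ElephantSQL credentials from VCAP_SERVICES.
require "json"
require "uri"

services = JSON.parse(ENV.fetch("VCAP_SERVICES", "{}"))
uri = URI.parse(services["elephantsql-n/a"].first["credentials"]["uri"])

puts "host: #{uri.host}  port: #{uri.port}"
puts "user: #{uri.user}  password: #{uri.password}"
puts "database: #{uri.path[1..-1]}"  # strip the leading slash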

I’ve modified the values in the VCAP_SERVICES output above, for obvious reasons, but I plugged the values from the elephantsql service right into the form… hit the Test DB button… and got an error that my PHP runtime didn’t have support for mbstring…

Hmm!

Fortunately, there’s another buildpack for Heroku which adds PHP support, and does have support for mbstring (as well as using nginx instead of Apache, and a few other tweaks). I thought I’d give that one a go instead. I’d already saved my application settings to the manifest.yml file, so I couldn’t simply push a second time with a different buildpack; I had to use the --reset flag to apply the change:

Tiny-Tiny-RSS-1.8  cf push --buildpack=https://github.com/iphoting/heroku-buildpack-php-tyler.git --reset
Using manifest file manifest.yml

Uploading tinytiny... OK
Changes:
  buildpack: 'https://github.com/heroku/heroku-buildpack-php' -> 'https://github.com/iphoting/heroku-buildpack-php-tyler.git'
Updating tinytiny... OK
Stopping tinytiny... OK
Starting tinytiny... OK
-----> Downloaded app package (3.1M)
-----> Downloaded app buildpack cache (4.0K)
Initialized empty Git repository in /tmp/buildpacks/heroku-buildpack-php-tyler.git/.git/
Installing heroku-buildpack-php-tyler.git.
-----> Fetching Manifest
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/manifest.md5sum
-----> Installing Nginx
       Bundling Nginx v1.4.1
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/nginx-1.4.1-heroku.tar.gz
-----> Installing libmcrypt
       Bundling libmcrypt v2.5.8
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/libmcrypt-2.5.8.tar.gz
-----> Installing libmemcached
       Bundling libmemcached v1.0.7
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/libmemcached-1.0.7.tar.gz
-----> Installing PHP
       Bundling PHP v5.4.12
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/php-5.4.12-with-fpm-heroku.tar.gz
-----> Installing newrelic
       Bundling newrelic daemon v2.9.5.78
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/newrelic-2.9.5.78-heroku.tar.gz
-----> Copying config files
-----> Installing boot script
-----> Done with compile
-----> Uploading staged droplet (38M)
-----> Uploaded droplet
Checking tinytiny...
Staging in progress...
Staging in progress...
Staging in progress...
  0/1 instances: 1 starting
  0/1 instances: 1 starting
  0/1 instances: 1 starting
  1/1 instances: 1 running
OK

Success again! Reloading the configuration page, I was greeted with confirmation that the database connection was now working.

After this, I simply needed to initialise the database, save the configuration, log in, change my password, and import my Google Reader OPML file (there are ttRSS plugins which also allow you to import your whole Google Takeout from Reader, including likes and shares).

As I said, I’m personally a big fan of Feedly and I don’t think I’ll be using ttRSS full-time, but this was a really nice and very quick way to prove that Cloud Foundry v2 is ready to host these kinds of apps – even with the redeployment step to swap buildpacks. You might want to give it a try!