Some random reflections on 2014

As time has moved on, I’ve lavished less attention on my blog, which is a shame… “back in the day” I enjoyed writing for it, and gained a lot of value from doing so. It’s no particular surprise to me that I’ve spent less time writing here in 2014 than in any previous year, but it is a regret. I blame my schedule, a general change in the way I interact online, and a lack of inspiration. Actually, that last one isn’t quite true: I’ve often been inspired, or felt the need to blog, but have found myself mentally blocked. I need to get over that!

Anyway… 2014, looking back… a little bit of a year in review.

The major life change this year was my move to Twitter, which has been very exciting and energising. I’m thrilled to have been invited to work with a team of exceptional people under Jeff Sandquist. In particular, this year I’ve had the short-lived opportunity to work with three brilliant and talented guys I want to say “thank you” to, for making my transition to my new role such a pleasure: Taylor Singletary, Sylvain Carle and Isaac Hepworth. A special shout-out too to my close friend and colleague based in London, Romain Huet, without whom I would have found the past nine months much less fun or easy-to-navigate! The whole team has been just amazing to work with, as have all my wonderful colleagues at the Twitter office in London #gratefulpipes

The work we’re doing on the Developer and Platform Relations team at Twitter is something I’m incredibly passionate about. Connecting with the third-party community and acting as the face and voice of Twitter with those developers, listening to them and responding to their concerns, is the reason I joined the company.

I’ve been involved in the launch of a couple of APIs (most notably the Mute API), and I’m getting to work on much of the external API surface, which plays well with my background and developer experience. We’ve completely relaunched our developer-facing website and forums in the past few months, which the whole team has worked hard on. I’m happy to see the focus of discussion on the developer forums substantially improved now that we’ve moved to the Discourse platform – the user experience is far better than we had with the previous solution.

Most importantly, this past quarter we launched Fabric, our new free mobile SDK and platform for iOS and Android, and delivered a swathe of improvements to the developer experience for mobile enthusiasts. We also ran our first mobile developer conference, Flight – I was excited to be there, and I’m looking forward to seeing that experience continue in 2015.

My background in the Internet of Things and MQTT space has partly carried over into my new life at Twitter, and I’ve had the opportunity to speak at a couple of events (including Flight) about how Twitter’s platform plays into that space. However, I’ve substantially stepped back from playing a major role in the MQTT community this year; a decision in part driven by the need to refocus on my new role, partly due to some personal hostility and “burnout” with a couple of specific issues, but mostly because – I’m no longer “needed”! It has been incredibly satisfying to see the MQTT community grow over the past few years. The standardisation of the protocol at OASIS, the large number of implementations, and the ability of many other much smarter people to pick up the kinds of speaking engagements I was previously doing as a matter of course – all of these things make me immensely proud to have helped to lay the foundations for the success of that community over the past six years or so.

I’ve also been very happy to see the success of the Cloud Foundry platform and the people involved – having devoted the previous two years of my career to that nascent Open Source community, it is just fantastic to see it take off and the Foundation get started. Nice work to everyone involved.

I’ve again thoroughly enjoyed my speaking opportunities this year, and the chance to broaden my range. Obviously that has included a lot about the Twitter API and developer platform, and lots again about IoT; but I’ve also spoken on wearables, developer advocacy, and API management. I’m very happy that I got to be a part of the first Twitter Flight conference – one of my speaking career highlights.

Personally, I’ve tried to stay healthy this year (no heart scares, no falls…!), although my travel schedule has been demanding again (TripIt tells me I covered 66613 miles in the air). That did at least include a couple of trips for fun, rather than being all about business 🙂

The next year looks to be busy with more events to speak at (and organise!), and much more to do around the Twitter platform. As an historian, a sociologist and someone with a keen interest in the intersection of technology and people, I’m very excited to be a part of this wave of change.

Happy New Year – here’s to 2015!

Getting inside Cloud Foundry for debug (and profit?)

I’ve recently started to play with some more of the internals of Cloud Foundry than I’ve been used to. This has been made much easier by the advent of bosh-lite, a system for deploying all of Cloud Foundry’s components into a single virtual machine, using the bosh continuous deployment and configuration tool. bosh-lite achieves this by using containers (Cloud Foundry’s own Warden container technology) to “emulate” the individual VMs where jobs would run in a full distributed topology.

bosh-lite has actually been around for a number of months now, but I’ve not had much of a chance to play with it until recently. This is partly due to other activities, and partly because my earlier attempts to get an environment up and running were hampered by a lack of memory. It should be possible to run bosh-lite with a Cloud Foundry deployment in 8GB of RAM, but given my laptop’s configuration and the amount of other stuff I’m usually running, that was never comfortable – now that I’m rocking 16GB in a MacBook Pro, things are running more smoothly.

I don’t intend to spend this post documenting how to install bosh-lite and get a running single-node Cloud Foundry system. I followed the instructions in the README and things went well on this occasion. One suggestion I’d make is to use VMware Fusion (assuming, like me, you’re on OS X) and the Vagrant provider for Fusion if you can – it seems quite a lot better than VirtualBox. If you do, don’t forget to pass the --provider=vmware_fusion flag when you bring your Vagrant image up (that’s something I usually forget). One other little thing to mention is that after I started the bosh deployment, the bosh CLI gem timed out and returned a REST error – but the deployment process itself continued without any issues, and I was able to use bosh tasks to check in on the progress. If you are interested, I used cf-release-157 this time around.

Once I had my minty-fresh Cloud Foundry running, I deployed Matt Stine’s handy, simple, Ruby scale demo app and pushed up the number of instances.

So what’s the point of this post? I want to mention two things…

Note: this is not about debugging applications on Cloud Foundry in general – a PaaS is an opinionated system and you generally shouldn’t need to poke around inside it like this. This is for debugging the Cloud Foundry runtime itself, or aspects that might run inside a container. Oh, and I’m sorry about the formatting of some of the shell output examples below!

Peeking at NATS traffic

NATS is the internal, lightweight message bus that Cloud Foundry components use to talk to one another. I’d read blog posts from Cornelia and from Dr Nic about digging into this before.

First of all, I used bosh ssh to access the NATS host:

$ bosh ssh
1. ha_proxy_z1/0
2. nats_z1/0
3. postgres_z1/0
4. uaa_z1/0
5. login_z1/0
6. api_z1/0
7. clock_global/0
8. api_worker_z1/0
9. etcd_leader_z1/0
10. hm9000_z1/0
11. runner_z1/0
12. loggregator_z1/0
13. loggregator_trafficcontroller_z1/0
14. router_z1/0
Choose an instance: 2
Enter password (use it to sudo on remote host): ***
Target deployment is `cf-warden'

Setting up ssh artifacts

Director task 9

Task 9 done
Starting interactive shell on job nats_z1/0

So now I’m on the NATS host – now what? Well, strictly speaking I didn’t need to log in to that host / container, since of course, as a messaging system, the other hosts can connect to it anyway. The reason I wanted to log in to it was to find out how NATS was configured.

$ ps -ef | grep nats
root 1470 1 0 12:09 ? 00:00:12 /var/vcap/packages/gnatsd/bin/gnatsd -V -D -c /var/vcap/jobs/nats/config/nats.conf

$ more /var/vcap/jobs/nats/config/nats.conf

net: "10.244.0.6"
port: 4222

pid_file: "/var/vcap/sys/run/nats/nats.pid"
log_file: "/var/vcap/sys/log/nats/nats.log"

authorization {
  user: "nats"
  password: "nats"
  timeout: 15
}

cluster {
  host: "10.244.0.6"
  port: 4223

  authorization {
    user: "nats"
    password: "nats"
    timeout: 15
  }

  routes = [

  ]
}

From this, I can see that NATS is listening on IP 10.244.0.6, port 4222 (the NATS default), and that it is configured for username/password authentication. Handy to know!

I borrowed a little script from Dr Nic, but needed to modify it slightly to talk to authenticated NATS (his original script assumed there was no auth in place):


#!/usr/bin/env ruby
require "nats/client"

# Connect using the credentials from nats.conf and subscribe to every subject ('>' wildcard)
NATS.start(:uri => "nats://nats:nats@10.244.0.6:4222") do
  NATS.subscribe('>') { |msg, reply, sub| puts "Msg received on [#{sub}] : '#{msg}'" }
end

[update – Dr Nic has provided a more convenient method to do this, in the comments below – check out nats-sub – but this works as well]

$ ./nats-all.sh
Msg received on [router.register] : '{"host":"10.244.0.134","port":8080,"uris":["login.10.244.0.34.xip.io"],"tags":{"component":"login"},"index":0,"private_instance_id":"e6194fe8-4910-4cb1-9f7c-d5ee7ff3f36b"}'
Msg received on [router.register] : '{"host":"10.244.0.130","port":8080,"uris":["uaa.10.244.0.34.xip.io"],"tags":{"component":"uaa"},"index":0,"private_instance_id":"7713dd5b-3613-41a6-9c67-c48f22a769b4"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61021,"tags":{"component":"dea-0"},"private_instance_id":"b52dfd91d68144cabb14b6c7bae77daae8b493acf1354c99941d49772a1f61fb"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61025,"tags":{"component":"dea-0"},"private_instance_id":"090f5c5aeee94fdfb4a4e0f0afde2553480dcd97c018431db37b4dffdc80fde4"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61028,"tags":{"component":"dea-0"},"private_instance_id":"92e10af77b274836a3f54373c9b7feee025c5b72f41a4c4982bde97d241ebd5b"}'
Msg received on [router.register] : '{"dea":"0-1ba3459ea4cd406db833c1d188a78c02","app":"b8550851-37a0-4bd5-bdce-1d787b087887","uris":["andyp.10.244.0.34.xip.io"],"host":"10.244.0.26","port":61039,"tags":{"component":"dea-0"},"private_instance_id":"86edf0c0a7f84f04b52693b489ad93b7f857f77271b84d568d8f5600b34f7054"}'
Msg received on [router.register] : '{"host":"10.244.0.26","port":34567,"uris":["8b24c0a7d28f4e03aa028a3dc89fb8c3.10.244.0.34.xip.io"],"tags":{"component":"directory-server-0"}}'
Msg received on [dea.advertise] : '{"id":"0-1ba3459ea4cd406db833c1d188a78c02","stacks":["lucid64"],"available_memory":23296,"available_disk":22528,"app_id_to_count":{"b8550851-37a0-4bd5-bdce-1d787b087887":10},"placement_properties":{"zone":"default"}}'
Msg received on [staging.advertise] : '{"id":"0-1ba3459ea4cd406db833c1d188a78c02","stacks":["lucid64"],"available_memory":23296}'
Msg received on [dea.heartbeat] : '{"droplets":[{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"d92d3c0c43ce4b6981e443e5c2064580","index":0,"state":"RUNNING","state_timestamp":1392639135.9526377},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"898e632697e246de9cf6b7330444227c","index":1,"state":"RUNNING","state_timestamp":1392639136.3117783},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"56d023e374aa49d88720daabac58e862","index":2,"state":"RUNNING","state_timestamp":1392639135.2225387},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"f11d86a7f4ad47f1ad554ae1b087d5f6","index":3,"state":"RUNNING","state_timestamp":1392639136.1042},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"c9e6de77f0484e6cae47f73ad6ca778a","index":4,"state":"RUNNING","state_timestamp":1392639135.9426212},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"924c387fc33444289b2db2762eefac42","index":5,"state":"RUNNING","state_timestamp":1392639135.940636},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"69866b260b1a49a09c03e178c4add2c5","index":6,"state":"RUNNING","state_timestamp":1392639135.944143},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"94bc605505d94dc1832e55bf2f671a99","index":7,"state":"RUNNING","state_timestamp":1392639135.4456258},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"8420df9bbe64456385dfa91285641ba4","index":8,"state":"RUNNING","state_timestamp":1392639135.9456131},{"cc_partition":"default","droplet":"b8550851-37a0-4bd5-bdce-1d787b087887","version":"a420d371-0816-4baf-9649-4e21255a66a4","instance":"ed9ad14f6599494c96f90296c59e6041","index":9,"state":"RUNNING","state_timestamp":1392639135.938359}],"dea":"0-1ba3459ea4cd406db833c1d188a78c02"}'
Msg received on [router.register] : '{"host":"10.244.0.10","port":8080,"uris":["loggregator.10.244.0.34.xip.io"]}'
Msg received on [router.register] : '{"host":"10.244.0.138","port":9022,"uris":["api.10.244.0.34.xip.io"],"tags":{"component":"CloudController"},"index":0,"private_instance_id":null}'
Msg received on [router.register] : '{"host":"10.244.0.134","port":8080,"uris":["login.10.244.0.34.xip.io"],"tags":{"component":"login"},"index":0,"private_instance_id":"e6194fe8-4910-4cb1-9f7c-d5ee7ff3f36b"}'
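
Subscribing to the '>' wildcard gets noisy very quickly, even on a single-node deployment. If you only care about one subject – say, watching routes being registered with the router – a minimal variation on the script above (same gem, same credentials; just a sketch) narrows the subscription:

#!/usr/bin/env ruby
require "nats/client"

# Watch only route registration messages rather than the full firehose
NATS.start(:uri => "nats://nats:nats@10.244.0.6:4222") do
  NATS.subscribe('router.register') do |msg, reply, sub|
    puts "Msg received on [#{sub}] : '#{msg}'"
  end
end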

Warden containers and shells

Cloud Foundry’s native container technology is called Warden. When an application is deployed, Cloud Foundry starts up a Warden container sized according to the limits assigned (memory, disk and so on), and the application instances run inside it. How can you get “inside” the container to see what is going on?

Well, there are a few techniques. Cloud Foundry Loggregator provides streaming access to the standard application logs (stdout/stderr) via the cf logs command. Another option is James Bayer’s cool websocket-based method for getting access to the container. Yet another option is Warden’s own shell, wsh. This does assume you can access the DEA machine with ssh, however.

wsh doesn’t seem to be very well documented, although I knew Cornelia had played around with it – see her excellent blog post on troubleshooting CF and applications, including a great flowchart / graphic suggesting different techniques.

Here’s the secret sauce:

1. Log in to the DEA VM (called “runner_z1/0” in the list provided by bosh ssh).

2. Identify your Warden container… there are a lot showing below, but I happen to know that these are several instances of the same app. The important part is the instance-17ij46hadt2 – the second part of that value maps to the location of the container’s private space on disk (there’s a small helper sketch after step 5 if you find yourself hunting for handles often).

$ ps -ef | grep warden
root        49    42  1 11:41 ?        00:00:41 /var/vcap/bosh/bin/ruby /var/vcap/bosh/bin/bosh_agent -c -I warden -P ubuntu
root      5390 32634  0 12:12 ?        00:00:00 /var/vcap/data/packages/warden/38.1/warden/src/oom/oom /tmp/warden/cgroup/memory/instance-17ij46hadss
root      5503 32634  0 12:12 ?        00:00:00 /var/vcap/data/packages/warden/38.1/warden/src/oom/oom /tmp/warden/cgroup/memory/instance-17ij46hadsu
root      5697 32634  0 12:12 ?        00:00:00 /var/vcap/data/packages/warden/38.1/warden/src/oom/oom /tmp/warden/cgroup/memory/instance-17ij46hadt3
root      6779 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadsu/bin/iomux-spawn /var/vcap/data/warden/depot/17ij46hadsu/jobs/58 /var/vcap/data/warden/depot/17ij46hadsu/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadsu/run/wshd.sock --user vcap /bin/bash
root      6780  6779  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadsu/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadsu/run/wshd.sock --user vcap /bin/bash
root      6784 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadsu/bin/iomux-link -w /var/vcap/data/warden/depot/17ij46hadsu/jobs/58/cursors /var/vcap/data/warden/depot/17ij46hadsu/jobs/58
root      6930 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadss/bin/iomux-spawn /var/vcap/data/warden/depot/17ij46hadss/jobs/59 /var/vcap/data/warden/depot/17ij46hadss/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadss/run/wshd.sock --user vcap /bin/bash
root      6931  6930  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadss/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadss/run/wshd.sock --user vcap /bin/bash
root      6934 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadss/bin/iomux-link -w /var/vcap/data/warden/depot/17ij46hadss/jobs/59/cursors /var/vcap/data/warden/depot/17ij46hadss/jobs/59
root      6950 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadt3/bin/iomux-spawn /var/vcap/data/warden/depot/17ij46hadt3/jobs/60 /var/vcap/data/warden/depot/17ij46hadt3/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadt3/run/wshd.sock --user vcap /bin/bash
root      6955  6950  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadt3/bin/wsh --socket /var/vcap/data/warden/depot/17ij46hadt3/run/wshd.sock --user vcap /bin/bash
root      6960 32634  0 12:12 ?        00:00:00 /var/vcap/data/warden/depot/17ij46hadt3/bin/iomux-link -w /var/vcap/data/warden/depot/17ij46hadt3/jobs/60/cursors /var/vcap/data/warden/depot/17ij46hadt3/jobs/60
vcap     23713 16807  0 12:26 pts/0    00:00:00 grep --color=auto warden
root     32634     1  0 11:52 ?        00:00:09 ruby /var/vcap/data/packages/warden/38.1/warden/vendor/bundle/ruby/1.9.1/bin/rake warden:start[/var/vcap/jobs/dea_next/config/warden.yml]

3. Head over to the directory for your chosen Warden instance:

$ cd /var/vcap/data/warden/depot/17ij46hadt2

4. Notice that the Warden containers are running as root. If you run wsh now as an unprivileged user, you’ll get a connect: Permission denied error. Time to switch to root, and then run wsh, specifying the command to run inside the container as a parameter:

$ sudo su -
# cd /var/vcap/data/warden/depot/17ij46hadt2
# bin/wsh /bin/bash

5. At this point, we’re inside the Warden container with a bash shell, and all commands are scoped inside it. So, let’s take a look at what is running:

root@17ij46hadt2:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 12:12 ?        00:00:00 wshd: 17ij46hadt2
vcap        29     1  0 12:12 ?        00:00:00 /bin/bash
vcap        31    29  0 12:12 ?        00:00:00 ruby /home/vcap/app/vendor/bundle/ruby/1.9.1/bin/rackup config.ru -p 61031
vcap        32    31  0 12:12 ?        00:00:00 /bin/bash
vcap        33    31  0 12:12 ?        00:00:00 /bin/bash
vcap        34    32  0 12:12 ?        00:00:00 tee /home/vcap/logs/stdout.log
vcap        35    33  0 12:12 ?        00:00:00 tee /home/vcap/logs/stderr.log
root        39     1  0 12:27 pts/0    00:00:00 /bin/bash
root        52    39  0 12:27 pts/0    00:00:00 ps -ef

This is our Ruby app, running on port 61031, and we can see the logs being written as well.
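
If you end up doing this kind of spelunking regularly, a tiny Ruby helper saves some ps-grepping when hunting for container handles. This is just a sketch, assuming the default depot path shown above:

# list_containers.rb – print the Warden container handles present on this DEA
Dir.glob('/var/vcap/data/warden/depot/*').select { |d| File.directory?(d) }.each do |dir|
  puts File.basename(dir)
end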

Hopefully this is useful information for folks wanting to dig around inside bosh-lite and a running Cloud Foundry system!

Upcoming speaking gigs

Fresh from a quick presentation and supporting Hackference this past weekend (more on that soon), I’ve turned my attention to the next couple of months of travel and events. There’s a lot of stuff happening!

Firstly, to my enormous regret I have to miss the Brighton Mini Maker Faire this coming weekend – if you are in the UK then it is a great day out, and I encourage you to go along, with or without a young family in tow.  I wrote about attending the first one in 2011, and helped as a volunteer last year. I’m sure it is going to be fabulous!

Instead of being in the UK, this coming weekend I’m headed to Santa Clara for Platform: the Cloud Foundry Conference – our first developer summit for the whole Cloud Foundry community. On the back of partnership announcements with companies like IBM, Savvis and Piston, this is looking extremely exciting. I don’t have a formal speaking slot, but I’m going to be heavily involved and have helped with the planning and scheduling. I’m hoping to get a couple of topics onto the agenda for the unconference slot on the Monday afternoon, too!

Follow along via the Twitter hashtag #platformcf

SpringOne2GX

Immediately after Platform is the annual SpringOne 2GX event. There has been a huge amount of activity in the Spring community over the past couple of months and I think it is safe to say that this year there is some major excitement around where Spring has been headed. I’ve been privileged to spend some time with folks like Adrian Colyer recently, and I know the entire team has been working hard on many projects, so expect some very interesting news about the evolution of Spring and its capabilities. I’m speaking on the Cloud Foundry track, on the first morning of the conference, with my good friend (and Spring Developer Advocate) Josh Long, covering the topic “Build your Spring Applications on Cloud Foundry”.

The Twitter hashtags are #s2gx or #springone2gx

Later on the same day I’ll be zipping up to San Francisco to participate in a panel discussion at CloudBeat 2013, alongside my friend Diane Mueller and others. The panel topic is “Is PaaS Still Coming?” and we’re on at 1.50 in the afternoon slot. If you are interested in coming along, full event details can be found here, and you can save 20% on a ticket (there is a bunch of great content throughout the event, so if you are in the Bay Area it looks worthwhile). Hashtag for this one is #cloudbeat2013.

[pause for breath… and relax]

The following week I’m enormously honoured to have been invited to a panel at GigaOM Structure Europe, at home here in London. The topic of this one is “DevOps: Is Synchronicity Here?”, and the panel rounds out day 2 of the event by taking a look at the current state of DevOps. This link should save you 25% on a ticket and I’d be delighted to see you there.

Next up, the speaking circuit takes me to Aarhus in Denmark, which is exciting as I’ve only ever visited Copenhagen before. I’ll be at GOTO Aarhus 2013, speaking on Cloud Foundry and why it is a great platform for running Java apps in the cloud.

Later in October I have a trip to Singapore, to talk to Pivotal customers about the products, projects and technologies we are developing, at our first Asia Pacific Pivotal Summit.

Finally – last but by no means least – to finish off October, I have two talks on the slate at JAX London 2013: “Run your Java code on Cloud Foundry” and (with my non-Pivotal, Open Source Community hat on) “Eclipse Paho and MQTT – Java messaging in the Internet of Things”. Both of these are on October 30th in London. If you want to get a ticket to come along to JAX London (it looks jam-packed with great content) then the promo code JL13AP should get you a 15% discount on the ticket price.

Running tinytinyRSS on Cloud Foundry

Google Reader is going away in a week or so, and my friends have been asking me where I’m migrating all of my feed reading activities to. The answer for me is a combination of Flipboard and Feedly (both of which I recommend), but for those who prefer a more traditional Reader-style UI and also to retain ownership of their data, running tinytinyRSS is a possible alternative. I’d heard about it, but was tipped off to it again by my friend Dave Neary over at Red Hat 🙂

tinytinyRSS is a PHP application and needs a MySQL or PostgreSQL database. It offers the ability to import an OPML file (basically an XML format for listing RSS subscriptions), as well as various other capabilities and plugins.

Since we launched Cloud Foundry Hosted Developer Edition (aka CF v2) last week, I thought I’d find out how much effort it would be to install and run ttRSS on our new platform. It should “just work” – with buildpack support, you can now bring your own runtime to the platform… and we currently have free Marketplace SQL offerings from ElephantSQL and clearDB. Checks all the boxes!

Here’s what happened when I set up ttRSS on run.pivotal.io (the new URL where Cloud Foundry Hosted Developer Edition from Pivotal runs, replacing the old cloudfoundry.com beta hosted service).

First, I read the installation guide and downloaded the latest release tarball (linked at the bottom of the main wiki page). Then I unpacked the tarball on my Mac.

Once inside the release directory, I decided to just “push” the app to Cloud Foundry. I knew I’d need a PHP runtime, so my first thought was to point at the Heroku PHP buildpack (CF v2 is compatible with many Heroku buildpacks). I grabbed the URL and entered the following:

Tiny-Tiny-RSS-1.8  cf push --buildpack=https://github.com/heroku/heroku-buildpack-php
Name> tinytiny

Instances> 1

1: 64M
2: 128M
3: 256M
Memory Limit> 256M

Creating tinytiny... OK

1: tinytiny
2: none
Subdomain> tinytiny

1: cfapps.io
2: mqttbridge.com
3: none
Domain> 1

Creating route tinytiny.cfapps.io... OK
Binding tinytiny.cfapps.io to tinytiny... OK

Create services for application?> y

1: blazemeter n/a, via blazemeter
2: cleardb n/a, via cleardb
3: cloudamqp n/a, via cloudamqp
4: elephantsql n/a, via elephantsql
5: mongolab n/a, via mongolab
6: rediscloud n/a, via garantiadata
7: treasuredata n/a, via treasuredata
What kind?> 4

Name?> elephantsql-53b67

1: turtle: Tiny Turtle
Which plan?> 1

Creating service elephantsql-53b67... OK
Binding elephantsql-53b67 to tinytiny... OK
Create another service?> n

Bind other services to application?> n

Save configuration?> y

Saving to manifest.yml... OK
Uploading tinytiny... OK
Starting tinytiny... OK
-----> Downloaded app package (3.1M)
Initialized empty Git repository in /tmp/buildpacks/heroku-buildpack-php/.git/
Installing heroku-buildpack-php.
-----> Bundling Apache version 2.2.22
-----> Bundling PHP version 5.3.10
-----> Uploading staged droplet (12M)
-----> Uploaded droplet
Checking tinytiny...
Staging in progress...
Staging in progress...
  0/1 instances: 1 starting
  1/1 instances: 1 running
OK

Hurrah! The app is deployed! Note that while I was running through the steps here, I also chose to provision an ElephantSQL instance and bind it to my app. I could also have done that via the Marketplace in the Web Console before pushing the app. The tinytinyRSS wiki suggested that it performs better with Postgres than it does with MySQL, so I chose to use that.

The next step in the regular installation is to visit the URL (in this case http://tinytiny.cfapps.io) and check that things are working OK. When I got there, I found a form asking me to fill in the database credentials.

That’s a small issue – right now, there is no autoconfiguration for PHP apps with databases on Cloud Foundry, and I hadn’t modified the application code to grab the information from anywhere in the environment. Fortunately, there is a way to find out what the settings should be – via the env.log file in the application container. Running cf logs got me back the contents of the file. VCAP_SERVICES is where I needed to look.

VCAP_SERVICES={"elephantsql-n/a":[{"name":"elephantsql-53b67","label":"elephantsql-n/a","plan":"turtle","credentials":{"uri":"postgres://xxxxkjkj:lbI7r3Bh@babar.elephantsql.com:5432/xxxxkjkj"}}]}
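
As an aside, if you were wiring up an app to read these credentials itself rather than pasting them into a form, they’re easy to dig out of that JSON blob. Here’s a minimal sketch of parsing it – in Ruby purely for illustration, since ttRSS itself is PHP:

require "json"
require "uri"

# VCAP_SERVICES holds a hash of service labels, each with a list of bound instances
services = JSON.parse(ENV["VCAP_SERVICES"])
uri = URI.parse(services["elephantsql-n/a"].first["credentials"]["uri"])

puts "host: #{uri.host}  port: #{uri.port}  database: #{uri.path[1..-1]}"
puts "user: #{uri.user}  password: #{uri.password}"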

I’ve modified the values in the VCAP_SERVICES output above, for obvious reasons, but I plugged the real values from the elephantsql service right into the form… hit the Test DB button… and got an error that my PHP runtime didn’t have support for mbstring…

Hmm!

Fortunately, there’s another Heroku buildpack which adds PHP support, and does have support for mbstring (as well as using nginx instead of Apache, and a few other tweaks). I thought I’d give that one a go instead. I’d already saved my application settings to the manifest.yml file, so I couldn’t just push a second time with a different buildpack; I had to use the --reset flag to apply the change:

Tiny-Tiny-RSS-1.8  cf push --buildpack=https://github.com/iphoting/heroku-buildpack-php-tyler.git --reset
Using manifest file manifest.yml

Uploading tinytiny... OK
Changes:
  buildpack: 'https://github.com/heroku/heroku-buildpack-php' -> 'https://github.com/iphoting/heroku-buildpack-php-tyler.git'
Updating tinytiny... OK
Stopping tinytiny... OK
Starting tinytiny... OK
-----> Downloaded app package (3.1M)
-----> Downloaded app buildpack cache (4.0K)
Initialized empty Git repository in /tmp/buildpacks/heroku-buildpack-php-tyler.git/.git/
Installing heroku-buildpack-php-tyler.git.
-----> Fetching Manifest
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/manifest.md5sum
-----> Installing Nginx
       Bundling Nginx v1.4.1
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/nginx-1.4.1-heroku.tar.gz
-----> Installing libmcrypt
       Bundling libmcrypt v2.5.8
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/libmcrypt-2.5.8.tar.gz
-----> Installing libmemcached
       Bundling libmemcached v1.0.7
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/libmemcached-1.0.7.tar.gz
-----> Installing PHP
       Bundling PHP v5.4.12
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/php-5.4.12-with-fpm-heroku.tar.gz
-----> Installing newrelic
       Bundling newrelic daemon v2.9.5.78
       https://s3.amazonaws.com/heroku-buildpack-php-tyler/newrelic-2.9.5.78-heroku.tar.gz
-----> Copying config files
-----> Installing boot script
-----> Done with compile
-----> Uploading staged droplet (38M)
-----> Uploaded droplet
Checking tinytiny...
Staging in progress...
Staging in progress...
Staging in progress...
  0/1 instances: 1 starting
  0/1 instances: 1 starting
  0/1 instances: 1 starting
  1/1 instances: 1 running
OK

Success again! Reloading the configuration page, I was greeted with confirmation that the database connection was now working.

After this, I simply needed to initialise the database, save the configuration, log in, change my password, and import my Google Reader OPML file (there are ttRSS plugins which also allow you to import your whole Google Takeout from Reader, including likes and shares).

As I said, I’m personally a big fan of Feedly and I don’t think I’ll be using ttRSS full-time, but this was a really nice and very quick way to prove that Cloud Foundry v2 is ready to host these kinds of apps – even with the redeployment step to swap buildpacks. You might want to give it a try!

 

Busy times, but let’s talk Cloud Foundry!

Users of the existing beta Cloud Foundry hosted service cloudfoundry.com were sent emails this week explaining that we are almost ready to launch version 2 of the service. If you’re a current user, or if you have signed up in the past, dig through your inbox filters for the email (mine ended up under the “Promotions” label thanks to Gmail’s auto-filing magic).

Cloud Foundry v2, sometimes known as “next-gen” or ng, is a big set of updates. I wrote about some of them in my last blog post, and also noted there that we are going to run the new version on AWS.

Some of the things worth getting excited about are:

  • custom domains (the number one thing I’ve been asked for after every talk!)
  • buildpacks – the ability to use “any” language, framework or runtime that has a buildpack, not just Java, Ruby, or node.js (Matt and Brian seem to be competing to find interesting ones!). By the way, you should totally be trying your Spring and Groovy apps on v2! 🙂
  • organisations and spaces – the ability to share apps with a team and collaborate
  • a web management console for your apps
  • a Marketplace, which we will be expanding over time, allowing you to bind third-party services in to your applications.

These are all big changes, and there are many more under the hood (Warden, a new staging process, a new router… it’s a very long list).

My colleague Nima posted a nice slide deck giving a more technical overview of some of the internal changes.

In addition, our demo ninja Dekel has shared a great video of some of the things you can expect from version 2.

Over the past 24 hours or so I’ve been doing my best to respond to questions on Twitter and elsewhere. The existing v1 version will go away on June 30th, so if you are using it now you’ll want to look at migrating apps in the next couple of weeks, and we’ll share more on that soon. The new version will have pricing attached, with a free trial period too. Of course, the code is always available on Github and you’re free to spin up your own CF instance running on AWS, OpenStack or vSphere.

I know folks will have many more questions about the specifics over the next week or so, and we will be looking out for them.

Supercharging the community

As we have grown closer to v2 release, there has been ever-increasing activity in the vcap-dev mailing list and around the community. We’ve had more and more code contributions (so much so that I recently wrote a blog post about how you can contribute to the CF core projects). Projects like the cf-vagrant-installer and cf-nise-installer are helping people get local environments running very quickly. Our friends from PistonCloud released their turtles project. Best of all, the super Dr Nic Williams recently set up a cloudfoundry-community organisation on Github to act as an umbrella for many of these community contributions (info on how to join is here).

Let’s talk! (in London)

Over the past year or so I’ve spent a lot of time out in the developer community in London, and it has become apparent that a lot of folks are interested, already contributing to the community, or in some cases, already running their own CF instances in production 🙂

So, I thought it would be a good idea to do some bridge building and bring folks together to get to know one another. A brief unscientific Twitter poll suggested that other people liked the idea, so we’ve stuck a stake in the ground (the evening of July 3rd) and I’ve set up an Eventbrite page for a meet up. If you’d like to chat with people about Cloud Foundry over a drink, do sign up and come along. I’ll sort out a venue in the next couple of weeks, but I imagine it will be “around Shoreditch” or possibly over towards Waterloo, for purely selfish reasons! Totally informal, this is just a community meet up, so I’m not planning to do slides and talks and stuff – just come and share ideas or ask questions!