Tag Archives: Twitter

#newjob

In late 2011, I was contacted by a very charming, smart and persuasive French gentleman who spoke of clouds, platform-as-a-service, and polyglot programming. It took him and his team a couple of months to get me thinking seriously about a career change, after 10 great years at IBM. I’d spent that period with “Big Blue” coding in Java and C, and primarily focused on enterprise application servers, message queueing, and integration – and yet the lure of how easy vmc push[1] made it for me to deploy and scale an app was astounding! Should I make the transition to a crazy new world? Over Christmas that year, I decided it would be a good thing to get in on this hot new technology and join VMware as Developer Advocate on the Cloud Foundry team. I joined the team early in 2012.

The Cloud Foundry adventure has been amazing. The day after I joined the team, the project celebrated its first anniversary, and we announced the BOSH continuous deployment tool; I spent much of that first year with the team on a whirlwind of events and speaking engagements, growing the community. The Developer Relations team that Patrick Chanezon and Adam Fitzgerald put together was super talented, and it was brilliant to be part of that group. Peter, Chris, Josh, Monica, Raja, Rajdeep, Alvaro, Eric, Frank, Tamao, Danny, Chloe, D, Giorgio, friends in that extended team… it was an honour.

A year after I joined, VMware spun out Cloud Foundry, SpringSource and other technologies into a new company, Pivotal – headed up by Paul Maritz. I’ve been privileged to work under him, Rob Mee at Pivotal Labs, and most closely, my good friend James Watters on the Cloud Foundry team. I’ve seen the opening of our new London offices on Old Street, welcomed our partners and customers into that unique collaborative and pairing environment, and observed an explosion of activity and innovation in this space. We launched an amazing product. James Bayer heads up a remarkable group of technologists working full-time on Cloud Foundry, and it has been a pleasure to get to know him and his team. Most recently, I’ve loved every minute working with Cornelia, Ferdy, Matt, Sabha and Scott (aka the Platform Engineering team), another talented group of individuals from whom I’ve learned much.

Over the course of the last two years I’ve seen the Platform-as-a-Service space grow, establish itself, and develop – culminating most recently in my talk at BCS Oxfordshire:

Last week, we announced the forthcoming Cloud Foundry Foundation – and one could argue that, as a community and Open Source kinda guy, this is the direction in which I’ve helped to move things over the past two years, although I can claim no credit at all for the Foundation announcement itself. I’ve certainly enjoyed hosting occasional London Cloud Foundry Community meetups and drinks events (note, the next London PaaS User Group event has 2 CF talks!), and I’ve made some great friends locally and internationally through the ongoing growth of the project. I’m proud of the Platform event we put on last year, I think the upcoming Cloud Foundry Summit will be just as exciting, and I’m happy to have been a part of establishing and growing the CF community here in Europe.

Cloud Foundry is THE de facto Open Source PaaS standard, the ecosystem is strong and innovative, and that has been achieved in a transparent and collaborative way, respectful to the community and good-natured in the face of competition. Rest assured that I’ll continue to watch the project and use PaaSes which implement it (I upgraded to a paid Pivotal Web Services account just this past week, I tried BlueMix, and I’m an ongoing fan of the Anynines team).

There are many missing shout-outs here… you folks know who you are, and should also know that I’ve deeply enjoyed learning from you and working with you. Thank you, Pivotal team! I do not intend to be a stranger to the Bay Area! In my opinion, Pivotal is positioned brilliantly in offering an end-to-end mobile, agile development, cloud platform and big data story for the enterprise. I look forward to continuing the conversations around that in the next couple of weeks.

[…]

What happens after “the next couple of weeks”? Well, this is as good a time as any (!) to close that chapter, difficult though it is to leave behind a team I’ve loved working with, on a product and project that is undoubtedly going to continue to be fantastically successful this year and beyond. So, it is time to announce my next steps, which may or may not be clear from the title of this post…🙂

Joining Twitter!

I joined Twitter as a user on Feb 21 2007. On the same day, seven years later, I accepted a job offer to go and work with the Twitter team as a Developer Advocate, based in London.

If you’ve been a long-term follower of mine either here on this blog, or on Twitter, or elsewhere, you’ll know that Twitter is one of my favourite tools online. It has been transformational in my life and career, and it changed many of my interactions. True story: between leaving IBM and joining VMware I presented at Digital Bristol about social technologies, and I was asked which one I would miss the most if it went away tomorrow; the answer was simple: Twitter. As an Open Source guy, too, I’ve always been impressed with Twitter’s contributions to the broader community.

I couldn’t be more #excited to get started with the Twitter Developer Relations team in April!

Follow me on Twitter – @andypiper – to learn more about my next adventure…

[1] vmc is dead, long live cf!

A little bit of Spring and MQTT

I’ve been involved with Spring (the Java framework, not the season…) for a couple of years now – since I joined VMware in 2012 through the former SpringSource organisation in the UK – and I’ve remained “involved” with it through my transition to Pivotal and the framework’s evolution into Spring 4 / Spring.IO under Pivotal’s stewardship.

To be clear, although I’ve been a “Java guy” / hacker through my time at IBM, I have never been a hardcore JEE coder, and my knowledge of Spring itself has always been limited. My good buddy Josh (MISTER Spring, no less) has done his best to encourage me, I briefly played with Spring Shell last year with @pidster, and the brilliant work done by the Spring Boot guys has been helpful in making me look at Spring again (I spoke about Boot – very briefly, as a newcomer to it – at the London Java Community unconference back in November).

Taking all of that on board, then, I’m still an absolute Spring n00b. I recognise a lot of the benefits, I see the amazing work that has gone into Spring 4, and I’m excited by the message and mission of the folks behind the framework. I would say that though… wouldn’t I?🙂

[considering this is my first blog post in a while, I’m taking a while to get past the preamble…]

This week, I chose to flex my coding muscles (!) with a quick diversion into Spring Integration. With a long history in WebSphere Integration back in my IBM days, this was both a return to my roots and a learning experience!

With the new Spring.IO website (written with Spring, and hosted on Pivotal Web Services Cloud Foundry, fact fans!), the Spring team introduced the new Spring Guides – simple and easy-to-consume starter guides to the different technologies available in the Spring family. I knew that the team had added MQTT support via Eclipse Paho, so I thought I’d take a look at how I could modify a Spring Integration flow to take advantage of MQTT.

Incidentally, there’s complete reference documentation on the MQTT support, which is helpful if you’re already familiar with how Spring Integration works.

The resulting simple project is on Github.

Once I’d figured out that I needed to list the latest versions of the various modules as dependencies in the build.gradle file, it wasn’t too hard to adapt the Guide sample. In the example, the docs lead a developer through creating a new flow which searches Twitter and then passes tweets through a transformer (simply flattening the tweet sender and body into a single line of text), into an outbound file adapter.

The bulk of the changes I made were in the integration.xml file. I wanted to replace the file output with the tweets being published to an MQTT topic. To do that, I added the int-mqtt XML namespace information to the file, and configured an outbound-channel-adapter. It was also necessary to add a clientFactory bean configuration for a Paho MQTT connection. You’ll notice that, by default, my example posts results to the test broker at Eclipse (iot.eclipse.org port 1883 – an instance of mosquitto for public testing). Full information on how to build and test the simple example can be found in the project README file.
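To give a flavour of the kind of change involved, here’s a sketch of an MQTT outbound adapter configuration. The element and class names follow the Spring Integration MQTT support module, but the bean ids, channel name and topic are illustrative rather than copied from my project – check the project on Github for the real thing.

```xml
<!-- illustrative sketch: MQTT outbound adapter replacing the file adapter.
     Bean ids, channel name and topic are examples, not from the project. -->

<!-- factory for Paho client connections -->
<bean id="clientFactory"
      class="org.springframework.integration.mqtt.core.DefaultMqttPahoClientFactory"/>

<!-- publish each transformed tweet to the public Eclipse test broker -->
<int-mqtt:outbound-channel-adapter
        id="tweetsOut"
        client-id="si-tweet-sample"
        url="tcp://iot.eclipse.org:1883"
        client-factory="clientFactory"
        default-topic="spring/tweets"
        channel="transformedTweets"/>
```

The transformer’s output channel simply becomes the adapter’s input channel, so the rest of the flow is untouched – that’s the appeal of the Spring Integration approach.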

Thanks to my colleague Gary Russell for helping me to figure out a couple of things, as I was a Spring Integration newcomer!

Digging through what Twitter knows about me

I joined Twitter on February 21, 2007, at exactly 15:14:48, and I created my account via the web interface. As you can see, my first tweet was pretty mundane!

I remember discussing this exciting cool “new Web 2.0 site” with Kim Plowright @mildlydiverting in Roo’s office in Hursley a couple of days before, and before long he, Ian and I were all trying this new newness out. It was just before the 2007 SXSWi, where Twitter really started to get on the radar of the geekerati.

But wait a moment! It’s impossible to pull back more than roughly the last 3,200 tweets using the API, so how was I able to get all the way back to that tweet from five years ago, when I’ve got over 33,000 of them to my name?

It’s a relatively little-known fact that you can ask Twitter to disclose everything they hold associated with your account – and they will (at least, in certain jurisdictions – I’m not sure whether they will do this for every single user but in the EU they are legally bound to do so). I learned about this recently after reading Anne Helmond’s blog entry on the subject, and decided to follow the process through. I first contacted Twitter on April 24, and a few days later faxed (!) them my identity documentation, most of which was “redacted” by me🙂 Yesterday, May 11, a very large zip file arrived via email.

I say very large, but actually it was smaller than the information dump that Anne received. Her tweets were delivered as 50MB of files, but mine came in nearer to 9MB zipped – 17MB unzipped. I’d expected a gigantic amount of data in relation to my tweets, but it seems as though they have recently revised their process and now only provide the basic metadata about each one rather than a whole JSON dump.

So, what do you get for your trouble? Here’s the list of contents, as outlined by Twitter’s legal department in their email to me.

– USERNAME-account.txt: Basic information about your Twitter account.
– USERNAME-email-address-history.txt: Any records of changes of the email address on file for your Twitter account.
– USERNAME-tweets.txt: Tweets of your Twitter account.
– USERNAME-favorites.txt: Favorites of your Twitter account.
– USERNAME-dms.txt: Direct messages of your Twitter account.
– USERNAME-contacts.txt: Any contacts imported by your Twitter account.
– USERNAME-following.txt: Accounts followed by your Twitter account.
– USERNAME-followers.txt: Accounts that follow your Twitter account.
– USERNAME-lists_created.txt: Any lists created by your Twitter account.
– USERNAME-lists_subscribed.txt: Any lists subscribed to by your Twitter account.
– USERNAME-lists-member.txt: Any public lists that include your Twitter account.
– USERNAME-saved-searches.txt: Any searches saved by your Twitter account.
– USERNAME-ip.txt: Logins to your Twitter account and associated IP addresses.
– USERNAME-devices.txt: Any records of a mobile device that you registered to your Twitter account.
– USERNAME-facebook-connected.txt: Any records of a Facebook account connected to your Twitter account.
– USERNAME-screen-name-changes.txt: Any records of changes to your Twitter username.
– USERNAME-media.zip: Images uploaded using Twitter’s photo hosting service (attached only if your account has such images).
– other-sources.txt: Links and authenticated API calls that provide information about your Twitter account in real time.

Of these, let’s dig a bit more deeply into just a few of the items, no need to pick everything to pieces.

The “tracking data” is contained in andypiper-devices.txt and andypiper-ipaudit.txt – interesting. The devices file essentially contains information on my phone, presumably for the SMS feature. They know my number and the carrier. The IP address list tracks back to the start of March, so they have 2 months of data on what IPs have been used to access my account. I’ve yet to subject that to a lot of scrutiny to check where those are located, that’s another script I need to write.

I took a look at andypiper-contacts.txt and was astonished to find out how much of my contact data Twitter’s friend finder and mobile apps had slurped up. I mean, I don’t even have all of this in my address book… given the fact that the information contained the sender email addresses for various online retailer newsletters, I’m guessing that Google’s API (I’m a Gmail user) probably coughed up not just my defined contact list, but also all of the email addresses from anyone I’d ever heard from, ever.

Fortunately, there’s a way to remove this information permanently, which Anne has written about. I went ahead and did that, and then Twitter warned me that the Who To Follow suggestions might not be so relevant. That’s OK because I don’t use that feature anyway – and in practice, I’ve noticed no difference in the past 24 hours!

I use DMs a lot for quick communication, particularly with colleagues (it was a pretty reliable way of contacting @andysc when I needed him at IBM!). That’s reflected in the size of andypiper-dms.txt, which is also a scary reminder: I used to delete them, but since Twitter now makes it harder to get to and delete DMs, I’ve stopped removing them – and there’s a lot of private data I wish I’d scrubbed.

Taking a peek at the early tweets in andypiper-tweets, I’m trying to remember when the @reply syntax was formalised and when Twitter themselves started creating links to the other person’s profile. Many of my early tweets refer to @roo and @epred and I don’t think they ever went by those handles. 5 years is a long time.

I mentioned that the format used to deliver the data appears to have changed since Anne made her request. She got a file containing a JSON dump of each tweet, including metadata like retweet information, in_reply_to, geo, and so on. By comparison, I now have simply creation info, status ID (the magic that lets you get back to the tweets via web UI), and the text itself:

********************
user_id: 786491
created_at: Wed Feb 21 15:43:54 +0000 2007
created_via: web
status_id: 5623961
text: overheating in an office with no 
comfort cooling or aircon. About to drink water.

It’s a real shame that they have taken this approach, as it means the data is now far more cumbersome to parse and work with. However, using some shell scripts I did some simple slicing-and-dicing because I was curious how my use of Twitter had grown over time. Here’s a chart showing the numbers of tweets I posted per year (2012 is a “to date” figure of course). It looks like it was slow growth initially but last year I suddenly nearly doubled my output.
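For example, the per-year counts came from a pipeline along these lines (the filename follows the USERNAME-tweets pattern above, and the field positions assume the created_at format shown in the sample record):

```shell
# Count tweets per year: the year is the last field of each
# "created_at: Wed Feb 21 15:43:54 +0000 2007" line in the dump.
grep '^created_at:' andypiper-tweets.txt \
  | awk '{ print $NF }' \
  | sort \
  | uniq -c
```

Each output line is a count followed by a year, which drops straight into a spreadsheet for charting.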

Still considering what other analysis I’d like to do. I can chart out the client applications I’ve used, or make a word cloud showing how my conversational topics have changed over time… now that all of the information is mine, that is. It is just a shame I have to do so much manual munging of the output beforehand.

Oh, and the email I received from Twitter Legal also said:

No records were found of any disclosure to law enforcement of information about your Twitter account.

So, that’s alright then…

Why did I do this? Firstly, because I believe in the Open Web and ownership of my own data. Secondly, because I hope that I’ll now be able to archive this personal history and make it searchable via a tool like ThinkUp (which I’ve been running for a while now, but not for the whole 5 years). Lastly… no, not “because I could”… well OK, at least partly because I could… because I believe that companies like Twitter, Facebook, Google and others should be fully transparent with their users about the data they hold, and that going through this currently-slightly-painful procedure will encourage Twitter to put formal tools in place to provide this level of access to everyone, frictionlessly.

If you’ll excuse me, I’m off to dig around some more…

Several weeks ago – in Lotus

A very quick, and very belated, post to note that I was one of the guests on the episode of the This Week in Lotus podcast recorded on March 18th 2011 (episode 43, for those keeping count). A fun panel discussion about what was new online and in collaborative and social spaces that week. In particular, we picked off a bunch of topics such as LotusLive supporting the earthquake disaster in Japan, Twitter’s new guidance on developing client apps, and IBM’s broader software capabilities.

You may want to dip in and take a listen… it’s far broader than just being about “Lotus” software, and the regular co-hosts Stuart and Darren are always worth a listen. Give it a try.

A Kind(l)er way of consuming tweets

I picked up an Amazon Kindle 3 over the Christmas period, primarily because I wanted to be able to support a family member who also acquired one. I’d been impressed by the hardware when I’d had a chance to play with a Kindle 3 recently (I’d always thought that the screen refresh and form factor would put me off, but they don’t), and I may also want to dabble in the possibility of developing kindlet applications for the platform. To my mind, despite some limitations, it could be a fantastic slate for displaying relatively-static business content like facts and figures, and of course it is light and has fantastic battery life. I’ve gone for the wifi-only model, not because I wasn’t tempted by the possibility of global free 3G access, but purely because I didn’t consider that I’d need to use it to connect to the wireless much when out-and-about and away from a wifi network.

So far I’ve been very impressed with the device. It is simple, has reasonable usability – although a web interface via Amazon’s website for creating and organising Collections would be exceedingly welcome – and it is definitely encouraging me to read a lot more. It’s a tiny point, but I enjoy the progress bars at the bottom of the page that show me how far I’ve got through each book.

Almost by accident the other day I noticed one of my colleagues retweet a comment from David Singleton:

Now to be fair, this hit me squarely between the eyes – I have the former, and do indeed like the latter. So I just had to ping him and find out more!

Moments later, I had been invited to blootwee.

After a short signup process on the website (hint: it didn’t work brilliantly on the Kindle browser, but it can be done very quickly on a desktop machine), my Kindle refreshed itself with a new document “blootwee for andypiper”.


So what is this doing? Well, essentially, it is scooping my tweets up, grabbing the associated / linked content, creating an ebook, and emailing it to my Kindle – for free. As you will see from the gallery above, the book has tweets at the start, one per page. By following any links, you can jump forward to the point where that web page content is embedded. You can then hit the Back button to return to where you were in the Twitter timeline.

David is currently offering the ability to do this for free on an ad-hoc basis, but he also has some very low-cost paid options to enable this to happen on a daily basis… so you end up essentially with a “newspaper” based on tweets and interesting web pages from your network. The transcoding of web content is not ideal – obviously Flash is not present and image-based content is missing – but it provides a nice way of summarising the content.

I like it. I’m not sure it will become my default way of reading tweets by any means, but what it does give me is a very convenient way of gathering up interesting web content on a daily basis, and reviewing it as I travel. With a 25-hour trip to Australia coming up in the near future, I can see this could be quite useful!

Ping me via Twitter or comment below if you want an invite, and I’ll update this when they are gone.

Notes, because people might ask:

  1. To take a screenshot on the Kindle 3, hit Shift-Alt-G… then hook up via USB and grab the .gif files from the Documents folder.
  2. The linen slip case for my Kindle came from an Etsy seller called kindlecovers.
  3. I have a few more images of my Kindle on Flickr.