
YouTube – everywhere?

Yesterday, YouTube turned seven.

I’ve recently become aware of just how pervasive YouTube has become. It’s available on a range of “computer” platforms – desktops and laptops, mobile phones and tablets. I can access it via AppleTV, Xbox, and a Sony Blu-ray player. Friends who recently updated their home media setup have it on their internet-enabled TV as well as via the Virgin TiVo box on the same “system”. Alongside BBC iPlayer, it’s actually more pervasive in the UK, across the various devices many of us have in our living rooms, than the broadcast DTV, satellite and cable channels themselves. It’s also noticeably more present across a broad range of devices than alternatives like Vimeo, which is arguably better at presenting beautiful, longer HD content on the web itself.

This is both exciting and potentially problematic.

For those of us who have been seeing a multichannel, multimedia future ahead for some time, it’s a validation of the success of streaming web video in breaking the monopolies of the existing broadcasters and media companies. Over time, Google has added some tremendous value to YouTube – enabling creators to rapidly upload, perform simple edits, add soundtracks, and share content, all within a rich HTML browser experience. It is also easy to reach a wide range of devices simply by ticking the “make this video available to mobile” box on the video management page; Google does all the heavy lifting of transcoding, resizing, and deciding whether Flash or HTML5 is the better delivery mechanism.

However, at the same time, it’s kind of… well, clunky. To consume YouTube content on any of the platforms I mentioned, you have to visit a dedicated YouTube widget, app or channel and then navigate around within that box (and each platform presents the content slightly differently). It’s not integrated with the viewing experience – I can’t just say to my TV or viewing device, “show me videos of kittens” and have it aggregate results from different sources, including YouTube. Not only that, but we all know just how variable YouTube content can be, in terms of production quality and duration, and in the antisocial nature of the comments and social interactions around videos. For some of the most popular videos I’ve posted on my channel, I can’t tell you how long I spend moderating the most unbelievably asinine comments! And when we consider the increasing use of streaming video online – be it iPlayer, YouTube, Netflix or any other source – we constantly have to consider the impact on available bandwidth. Bandwidth and connectivity are not universal, no matter how much we may wish they were.

The other side of this is the group of voices who will point to the dominance of Google and their influence over brands and advertising. All very well, but I like to remind people that for all of the amazing “free” services we enjoy (Facebook, Twitter, Google and others), we do have to pay with an acceptance of advertising, and/or sharing of some personal data of our choice – or go back to paying cable and satellite providers for their services. It’s really a simple transaction.

I guess I don’t really have a message with this blog entry, other than to share my observation of the amazingly rapid rise of the new media titan(s). If I were going to offer any further thoughts or advice, it would be the following:

  • explore online video services more – you probably have access to them in more places than you think.
  • remember that video you produce may be viewed on anything from the smallest mobile handset to a nice HD television – so always try to produce your content at the highest quality setting possible, and let YouTube or the other video hosts do the rest.
  • richly tag and describe your content to make it easier to find. “Video1.mov” tells me nothing.
  • learn about the parameters which control how your content is displayed. I’ve previously written about this; the content is still useful but I should probably create an update.
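On that last point, most of those display parameters end up as query strings on the player or embed URL. Here’s a quick sketch, not an authoritative recipe – the parameter names rel and start come from YouTube’s embedded-player documentation, so do check the current docs before relying on them:

```python
from urllib.parse import urlencode

def embed_url(video_id, **params):
    """Build a YouTube embed URL carrying display-control query parameters."""
    base = f"https://www.youtube.com/embed/{video_id}"
    return f"{base}?{urlencode(params)}" if params else base

# e.g. suppress related videos and start playback 30 seconds in
url = embed_url("VIDEO_ID", rel=0, start=30)
```

The same idea applies to ordinary /watch links, though the supported parameters differ between the two contexts.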

This omnipresence across platforms is one of the reasons why I’ve started to primarily use my YouTube channel as the canonical source for all of my video content. Previously I’d used Viddler and Vimeo and occasionally posted a clip to Facebook, but now that I am able to post longer movies, I’ve also posted the full videos of various talks that I’d only previously been able to host at Vimeo. I’m not abandoning all other sources, but a focus on one channel makes a certain amount of sense.

 


Digging through what Twitter knows about me

I joined Twitter on February 21, 2007, at exactly 15:14:48, and I created my account via the web interface. As you can see, my first tweet was pretty mundane!

I remember discussing this exciting cool “new Web 2.0 site” with Kim Plowright (@mildlydiverting) in Roo’s office in Hursley a couple of days before, and before long Roo, Ian and I were all trying this new newness out. It was just before the 2007 SXSWi, where Twitter really started to get on the radar of the geekerati.

But wait a moment! The API only lets you pull back roughly your last 3,200 tweets, so how was I able to get all the way back to that first tweet from five years ago when I’ve got over 33,000 of them to my name?

It’s a relatively little-known fact that you can ask Twitter to disclose everything they hold associated with your account – and they will (at least, in certain jurisdictions – I’m not sure whether they will do this for every single user but in the EU they are legally bound to do so). I learned about this recently after reading Anne Helmond’s blog entry on the subject, and decided to follow the process through. I first contacted Twitter on April 24, and a few days later faxed (!) them my identity documentation, most of which was “redacted” by me :-) Yesterday, May 11, a very large zip file arrived via email.

I say very large, but actually it was smaller than the information dump that Anne received. Her tweets were delivered as 50MB of files, but mine came in nearer to 9MB zipped (17MB unzipped). I’d expected a gigantic amount of data in relation to my tweets, but it seems as though they have recently revised their process and now only provide the basic metadata about each one rather than a whole JSON dump.

So, what do you get for your trouble? Here’s the list of contents, as outlined by Twitter’s legal department in their email to me.

- USERNAME-account.txt: Basic information about your Twitter account.
- USERNAME-email-address-history.txt: Any records of changes of the email address on file for your Twitter account.
- USERNAME-tweets.txt: Tweets of your Twitter account.
- USERNAME-favorites.txt: Favorites of your Twitter account.
- USERNAME-dms.txt: Direct messages of your Twitter account.
- USERNAME-contacts.txt: Any contacts imported by your Twitter account.
- USERNAME-following.txt: Accounts followed by your Twitter account.
- USERNAME-followers.txt: Accounts that follow your Twitter account.
- USERNAME-lists_created.txt: Any lists created by your Twitter account.
- USERNAME-lists_subscribed.txt: Any lists subscribed to by your Twitter account.
- USERNAME-lists-member.txt: Any public lists that include your Twitter account.
- USERNAME-saved-searches.txt: Any searches saved by your Twitter account.
- USERNAME-ip.txt: Logins to your Twitter account and associated IP addresses.
- USERNAME-devices.txt: Any records of a mobile device that you registered to your Twitter account.
- USERNAME-facebook-connected.txt: Any records of a Facebook account connected to your Twitter account.
- USERNAME-screen-name-changes.txt: Any records of changes to your Twitter username.
- USERNAME-media.zip: Images uploaded using Twitter’s photo hosting service (attached only if your account has such images).
- other-sources.txt: Links and authenticated API calls that provide information about your Twitter account in real time.

Of these, let’s dig a bit more deeply into just a few of the items; there’s no need to pick everything to pieces.

The “tracking data” is contained in andypiper-devices.txt and andypiper-ipaudit.txt – interesting. The devices file essentially contains information on my phone, presumably for the SMS feature: they know my number and the carrier. The IP address list tracks back to the start of March, so they have two months of data on which IPs have been used to access my account. I’ve yet to subject that to much scrutiny to check where those addresses are located; that’s another script I need to write.
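A first cut of that script needn’t be complicated. Here’s a minimal sketch, assuming only that each line of the IP audit file contains an IPv4 address somewhere in it (I won’t reproduce the exact layout of the file here):

```python
import re
from collections import Counter

IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def count_login_ips(lines):
    """Tally how often each IPv4 address appears in the audit lines."""
    counts = Counter()
    for line in lines:
        for ip in IPV4.findall(line):
            counts[ip] += 1
    return counts

# usage: with open("andypiper-ipaudit.txt") as f: print(count_login_ips(f).most_common(10))
```

Feeding the resulting addresses into a geolocation lookup would be the obvious next step.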

I took a look at andypiper-contacts.txt and was astonished to find out how much of my contact data Twitter’s friend finder and mobile apps had slurped up. I mean, I don’t even have all of this in my address book… given that the information included the sender email addresses for various online retailers’ newsletters, I’m guessing that Google’s API (I’m a Gmail user) probably coughed up not just my defined contact list, but also the email address of everyone I’d ever heard from, ever.

Fortunately, there’s a way to remove this information permanently, which Anne has written about. I went ahead and did that, and then Twitter warned me that the Who To Follow suggestions might not be so relevant. That’s OK because I don’t use that feature anyway – and in practice, I’ve noticed no difference in the past 24 hours!

I use DMs a lot for quick communication, particularly with colleagues (it was a pretty reliable way of contacting @andysc when I needed him at IBM!). That’s reflected in the size of andypiper-dms.txt, which is also a scary reminder: I used to delete DMs, but since Twitter now makes it harder to get to and delete them, I’ve stopped removing them – and there’s a lot of private data I wish I’d scrubbed.

Taking a peek at the early tweets in andypiper-tweets.txt, I’m trying to remember when the @reply syntax was formalised and when Twitter themselves started creating links to the other person’s profile. Many of my early tweets refer to @roo and @epred, and I don’t think they ever went by those handles. Five years is a long time.

I mentioned that the format used to deliver the data appears to have changed since Anne made her request. She got a file containing a JSON dump of each tweet, including metadata like retweet information, in_reply_to, geo, and so on. By comparison, I now have simply the creation info, the status ID (the magic that lets you get back to the tweet via the web UI), and the text itself:

********************
user_id: 786491
created_at: Wed Feb 21 15:43:54 +0000 2007
created_via: web
status_id: 5623961
text: overheating in an office with no comfort cooling or aircon. About to drink water.

It’s a real shame that they have taken this approach, as it means the data is now far more cumbersome to parse and work with. However, using some shell scripts I did some simple slicing-and-dicing, because I was curious how my use of Twitter had grown over time. Here’s a chart showing the number of tweets I posted per year (2012 is a “to date” figure, of course). Growth was slow initially, but last year I suddenly nearly doubled my output.
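The shell scripts amounted to very little; the same per-year slicing can be sketched in a few lines of Python, assuming the record layout shown above (each record carrying a created_at line that ends with the year):

```python
from collections import Counter

def tweets_per_year(dump_text):
    """Count tweets per year from the flat 'key: value' dump format."""
    counts = Counter()
    for line in dump_text.splitlines():
        line = line.strip()
        # e.g. "created_at: Wed Feb 21 15:43:54 +0000 2007" - year is the last token
        if line.startswith("created_at:"):
            counts[line.split()[-1]] += 1
    return counts
```

Charting those counts, or grouping on the created_via field instead, is then trivial.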

Still considering what other analysis I’d like to do. I can chart out the client applications I’ve used, or make a word cloud showing how my conversational topics have changed over time… now that all of the information is mine, that is. It is just a shame I have to do so much manual munging of the output beforehand.

Oh, and the email I received from Twitter Legal also said:

No records were found of any disclosure to law enforcement of information about your Twitter account.

So, that’s alright then…

Why did I do this? Firstly, because I believe in the Open Web and in ownership of my own data. Secondly, because I hope that I’ll now be able to archive this personal history and make it searchable via a tool like ThinkUp (which I’ve been running for a while now, but not for the whole five years). Lastly… no, not “because I could”… well, OK, at least partly because I could… but mostly because I believe that companies like Twitter, Facebook and Google should be fully transparent with their users about the data they hold, and that going through this currently-slightly-painful procedure will encourage Twitter to put formal tools in place to provide this level of access to everyone in a frictionless manner.

If you’ll excuse me, I’m off to dig around some more…

In five years’ time we could be walking round the zoo…

A cryptic title (although a fairly easy lyric to identify) to note that five years ago I started blogging “seriously” outside of the corporate firewall… although I’d had a couple of little online journals before that, 12th December 2005 was the day I kicked off my more active participation in the blogosphere.

It has been a period of enormous change – in online technology, in hardware and software capabilities, and in my life, profile and career. I started blogging for a couple of reasons, which I tend to mention when I give my “social @ IBM” talk. Primarily it was to share information, knowledge and opinion with colleagues and customers, since I was often working alone as a Software Services consultant. It was also to act as a journal.

I mentioned in my last blog entry that I’ve recently taken on a new role as WebSphere Messaging Community Lead at IBM Hursley, and that is in part a reflection and validation of the “social bridgebuilding” I’ve been doing across the corporate firewall and into various spaces over this period. In the past five years I’ve actually ended up moving out of my services / consulting career and into our lab where I try to bring my field experience and customer relationships to bear on what we’re developing. Often it’s actually just about helping to expose some of IBM software’s existing strengths and capabilities to new folks, rather than changing things!

Looking back over five years of this blog (and the others that I contribute to) it’s interesting to see the directions in which my interests have moved. Fundamentally I believe I’m still interested in the impact of technology on society, and in people and individuals. As a pointer to the future, though, I think the next 12 months will probably see a lot more content here focused on solutions I work with. I’ll still continue to sprinkle in other interests – the web, podcasting, video, gaming, photography – but I can feel a body of content building up in my mind that centres more on WebSphere technology. We’ll see what 2011 holds :-)

As ever – thanks for reading – I hope I continue to provide useful content!

Guest posting

Although this blog has been slightly quiet, I’ve been posting content elsewhere lately:

  1. There’s a guest post on the SOMESSO blog about whether or not corporate blogs are still relevant in the world of more dispersed social media.
  2. I’ve contributed to the revived HomeCamp blog with some links to good sources of information.
  3. Talking of revivals, eightbar continues to attract some great content from my fellow Hursley folks, with more changes to come soon.

If my content is my CV – where’s my content?

I’ve frequently told folks who come to my presentations that “my content is my CV”. Sometimes, that content can feel a bit dispersed, especially given my habit of playing with a lot of the new services that come along.

I posted about a similar topic a few months ago, but mainly talked about the different blogs I contribute to. Time for a quick round-up of some of the main places you can find that content (you’ll find longer lists on my About, Audio/Video, and Writings pages).

Oh, and the easiest addresses to remember may be andypiper.co.uk or theandypiper.com – both of which will bring you back here.

What is really called for on my part is either a visual CV, or something a bit different like a launch page or an experimental format. When I have time…