Bookmarks in the Fediverse

Last week, there was a flurry of interest in a new addition to the #Fediverse: Postmarks. It’s social bookmarking (like Digg, del.icio.us, or more recently, Pinboard), now with ActivityPub support. Neat!

Organising stuff, “back in the day”

Back in the 2000s I was a huge fan of a site called del.icio.us, and the original iteration of our weekly podcast – currently called Games at Work dot Biz – was named Dogear Nation. Back when Michael and Michael kicked off that show, there was a podcast called Diggnation which tried to round up the interesting community links and trends from the week on Digg. IBM at the time had an internal social bookmarking / folksonomical platform similar to del.icio.us called “dogear” (like folding the page of a book to mark it), so Dogear Nation encouraged listeners to tag links on del.icio.us for us to discuss each week… del.icio.us was bought by Yahoo! in 2005, and eventually went away.

Fast forward 15 years to our current podcast, and we still love it when listeners share links for us to discuss, but there’s less of an organised way to do it!

Join the Federation

A brief diversion, because I’ve not written too much about this on my blog up until now.

Unlike the centralised “Web 2.0”-based, largely corporate-owned sites that dominate the current web, the Fediverse is a set of related services that share some common protocols (ActivityPub is one, but there are others involved) and are loosely connected. As well as each service usually having some form of “flagship” instance, it is also very common to encourage diversity by location and interests, and often self-hosting, so it won’t be possible for an unsavoury billionaire to buy the things you use, or misuse and steal the data that you’ve put into them. Your network and your data are your own.

I’m very active across a range of sites and services that are analogous to those you might be familiar with. On Mastodon, for instance, I currently do some work with Mastodon gGmbH, the non-profit behind the project and host of two of the larger service instances; and although my original account was on one of those instances, at the end of last year I moved my account (taking the related network of connections with me) to a much smaller server run by a former coworker, mostly populated by other former coworkers, but I’m still connected with users across the rest of the Fediverse.

You can also find me on PixelFed (Instagram-like photo sharing), on Lemmy (Reddit-like groups and communities), on PeerTube (YouTube-like video channels) where I live on the diode.zone instance for makers and electronics enthusiasts, on Bookwyrm (GoodReads-like community), and so on. Basically there are a number of slices of “me” out there, in spaces where it makes sense. Essentially, if you’re on Mastodon and you’re interested in my videos, you can follow my PeerTube account from Mastodon without having to sign up for PeerTube. It’s pretty cool.

I strongly believe that federated services are the best opportunity for us to maintain a free and open Web.

– me, 2023

So, Postmarks?

Yes! Postmarks is a single-user, super small and simple server for managing your own bookmarks. When I add a bookmark on my own Postmarks server, my Postmarks account effectively publishes the new entry to the rest of the Fediverse as an activity. So, if you’re interested in what I’m bookmarking and you have a Mastodon account, you can follow @andypiper@pipesmarks.glitch.me and you’ll see the new entries as they get added. If you’re not interested, don’t follow my account, and we’re all good. Oh, and it supports Atom feeds for different tags (categories), too.
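To make the “publishes the new entry as an activity” part concrete, here’s a sketch of the general shape of an ActivityStreams 2.0 Create activity for a new bookmark. Only the handle and hostname come from above – the actor path and the exact fields Postmarks emits are assumptions for illustration, not its real output.

```python
import json

# Sketch of an ActivityStreams 2.0 "Create" activity for a new bookmark.
# The actor path (/u/andypiper) and the field choices are illustrative
# assumptions; the JSON Postmarks actually sends may differ.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://pipesmarks.glitch.me/u/andypiper",  # hypothetical path
    "object": {
        "type": "Note",
        "content": "An interesting article I bookmarked",
        "url": "https://example.com/article",
        "tag": [{"type": "Hashtag", "name": "#fediverse"}],
    },
}
print(json.dumps(activity, indent=2))
```

Followers’ servers receive something of this shape in their inboxes, which is why a new bookmark can show up in a Mastodon timeline like any other post.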

Postmarks runs on Glitch – or, anywhere else you can stand up a Node.js / Express app. Personally I love Glitch, and I’ve been using it for many years now for hosting demos and trying out different projects – in fact, my main links page runs on Glitch. The Postmarks developer Casey Kolderup works there, and Casey has made it really straightforward to remix directly on Glitch, or import from GitHub there or to another service of your choice – it has very few dependencies.

Getting involved

My usual pattern is to read and save content whilst mobile. There’s a bookmarklet that’s part of the project, but no easy way to get links from my phone or tablet into Postmarks, so I turned to Apple Shortcuts to help out.

A screenshot of Apple Shortcuts on iPadOS 17 beta, showing the sequence of steps to send a link to Postmarks

This does not do too much – it takes a link from the share sheet or clipboard, and opens the add bookmark page popup in a browser tab. At the moment there’s no full API for Postmarks, so this is a bit of a stopgap or workaround. Annoyingly, it will also leave you with an empty browser tab you’ll need to close, but it works.
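For the curious, the heart of the Shortcut is just building that add-bookmark URL. Here’s a rough Python equivalent, where the path and query parameter name are assumptions based on what the bookmarklet does – check your own instance before relying on them:

```python
from urllib.parse import urlencode

def bookmark_popup_url(instance: str, link: str) -> str:
    """Build the add-bookmark popup URL the Shortcut opens in a browser tab.
    The /bookmark/popup path and the 'url' parameter are assumptions for
    illustration; adjust them to match your own Postmarks instance."""
    return f"https://{instance}/bookmark/popup?{urlencode({'url': link})}"

print(bookmark_popup_url("pipesmarks.glitch.me", "https://example.com/post"))
```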

If you’d like to try the automation, you can get it via RoutineHub, which links to the Shortcut in iCloud. You’ll be prompted to add the hostname of your Postmarks instance, and you will already need to have signed in to that site in your web browser of choice.

Beyond that, Glitch makes it easy to hack on features, because everything runs in the browser, including a code editor. So far I’ve been adding small features such as support for the nodeinfo endpoint used by other Fediverse servers, and a slightly improved Atom feed. There’s lots I can think of to add, but not so much time to play – this is giving me a chance to learn a bit more about ActivityPub internals, as well as “scratching an itch”.
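For context on that nodeinfo feature: nodeinfo is a small, well-specified pair of JSON documents that lets other Fediverse servers discover what software an instance runs. A sketch of what a single-user Postmarks instance might serve – the `rel` URL is fixed by the nodeinfo spec, but the software version, service lists, and `href` path here are illustrative guesses:

```python
# Discovery document served at /.well-known/nodeinfo; the "rel" URL is
# defined by the nodeinfo spec, the href path follows the usual convention.
discovery = {
    "links": [
        {
            "rel": "http://nodeinfo.diaspora.software/ns/schema/2.0",
            "href": "https://pipesmarks.glitch.me/nodeinfo/2.0",
        }
    ]
}

# The schema 2.0 document itself; the version string and service lists are
# illustrative assumptions, not what Postmarks actually reports.
nodeinfo = {
    "version": "2.0",
    "software": {"name": "postmarks", "version": "0.1.0"},
    "protocols": ["activitypub"],
    "services": {"inbound": [], "outbound": ["atom1.0"]},
    "openRegistrations": False,
    "usage": {"users": {"total": 1}},  # it's a single-user server
    "metadata": {},
}
```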

I’m also playing with another single-user ActivityPub server, Shuttlecraft, but that’s a post for another day.

Still messing with Helperbot

This little model has been fun to play with!

A silver blocky robot laying on top of a sheet of black paper. The paper has a wireframe version of the same robot drawn on it in silver ink.

Trying out some plotter work. We have an AxiDraw in the studio and we’ll be using it in an upcoming art show in the winter. There’s a nice STL-to-SVG hidden wireframe converter that I tried out to get this image. Plotted using a silver Posca paint pen.

A large silver blocky robot with red eyes, with a LEGO minifigure about half the size in front, holding a LEGO banana. In front of that is a very very small copy of the same blocky robot, but about half the size of the LEGO figure.

Testing out tolerances on the Bambu X1 Carbon. tl;dr I was able to print the model as small as 23mm tall, but it got tricky any smaller than that, and even at this scale, movements of the print head were liable to break off an arm or leg at the end of the print (depending on orientation and supports). LEGO banana for scale!

I also tested out some Gedeo gilding wax on the larger model for texture and interest, it’s a nice effect and I may use that with other makes in the future.

Experiments in digital making

I’ve been learning about 3D printing for around 9 months now, and while I’ve printed a fair variety of objects, I still struggle to sit down and properly learn to design my own items[1].

A week ago I learned about an online so-called “AI” tool that can be used to make 3D assets, I think primarily aimed at the digital gaming and metaverse spaces. You can either provide a 2D image or a text description, and Kaedim will provide 3D object files in a variety of formats – including .OBJ, which can be ingested by most 3D printing software, or passed to other software that can massage it appropriately.

My 2D drawing skills and imagination are roughly as terrible as my ability to drive 3D modelling and CAD software, so I turned to the Midjourney generative tool for some additional help.

Screenshot of Midjourney in Discord, with the prompt "a small boxy helper robot, cartoon animation style. Plain background." and the resulting image of an orange, blocky, basic robot.

This was an acceptable and simple enough start, and it looked mostly printable – no giant overhangs or complex angles. Fully removing the background was straightforward using Preview on macOS, leaving me with a PNG image of the robot on a transparency. [side note, a previous version of the robot returned by Midjourney was made up of Amazon boxes, right down to the company logo… I chose not to use that one]

I signed up for Kaedim and uploaded my image. It is not a cheap tool – an object generation on the pay-as-you-go plan was £20 – but I was willing to see how well it did. I was able to compare the image to the model in the online viewer, and then download all of the assets in a zip file, including geometry and texture definitions (Kaedim also allows textures to be added, but I just wanted the model itself). All of this was fairly fast.

Screenshot of the previous robot image inset in the left bottom corner, alongside a 3D untextured plain grey model of the same robot in a white space

It turned out that my slicer (Bambu Slicer in this case, but I checked using others as well) did not, in fact, want to load the object file – it reported errors and a lack of geometry. However, it was straightforward to check the geometry and mesh using MeshLab, and from there, export an STL file that the slicers were happier with.
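What MeshLab does is far more capable, but the core of going from OBJ to ASCII STL is simple enough to sketch – this illustrative version handles triangular faces only, with no repair or normal calculation:

```python
def obj_to_ascii_stl(obj_text: str, name: str = "model") -> str:
    """Very simplified OBJ -> ASCII STL conversion (triangles only).
    Real tools like MeshLab also repair normals and broken geometry."""
    verts, faces = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            verts.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # OBJ indices are 1-based and may carry /texture/normal refs
            faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    out = [f"solid {name}"]
    for a, b, c in faces:
        out.append("  facet normal 0 0 0")  # slicers recompute normals
        out.append("    outer loop")
        for idx in (a, b, c):
            x, y, z = verts[idx]
            out.append(f"      vertex {x} {y} {z}")
        out.append("    endloop")
        out.append("  endfacet")
    out.append(f"endsolid {name}")
    return "\n".join(out)

tri = "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
print(obj_to_ascii_stl(tri))
```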

Screenshot of a plain grey blocky robot model in a 3D modelling editor

Since the model was not uniform – the body, arms and legs are smaller and potentially fragile – I printed with supports. The initial version was completely acceptable for something essentially magicked out of a couple of computer algorithms, but highlighted a number of structural issues that I hadn’t thought about at all: the shoulder joints were completely inappropriate, with the arms barely attached; and of course, since I only provided one angle of view for the 2D image, the other sides were simply blank, leaving a disappointingly asymmetric head.

A hand holding a small cuboid robot head printed in white PLA plastic, next to a computer screen showing the original 2D image of the same robot A white blocky robot resting on a black and grey keyboard. One arm is hanging limply downwards due to a weak joint.

Some improvements were needed. I ended up using the 3D tools in Xcode to edit the .USD file and move the arms upwards and inwards to improve the jointing. I also mirrored the square detail on the side of the head across onto the opposite side (although I eyeballed the placement, so they are not strictly symmetrical). For the second print I thought I would go with a colour, and used some Vertex silver PLA filament, along with a bit of black and white to pick out the details.

A silvery grey robot standing inside a 3D printer, surrounded by tree supports A silver/grey boxy robot with black eyes and a black line for a mouth.

At this point I realised that I could make my new robot friend a bit more spiffy, so for the final (?) iteration, I chose a more shiny silver PLA (Ziro PLA Silk) and added some colour to the features using the AMS (multi filament system) on the Bambu. I also sized the model up to 120%.

Three robots in a row. The right-hand model is larger and has red eyes and green details on the silver chest plate.

That’s all I wanted to write about here – purely a learning exercise, and no specific use for the model other than decoration at this stage. I can imagine iterating on it some more, potentially upsizing further and re-creating the head as a hollow space for a small microcontroller and some lights for the eyes, for example.

If you’d like to hear more of my thoughts, I talked about it on our Games at Work dot Biz podcast this week, so give that a listen! In the meantime, I’ve uploaded Helperbot to my profile on Printables, which I find is by far the nicest and most usable 3D model community. Look, they even gave me badges! (OK, OK, I printed the badges… but, I have badges!)

Three Printables.com achievement badges in grey and orange, on a black pegboard

[1] It is not for lack of reading materials, video tutorials, and software options! I’m playing with various tools, from the programmatic (OpenSCAD) to the Open Source (FreeCAD) to the online (Tinkercad) to the commercial (Autodesk Fusion 360). I just need to stop and actually use one of them for long enough to get my head around it!

Goodbye to my life on Twitter, 2007-2023

In November last year, I abandoned my Twitter account – I set it to private, did not visit, did not interact, ignored any direct messages, etc. It was simply too painful to watch friends and coworkers suddenly and systematically being fired, the company culture destroyed, and the developer communities that I supported for 9 years, finally cut off without support or API access. It has been a heartbreaking time.

Today, I took the last step in going back through my password manager vault and deleting all of my X/Twitter accounts. I’ve watched the shambolic rebranding over the past week, and frankly, I wish it had all happened far sooner – rather than seeing my beloved bird being dragged down, and the brand and memory ruined, piece by piece.

There are a few accounts that I share access to with others (for podcasts, sites or communities) that remain, but over the past hour or so I deleted 15 accounts, four of which had associated Twitter Developer Accounts.

Why so many?

  • Of course, I had my main account, @andypiper, which was first created after hanging out with my friend Roo Reynolds in his office at IBM Hursley, and hearing about Twitter, just starting to gather buzz from events like SxSW. Created February 21, 2007. The title of the blog entry I wrote that day seems accidentally prophetic (although, in truth, I do not regret it at all).
    • my jobs at VMware / Cloud Foundry in 2012, and at Twitter from 2014, were both direct results of being on Twitter, sharing my knowledge, interacting with different communities, and doing my work on the platform.
    • I’ve made countless friends through being on Twitter, and I’m grateful for that. It truly changed my life to be there.
  • Back at the start, those heady times of 2007-2009, it was not unusual to have a few accounts for fun, so certainly there were a few of those that just went away.
  • There was the time when I was copying friends like Andy Stanford-Clark and Tom Coates, and putting sensors around my house online (there’s brief mention of it in this 2009 post).
  • There were test accounts I created for projects as far back as my time doing Service Oriented Architecture things at IBM.
  • There were a couple of accounts I’d created during education sessions, literally to show others how to get started on Twitter, growing the user base.
  • There were a couple of accounts from my demo apps and projects on the @TwitterDev team, such as the IoT sensors I demonstrated on stage at the first Twitter Flight conference in 2014.
  • There were the super-sekrit accounts I had for testing features, such as the original internal test for ten thousand character Tweets (yes, this nearly happened, a long time back), the customisable Tweet Tiles we would have launched at the developer conference that was cancelled at the end of last year, and so on.

Finally, it’s time to say goodbye to my main @andypiper account. Twitter is not Twitter any more, it is X – and I never signed up for X.

In the near future, I’ll upload a searchable archive of my Twitter content, likely using Darius’ Twitter Archive tool. For now, it’s all done. I’m very happy elsewhere (personal sites and links here and here), and I will not be sad that X is out of my life.

… apart from the laptops that they still have not collected!

A 3D print, 8 years in the making

Back in July 2015, I was in Portland, Oregon, for the O’Reilly Open Source conference, otherwise known as OSCON.

It was my third or fourth OSCON, and sadly it turned out to be my last one (and the event itself came to an end when O’Reilly decided to cancel their in-person events). As an aside, I have very fond memories of OSCON, and was privileged to be able to speak there as well, so it’s a shame that those events have gone away.

My friend Diane Mueller was at OSCON back in 2015, and she had driven down from Canada in her mobile maker lab (a really cool winnebago / trailer kitted out for teaching hardware projects to young people). The next year, she spoke about her GetMakered project at Monkigras – another of my all-time favourite events. At OSCON, the GetMakered team were offering “3D Selfies”, via a combination of the Xbox 360 Kinect sensor hardware, and some Open Source processing software called Skanect.

OSCON 2015 GetMakered 3DSelfie Trailer Sponsored by @OpenShift @IntelOpenSource
The mobile GetMakered Print Studio and Maker Space (via Flickr)

I decided I definitely needed one of these, and before long I was sitting on a stool, on a rotating wooden board, inside a winnebago / caravan, inside the Portland Convention Center, holding still, while the Kinect scanned my head and shoulders. I wasn’t sure whether I’d be able to prove it, but fortunately, I was able to dig around on the Twitter website to find and screenshot the evidence – I don’t trust the ability to embed Tweets now 😞️ oh, and I also found another similar image on Flickr, because that was also still a thing in 2015.

Shortly after the event, I received an email with the STL file attached, along with a warning that only limited cleanup had been done to the file. I was pretty new to 3D modelling and software – and, I still am – so I think I opened it up once in MeshLab or something similar, had a quick look, Tweeted about it (obviously), and then left it alone.

That was eight years ago.

Now, I have a 3D printer (uh, actually, well… three of them… a story for one or more additional posts).

The first thing I needed to do was to figure out whether I could print the file. Not so simple!

Importing the STL file into any 3D printer slicer software immediately threw up a lot of errors about non-manifold edges and the like. It also turned out that the scale was hugely off; the actual scan was effectively a hollow shell (with no closure at the bottom) with some holes in it (right in the top of my head…!); a few stray pixels of data, disconnected from my torso, were breaking the bounding box; and everything was at a weird angle.
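Those “non-manifold” and “hollow shell” complaints come down to edge counting: in a watertight mesh, every edge is shared by exactly two triangles. A minimal sketch of the kind of check a slicer or repair tool performs:

```python
from collections import Counter

def edge_report(faces):
    """Count how many triangles share each edge of a mesh. In a watertight
    (closed, manifold) mesh every edge borders exactly two faces; edges on
    only one face are holes or open boundaries, edges on more than two are
    non-manifold."""
    edges = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted((u, v)))] += 1
    boundary = [e for e, n in edges.items() if n == 1]
    nonmanifold = [e for e, n in edges.items() if n > 2]
    return boundary, nonmanifold

# A single triangle: all three edges are open boundaries (a "hollow shell")
boundary, nonmanifold = edge_report([(0, 1, 2)])
print(len(boundary), len(nonmanifold))
```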

My first step was to clean all of that up, and close the bottom of the design so that it had a flat, level base. Don’t ask me to explain the process – I wish I had written it all down so that I could be better prepared if this happens again…

Once I had a refined STL file, I thought it would be fun to “downmix” it into more of a low-poly design (that style is good enough for Pokémon, after all). I’d come across the Low Poly 3D Generator by Andrew Sink (also available in source form on GitHub), and decided to run the model through that. A couple of mesh decimations later, and I had something that looked pretty good.
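I don’t know exactly which decimation algorithm the Low Poly 3D Generator uses, but the simplest version of the idea is vertex clustering: snap vertices to a coarse grid, merge the ones that land in the same cell, and drop faces that collapse. A crude illustrative sketch:

```python
def cluster_decimate(verts, faces, cell=0.5):
    """Crude vertex-clustering decimation: snap each vertex to a grid cell,
    merge vertices sharing a cell, drop collapsed faces. Real low-poly tools
    use more careful methods; this only shows the basic idea."""
    keymap, new_verts, remap = {}, [], []
    for x, y, z in verts:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in keymap:
            keymap[key] = len(new_verts)
            new_verts.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(keymap[key])
    new_faces = []
    for a, b, c in faces:
        f = (remap[a], remap[b], remap[c])
        if len(set(f)) == 3:  # skip faces collapsed to a line or a point
            new_faces.append(f)
    return new_verts, new_faces

# Two nearby vertices merge; both faces survive with remapped indices
verts = [(0, 0, 0), (0.1, 0, 0), (1, 0, 0), (0, 1, 0)]
faces = [(0, 2, 3), (1, 2, 3)]
print(cluster_decimate(verts, faces))
```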

To check that the model was not going to cause too many complaints from any slicer software, I also opened it in a couple of other tools, including Tinkercad.

The last thing to do, was print! I used pretty standard settings on the Bambu X1 Carbon, using a fun eSun PLA Silk Red/Blue dichromatic filament.

It came out nicely.

Now I have a small model of myself that can sit on the studio shelf. Maybe I should print some as giveaway gifts at meetups…

I put together a little recording of how it all came together. I’m embedding a YouTube version below because WordPress.com allows that but does not support PeerTube embeds. If you’re interested in my video content, I recommend taking a look at the original video I posted on diode.zone, an instance of PeerTube, a federated alternative to YouTube that I’m using for studio and maker-related content. You can also follow my account there (@andypiper@diode.zone) by searching for that in your Mastodon (or other Fediverse network) client application.