Why Am I Writing This Blog?

When I took a break from the blog, I decided that the first thing I would reflect upon, and post about, would be my reasons for keeping this blog running and for keeping writing on it. After all, the answer to that should definitely feed into the decision to return from the break and write again.

The reasons why I started, continued, and am currently writing are all different. The only constant is that I always wanted to make something that would be read or used by others. And while I hated writing essays for school, I always liked sitting down and writing on a topic I cared about. I remember, before blogs were easy to get started with, writing “articles” in LaTeX and posting them as PDFs to the local Linux Users Group mailing list¹.

But the truth is that those “articles” were pretty much the same (low) quality as blog posts — as I already wrote about, blog posts are not very involved. I have written articles for actual publications: NewsForge back when it existed, the Italian Linux Journal (also gone), and LWN.net. The amount of work put in by the editors varied widely, with LWN having taught me a lot, and also being the only one that paid for the articles — which feels unfair: they did the most work, and they gave money to me rather than the other way around.

Most of the readers of this blog probably know it from my blogging related to Gentoo Linux, but before I held a Planet Gentoo blog, I had a blog in Italian on Blogspot (for which I lost the backups, and only recovered some sparse posts thanks to the Wayback Machine), and in between the two I had a few posts on a KDE-sponsored shared blog (KDevelopers), which I have folded into this site, together with the few guest posts I did for Axant and for David’s Boycott Boycott Novell.

When I started blogging regularly for Gentoo Linux, it was mostly daily updates on the work I had been doing there, whether multimedia package changes or Gentoo/FreeBSD progress — which is why a lot of the early blog posts look more like Twitter than like the current blog, particularly those that predate Twitter. I still use this blog to update the progress of various projects I’m involved in, but Twitter took over the “daily” updates, and the blog only includes “milestone” updates. Also, I have far fewer public projects compared to what I used to contribute to ten to fifteen years ago, for better or worse.

At some point, in addition to providing status updates, I also used the blog as a “showroom” — a way to find work. It turns out that when I was a contractor I did indeed find a few gigs thanks to the blog itself — but since I have been working full time for many years now, that’s no longer a reason. Similarly, before having a stable job, I experimented with different ways to monetize the blog, from various referral systems to ads — none ever managed to cover the costs of running the blog at all, and they would all now fit into the category of “rounding error”, as a former colleague would call them.

These last two points are important to the motivations discussion — a monetized blog, or the blog of someone who’s struggling to find a job, are very good reasons to want more eyeballs on the posts, but neither is a reason I care about, at least not at this point in time. So why am I feeling disappointed that there aren’t more visitors, besides the psychological effects of counters and stats?

I guess the answer is that I have strong opinions, and the main motivation for me to write this blog nowadays is to voice them, and try to sway others — or be proven wrong and be swayed myself to a more positive and optimistic view of the world. Some opinions are more actionable than others: my comments on working from home are very general, with the only action item being to please consider its effect on others with different experiences and problems, while my repeated rants about licensing have action items that you can all pick up.

I also still want to write so that other people can find out how to do stuff — because I love finding out how stuff works, and sometimes I even get to make use of that knowledge. I said this some time ago: there’s significant value in spreading the word, and in sharing how things are done with others. Most of the stuff I have produced myself is not an invention of mine — it’s a refinement of someone else’s idea. Yes, even the free ideas that I have thrown out there but never managed to work on myself.

And then, there’s been quite a few personal posts on this blog over time – as I said before when sharing it at work, «there is a whole lot of me in [this] blog» – and those are there for… different reasons. Sometimes it’s personal therapy, sometimes a reminder to myself that I went through stuff, and I don’t need to squander opportunities. In many cases, it’s to share my experiences with others who might go through similar troubles. When I complained the first time about alcohol culture in Free Software, I was a very dissonant voice — nowadays this is a much more common complaint, and a number of conferences replaces beer parties with tea parties, though sometimes more to make fun of the complains… except the joke’s on them.

So what does all of this come down to, when it comes to the blog? Well, not really much, to be honest. It means that there will still be project reports, opinions (and rants), explanations, and some personal point-of-view posts. I’ll also probably keep posting sARTSurday – even if not as regularly as I tried to at the beginning of the lockdown – including personal reviews of books and videogames, because I did write those before, and I see no reason not to keep doing that.

What it does tell me is that my focus on the tight two-posts-per-week schedule is misplaced. While it did work great to keep my mind off the pandemic, particularly during the two-month sabbatical between jobs, it’s proving more of a chore than a relief now that I’m back working full time and (mostly) ramped up in my new position. The tight schedule would have made sense if I were trying to keep as many eyeballs on the blog as possible – which, again, is not really a useful goal given my motivations – but it can also reduce the quality of posts if I’m posting something early just so that I have a paced release of it.

So from now on, the schedule will be one regular post per week, on Tuesday. sARTSurday posts will not be regular, but will appear when I find something particularly interesting to share. I’ll stop chasing timing and opportunities, and will instead post just what is ready to be posted, with no particular regard to scheduling.

While thinking the blog’s motivation over, I also started wondering whether I should spend more time doing something… different. You might remember that I have now streamed on Twitch a few times (and once on Facebook Live) — that started mostly as me trying to figure out how to convey information over the Internet that I would usually convey on a whiteboard. I still haven’t found a good answer to that, so I might end up doing more of it as time goes by, to experiment and find something that will work just as well for work meetings. But it’s not going to be the kind of thing I expect people to care about or follow — after all, I have tried this before, over 11 years ago, and it wasn’t my cup of tea to continue.

What I might want to try is to prepare a “talk” out of some of the knowledge I have. Somewhere between a blog post and a conference talk, with a few of the things I learnt over time that might be worth sharing… but the motivation for that is less to become a famous streamer, and more that I might need to do something similar at work, so it’s worth learning to make content in a way that can be used for training the newbies as they arrive. But don’t hold your breath on that, and don’t expect it to be of any high quality to begin with.

Rather, if you find anything here, new or old as it might be, that is worth discussing further, feel free to bring it up — I might do a whiteboarding session about it, or take it as a starting topic for a talk. Or at the very least I might write a refresher blog post to correct mistakes or update information on how things evolved in the meantime. And feel free to share it on aggregation sites like Reddit and Lobsters, just don’t expect me to be proactively there to answer questions — ask them here!

¹ Those articles can still be found on this blog! I used to keep them on a page of my site, but eventually folded them into blog posts. Which is how the archives go back to 2004!

It’s Time For A Break

Since before the beginning of the lockdown, I’ve been striving to keep a two-posts-per-week schedule on the blog, talking about my work philosophy and my electronics projects, and even trying a third post a week for a while with sARTSurday. Keeping the schedule was not easy, but I tried, and only messed it up twice: once when I mis-scheduled a post, and once when Microsoft “stole” my thunder.

About six months later, I’m running out of steam to keep the schedule. It might be because I spent the last few weeks worrying about whether we would have a flat to stay in as a new lockdown started. Or it might be that I’m now engaging gears with my new dayjob and it’s using all of my mental capacity.

I even tried whiteboarding — both with a physical whiteboard and on Twitch with Microsoft Whiteboard. Part of the reason I did that is that, with the lack of an office, I was looking for a better venue to engage with my colleagues to discuss ideas and come up with plans. I can’t say it worked.

I have been mulling over options. I even briefly considered figuring out how much it would cost me to hire an editor to make the blog posts more… polished. But the truth is that it wouldn’t make much sense — while I have been known for the blog in the past, blogs are the past. I never became a speaker when conferences were at their highest point, and I’ll never be a streamer now that they have been replaced by virtual events. I described myself recently as a C-list blogger – and I meant that. It seems nowadays that to be B-list you need a statically generated blog with no comments, and to be A-list you need not your own blog at all, just a Medium account. I don’t fit, nor do I care to fit, into that world.

I guess I’m like a sportsman who’s too old to keep playing, but not well known enough to become a coach or a celebrity. And you know what? That’s okay. I’ll keep focusing on my dayjob as a “software mechanic” for as long as I can at least keep up to date to the new bubble’s stack. And maybe I can still get an idea or two out in the future, even when I won’t be able to do anything good with it myself.

This is not a goodbye, it’s just a “see you later” — I’ve been blogging for over 15 years and I’m not going to fully stop now. If you have any questions or comments or suggestions on any of my old blog posts, feel free to leave a comment there, as I will be monitoring those, although possibly not as closely as before.

Update 2020-09-25: A couple of weeks into the break, I’m finding myself more relaxed, and trying to get into a better position to return to blogging later. Also, in the meantime we finally finalized the paperwork for moving to a new apartment (that will also be a tale for later on the blog), which means we have a timeline for when we’ll have even less time.

So the current plan is that I’ll be taking time off posting until November 2020. After which I’ll come back on a one post per week schedule, until further notice. With the post going out likely on Tuesday or Wednesday, not sure yet. The reason for reducing frequency is to give myself some more time to work on content without rushing through incomplete posts.

Upcoming electronics projects (and posts)

Because of a strange alignment between my decision to leave Google to find a new challenge, and the pandemic causing a lockdown of most countries (including the UK, where I live), you might have noticed more activity on this blog. Indeed, for the past two months I maintained an almost perfect record of three posts a week, up from the occasional post I had written in the past few years. In part this was achieved by sticking to a “programme schedule” — I started posting on Mondays about my art project – which then expanded into the insulin reminder – then on Thursdays I had a rotating tech post, finishing the week up with sARTSurdays.

This week it’s a bit disruptive because while I do have topics to fill in the Monday schedule, they start being a bit more scatterbrained, so I want to give a bit of a regroup, and gauge what’s the interest around them in the first place. As a starting point, the topic for Mondays is likely going to stay electronics — to follow up from the 8051 usage on the Birch Books, and the Feather notification light.

As I have previously suggested on Twitter, I plan on controlling my Kodi HTPC with a vintage, late ’80s Sony SVHS remote control. Just for the craic: I picked it up out of nostalgia when I went to Weird Stuff a few years ago — I’m sad it’s closed now, but thankful to Mike for having brought me there the first time. The original intention was to figure out how the complicated VCR recording timer configuration worked — but, not unexpectedly, the LCD panel is not working right, so that might not be feasible. I might have to do a bit more work and open it up, and that will probably be a blog post by itself.

Speaking of Sony, remotes and electronics — I’m also trying to get something else to work. I have a Sony TV connected to an HDMI switcher, and sometimes it gets stuck with the ARC not initializing properly. Fixing it is relatively straightforward (just disable and re-enable the ARC), but it takes a few remote control button presses… so I’m actually trying to use an Adafruit Feather to transmit the right sequence of infrared commands as a macro to fix that. Which is why I started working on pysirc. There’s a bit more to it, to be quite honest, as I would like to have single-click selection of inputs with multiple switchers, but again, that’s going to be a post by itself.
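To give an idea of the shape of it, here’s a rough, untested sketch of the macro idea in CircuitPython, bit-banging Sony SIRC frames as raw pulse trains; the pin and the command codes are placeholders, and older CircuitPython versions construct PulseOut from a PWMOut rather than taking the carrier parameters directly:

# Rough sketch (untested): bit-bang Sony SIRC frames as raw pulse trains.
# The pin and the command/device codes below are placeholders.
import array
import time

import board
import pulseio

# SIRC uses a 40 kHz carrier; mark/space timings below are microseconds.
pulse_out = pulseio.PulseOut(board.D5, frequency=40000, duty_cycle=2**15)

def sirc_frame(command, device):
    """Build the pulse train for one 12-bit SIRC frame (LSB first)."""
    frame = (device << 7) | command   # 7 command bits, then 5 device bits
    pulses = [2400, 600]              # header: 2.4 ms mark, 0.6 ms space
    for i in range(12):
        mark = 1200 if (frame >> i) & 1 else 600
        pulses.extend([mark, 600])
    return array.array('H', pulses)

def send(command, device, repeats=3):
    # Receivers expect the frame repeated roughly every 45 ms.
    for _ in range(repeats):
        pulse_out.send(sirc_frame(command, device))
        time.sleep(0.045)

# Hypothetical macro: the "button presses" needed to reset the ARC.
# The actual codes depend on the TV, and these are made up.
for command, device in ((0x60, 0x01), (0x65, 0x01)):
    send(command, device)
    time.sleep(0.5)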

Then there’s some trimming work for the Birch Books art project. The PCBs are not here yet, so I have no idea if I have to respin them yet. If so, expects a mistakes-and-lessons post about it. I also will likely spend some more time figuring out how to make the board design more “proper” if possible. I also still want to sit down and see how I can get the same actuator board to work with the Feather M0 — because I’ll be honest and say that CircuitPython is much more enjoyable to work with than nearly-C as received by SDCC.

Also, while the actuator board supports it, I have currently left off turning on the fireplace lights for Birch Books. I’m of two minds about this — I know there are some flame-effect single LEDs out there, but they don’t appear to be easy to procure. Both bigclive and Adam Savage have shown flame-effect LED bulbs, but those don’t really work at the small scale.

There are cheap fake-candle LED lamps out there – I saw them for the first time in Italy at the one local pub that I enjoy going to (they serve so many varieties of tea!), and I actually have a few of them at home – and the way they work is by using PWM on a normal LED (usually a warm-light one). So what I’m planning on doing is diving into how those candles do that, and seeing if I can replicate the same feat on either the 8051 or the Feather.
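As a hint of how simple the flicker is to approximate, this is the kind of loop I have in mind for the Feather, in CircuitPython; the pin, PWM frequency, and timing ranges are guesses to be tuned by eye against a real candle:

# Minimal flicker sketch: random brightness held for random, short times.
# Pin and constants are placeholders to be tuned against a real candle.
import random
import time

import board
import pwmio

led = pwmio.PWMOut(board.D10, frequency=1000)

while True:
    # Pick a brightness above a warm base level and hold it briefly, so
    # the flicker reads as a draught rather than a strobe.
    led.duty_cycle = random.randint(20000, 65535)
    time.sleep(random.uniform(0.03, 0.12))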

I don’t know when the ESP32 boards I ordered will arrive, but probably will spend some time playing with those and talking about it then. It would be nice to have an easy way to “swap out the brains” of my various projects, and compare how to do things between them.

And I’m sure that, given the direction this is going, I’ll have enough stuff to keep myself entertained outside of work for the remaining of the lockdown.

Oh, before I forget — it turns out that I’m now hanging out on Discord. Adafruit has a server, which seems to be a very easygoing and welcoming way to interact with the CircuitPython development team, as well as to discuss options and show off. If you happen to know of welcoming and interesting Discord servers I might be interested in, feel free to let me know.

I have not forgotten about the various glucometers I acquired in the past few months that I still have not reversed. There will be more posts about glucometers, but for those I’m using the Thursday slot, as I have not yet gone as far as physically tapping into them. So unless my other electronics projects starve out, that’s how it’s going to continue.

Blog Redirects, Azure Style

Last year, I set up an AppEngine app to redirect the old blog’s URLs to the WordPress install. It’s a relatively simple Flask web application, although it turned out to be around 700 lines of code (quite a bit just to serve redirects). While it ran fine for over a year on Google Cloud without me touching anything, and fit into the free tier, I had to move it as part of my divestment from GSuite (which is only vaguely linked to me leaving Google).

I could have just migrated the app to a new consumer account for AppEngine, but I decided to try something different, to avoid the bubble, and to compare other offerings. So I went with Azure, Microsoft’s cloud offering. The first impressions were mixed.

The good thing about the Flask app I use for redirection being that simple is that nothing ties it to any one provider: the only things you need are a Python environment and the ability to install the requests module. For the same codebase to work on both AppEngine and Azure, though, one simple change seems to be needed. Both providers appear to rely on Gunicorn, but AppEngine looks for an object called app in the main module, while Azure looks for it in the application module. This is trivially solved by defining the whole Flask app inside application.py and having the following content in main.py (the command line support is for my own convenience):

#!/usr/bin/env python3

import argparse

from application import app


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--listen_host', action='store', type=str, default='localhost',
        help='Host to listen on.')
    parser.add_argument(
        '--port', action='store', type=int, default=8080,
        help='Port to listen on.')

    args = parser.parse_args()

    app.run(host=args.listen_host, port=args.port, debug=True)

The next problem I encountered was with the deployment. While there are plenty of guides out there using different builders to set up the deployment on Azure, I was lazy and went straight for the most clicky one, which uses GitHub Actions to deploy from a (private) GitHub repository straight into Azure, without having to install any command line tools (sweet!). Unfortunately, I hit a snag in the form of what I think is a bug in the Azure GitHub Action template.

You see, the generated workflow for the deployment to Azure pretty much zips up the content of the repository, after creating a virtualenv directory to install the requirements defined for it. But while the workflow creates the virtualenv in a directory called env, the default startup script for Azure looks for it in a directory called antenv. So for me the app was failing to start until I changed the workflow to use the latter:

    - name: Install Python dependencies
      run: |
        python3 -m venv antenv
        source antenv/bin/activate
        pip install -r requirements.txt
    - name: Zip the application files
      run: zip -r myapp.zip .

Once that problem was solved, the next issue was figuring out how to set up the app on its original domain and have it serve TLS connections as well. This turned out to be a bit more complicated than expected, because I had set up CAA records in my DNS configuration to only allow Let’s Encrypt, but Microsoft uses DigiCert to provide the (short-lived) certificates, so until I removed that record, DigiCert couldn’t issue anything (oops).
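For reference, this is roughly what the two states look like in a zone file, with a placeholder domain: the first record is the one I had, the second is what Azure’s DigiCert-issued certificates would have needed:

; only Let's Encrypt is allowed to issue certificates for the domain:
example.org.  IN  CAA  0 issue "letsencrypt.org"

; what would additionally be needed for Azure's managed certificates:
example.org.  IN  CAA  0 issue "digicert.com"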

With everything set up, here are a few more differences between the two services that I noticed.

First of all, Azure does not provide IPv6, although since they use CNAME records this can change at any time in the future. This is not a big deal for me, not only because IPv6 is still dreamland, but also because the redirection points to WordPress, which does not support IPv6 either. Nonetheless, it’s an interesting point to make: despite Microsoft having spent years preparing for IPv6 support, and even having run Teredo tunnels, they also appear not to be ready to provide modern service entrypoints.

Second, and related, it looks like on Azure there’s a DNAT in front of the requests sent to Gunicorn — all the logs show the requests coming from 172.16.0.1 (a private IP address). This is the opposite of AppEngine, which shows the actual request IP in the logs. It’s not a huge deal, but it does make it a bit annoying to figure out whether someone is trying to attack your hostname. It also makes the missing IPv6 support funnier, given that the application itself would not even need to support the new addresses.

Speaking of logs, GCP exposes structured request logs, a pet peeve of mine that GCP at least makes easier to deal with. In general, structured logs allow you to filter much more easily for requests terminated with an error status, which is something I paid close attention to in the weeks after deploying the original AppEngine redirector: I wanted to make sure my rewriting code didn’t miss some corner cases that users were actually hitting.

I couldn’t figure out how to get a similar level of detail in Azure, but honestly I have not tried too hard right now, because I don’t need that level of control for the moment. Also, while there does seem to be an entry in the portal’s menu to query logs, when I try it out I get a message «Register resource provider ‘Microsoft.Insights’ for this subscription to enable this query» which suggests to me it might be a paid extra.

Speaking of paid, the question of costs is something that clearly needs to be kept in sight, particularly given recent news cycles. Azure seems to provide a 12-month free trial, but it also gives you £150 of credit for 14 days, which doesn’t seem to match up properly to me. I’ll update the blog post (or write a new one) with more details after I have some more experience with the system.

I know that someone will comment complaining that I shouldn’t even consider Cloud Computing a valid option. But honestly, from what I can see, I will likely be running a couple more Cloud applications out there, rather than keep hosting my own websites and running my own servers. It’s just more practical, and it’s a different trade-off between costs and time spent maintaining things, so I’m okay with it going this way. But I also want to make sure I don’t end up locking myself into a single provider, with no chance of migrating.

Blog Redirects & AppEngine

You may remember that when I announced I had moved to WordPress, I promised I wouldn’t break any of the old links, particularly as I have kept them working since I started running the blog underneath my home office’s desk, on a Gentoo/FreeBSD box, just shy of thirteen years ago.

This is not a particularly trivial matter, because Typo used at least three different permalink formats (and two different formats for linking to tags and categories), and Hugo used different ones for all of those too. In addition to this, one of the old Planet aggregators I used to be on had a long-standing bug and truncated URLs to a certain length (actually, two certain lengths, as they extended it at some point), and since those ended up indexed by a number of search engines, I ended up maintaining a long mapping between broken URLs and what they were meant to be.

And once I had such a mapping, I ended up also keeping in it the broken links that other people had created towards my blog. And then, when I fixed typos in titles and permalinks, I also added those to the list. And then, …

Oh yeah, and there is the other thing — the original domain of the blog, which I made redirect to the newer one nearly ten years ago.

The end result is that I have kept holding, for nearly ten years, an unwieldy mod_rewrite configuration for Apache, which also prevented me from migrating to any other web server. Migrating to a new hostname when I moved to WordPress was always my plan, if nothing else so as not to have to deal with all those rewrites in the same configuration as the webapp itself.

I have kept, until last week, the same abomination of a configuration, running on the same vserver the blog used to run on. But between ending my relationships with customers (six years ago, when I moved to Dublin), moving the blog out, and removing the website of a friend of mine who decided to run his own WordPress, the amount of work needed to maintain the vserver is no longer commensurate with the results.

While discussing my options with a few colleagues, one idea that came out was to just convert the whole thing into a simple Flask application, and run it somewhere. I ended up wanting to try my employer’s own offerings, and ran it on AppEngine (but the app itself does not use any AppEngine-specific API, it’s literally just a Flask app).

This meant having the URL mapping in Python, with a bit of regular expression magic to make sure the URLs from the previous blog engines are replaced with WordPress-compatible ones. It also means I can have explicit logic about what to re-process and what not to, which was not easily done with Apache (though still possible).

Using an actual programming language instead of Apache configuration also means I can be a bit smarter about how I process the requests. In particular, before returning the redirect to the requester, I now verify whether the target exists (or rather, whether WordPress returns an OK status for it), and use that to decide whether to return a permanent or temporary redirect. This means most of the requests to the old URLs return permanent (308) redirects, and whatever is not found raises a warning I can inspect, to see if I should add more entries to the maps.
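A condensed sketch of that logic follows; it is not the actual app (which handles many more formats and keeps the explicit URL maps), and the permalink pattern and target domain are illustrative:

# Sketch of the idea: rewrite an old Typo-style permalink onto the
# WordPress-style one, and only commit to a *permanent* redirect when
# the target actually resolves. Pattern and domain are placeholders.
import re

import requests
from flask import Flask, abort, redirect

app = Flask(__name__)

# e.g. /articles/2008/03/02/some-title -> /2008/03/02/some-title/
TYPO_PERMALINK = re.compile(r'^(\d{4}/\d{2}/\d{2}/[^/]+)/?$')
WORDPRESS_BASE = 'https://blog.example.org'  # placeholder

@app.route('/articles/<path:rest>')
def typo_redirect(rest):
    match = TYPO_PERMALINK.match(rest)
    if not match:
        abort(404)
    target = f'{WORDPRESS_BASE}/{match.group(1)}/'
    # Permanent (308) redirects get cached by clients, so only send one
    # when WordPress confirms the target exists; otherwise warn and send
    # a temporary redirect instead.
    response = requests.head(target, allow_redirects=True, timeout=5)
    if response.ok:
        return redirect(target, code=308)
    app.logger.warning('no target found for %s', target)
    return redirect(target, code=302)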

A very simple UML Sequence Diagram of the redirector, at a high level.

The best part of all of this is, of course, that the AppEngine app stays effectively always below the free-tier quota marker, and as such costs effectively nothing. And even if it didn’t, the fact that it’s a simple Flask application with no dependency on AppEngine itself means I can move it to any other hosting option I can afford.

The code is quite a mess right now, not generic and fairly loose. It has to work around an annoying Flask issue, and as such it’s not in any state for me to open-source yet. My plan is to do so as soon as possible, although it might not include the actual URL maps, for the sake of obscurity.

But what is very clear from this, for me, is that if you want a domain whose only task is to redirect to other (static) addresses, like projects hosted off-site, or affiliate links – two things that I have been doing on my primary domain together with the rest of the site, by the way – then the combination of AppEngine and Flask is actually pretty good. You can get it done in a few hours.

Some of my thoughts on comments in general

One of the hardest points for me to make when I talk to people about my blog is how important comments are to me. I don’t mean comments in source code as documentation, but comments on the posts themselves.

You may remember that one of the less appealing compromises I made when I moved to Hugo was accepting to host the comments on Disqus. A few people complained when I did that, because Disqus is a vendor lock-in. That’s true in more ways than one may imagine.

It’s not just that you are tied into a platform with difficulty of moving out of it — it’s that there is no way to move out of it, as it is. Disqus does provide you the ability to download a copy of all the comments from your site, but they don’t guarantee that’s going to be available: if you have too many, they may just refuse to let you download them.

And even if you manage to download the comments, you’ll have a fun time trying to do anything useful with them: Disqus does not let you re-import them, say into a different account, as they explicitly don’t allow their export format to be imported. Nor does WordPress: when I moved my blog I had to hack up a script that took the Disqus export format and a WRX dump of the blog (which is just a beefed-up RSS feed), and produced a third file, attaching the Disqus comments to the WRX as WordPress would have exported them. This was tricky, but it resolved the problem, and now all the comments are on the WordPress platform, allowing me to move them as needed.
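The script itself was a one-off hack, but its shape was roughly the following; the element names and the Disqus namespace are stand-ins from memory, as the real schemas need more care (namespaced attributes, dates, author metadata, threading):

# Rough shape of the merge, with simplified schemas: match Disqus
# threads to WRX items by post URL, then graft the comments on.
import xml.etree.ElementTree as ET

WP = 'http://wordpress.org/export/1.2/'   # the real WRX namespace
DSQ = 'http://disqus.example'             # placeholder namespace
ET.register_namespace('wp', WP)

disqus = ET.parse('disqus-export.xml').getroot()
wrx = ET.parse('blog-export.xml')

# Map each Disqus thread id to the URL of the post it belongs to.
threads = {thread.get('id'): thread.findtext(f'{{{DSQ}}}link')
           for thread in disqus.findall(f'{{{DSQ}}}thread')}

# Bucket the comments by the URL of the post they were left on.
comments = {}
for post in disqus.findall(f'{{{DSQ}}}post'):
    url = threads.get(post.find(f'{{{DSQ}}}thread').get('id'))
    comments.setdefault(url, []).append(post)

# Attach each bucket to the matching WRX <item> as <wp:comment>
# elements, the same way WordPress itself would have exported them.
for item in wrx.getroot().iter('item'):
    for post in comments.get(item.findtext('link'), []):
        comment = ET.SubElement(item, f'{{{WP}}}comment')
        ET.SubElement(comment, f'{{{WP}}}comment_author').text = \
            post.findtext(f'{{{DSQ}}}author/{{{DSQ}}}name')
        ET.SubElement(comment, f'{{{WP}}}comment_content').text = \
            post.findtext(f'{{{DSQ}}}message')

wrx.write('blog-with-comments.xml', xml_declaration=True, encoding='utf-8')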

Many people pointed out that there are at least a couple of open-source replacements for Disqus — but when I looked into them I was seriously afraid they wouldn’t really scale that well for my blog. Even WordPress itself appears, sometimes, not to know how to deal with a blog of more than 2,400 entries. The WRX file is, by itself, bigger than the maximum accepted by the native WordPress import tool — luckily, the Automattic service has higher limits.

One of the other advantages of having moved away from Disqus is that the comments render without needing any JavaScript or third-party service, making them searchable by search engines and, most importantly, preserved in the Internet Archive!

But Disqus is not the only thing that disappoints me. I have a personal dislike for the design, and business model, of Hacker News and Reddit. It may be a bit of a situation of “old man yells at cloud”, but I find that these two websites, much more than Facebook, LinkedIn and other social media, are designed to take the conversation away from the authors.

Let me explain with an example. When I posted about Telegram and IPv6 last year, the post was sent to Reddit, which I found out because I have a self-stalking recipe on IFTTT that informs me if any link to my sites gets posted there. And people commented on it — some missing the point, and some providing useful information.

But if you read my blog post you won’t know about that at all, because the comments are locked into Reddit, and if Reddit were to disappear the day after tomorrow there would be no history of those comments at all. And this is without going into the issue of the “karma” going to the reposter (whom I know, in this case) rather than the author — who is actually discouraged, in most communities, from submitting their own writing!

This applies in the same or similar fashion to other websites, such as Hacker News, Slashdot, and… is Digg still around? I lost track.

I also find that moving the comments off-post makes people nastier: instead of asking questions, ready to understand and talk things through with the author, they assume the post exists in isolation, and that the author knows nothing of what they are talking about. And I’m sure at least a good chunk of that is because they don’t expect the author to be reading them — they know full well they are “talking behind their back”.

I have had the pleasure of meeting a lot of people on the Internet over time, mostly through comments on my or other blogs. I have learnt new things and been given suggestions, solutions, or simply new ideas of what to poke at. I treasure the comments and the conversations they foster. I hope we’ll have more rather than fewer of them in the future.

WordPress, really?

If you’re reading this blog post, particularly directly on my website, you probably noticed that it’s running on WordPress and that it’s on a new domain, no longer referencing my pride in Europe, after ten years of using it as my domain. Wow that’s a long time!

I had three reasons for the domain change. The first is that I didn’t want to keep the full chain of redirects for extremely old links onto whichever new blogging platform I selected. The second is that it made it significantly easier to set up a WordPress.com copy of the blog while I tweaked and set it up, rather than messing with the domain all at once. The third one will come with a separate rant very soon, but it’s related to the worrying statement from the European Commission regarding the usage of dot-EU domains in the future. But as I said, that’s a separate rant.

A few people were surprised when I talked on Twitter about the issues I faced during the migration. I want to give some more context on why I went this way.

As you may remember, last year I complained about Hugo – to the point that a lot of the referrers to this blog still come from the Hacker News thread about that – and I started looking for alternatives. And when I looked at WordPress I found that setting it up properly would take me forever, so I kept my mouth shut and doubled down on Hugo.

Except, because of the way that was set up, it meant not having an easy way to write or correct blog posts from a computer that is not my normal Linux laptop with the SSH token and everything else. Which was too much of a pain to keep working with. While Hector and others suggested flows involving Git-based web editors, it all felt too Rube Goldberg to me… and since moving to London my time is significantly more limited than before, so I could either spend time setting everything up, or work on writing more content, which can hopefully be more useful.

I ended up deciding to pay for the personal tier of WordPress.com services, since I don’t care about monetizing this content, and even the few affiliate links I’ve been using with Amazon are not really that useful at the end of the day, so I gave up on setting up OneLink and the like here. It also turned out that Amazon’s image-and-text links (which use JavaScript and iframes) are not supported by WordPress.com even with the higher tiers, so those were deleted too.

Nobody seems to have published an easy migration guide from Hugo to WordPress, as most of the search queries produced results for the other way around. I will spend some time later on trying to refine the janky template I used and possibly release it. I also want to release the tool I wrote to “annotate” the generated WRX file with the Disqus archive… oh yes, the new blog has all the comments of the old one, and does not rely on Disqus, as I promised.

On the other hand, there are a few things that did get lost in the transition: while the JetPack plugin gives you the ability to write posts in Markdown (otherwise I wouldn’t even have considered WordPress), it doesn’t seem like the importer knows how to import Markdown content at all. So all the old posts have been pre-rendered — a shame, but honestly it’s not very often that I need to go through old posts. Particularly now that I have merged the content from all my older blogs, first into Hugo, and now into this one massive blog.

Hopefully expect more posts from me very soon now, and not just rants (although probably just mostly rants).

And as a closing aside, if you’re curious about the picture in the header, I have once again used one of my own. This one was taken at the maat in Lisbon. The white balance on this shot was totally off, but I liked the result. And if you’re visiting Lisbon and you’re an electronics or industrial geek you definitely have to visit the maat!

How blogging changed in the past ten years

One of the problems that keeps poking back at me every time I look for alternative software for this blog is that it somehow became not your average blog, particularly not in 2017.

The first issue is that there is a lot of history. While the current “incarnation” of the blog, with the Hugo install, is fairly recent, I have been porting over a long history of my pseudo-writing, merging back into this one big collection the blog posts coming from my original Gentoo Developer blog, as well as the few posts I wrote on the KDE Developers blog and a very minimal amount of content from my (mostly Italian) blog when I was in high school.

Why did I do it that way? Well, the main thing is that I don’t want to lose the memories. As some of you might know already, I have faced my mortality before, and I came to realize that this blog is probably the only thing of substance that I had a hand in that will outlive me. And so I don’t want to just let migrations, service turndowns, and other similar changes take away what I did. This is also why I published to this blog the articles I wrote for other websites, namely NewsForge and Linux.com (back when they were part of Geeknet).

Some of the recovery work actually required effort. As I said above, there’s a minimal amount of content that comes from my high-school-days blog. And it’s in Italian, which does not make it particularly interesting or useful. I had deleted that blog altogether years and years ago, so I had to use the Wayback Machine to recover at least some of the posts. I will be going through all my old backups in the hope of finding that one last backup I remember making before tearing the thing down.

Why did I tear it down in the first place? It’s clearly a teenager’s blog, and I am seriously embarrassed by the way I thought and wrote. It was 13 or 14 years ago, and I admitted last year that I can point at so many times I’ve been wrong. But this is not the change I want to talk about.

The change I want to talk about is the second issue with finding good software to run my blog: blogging is not what it used to be ten years ago. Or fifteen. It’s not just that a lot of money got involved in the meantime, so that now there is a significant amount of “corporate blogs” that end up being either product announcements in a different form, or another outlet for not-quite-magazine content. I know of at least a couple of Italian newspapers that provide “blogs” for their writers, which look almost exactly like the paper’s website, but do not have to be reviewed by the editorial board.

In addition to this, a lot of people’s blogs stopped providing as much detail of their personal lives as they used to. Likely, this is related to the fact that we now know just how nasty people on the Internet can be (read: just as nasty as people off the Internet), and a lot of the people who used to write lightheartedly don’t feel as safe anymore, and rightly so. But there is probably another reason: “Social Media”.

The advent of Twitter and Facebook also made it so that there is less need to post short personal entries. And Facebook in particular appears to have swallowed most of the “cutesy memes” such as quizzes and lists of things people have or have not done. I know there are still a few people who insist on not using these big-name social networks, and still post for their friends and family on blogs, but I have the feeling they are quite the minority. And I can tell you for sure that since I signed up for Facebook, a lot of my smaller “so here’s that” posts went away.

Distribution chart of blog post sizes over time

This is a bit of a rough plot of blog post sizes. In particular, I have used the raw file size of the Markdown sources used by Hugo, in bytes, which makes it not perfect for Unicode symbols, and it includes the “front matter”, which means that all the non-Hugo-native posts in particular have their titles effectively doubled by the slug. But it shows the trends particularly well.

You can see from that graph that some time around 2009 I almost entirely stopped writing short blog posts. That is around the time Facebook took off in Italy, and a lot of my interaction with friends started happening there. If you’re curious about the visible lack of posts around the middle of 2007, that was the pancreatitis that had me disappear for nearly two months.

With this reduction in scope of what people actually write on blogs, I also have a feeling that lots of people were left without anything to say. A number of blogs I still follow (via NewsBlur, since Google Reader was shut down) post once or twice a year. Planets are still a thing, and I still subscribe to a number of them, but I realize I don’t even recognize half the names nowadays. Lots of the “old guard” stopped blogging almost entirely, possibly because of a lack of engagement, or simply because, like me, many found a full-time job (or a full-time family) that takes most of their time.

You can definitely see from the plot that even my own blogging has significantly slowed down over the past few years. Part of it was the tooling giving up on me a few times, but it also involves the lack of energy to write all the time as I used to. Plus there is another problem: I now feel I need to be more accurate in what I’m saying and in the words I’m using. This is in part because I grew up, and know how much words can hurt people even when meant the right way, but also because it turns out when you put yourself in certain positions it’s too easy to attack you (been there, done that).

A number of people argue that it was the demise of Google Reader¹ that caused blogs to die, but as I said above, I think it’s just the evolution of the concept veering towards other systems, which turned out to be more easily reachable by users.

So are blogs dead? I don’t think so. But they are getting harder to discover, because people use other platforms and it gets difficult to follow all of them. Hacker News and Reddit are becoming many geeks’ default way to discover content, and that has the unfortunate side effect of taking the conversation out of shared media. I am indeed bothered by those people who prefer discussing the merits of my posts on those external websites rather than actually engaging in the comments, if nothing else because I do not track those platforms, so the feeling I get is of being talked about behind my back — I would prefer if people actually told me when they share my posts on those platforms; for Reddit I can at least use IFTTT to self-stalk the blog, but that’s a different problem.

Will we still have blogs in 10 years? Probably yes, but most likely they will not look like the ones we’re used to. The same way as nowadays there still are personal homepages, but they clearly don’t look like Geocities, and there are social media pages that do not look like MySpace.


  1. Usual disclaimer: I do work for Google at the time of writing this, but these are all personal opinions that have no involvement from the company. For reference, I signed the contract before the Google Reader shutdown announcement, but started after it. I was also sad, but I found NewsBlur a better replacement anyway.

Tiny Tiny RSS: don’t support Nazi sympathisers


XKCD #1357 — Free Speech

After complaining about the lack of cache hits from feed readers, figuring out why NewsBlur (which was actually doing the right thing) was not getting cached responses, and then fixing that problem, I started looking at which other readers remained broken. It turned out that about a dozen people used to read my blog using Tiny Tiny RSS, a PHP-based personal feed reader for the web. I say “used to” because, as of 2017-08-17, TT-RSS is banned from accessing anything on my blog via a ModSecurity rule.
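The rule itself is nothing fancy; something along these lines, with the rule id and the exact User-Agent match being illustrative rather than copied from my configuration:

SecRule REQUEST_HEADERS:User-Agent "@contains Tiny Tiny RSS" \
    "id:990001,phase:1,t:none,deny,status:403,msg:'TT-RSS not welcome'"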

The reason why I went to this extent is not merely technical, which is why you get the title of this blog post the way it is. It all started with me filing requests to support modern HTTP features for feeds, particularly regarding the semantics of permanent redirects, but also about the lack of If-Modified-Since support, which allows a significant reduction in the bandwidth usage of a blog¹. Now, the first response I got to the permanent redirect request was disappointing, but it was a technical answer, so I provided more information. After that?

After that, the responses stopped being focused on the technical issues, and rather appeared to be – and that’s not terribly surprising in FLOSS, of course – “not my problem”. Except the answers also came from someone with a Pepe the Frog avatar². And this is August of 2017, when America has shown it has a real Nazi problem, and willingly associating yourself with the alt-right is effectively being a Nazi sympathiser. The tone of the further answers also shows that it is no mistake or misunderstanding.

You can read the two bugs for yourself. Trigger warning: extreme right and ableist views ahead.

While I try not to spend too much time on political activism on my blog, there is a difference between debating whether universal basic income (or even universal health care) is a right or not, and arguing for ethnic cleansing and the death of part of a population. So no, there is no way I’ll refrain from commenting on, or throwing a light on, this kind of toxic behaviour from developers in the Free Software community. Particularly when they are not even keeping these beliefs to themselves, but effectively boasting them by using a loaded avatar on their official support forum.

So what can you do about this? If you get to read this post and have subscribed to my blog through TT-RSS, you now know why you don’t get any updates from it. I would suggest you look for a new feed reader. As usual, I will suggest NewsBlur, since its implementation is the best one out there. You can set it up by yourself, since it’s open source. Not only will you be cutting your support to Nazi sympathisers, you will also save bandwidth for the web as a whole, by using a reader that actually implements the protocol correctly.

Update (2017-08-06): as pointed out in the comments by candrewswpi, FreshRSS is another option if you don’t want to set up NewsBlur (which admittedly may be a bit heavy). It uses PHP, so it should be easier to migrate to, given the same or a similar stack. It at least supports proper caching, but I’m not sure about permanent redirects; that needs testing.

You could of course, as the developers said on those bugs, change the User-Agent string that TT-RSS reports, and keep using it to read my blog. But in that case, you’d be supporting Nazi sympathisers. If you don’t mind doing that, let me ask you a favour: stop reading my blog altogether. And maybe reconsider your life choices.

I’ll repeat here that the reason why I’m going to this extent is that there is a huge difference between the political opinions and debates that we can all have, and supporting Nazis. You don’t have to agree with my political point of view to read my blog, you don’t have to agree with me to talk with me or being my friend. But if you are a Nazi sympathiser, you can get lost.


  1. you could try to argue that in this day and age there is no point in worrying about bandwidth, but then you don’t get to ever complain about the existence of CDNs, or the fact that AMP and similar tools are “undemocratizing” the web.
  2. Update (2017-08-03): as many people have asked: no, it’s not just any frog or any Pepe that automatically makes you a Nazi sympathiser. But the avatar was not one of the original illustrations, and the attitude of the commenter made it very clear what their “alignment” was. I mean, if they were fans of the original character, they would probably have the funeral scene as their avatar instead.

Modern feed readers

Four years ago, I wrote a musing about the negative effects of the Google Reader shutdown on content publishers. Today I can definitely confirm that some of the problems I foretold have materialized. Indeed, thanks to the fortuitous fact that people have started posting my blog articles to Reddit and Hacker News (neither of which I’m fond of, but let’s leave that aside), I can declare that the vast majority of the bandwidth used by my blog is consumed by bots, and in particular by feed readers. But let’s start from the beginning.

This blog is syndicated over a feed, the URL and format of which have changed a number of times before, mostly along with the software, or with updates to it. The most recent change was due to switching from Typo to Hugo, with the feed name changing. I could have kept the original feed name, but it made little sense at the time, so instead I set up permanent redirects from the old URLs to the new ones, as I always do. I say I always do because even the original URLs from when I ran the blog off my home DSL still work.

Some services and feed reading software know how to deal with permanent redirects correctly, and will (eventually) replace the old feed URL with the new one. For instance, NewsBlur will replace URLs after ten fetches have replied with a permanent redirect (which is sensible, to avoid accepting a redirection that was set up by mistake and soon rolled back, and to avoid data-poisoning attacks). Unfortunately, it seems this behaviour is extremely rare, and so on August 14th I received over three thousand requests for the old Typo feed URL (admittedly, that was the most persistent URL I used). In addition to that, I also received over 300 requests for the very old Typo /xml/ feeds, 122 of which still pointed at my old dynamic domain, which has itself been pointing at the normal domain for my blog for almost ten years now. And yet some people still have subscriptions to those URLs! At least one Liferea and one Akregator install are pointing at them.

But while NewsBlur implements sane semantics for handling permanent redirects, it is far from a perfect implementation. In particular, even though I have brought this up many times, NewsBlur is not actually sending If-Modified-Since or If-None-Match headers, which means it takes a full copy of the feed at every request. And even though it does support compressed responses (no fetch of the feed is allowed without them), NewsBlur requests the same URL more than twice an hour, because it seems to have two “sites” described by the same URL. At 50KiB per request, that makes up about 1% of the total bandwidth usage of the blog. To be fair, this is not bad at all, but one has to wonder why they can’t save the last-modified or etag values — I guess I could install my own instance of NewsBlur and figure out how to do that myself, but who knows when I would find the time for that.

Update (2017-08-16): Turns out that, as Samuel pointed out in the comments and on Twitter, I wrote something untrue. NewsBlur does send the headers, and supports this correctly. The problem is an Apache bug that causes 304 never to be issued when using If-None-Match and mod_deflate.
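For reference, this is roughly what the conditional-request dance looks like from the client side; the feed URL is a placeholder, and a real reader would persist the validators between polls instead of fetching twice in a row:

# Sketch of a conditional fetch: remember the validators from the
# previous response, send them back, and expect a bodyless 304 when
# the feed has not changed.
import requests

FEED_URL = 'https://blog.example.org/feed/'  # placeholder

first = requests.get(FEED_URL)
validators = {
    'If-None-Match': first.headers.get('ETag', ''),
    'If-Modified-Since': first.headers.get('Last-Modified', ''),
}

second = requests.get(FEED_URL, headers=validators)
if second.status_code == 304:
    print('Feed unchanged: no body transferred, nothing to parse.')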

To be fair, even rawdog, which I use for Planet Multimedia, does not appear to support these properly. Oh, and speaking of Planet Multimedia: if someone were interested in providing a more modern template, so that Monty’s pictures don’t take over the page, that would be awesome!

There actually are a few other readers that do support these values correctly, and indeed receive 304 (Not Modified) status codes most of the time. These include Lighting (somebody appears to be still using it!) and at least yet-another-reader-service Willreadit.com — the latter appears to be in beta and invite-only; it’s probably the best HTTP implementation I’ve seen for a service with such a rough website. Indeed, its bot landing page points out how it supports If-Modified-Since and gzip-compressed responses. Alas, it does not appear to learn from permanent redirects, so it’s currently fetching my blog’s feed twice, probably because there are at least two subscribers for it.

Also note that supporting If-Modified-Since is a prerequisite for supporting delta feeds, which are an interesting way to save even more bandwidth (although I don’t think this is feasible to do with a static website at all).

At the very least, it looks like we won the battle for supporting compressed responses. The only 406 (Not Acceptable) responses for the feed URL are for Fever, which is no longer developed or supported. Even Gwene, which I pointed out was hammering my blog last time I wrote about this, is now content to get the compressed version. Unfortunately, it does not appear that my pull request was ever merged, which means the repository itself is likely completely out of sync with what is being run.

So in 2017, what is the current state of the art in feed reader support? NewsBlur has recently added support for JSON Feed, which is not particularly exciting – when I read the post I was reminded, by the screenshot of choice there, of where I had heard of Brent Simmons before: Vesper, which is an interesting connection, but I should not go into that now – but it at least shows that Samuel Clay is actually paying attention to the development of the format — even though that development right now appears to be mostly about avoiding XML. Which, to be honest, is not that bad of an idea: since HTML (even HTML5) does not have to be well-formed XML, you need to provide it as CDATA in an XML feed. And the way you do that makes it very easy to implement it incorrectly.
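A quick illustration of the pitfall, in an RSS-style fragment (assuming the usual content:encoded extension): a literal “]]>” inside the HTML would terminate the CDATA section early, so it has to be split in two:

<!-- fine: HTML travels inside a CDATA section -->
<content:encoded><![CDATA[<p>Hello, <b>world</b>.</p>]]></content:encoded>

<!-- "foo]]>bar" cannot appear verbatim; it needs two CDATA sections -->
<content:encoded><![CDATA[foo]]]]><![CDATA[>bar]]></content:encoded>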

Also, as I wrote this post I realized what else I would like from NewsBlur: the ability to subscribe to an OPML feed as a folder. I still subscribe to lots of Planets, even though they seem to have lost their charm, but a few people are aggregated in multiple planets, and it would make sense to be able to avoid duplicate posts. If I could tell NewsBlur «I want to subscribe to this Planet, aggregated into a folder», it would be able to tell which feeds are duplicated, and mark the posts as read on all of them at the same time. Note that what I’d like is something different from just importing an OPML description of the planet! I would like the folder to be kept in sync with the OPML feed, so that if new feeds are added, they also get added to the folder, and the same for removed feeds. I should probably file that on GetSatisfaction at some point.