The Blog’s Most Clicked Articles in 2025

I’m not a big fan of the whole “wrapped” meme from services — a lot of things for me don’t quite line up on a calendar year basis, so I don’t find that particularly compelling to look at. I also find it fairly silly to “wrap” a year at the beginning of December for any service that doesn’t have seasonal patterns — the reason why Spotify, the ur-wrapper, releases theirs well before the year’s end is that, for many people, most of December is spent playing Holiday music of one kind or another, often on repeat.

But for the last couple of years I’ve been looking at the Google Webmaster Tools at the end of the year to see what people have been looking for, which my blog possibly answered, just to remind myself that, in a world of short-term memory, microblogging, social media, YouTube video essays, TikTok, Reels and shorts, “AI” summaries, and LLM search agents, long-form blogging still has value — and not just to influence said LLMs! Though, hey, I’m going to count it as a win if my point of view ends up amplified by LLMs simply because nobody else cares to write anymore!

Anyway, I decided to take a closer look, grouping searches by topic (with a little bit of help from an LLM for the mechanical processing only — as always, this blog’s content is entirely Flameeyes-produced, though as I said before I do find LLMs useful as CASE tools), and the results aren’t really surprising per se, but they definitely feel a bit bittersweet.

As an aside, despite the large number of em dashes throughout this post, and most others on this site, there’s zero LLM editing. If you still believe that the em dash is a “tell” of LLM-generated content, you’re misinformed, likely easily manipulated, and you really should learn to distinguish reality from memes. I’ve been using WinCompose to have access to symbols like that for literal years, as I can’t possibly type this blog without it!

First of all, the largest click source for this blog is still ELF, the Executable and Linkable Format. If I put together the topics that relate directly to ELF, there is, on average, at least one click per day to the blog. Of these, the vast majority are from people looking for what rpath is, which is unsurprising, given that the feature has always been sparsely documented, while the errors referring to it are fairly visible. I think I’ll know that linkers’ documentation has improved when people stop ending up here looking for rpath.
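For anyone landing here from one of those searches, this is a minimal sketch of how to check which rpath or runpath entries a binary actually carries, assuming binutils’ readelf is installed; the helper name is made up for illustration and is not from any of my older posts:

```python
#!/usr/bin/env python3
"""Minimal sketch: list the rpath/runpath entries baked into an ELF file."""
import subprocess
import sys


def rpath_entries(path):
    # "readelf -d" dumps the .dynamic section; the (RPATH) and (RUNPATH)
    # entries are the extra library search paths recorded at link time,
    # for example via -Wl,-rpath,/some/dir.
    output = subprocess.run(
        ["readelf", "-d", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.strip() for line in output.splitlines()
            if "(RPATH)" in line or "(RUNPATH)" in line]


if __name__ == "__main__":
    for entry in rpath_entries(sys.argv[1]):
        print(entry)
```

Run against a binary linked with an explicit rpath it prints a line such as “Library runpath: [/opt/something/lib]”, while nothing shows up for binaries that rely purely on the default search paths.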

Speaking of documentation to be improved, once GNU make figures out how to handle parallel builds without requiring the complication of a jobserver, maybe I’ll stop being the fallback documentation for it. Though I guess this is exactly the best case for the blog: a whole lot of my posts are effectively breadcrumbs that augment, or interpret, documentation, rather than replace it.
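To give an idea of what that complication looks like in practice, here is a rough sketch in Python (names made up, and assuming the classic pipe-based --jobserver-auth=R,W advertisement rather than the newer named-FIFO variant) of the token handshake a tool has to implement to cooperate with make’s jobserver; a real client also gets one implicit job slot for free and would run jobs concurrently, which this sketch skips for brevity:

```python
import os
import re


def jobserver_fds():
    # GNU make advertises the jobserver pipe in MAKEFLAGS as
    # --jobserver-auth=R,W (older versions used --jobserver-fds=R,W).
    match = re.search(r"--jobserver-(?:auth|fds)=(\d+),(\d+)",
                      os.environ.get("MAKEFLAGS", ""))
    return (int(match.group(1)), int(match.group(2))) if match else None


def run_jobs(jobs):
    fds = jobserver_fds()
    if fds is None:
        # Not running under a parallel make: just do the work serially.
        for job in jobs:
            job()
        return
    read_fd, write_fd = fds
    for job in jobs:
        # Acquire a one-byte token from the pipe before starting a job...
        token = os.read(read_fd, 1)
        try:
            job()
        finally:
            # ...and put it back afterwards, so the overall number of
            # concurrent jobs across all sub-makes stays within -jN.
            os.write(write_fd, token)
```

And of course the usual gotcha applies: make only keeps the pipe open for commands it knows are recursive (prefixed with + or invoking $(MAKE)), which is where the infamous “jobserver unavailable” warning comes from, and why people end up searching for it.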

The second most clicked topic is, uh, open source washing machines (and other appliances in general), and not for the first year — it is the reason why I revisited it in 2025! I might actually have to do another round of revisiting this topic, both because of the continuous interest, and because there are a few more things I learnt in the past year – particularly with the deep dive into shopping for appliances for the new house – that would make for a good follow-up. It’s still unlikely to be a practical follow-up, since even with the new house I don’t have the kind of workshop space that your average YouTube maker does, but maybe one day I’ll be told my writing has had a part in the design of the appliances I’ll be using.

Speaking of appliances, it looks like quite a lot of people have been caught by the trap of the too-cheap-to-be-true Whirlpool AP330W and the inability to find replacement filters for it — or more realistically the fact that there is no way for it to register the replacement filter at all! Thankfully it does appear that, at least in the UK, it is now entirely out of stock. And that probably means my blog post is climbing up the ranking for that particular search, as most stores won’t be showing out of stock items at all.

Coming third is my only topic written in Italian — there doesn’t seem to be a whole lot of material covering the Italian staple of crosswords, rebus, and general puzzles that is La Settimana Enigmistica (which I grew up with), so despite it being an older post, for a very niche audience, people keep clicking on it. I’m surprised the publisher hasn’t tried to do something to make me take down that page, as it doesn’t really paint them in a good light — though the fact that they aren’t trying to hide it is exactly what paints them in a good light to me.

The final “significant” topic (that is, one that had more than 100 clicks throughout the year, at least from Google Search) is also the only one that appeared in 2025: Sweet Home 3D. It does look like most people trying to use it need something more than just the generic furniture samples that come with the program, so hopefully I helped at least a few of them.

The remaining clicks are a lot more evenly spread across different topics — there’s a few people still looking for Google Reader, configuring HTTPD, or looking for various reviews of hardware or services. In general I believe this is a fairly healthy mix of what I wrote about in the past. Sounds good, right?

Well, the thing is, I’m a bit worried about what’s going on with my ex-colleagues at Google Search, in particular when it comes to Crawl and Indexing. According to Webmaster Tools (which, admittedly, was a very neglected tool even when I was working there), only 2320 pages from this blog are indexed, though, at the time of writing, I have 2830 posts published, and while there are probably a few that are not particularly interesting to index, when I look at the list of pages that are “Crawled, currently not indexed”, I do see a few that would actually be good references for specific topics. I also see a number of “AMP variants” and the per-post feed of comments, most of which should be easily discarded — also, maybe I should find a way to disable those in WordPress? I don’t think anyone has a reason to subscribe to the per-post comment feed! I definitely disabled that in Typo back in the day.

To be fair, there’s also a note in the UI that says «Due to internal issues, this report has not been updated to reflect recent data.» — it doesn’t specify what “recent” means though, and knowing that particular tool, it might very well be that the data is multiple months stale! As for the emails that I receive with possible crawl failures, almost all of them in recent memory have been a variant of “duplicate” — which is likely because a lot of the links to the blog that get discovered include query string parameters.

But there’s another kink — while Google Webmaster Tools gives me some statistics for the clicks from Search itself, it does not appear to have any information for their newer verticals such as AI Overview and Gemini (nor did it ever have information on Web Answers, which was a cursed product well before the AI bubble). How often was my blog referenced by those? Who knows! Did people ever click on the links to see the full version after that? They won’t tell me.

Bing, which provides a similar Webmaster Tools portal, seems to at least aggregate some of that data, as my understanding is that their “Chat” vertical refers to their Copilot integration. They do not provide a breakdown by page or keyword for that, though.

If you’re curious, the number of reported clicks from Bing is roughly 15% of the clicks reported from Google, and while the topics are roughly the same as noted above, there are a few Windows-specific posts that show up only there, which is unsurprising after all.

But what about all of the other solutions? As far as I can tell, neither OpenAI nor Anthropic provide “Webmaster Tools” — not that I’d expect the former to care, but I’m honestly surprised about the latter. Having a way to verify that the crawler is indeed doing its job appropriately (for those of us who are okay with letting our content be ingested, that is), and to know how often my blog is used as a reference, would definitely be helpful. Reporting keywords safely while respecting users’ privacy is not easy, so I wouldn’t even expect that — but a simple “these are the pages that we linked most often, and these are how many clicks happened” would be an interesting metric to observe.

To close this up, I went to compare all of this with what my self-hosted analytics platform reports — I self-host a Matomo instance, which tracks anonymous statistics and doesn’t share information with any third party. While there’s no point in trying to look up the keywords there, at least it does manage to capture a partial view of the referrals coming from ChatGPT and Perplexity — though I’d expect it’s a very small portion of people who click on those links, so (service-side) impressions would be a more useful statistic.

In terms of keywords, for privacy reasons, almost all search engines no longer report them directly in the referrer. This has mostly been accomplished through the use of the Referrer-Policy header, though there are also a number of other steps that a savvy, privacy-focused search engine can take, including the usage of “shim” URLs to hide the provenance even from JavaScript. Yes, the very same shim URLs that some people think are there to snoop on people are more likely there to protect them from easily trackable referrers.
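To illustrate the first mechanism, here is a minimal sketch (hypothetical server and URLs, nothing any real search engine actually runs) of how a results page can use that header so the destination only ever sees the search engine’s origin, never the query-laden results URL:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer


class ResultsPageHandler(BaseHTTPRequestHandler):
    """Hypothetical search results page that hides its query string."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # With "origin", a browser following a link from a page like
        # /search?q=some+keywords only sends the scheme and host (e.g.
        # "https://search.example/") as the Referer, so the clicked
        # site never gets to see the keywords themselves.
        self.send_header("Referrer-Policy", "origin")
        self.end_headers()
        self.wfile.write(
            b"<a href='https://blog.example/some-post/'>result</a>"
        )


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ResultsPageHandler).serve_forever()
```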

In my current view, I see some keywords for Google, which I trust not at all (likely spam sources trying to poison analytics platforms, as Google definitely does its best to stop the search keywords from passing down to the websites that are clicked), and the only other two search engines reporting any are Baidu and Yandex — and the former appears to have mostly generated garbage in there, so it’s actively harmful to look at it. If you wonder why there’s even a keywords breakdown at this point, the answer appears to be that Matomo is trying to upsell you, which unfortunately shows their true colours as not actually being that kind of software after all.

The statistics on the pages that have been visited show a story that is quite close to what Google (and, to a smaller extent, Bing) shows — at least once you remove the requests loaded from the feed, which are biased both by me not writing much in 2025, and by the fact that there appear to be a few feed readers that constantly reload every post linked in the feed! Sweet Home 3D is the most clicked page for the year, followed by more washing machines, and rpath, showing again the importance of long-term availability for some of my content: that is a post from 2010, and it outperforms most of my output for the year.

Unfortunately, if the “average time on page” is anywhere near trustworthy (and I generally don’t trust it), very few people actually read anything at all I write, as almost all of the loads end up with a one-to-ten-second “time on page.” So if you got this far, you’re among the very few — and I thank you for your continued attention. Hopefully I’ll have more (and more interesting) posts for 2026.
