“Planets” in the World of Cloud

As I have written recently, I’m trying to reduce the number of servers I directly manage, as it’s getting annoying and, honestly, out of touch with what my peers are doing right now. I already hired another company to run the blog for me, although I keep access to all of its information at hand and can migrate it where needed. I’m also giving Firebase Hosting a try for my tiny photography page, to see whether it would be feasible to replace my homepage with it.

But one of the things that I still definitely need a server for is keeping Planet Multimedia running, despite its tiny userbase and dwindling content (if you work in FLOSS multimedia and you want to be added to the Planet, drop me an email!).

Right now, the Planet is maintained through rawdog, a Python script that works locally with no database. This is great to run on a vserver, but in a world where most of the investment and improvement goes into Cloud services, it’s not really viable as an option. And to be honest, the fact that this is still using Python 2 worries me more than a little, particularly when the author insists that Python 3 is a different language (it isn’t).

So, I’m now in the market to replace the Planet Multimedia backend with something that is “Cloud native” — that is, designed to be run on some cloud, and possibly lightweight. I don’t really want to start dealing with Kubernetes, running my own PostgreSQL instances, or setting up Apache. I really would like something that looks more like the redirector I blogged about before, or like the stuff I deal with for a living at work. Because it is 2019.

So, sketching this “on paper” very roughly, I expect such software to be along the lines of a single binary with a configuration file, which outputs static files that are then served by the web server. Kind of like rawdog, but long-running. Changing the configuration would require restarting the binary, but that’s acceptable. No database access is really needed, as caching can be maintained at the process level, although that would mean that permanent redirects couldn’t be rewritten in the configuration. So maybe some configuration database would help, but it seems most clouds support some simple unstructured data storage that would solve that particular problem.
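To make this a bit more concrete, here is a rough sketch, with entirely hypothetical names, of what such a configuration could map to in Go types; note how little state is actually involved, and that none of it requires a database:

```go
// A rough sketch of the configuration such a binary could read at
// startup. All names here are hypothetical.
package planet

import "time"

// FeedConfig describes a single subscribed blog.
type FeedConfig struct {
	Title string // name shown on the Planet page
	URL   string // Atom or RSS feed URL
}

// Config is everything the long-running binary needs; changing it
// would require a restart, as noted above.
type Config struct {
	OutputDir     string        // where the static files get written
	FetchInterval time.Duration // how often each feed is polled
	Feeds         []FeedConfig
}
```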

From experience at work, I would expect the long-running binary to itself be a webapp, so that you can either inspect (read-only) what’s going on, or make changes to the configuration database through it. And it should probably have independent, parallel execution of fetchers for the various feeds, which store the received content into a shared (in-memory only) structure, which is in turn used by the generation routine to produce the output files. It may sound like over-engineering the problem, but that’s a bit of a given for me, nowadays.
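As an illustration of what I have in mind, here is a minimal Go sketch of that architecture, assuming hypothetical names and feed URLs, and deliberately eliding error handling and actual feed parsing: one goroutine per feed, a mutex-protected in-memory store, a periodic generator, and a read-only inspection endpoint:

```go
// A minimal sketch of the architecture described above; names and
// URLs are hypothetical, error handling and feed parsing elided.
package main

import (
	"io"
	"net/http"
	"os"
	"sync"
	"time"
)

// store is the shared, in-memory-only structure the fetchers write
// into and the generator reads from.
type store struct {
	mu      sync.RWMutex
	content map[string][]byte // feed URL -> last fetched body
}

func (s *store) set(url string, body []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.content[url] = body
}

// fetch polls one feed independently of the others; a failure only
// leaves that one feed's content stale.
func fetch(s *store, url string, every time.Duration) {
	for {
		if resp, err := http.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			s.set(url, body)
		}
		time.Sleep(every)
	}
}

// generate writes the static output; a real implementation would
// parse the stored feeds and render templates instead of
// concatenating raw bodies.
func generate(s *store, outDir string) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	var page []byte
	for _, body := range s.content {
		page = append(page, body...)
	}
	os.MkdirAll(outDir, 0o755)
	os.WriteFile(outDir+"/index.html", page, 0o644)
}

func main() {
	feeds := []string{"https://example.org/feed.atom"} // hypothetical
	s := &store{content: make(map[string][]byte)}
	for _, url := range feeds {
		go fetch(s, url, 30*time.Minute)
	}
	// The read-only inspection webapp mentioned above, reduced here
	// to listing which feeds have content.
	go http.ListenAndServe(":8080", http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			s.mu.RLock()
			defer s.mu.RUnlock()
			for url := range s.content {
				io.WriteString(w, url+"\n")
			}
		}))
	for {
		generate(s, "out")
		time.Sleep(10 * time.Minute)
	}
}
```

The point of isolating the fetchers from the generator through the shared structure is exactly the failure mode I complain about below: one unreachable feed should only mean stale content for that feed, not a failed run.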

To be fair, the part that makes me the most uneasy of all is authentication, but Identity-Aware Proxy might be a good solution for this. I have not looked into it, but I have used something similar at work.

I’m explicitly ignoring the serving-side problem: serving static files is a problem that has mostly been solved, and I think all cloud providers have some service that allows you to do that.

I’m not sure if I will be able to work more on this, rather than just providing a sketched-out idea. If anyone knows of something like this already, or feels like giving building this a try, I’d be happy to help (employer permitting, of course). Otherwise, if I find some time to build stuff like this, I’ll try to get it released as open source, to build upon.

Planets, feeds and blogs

You have probably noticed that last month I replaced Harvester with rawdog for Planet Multimedia. The reason was easy to explain: Harvester requires libraries that only work with Ruby 1.8, and while moving to Ruby 1.9 or 2 would mean being able to use the feedfetcher (the same one used by IFTTT), my attempts at updating the code to work with a more modern version of Ruby have all been failures.

Since I did not intend to be swamped with one more custom tool to maintain, I turned to another standard tool to implement the aggregator, rawdog, holding my nose both at its use of darcs for source control and at its name (please don’t google it at work without safe search on). The nice part about using this tool is that it’s already packaged in Gentoo, so it’s handled straight by portage with binary packages. Unfortunately, the default templates are terrible and the settings non-obvious, but Luca was able to make the best of it.

But more and more problems became obvious with time. The first is that the tool does not respect exit codes: it always returns zero (success) even if the processing was incomplete. It took me two weeks to figure out that the script was failing when run from cron because the environment lacked the locale settings: the cron logs said that everything was alright, and since I use fcron, set to mail me only on errors, I received no email either.

A couple of days ago, I got complaints again that the Planet was not updating; again, no error in the cron logs, no error in my email. I ran the command manually, and it told me that Luca’s feed, on blogs.gentoo.org, was unreachable. Okay, sure. But then it did not solve itself when the feed came back up. Today I looked back into it, and J-B’s and Rémi’s blog feeds were unreachable. Once again, no non-zero exit status, thus no mail and no error in the logs. This is not the way it should behave.

But that’s not enough. The other problem with rawdog is that it does not, by default, support generating a feed of the aggregation, like Harvester (and Planet/Venus) do. I found that Jonathan Riddell actually built a plugin for Planet KDE to generate the feed, but I haven’t tested it yet, because I have not found its authoritative source, just multiple copies of it on different websites. It also produces RSS feeds rather than Atom feeds, and I’m sorry to say, but I much prefer Atom.
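For what it’s worth, generating an Atom feed of the aggregation does not take much code; here is a hedged sketch using Go’s standard encoding/xml package, with hypothetical, minimal types (a real feed would need author information too):

```go
// A minimal sketch of emitting an Atom feed for the aggregation,
// using only Go's standard library. The types and sample entry are
// hypothetical; only the bare minimum of Atom elements is covered.
package main

import (
	"encoding/xml"
	"os"
	"time"
)

type atomLink struct {
	Href string `xml:"href,attr"`
}

type atomEntry struct {
	XMLName xml.Name  `xml:"entry"`
	Title   string    `xml:"title"`
	ID      string    `xml:"id"`
	Link    atomLink  `xml:"link"`
	Updated time.Time `xml:"updated"`
}

type atomFeed struct {
	XMLName xml.Name    `xml:"http://www.w3.org/2005/Atom feed"`
	Title   string      `xml:"title"`
	ID      string      `xml:"id"`
	Updated time.Time   `xml:"updated"`
	Entries []atomEntry `xml:"entry"`
}

func main() {
	feed := atomFeed{
		Title:   "Planet Multimedia",
		ID:      "https://example.org/planet/", // hypothetical
		Updated: time.Now(),
		Entries: []atomEntry{{
			Title:   "Example aggregated post", // hypothetical
			ID:      "https://example.org/post",
			Link:    atomLink{Href: "https://example.org/post"},
			Updated: time.Now(),
		}},
	}
	out, _ := xml.MarshalIndent(feed, "", "  ")
	os.WriteFile("atom.xml", append([]byte(xml.Header), out...), 0o644)
}
```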

So where does this leave us? I’m not going to try fixing rawdog, I’m afraid, mostly because I don’t intend to spend time with darcs. My options are either to go back to Harvester and fix it to not use DBI and to support Ruby 1.9, or to try adapting the parts of NewsBlur that already deal with aggregating feeds and producing new feeds into an alternative to rawdog. If I am to do something like that, though, I’m most likely going to take my sweet time and make it a web-configurable tool, rather than something that needs to be configured on the command line or with configuration files.

The reason for that is, very simply, that I’m growing fond of doing most of my work in a browser when I can, and this looks like a perfect fit for that. Even more so if you can give someone else access to look into it, and if you can avoid storing passwords.

So, any takers to help me with this project?