Kind Software

This post sprouts in part from a comment on my previous renunciation of support for FSFE, but it stands on its own, and is not related to my feelings towards FSFE (which I already covered elsewhere). It should also not be a surprise to long-time followers, since I’m going to revisit arguments that I have already covered, for better or worse, in the past.

I have not been very active as a Free Software developer in the past few years, for reasons I already spoke about, but that does not mean I stopped believing in the cause or turned away from it. At the same time, I have never been a fundamentalist, and so when people ask me about “Freedom 0”, I’m torn, as I don’t think I quite agree on what Freedom 0 consists of.

On the Free Software Foundation website, Freedom 0 is defined as

The freedom to run the program as you wish, for any purpose (freedom 0).

At the same time, a whole lot of fundamentalists seem to me to try their best to not allow users to run programs as they wish. Otherwise, we wouldn’t be having purity tests and crusades against closed-source components that users may actually want to use, and we wouldn’t have absurdist solutions for firmware that involve shoving binary blobs under the carpet, and just never letting the user update them.

The way in which I disagree with both the formulation and the interpretation of this statement is that I think software should, first of all, be usable for its intended purpose, and that software that isn’t… isn’t really worth discussing.

In the case of Free Software, I think that, before any licensing and usage concern, we should be concerned about providing value to the users. As I said, not a novel idea for me. This means that software built with the sole idea of showing Free Software supremacy is not useful software for me to focus on. Operating systems, smart home solutions, hardware, … all of these fields need users to have long-term support, and those users will not be developers, or even contributors!

So with this in mind, I want to take a page out of the literal Susan Calman book, and talk about Kind Software, as an extension of Free Software. Kind Software is software that is meant for the user to use, and that keeps the user as its first priority. I know that a number of people would make this into a perfect overlap and contrast, considering all Free Software as Kind Software and all proprietary software as not… but the truth is significantly more nuanced than that.

Even keeping aside the amount of Free Software that is “dual-use”, which can serve attackers just as much as defenders – and that might sometimes have a bit too much of a bias towards the attackers – you don’t need to look much further than the old joke about how “Unix is user friendly, it’s just very selective of who its friends are”. Kind Software wouldn’t be selective — the user’s use cases are paramount, and any software that says “You don’t do that with {software}, because it is against my philosophy” would, by my definition, not be Kind Software.

Although, obviously, this brings us back to the paradox of tolerance, which is why I don’t think I’d be able to lead a Kind Software movement, and why I don’t think that the solution to any of this has to do with licenses, or codes of ethics. After all, different people have different ideas of what is ethical and what isn’t, and sometimes you need to make a choice by yourself, without fighting an uphill battle so that everyone who doesn’t agree with you is labelled an enemy. (Though, if you think that nazis are okay people, you’re definitely not a friend of mine.)

What this tells me is that I can define my own rules for what I consider “Kind Software”, but I doubt I can define them for the general case. And in my case, I have a mixture of Free Software and proprietary software on the list, because I would always select the tools that first get the job done, and second are flexible enough for people to adapt. Free Software makes the latter much easier, but too often the former is missing, and the value of software that can be easily modified but doesn’t do what I need is… none.

There is more than that of course. I have ranted before about the ethical concerns with selling routers, and I’ve actually been vocal as a supporter of laws requiring businesses to have their network equipment set up by a professional — although with a matching relaxation of the requirements to be considered a professional. So while I am a strong believer in the importance of OpenWRT, I do think that suggesting it as a solution for general end users is unkind, at least for the moment.

On the other side of the room, Home Assistant looks to me like a great project, and a kind one at that. The way they handled the recent security issues (in January — pretty much just happened as I’m writing this) is definitely part of it: they warned users wherever they could, and made sure to introduce safeties so that further bugs in components they don’t even support wouldn’t cause this very same problem again. And most importantly, they are not there to tell you how to use your gadgets; they are there to integrate with whatever is possible.

This is, by the way, the main reason why I don’t like self-hosting solutions, and why I would categorically consider software that needs to be self-hosted as unkind: it puts the burden of keeping it from being abused on the users themselves, and unless their job is literally to look after hosted services, it’s unlikely that they will do a good job — and that’s without discussing the fact that they’d likely spend time they meant for something else just to keep the system running.

And speaking of proprietary, yet kind, software — I have already spoken about Abbott’s LibreLink and the fact that my diabetes team at the hospital is able to observe my glucose levels remotely, in pretty much real time. This is obviously a proprietary solution, and not a bug-free one at that, and I’m also upset they locked it in, but it is also a kind one. The various tools that don’t seem to care about the expiration dates, that think they can provide a good answer without knowing the full extent of the algorithm involved, and that insist it’s okay not to wait for the science… well, they don’t sound kind to me: they don’t just allow access to personal data, which would be okay, but they present data that might not be right for people to make clinical decisions on, and… yeah, that’s just scary to me.

Again, that’s a personal view on this. I know that some people are happy to try open-source medical device designs on themselves, or be part of multi-year studies for those. But I don’t think it’s kind to expect others to do the same.

Unfortunately, I don’t really have a good call to action here, except to tell Free Software developers to remember to be kind as well, and to think of the implications of the software they write. Sometimes, just because we’re able to throw something out there doesn’t mean it’s kind to do so.

My Take on What I Would Replace FSFE With

So it looks like my quick, but on-the-spot, renunciation of FSFE last December made the rounds much further than most of the blog posts I ever write. I think that, in comparison, it reached a much wider audience than my original FSFE support post.

So I thought it would be worth spending a little more time pointing out why I decided to openly stop supporting FSFE — I did provide most of this reasoning in short form on Twitter, but I thought it would be better summarised in a blog post that others can reference, and that I can point people at.

So first of all, this is not all about the allegations. It was very easy to paint my post, and all the other critical outbursts against FSFE, as a position taken on hearsay. But as I already said, this was just the “flash trigger” for withdrawing my support from an organization towards which my feelings had been cooling significantly for years. Again, I already said in the other post that I got in touch with Matthias a few years ago about my concerns with the organization, and it was Public Money, Public Code that kept me as a supporter since then.

The reason why I decided to renege my support in writing when the allegations came out, and even changed my posting schedule for it, is that I didn’t want my (much older) post on supporting FSFE to be used as an excuse to support an organization that was in the middle of a controversy. I have been a strong supporter and have been talking to people about FSFE for years, including about their more recent REUSE initiative last year, and I wouldn’t have wanted to be used as a shield from criticism.

I had an entire draft complaining about the way FSFE made me feel more like I was supporting FSFG (Free Software Foundation Germany), and that doesn’t seem to have changed much since I wrote it two years ago. Both the news page and the activities page at the time of writing clearly highlight a tight focus on German issues, including talking in very broad strokes about how the German Corona Warn App doesn’t need Google – strokes so broad that they make it feel like a lot of smoke with no meat underneath – and still more focus on dealing with router lock-ins (missing a lot of nuance).

I do understand that, if most of the engaged volunteers are German, they will care about German issues the most, and that if the “wins” come from Germany, obviously the news will be filled with German wins. But at the same time, an organization that wants to be European should strive for some balance, and decide not to source all of its news from a single country. Looking at the news archive page at the time I’m writing this post, there are seven references to «Germany», one to «France», and none to «Italy», «Ireland», «United Kingdom», «Great Britain», and so on.

And it’s not that there’s nothing happening in those other countries. COVID Tracker Ireland, to stay on the topic of Covid tracing apps, is also Free Software (under the MIT license), and a number of other apps have literally been built on its code. Public Money, Public Code at its best! But there’s nothing about it on FSFE’s website, while there are a number of references to the German app instead.

And again speaking of Public Money, Public Code, Italy doesn’t seem to be represented at all in their list of news, the only reference being a two-year-old entry about “FSFE Italy” asking political parties to support the project. This despite the fact that the Italian Team Digitale and the established pagoPA company have also been releasing a lot of Free Software.

Once again, if you want to change the direction of an organization, joining directly and “walking the walk” would help. But there are a number of reasons why that might be difficult for people. While I was working for Google – a cloud provider, very clearly – it would have been fairly difficult for me to join an organization loudly complaining about “the cloud” (complaints I disagree with anyway). And similarly, given the amount of coverage of privacy, even when not directly related to Free Software, it would be hard for me to be an activist given my current employer.

Before you suggest that this is my problem, and that I’m not the target for such an organization, I want to point out that this is exactly why I didn’t go and say they are a terrible organization, and didn’t call for a boycott. I just pointed out that I no longer support them. I did say that, from my experience, I have no reason to disbelieve the accusations, and even after reading their response statement I don’t have any reason to change my mind about that.

But I also have been a Free Software developer and advocate for a long time. I believe in the need for more Free Software, and I agree that government-developed software should be released to the public, even if it doesn’t directly benefit the taxpayers of the government that developed it. I made that case in Italian over ten years ago (I should possibly translate that blog post, or at least re-tell the tale). I would enjoy being an activist for an organization that cares about Free Software, but also cares to get more people on board rather than fewer, and would rather not build “purity tests” into its role.

Another big problem is the engagement method. Because of the above-mentioned purity tests, FSFE appears to engage with its community over Twitter only as a “write-only” medium. If you want to be an activist for FSFE you need to use email and mailing lists, or maybe you can use Mastodon. In-person meetings still seemed to be all the rage when I discussed this a few years ago, and I do wonder if, with 2020 happening, they managed to switch at least to Jitsi, or if they ended up just using an Asterisk server connected to a number of landlines to call into.

I’m still partially comfortable with mailing lists for discussion, but it’s honestly not much of a stretch to see how this particular communication medium is not favorable to younger people. It’s not just the lack of emoji and GIF reactions — it’s also a long-form medium, where you need to think carefully about all the words you use, and where everything persists over time. And that counts double when you have to handle discussions with an organization that appears to have more lawyers than developers.

I joked on Twitter that asking a Gen-Z person to use email to participate is the equivalent of asking a Millennial (like me) to make a phone call. And I say that knowing full well how much time I used to spend on the phone when I ran my own company: it’s not fun for me at all.

But that means you’re cutting out two big categories of people who could have both the intention and the means to help: younger people with time on their hands, who can actively participate in programs and organizing, and professionals who might have the expertise and the contacts.

And speaking of the professionals — you may remember that I came to the REUSE tool (which I contributed a number of fixes to myself) after complaining about having a hard time contributing while at Google because, among other things, projects often didn’t provide a proper license that I could refer to when submitting patches. At the time of writing, just like a few years ago when I first tried correcting something on the website, the FSFE website repository does not provide a license or SPDX headers (to comply with REUSE).

What I would like is an actual Europe-wide organization, focused not on government policy, but rather on making a sustainable ecosystem of Free Software development possible, particularly when it comes to all of those nuances that get lost in a discussion of Free Software and licensing that is always too USA-centric.

The organization I have in mind, one that I would love to contribute money to (if not outright be an activist for, time constraints being a thing), would spend time in universities and high schools, showing the usefulness of Free Software for learning new things, and convincing both professors and students of the value of sharing and receiving, and of respecting the licensing.

This is not as obvious as it sounds: back when I was in high school, the BSA was the only enforcer of license compliance in schools, and Free Software advocates were explicitly undermining those efforts, since the BSA was working on behalf of a few proprietary software manufacturers. But as I said before, undermining proprietary software licenses undermines the Free Software movement. If people are trained to ignore licensing requirements, they are going to do so for Free Software too, and that’s how you end up with projects ignoring the limits of the GPL and other licenses.

And this connects to the next problem: for the movement to be sustainable, you also need people to make a living off it, and that requires taking licensing seriously. It’s a topic I come back to over and over: business-oriented Free Software is already lacking, and people need money to survive. When the option for more experienced developers, project managers, community managers, … is basically to barely make ends meet, or to go work for one of the big companies that don’t focus on Free Software… well, the answer is obvious to a lot more people than you may imagine. Not everyone gets to be the “star” that Greg KH or Linus Torvalds are, and get paid to pretty much keep up their Free Software engagement — most others either do it as a side hustle, or have a side hustle.

The Document Foundation found out the hard way that you need an actual business plan if you want to keep maintaining big, complex Free Software projects. And even Mozilla, once held up as a core pillar of paid Free Software development, has shown this year how hard it is to keep “running the show” without a sustainable long-term plan.

An organization focused on sustainability in Free Software should, at least in my hopes, focus on providing this kind of support: providing blueprints for business engagements, providing outreach on license compliance to the benefit of Free Software, but also providing the pragmatic tools for Free Software enthusiast consultants to engage with their customers, and tearing down the market barriers that make it so hard for single developers to find customers.

FSFE has lots of public policy engagements, particularly with the European Union — and some of those are extremely valuable. They are required to level the playing field between Free Software developers and big corporations with entire organizations of lawyers and marketers. But they shouldn’t be the only thing that a European organization focusing on Free Software is remembered for.

Video: unpaper with Meson — pytesting our way out

Another couple of hours spent looking at porting Unpaper to Meson, this time working on porting the tests from the horrible mix of Automake, shell, and C to a more manageable Python testing framework.

I’ll write up a more complex debrief of this series of porting videos, as there’s plenty to unpack out of all of them, and some might be good feedback for the Meson folks to see if there’s any chance to make a few things easier — or at least make it easy to find the right solution.

Also, you can see a new avatar in the corner to make the videos easier to recognize 😀 — the art is from the awesome Tamtamdi, commissioned by my wife as a birthday present last year. It’s the best present ever, and it seemed a perfect fit for the streams.

And as I said at the end, the opening “getting ready” photo stars Pesto from Our Super Adventure, and you probably saw it already when I posted about Sarah and Stef’s awesome comics.

As a reminder, I have been trying to stream a couple of hours of Free Software every so often on my Twitch channel — and then archiving these on YouTube. If you’re interested in being notified about these happening, I’m usually announcing them with a few hours to spare (rarely more than that due to logistics) on Twitter or Facebook.

Progress Logging and Results Logging

There is one thing that my role as a software mechanic seems to attract me to, and that’s the importance of logging information. Logging is one of those areas that tends to bring up strong opinions, and, expanded into the wider area of observability, it has spawned entire businesses (shout out to friends at Honeycomb.io). But even in smaller realities, I found myself caring about logging, setting up complex routing with metalog, or hoping for a way to access Apache logs in a structured format.

Obviously, when talking about logging in bubbles, there’s a lot more to consider than just which software you send the logs to — even smaller companies nowadays need to be careful with PII, since GDPR makes most data toxic to handle. I can definitely tell you that some of the analysis I used to do for User-Agent filtering would not pass muster for a company in the age of GDPR — in a very similar fashion to the pizzeria CRM.

But leaving aside the whole complicated legal landscape, there’s a distinction in logs that I have not seen well understood by engineers – no matter where they come from – and that is the difference between what I call progress logging and results logging. I say “what I call”, because I found a number of other categorizations of logs, but none that matched my thoughts on the matter, so I needed to give these names.

Distinctions that I did hear people talk about are more like “debug logs” versus “request logs”, or “text logs” versus “binary logs”. But this all feels like it’s mixing media and message, in too many cases — as I said in my post about Apache request logs, I would love for structured (even binary would do) request logs, which are currently “simple” text logs.

Indeed, Apache request logs (and any other server’s) fit neatly, to me, in the category of results logging. They describe what happened when an action completed: the log of an HTTP request includes some information about the request, and some information about the response. Each entry provides the result of what happened.

If you were to oversimplify this, you could log each full request and each full response, and call that results logging: a certain request resulted in a certain response. But I would expect that there is a lot more information available on the server which does not otherwise make it into the response, for many different reasons (e.g. it might be information that the requestor is not meant to find out, or simply doesn’t need to know, while the response is meant to be as small as possible). In the case of an HTTP request to a server that acts as a reverse proxy, the requestor should not be told which backend handled the request — but it would be a useful thing to log as part of the result.

When looking at the practicality of implementing results logging, servers tend to accrue the information needed for generating the result logs in data structures that are kept around throughout the request (or whatever other process) lifetime, and then extract all of the required information from them at the time of generating the log.
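To make the accrual pattern concrete, here is a minimal sketch in Python; the class, the field names, and the backend annotation (tying back to the reverse proxy example above) are all my own inventions for illustration, not an existing API:

```python
import json
import sys
import time


class RequestResultLog:
    """Accrues information throughout a request's lifetime, and emits a
    single structured result entry once the request completes."""

    def __init__(self, method, path):
        self._fields = {"method": method, "path": path}
        self._start = time.monotonic()

    def annotate(self, **fields):
        # Called from anywhere in the handling code, e.g. once a
        # reverse proxy has picked the backend for this request.
        self._fields.update(fields)

    def emit(self, status):
        # Only at completion does the accrued state become a log entry.
        self._fields["status"] = status
        self._fields["duration_ms"] = (time.monotonic() - self._start) * 1000
        print(json.dumps(self._fields), file=sys.stderr)


# Hypothetical usage inside a request handler:
log = RequestResultLog("GET", "/index.html")
log.annotate(backend="app-3")  # logged as part of the result, never sent back
log.emit(status=200)
```

Note that if the process dies before emit() runs, the entry is lost entirely, which is exactly the failure mode described next.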

This does mean that if the server terminates (because it’s killed, the power goes off, or the request caused it to crash) and the result is never produced, then you don’t get a log about the request happening at all — this is the dirty secret of Apache request logs (and many other servers’): they’re called request logs, but they actually log responses. There are ways around this, by writing out parts of the results logs as they become known – this helps both in terms of persistence and in terms of memory usage (if you’re otherwise keeping something in memory just because you’ll need to log it later) – but that ends up getting much closer to the concept of tracing.

Progress logs, instead, are closer to what is often called shotgun debugging or printf() debugging. They are log statements emitted as the code passes through them, and they are usually free-form for the developer writing the code. This is what you get with libraries such as Python’s logging, and it can assume a more or less structured form depending on a number of factors. For instance, you can have a single formatted string with maybe the source file and line, or you may have a full backtrace of where the log event happened and what the local variables in each of the function calls were. What usually makes you choose between the two is cost — and signal-to-noise ratio, of course.
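As a trivial illustration of how free-form progress logging looks with Python’s standard logging module (the format directives are the stock ones that give you the source file and line mentioned above):

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    # %(pathname)s and %(lineno)d are the cheap "source file and line"
    # option; capturing a full backtrace with locals would cost far more.
    format="%(asctime)s %(levelname)s %(pathname)s:%(lineno)d %(message)s",
)

log = logging.getLogger("fetcher")


def fetch(url):
    log.debug("about to fetch %s", url)
    # ... the actual work would happen here ...
    log.debug("fetch of %s completed", url)


fetch("https://example.com/feed.xml")
```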

For example, Apache’s mod_rewrite has a comprehensive progress log that provides a lot of details of how each rewrite is executed, but if you turn that on, it’ll fill up your server’s filesystem fairly quickly, and it will also make the webserver performance go down the drain. You do want this log, if you’re debugging an issue, but you most definitely don’t want it for every request. The same works for results logs — take for instance ModSecurity: when I used to maintain my ruleset, I wouldn’t audit-log every request, but I had a special rule that, if a certain header was provided in the request, would turn on audit-logging. This allowed me to identify problems when I was debugging a new possible rule.
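That ModSecurity trick generalizes well beyond web servers; here is a sketch of the same idea in Python, where the header name is made up for illustration:

```python
import logging

AUDIT_HEADER = "X-Audit-Log"  # made-up opt-in header, for illustration

logger = logging.getLogger("ruleset")


def handle_request(headers):
    # Pay the cost of verbose audit logging only for requests that
    # explicitly opt in, keeping the noise (and disk usage) low for
    # everything else.
    if headers.get(AUDIT_HEADER) == "1":
        logger.debug("request headers: %r", headers)
    # ... normal request handling continues here ...
```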

Unfortunately, my experience straddling open-source development and industry bubbles means I don’t have, overall, good hopes for an easy way to implement logging “correctly”. Both because correctly is subjective, and because I really haven’t found a good way to do this that scales all the way from a simple tool like my pdfrename to a complex Cloud-based solution. Indeed, while the former generally cares less about structured logs and request tracing, Cloud software like my planned-and-never-implemented Tanuga would get a significant benefit from using OpenTelemetry to connect feed fetching and rendering.

Flexible and configurable logging libraries, such as those available for Python, Ruby, Erlang, and many more, provide a good “starting point”, but in my experience they don’t scale well across the boundaries of an organization or unit. It’s a combination of problems similar to the schema issue and the RPC issue: within an organization you can build a convention of what you expect logs to be, and you can pay the cost of updating the configuration for all sorts of tools to do the right thing; but if you’re an end user, that’s unlikely — besides, sometimes it’s untested.

So it makes sense that, to this day, we still have a significant reliance on “simple”, unstructured text logs. They are the one universally accessible way to provide information to users. But I would argue that we would be better off building an ecosystem of pluggable, configurable backends, where the same tool, without being recompiled or edited, can be made to output simple text on the standard error stream, or to a more structured event log. Unfortunately, judging by how the FLOSS world took the idea of standardizing services’ behaviours with systemd, I doubt that’s going to happen any time soon in the wide world… but you can probably get away with it in big enough organizations that control what they run.
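A minimal sketch of what I mean, again in Python: the LOG_FORMAT environment variable and the JsonHandler here are assumptions of mine, not an existing convention, but they show the same tool picking its output backend at runtime:

```python
import json
import logging
import os
import sys


class JsonHandler(logging.Handler):
    """Emits each log record as one JSON object per line."""

    def emit(self, record):
        event = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        sys.stderr.write(json.dumps(event) + "\n")


# The user (or the organization's tooling) picks the backend at runtime,
# without the tool being recompiled or edited.
if os.environ.get("LOG_FORMAT") == "json":
    handler: logging.Handler = JsonHandler()
else:
    handler = logging.StreamHandler(sys.stderr)

logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.info("tool started")
```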

Also, as a fun related tidbit: verbose (maybe excessively so) progress logging is what made reverse engineering the OneTouch Verio so easy for me. On Windows the standard error is not usually readable… unless you run the application through a debugger. So once I did that, I could see every single part of the code as it processed the requests and responses for the device. Now, you might think that just hiding the logs by default, without documenting the flag to turn them on, would be enough — but as it turns out, as long as the logging calls are built into a binary, it’s not too hard to understand them while reverse engineering.

What this is meant to say is that, while easy access to logs is a great feature for open source tools, and for most internal tools in companies and other institutions, the same cannot be said for proprietary software: indeed, the ability to obfuscate logs, or even generate “encrypted” logs, is something that proprietary software (and hardware) vendors thrive on, as it makes reverse engineering harder. So it’s no surprise that logs are a complicated landscape, with requirements that are not only divergent, but at times opposite, between different stakeholders.

Moving Notes #3: Paperwork, Service Providers, Xenophobia

This is part 3 of a series of posts about moving apartments, after my wife and I moved last October.

So, in spite of most agents we worked with, and in spite of Rightmove letting them lie through their teeth on their pages, we eventually found a place we liked, and that we could actually rent. To be honest, we were a bit scared because we ended up renting on the higher end of our budget despite finding only a two-bedroom flat, but it gave us a good impression, and in particular it didn’t involve dealing with a property management agency.

Unfortunately, it still required dealing with the real estate agency to make sure the paperwork was taken care of: referencing, credit checks, and finally the rental agreement. Even more so because our actual landlord is once again overseas, relying instead on a family member to look after the apartment.

Tenancy Agreements

As I already noted previously, this agency required us to pay a deposit when making an offer — the only one that we found doing that. This deposit would have been refundable if our offer was not accepted, or if they proved to be “unreasonable” in their demands during the rental negotiations. If we decided to withdraw the offer, then we would have lost the deposit. But if the offer was accepted, and we completed the paperwork, the deposit would count towards the normal holding deposit.

As we completed the offer, and the landlord had been reasonable throughout, we had no problem with this particular hold. It also seemed to give the whole process a bit more “propriety”, although obviously it’s also a privilege to be able to just provide the holding deposit on the spot without having known about it beforehand.

But of course, it wouldn’t all go completely smoothly. After the offer was accepted we had two main objectives: completing the referencing process, and signing the contract. Neither was successful on the first try.

The Referencing Process

The referencing process was something we partially feared, due both to our previous experience with Dexters, and to at least one of the previous agents saying that me being on probation at work would be an issue. Unsurprisingly, given the current situation between lockdowns and Brexit, my wife had been left without a job at the time, and we were wondering how that would fly — as it turns out, it was not an issue, possibly because we had been married for nearly a year already when we started the process.

What turned out to be a problem there was our “old friend”, institutional xenophobia. The referencing process needs to be started directly by the agency, who types in your name; the system does not let you amend it, but rather just lets you note additional names you may be known by. Turns out that English agents are not very good at typing foreign names, or at splitting them into their components. Not that it is a surprise, and it’s not something I would call xenophobia in and of itself, but you can see how it becomes an extra burden: after receiving a half-filled-in form, having to go back and say “Yeah, no, that’s not my name, please write such-and-such into it.”

Where I do apply the label of xenophobia is in the service provider not even considering this option, and not allowing the referenced person to correct the spelling of their own name, despite requiring them to swear they are providing truthful information. And just like what I said previously about the three years of addresses when switching energy supplier, that’s not to say it would have been a problem for me to fill it in as “Diego Pettenò” instead of “Diego Elio Pettenò” – half the world appears to know me as the former, and that sweet tech bubble job insulates me from most of those problems – but there will be people who get in trouble for silly stuff like that.

Thankfully, of the two sets of referencing issues, only mine had further problems, and for once they had nothing to do with my nationality — just my place of work. When they asked me to provide the address of my employer, it was through a postcode lookup…

Let me explain postcode lookups for those who are not aware of how this works in the UK, as it’s not common in many other countries. Postcodes in the UK designate a very limited space — a handful of buildings in most cases. In other cases, they may designate a single building, or – as is the case for my employer’s London HQ – a single address. This means it’s not uncommon for services to ask you for your postcode first, then show you the list of addresses that correspond to it. In our flat’s case, that means around eighty different addresses, one for each flat in the building.

… and, since the postcode in my contract only has one valid address, that should have been easy. Except, when I tried submitting it, the second postcode field – not the one for the search, but the one in the actual address – complained that it was not valid. Well, it turns out the reason is that the postcode is a non-geographical one: the office used to be the Royal Mail’s headquarters, and so it was issued a custom postcode to replace their original special W1A 1HQ one.

Quite a few back-and-forths with the agency later, we managed to get all of the paperwork through and got our references, so it was time to look into signing the tenancy agreement.

Tenancy Agreement Templates

So it turns out most tenancy agreements are based on some kind of standard template, which is not surprising. Even in Ireland, the Private Residential Tenancies Board (PRTB) provides a base template for landlords to use. But here, it looks like different agencies use significantly different templates with similar, but not identical, terms.

Most of these templates have a dedicated section for an “additional schedule”, which includes additional requests from the offer – e.g. we wanted a couple of rooms emptied out, a new mattress, and authorization for a few changes – but what is not obvious to many is that existing clauses can also be waived if both parties agree to it. And indeed, on the original tenancy agreement when I moved to London, the consultancy company that was helping me asked for a number of clauses to be explicitly voided, including all of those that wouldn’t apply anyway, such as the ones talking about gardens.

So when I received the agreement draft, I went through it with a fine-tooth comb to figure out if any of the clauses made sense to remove. And I did find a few interesting ones. For instance, the tenancy agreement required us to clean internal and external windows every month. We live on the twelfth floor of a high-rise — there is no way for us to clean the external windows, and even if there was, it wouldn’t make sense for us to be responsible for that. I got the clause removed, together with a number of other ones.

Most importantly, I was looking for references to the headlease. And this is yet another thing that appears to be very UK-specific, and I’m not sure I have all the context to explain it. You see, in the UK most flats are not owned outright by the landlords who rent them out; they are instead held on a so-called “leasehold”, often for hundreds of years. Add to that a complex system of shell companies in partnership between the leaseholder of the land, the development company, the management company, and the whole set of leaseholders of the flats in the building, and in addition to the tenancy agreement you’re subject to the terms of the headlease between this… nebulous set of shell companies and your landlord.

I found this out by chance, by the way. When I moved, the consultancy company didn’t say much about it. When Dexters eventually sent me a hot water charge bill, and I was caught completely by surprise, never having heard of anything like that before, I was informed that the bill was most likely provided for in the headlease, and that as a tenant it was my right to request a copy of it to verify. I did request the copy — Dexters sent me a scanned PDF, except whoever scanned it put it through the machine at the wrong angle, so it showed a “landscape” page on screen with only two thirds of the page visible, while the right side was blank. They had to get a new copy — they hadn’t even checked it before mailing it out.

So of course the tenancy agreement had a clause that said “If this tenancy agreement is subordinated to a headlease, you will find this attached” (phrasing to the best of my knowledge). Except that no headlease was attached, so I asked for it, only to be told that no headlease was involved in the agreement. That would be surprising! Buying out a leasehold into a “freehold” is expensive, and not usually done for flats in high-rises. But so be it: I then asked, if they were absolutely certain that no headlease was involved, to remove the clause… which they did!

But here’s a trick a former colleague (Andrew) told me about: in the UK, you can find information about leases via the HM Land Registry, for a nominal fee (£3 when I used it). With that, you can find out who holds the chain of leases on an apartment. While this did not answer the question of “is a headlease involved?”, it did make clear that the flat was not a freehold, and at that point the agency was convinced to ask the landlord whether a headlease was involved; they received it, and forwarded it over. The clause in the tenancy agreement was not reinstated, though — they had already sent the lease to sign, and didn’t want to wait for the turnaround of re-signing the contract.

And just to make this clear, that doesn’t mean we want to ignore the headlease — we just wanted to make sure we wouldn’t be surprised by requests coming through that we were not aware of. Your mileage is likely to vary, but I hope this kind of information might help someone else in the future. It’s not all of my own invention — it’s applying the lessons from Getting More: I did my homework and showed it to the agency, and instead of circling around whether something was immutable or not, we treated it as business.

Suppliers, Addresses, Banks

Once we signed the agreement, it was time to sort out the various bills, addresses, and stuff like that. These mainly fell into three categories: utilities, banks, and stores. There are a few other cases besides those, but those are the big ones. The ease of updating addresses among these varied… drastically.

When it comes to utilities in particular, it’s important to know the start and end dates for the services, and most of the providers that supply a flat will handle overlaps in the supply with more or less care.

So Energy (affiliate link because I like them, and recommend them for their simple and easy-to-use interface) requires at least six weeks’ notice before the start of the new supply, allows overlap, and will roll up the amount due on the old account into the new one, but it required all the interaction to happen with a person, rather than through any automated system. This was not terrible — once I gave them enough notice and the address of the new supply, they were in no hurry to sort out the remaining details, since those were just accounting issues. Also, they preferred email over phone, which, as a millennial, I consider always a plus.

Thames Water also supported overlapping supply, but in their case the start date had to be no more than four weeks in the future when requesting the move. And while the process can be kicked off from their website, their website is prone to crashing. Like So Energy, Thames Water creates a different “account” per supply – even if you only have one online account – but unlike the other supplier, they don’t carry over the amount due, and indeed require a different direct debit to be set up, which I nearly missed.

Hyperoptic, which is an awesome Internet provider, doesn’t support overlapping supply. But on the other hand, they did switch the supply from one port to the other while I was on the phone. Yeah, the irony of the Internet provider being the one that required me to stay on the phone is not lost on me. But the whole thing took less than half an hour start to finish. Except for one issue with the MAC address.

Indeed, the MAC address of my UniFi Security Gateway doesn’t appear to work for connecting to their network — I solved the problem by changing the address on the gateway. I had actually been convinced there was some MAC filtering going on, though Hyperoptic repeatedly told me they have none… until the last time I rebooted the gateway, when I realized it’s not that the connection needs the MAC they used to see here — it just doesn’t work with the gateway’s own MAC. I’m now wondering if it might be a routing table cache somewhere.

Most banks (and yes, between the two of us that’s… a few of them, because it’s not usually worth closing old accounts) were not eventful at all. Except for two, in both directions. M&S Bank, which is operated by HSBC, was the only bank that allowed us to provide a date from which to change the address on file. I wish I had noticed that earlier, because then I would have set it in time to get at least one early proof of address.

The other case was Metro Bank, and this is worth dwelling on for a moment. All of the other banks we had to update allowed us to request the change online. Except Metro Bank. Their solution involved either going into one of their “stores” (branches), or calling them on the phone. Given that I’m still avoiding going out, I opted to call them. Unfortunately, when you call, they need to confirm your identity with a bunch of questions followed by an SMS OTP. The SMS didn’t arrive within their expected timeframe, and they pretty much just said goodbye and told me to show up in a store.

I’m not sure if the problem was due to coverage, or if the 4G/LTE support on my phone means I can’t receive SMS while on a call (I’ve heard rumors of that, but it’s outside my area of expertise). But the end result was that the messages only reached me the moment I disconnected the call. So I had a decision to make — clearly, with the current pandemic, going to a branch to fix the address wouldn’t be a good idea.

I could have ignored the address change until the pandemic situation improved, since most of the mail would be forwarded by Royal Mail for at least the next year. But then I looked around, and found that at that point NatWest had a one-off £125 offer for new customers (even those with other NatWest group accounts, like me — I still have my Northern Irish Ulster Bank account) when requesting a current account switch (which implies closing an old account).

And yes, I managed to open a new account – a joint account, while we were at it – with NatWest without having to talk to anyone on the phone. It did take a lot longer than a simple address update would have, and now we have two more of those annoying EMV-CAP devices, but it also meant the new account paid for the time it took — and it didn’t require us to leave the flat, or even speak with anyone on the phone.

Oh Yeah, The Stores

So I said above that we had to make sure to update the addresses for stores as well (and a few other things). Well, that’s not the hardest problem of course — most of the communication we get is electronic, and very few of the stores will send us physical mail. So instead of proactively going out of our way to update everything, we did what everyone does: we procrastinated. With exceptions: I wrote down on a piece of paper all of the sources of mail that sent us something at our old address in the last month, and those, together with whatever would be forwarded to the new address, would get their address updated immediately.

I say immediately, of course, but it still takes time. My wife’s membership of Cats Protection was updated fairly quickly on their backend, but the mail merge source they use to send the membership letters out takes a few weeks to update. So we received a few of those letters addressed to the old place. The only annoying part is the waste of purrfectly lovely return address labels that now bear the wrong address.

After those were tackled, there was the matter of the mail arriving for the previous occupants of the flat, who did not set up Royal Mail redirection. As it turns out, we didn’t have to do much work for that — they have a friend living in the same building (just as we do in the old building), so we were just asked to drop it off in the other mailbox. But we noticed that we kept receiving, every fortnight or so, the Harrods catalogue.

So here’s another trick for those who might not be aware: if you moved into a flat, and the previous occupier keeps receiving subscriptions, offers, discounts, and similar things, it’s perfectly reasonable to contact the senders and ask them to cancel those mailings. Most will ask you to prove you have access to the mail (not your proof of address, but proof of having the other person’s mail) and will then remove the address from their files. Turning this around: if there’s something you really care about and don’t want someone else to unsubscribe you from, make sure not to throw away the address label in one piece. If you don’t have a shredder at home, at least make sure to tear the address in half.

And to finish off the post with a note of levity: in December I also received a surprising letter from Amazon, at our old address. I say surprising, because we had made a number of purchases for the new flat (even the week before moving into it), so there was no reason for them to reach us at the old address. Even the credit card I got from them just before Prime Day had its address on file updated very quickly after I received it.

Well, it turned out to be addressed to me, but not quite at the old address. It was addressed to the address I used on AliExpress — slightly different formatting. It was addressed to the Diego Elio Pettenò who has an account for leaving reviews of terrible, cheap products, as part of the usual brushing scam. Once again, Amazon is unable to deactivate an account created in bad faith that happens to use my name (and possibly my profile picture, I cannot tell).

Video: unpaper with Meson — From DocBook to ReStructured Text

I’m breaking the post-on-Tuesday routine to share the YouTube-uploaded copy of the stream I had yesterday on Twitch. It’s the second part of the Unpaper conversion to Meson, which is basically me spending two hours faffing around with Meson and Sphinx to update how the manual page for Unpaper is generated.

I’m considering trying to keep up a bit of streaming every weekend, just to make sure I set aside some time to work on Free Software. If you think this is interesting, do let me know, as it definitely helps with motivation to know I’m not just using time that would otherwise be spent playing Fallout 76.

unpaper: re-designing an interface

This is going to be part of a series of posts that will appear over the next few months, with plans – but likely no progress – to move unpaper forward. I picked up unpaper many years ago, and I’ve run with it for a while, but besides a general “code health” pass over it, and some back and forth on how to parse images, I’ve not managed to move the project forward significantly at all. And in the spirit of what I wrote earlier, I would like to see someone else pick up the project; it’s the reason why I created an organization on GitHub to keep the repository in.

For those who are not aware, unpaper is a project I did not start — it was originally written by Jens Gulden, who I understand worked on its algorithms as part of his university work. It’s a tool that processes scanned images of text pages to make them easier to OCR, and it’s often used as a “processing step” for document processing tools, including my own.

While the tool works… acceptably well… it does have a number of issues that have always made me fairly uneasy. For instance, the command line flags are far from standard and can’t be implemented with a parser library, relying instead on a lot of custom parsing, and requiring a very long and complicated man page.

There have also been a few requests to move the implementation to a shared library that could be used directly, but I don’t feel it’s worth the hassle: the current implementation is not really thread-safe, and making it so would be a significant rework.

So I have been giving it a bit of thought. The first problem is that re-designing the command line interface would mean breaking all of the programmatic users, so it’s not an easy decision to take. Then there’s something else I learnt about that made me realize I think I know how to solve this, although it’s not going to be easy.

If you’ve been working exclusively on Linux and Unix-like systems, and still shy away from learning about what Microsoft is doing (which, to me, is a mistake), you might have missed PowerShell and its structured objects. To over-simplify: PowerShell piping doesn’t just pipe text from one command to another, but structured objects that stay structured on the way in and out.

While PowerShell is available for Linux nowadays, I do not think that tying unpaper to it is a sensible option, so I’m not even suggesting that. But I also found out that the ip command (from iproute2) has recently received a -J option which, instead of printing the usual complex mix of parsable and barely human-readable output, generates a JSON document with the same information. This makes it much easier to extract the information you need, particularly with a tool like jq available, which allows “massaging” the data on the command line easily. I have actually used this “trick” at work recently. It’s a very similar idea to RPC, but with a discrete binary.
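As an example, extracting the configured IPv4 addresses from that JSON becomes trivial. Here is a sketch in Python (the keys match what recent iproute2 versions emit, but treat the exact field names as illustrative, since they may vary between versions):

```python
import json
import subprocess

# Roughly equivalent to the jq one-liner:
#   ip -J addr show | jq -r '.[] | .ifname as $i | .addr_info[]
#     | select(.family == "inet") | "\($i): \(.local)/\(.prefixlen)"'
result = subprocess.run(
    ["ip", "-J", "addr", "show"],
    capture_output=True, text=True, check=True,
)

for interface in json.loads(result.stdout):
    for addr in interface.get("addr_info", []):
        if addr.get("family") == "inet":
            print(f"{interface['ifname']}: {addr['local']}/{addr['prefixlen']}")
```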

So with this knowledge in my head, I have a fairly clear idea of what I would like to have as an interface for a future unpaper.

First of all, it should be two separate command line tools — they may both be written in C, or the first one might be written in Python or any other language. The job of this language-flexible tool is to be the new unpaper command line executable. It should accept exactly the same command line interface as the current binary, but implement none of the algorithmic or transformation logic.

The other tool should be written in C, because it should just contain all the current processing code. But instead of doing complex parsing of the command line interface, it should read from its standard input a JSON document providing all of the parameters for the “job”.

Similarly, some changes are needed to the programs’ output. Some of the information, particularly debugging, that is currently printed on the stderr stream should stay exactly where it is; but as for the standard output, I think it makes significantly more sense for the processing tool to emit another JSON document, and for the interface to convert that to human-readable form.

Now, with proper documentation of the JSON schema, software using unpaper as a processing step can just build its own job document and skip the “human” interface. It would even make it much easier to write extensions in Python, Ruby, or any other language, as it would allow exposing a job configuration generator following each language’s own style.
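To make this concrete, here is a hypothetical sketch of a job document being built and piped to the processing tool from Python; the unpaper-process name and every field in the document are inventions of mine, since none of this exists yet:

```python
import json
import subprocess

# Entirely hypothetical: neither the `unpaper-process` binary nor this
# job-document schema exists; it's only a sketch of the proposed interface.
job = {
    "input": ["scan-001.pbm", "scan-002.pbm"],
    "output": "page-%03d.pbm",
    "filters": {
        "deskew": True,
        "border-scan": {"direction": "vertical"},
    },
}

proc = subprocess.run(
    ["unpaper-process"],
    input=json.dumps(job),
    capture_output=True, text=True, check=True,
)

# Per the output redesign above, the processing tool would reply with
# another JSON document on its standard output.
result = json.loads(proc.stdout)
print(result)
```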

Someone might wonder why I’m talking about JSON in particular — there are dozens of different structured data formats that could be used, including Protocol Buffers. As I said a number of months ago, the important part of a data format is its schema, so the actual format wouldn’t be much of a choice. But on the other hand, JSON is a very flexible format that has good implementations in most languages, including C (which is important, since the unpaper algorithms are implemented in C, and – short of moving to Rust – I don’t want to change language).

But there’s something even more important than the language support, which I already noted above: jq. This is an astounding work of engineering, making it so much easier to handle JSON documents, particularly inline between programs. And that is the killer reason to use JSON for me. Because that gives even more flexibility to an interface that, for the longest time, felt too rigid to me.

So if you’re interested in contributing to an open source project with no particular timeline pressure, and you’re comfortable writing C — please reach out, whether it’s to ask clarifying questions, or with a pull request implementing this idea altogether.

And don’t forget, there’s still the Meson conversion project which I also discussed previously. For that one, some of the tasks are not even C projects! It needs someone to take the time to rewrite the man page in Sphinx, and also someone to rewrite the testing setup to be driven by Python, rather than the current mess of Automake and custom shell scripts.

NewsBlur Review

One of the very, very common refrains I hear in my circles — probably because my circles are full of its ex-users, as well as Googlers and Xooglers — is that the Internet changed when Google Reader was shut down, and that we will never be able to go back. This is something I don’t quite buy outright: Google Reader, like most similar tools, was used only by a subset of the general population, while other tools, such as social networks, started being widely used right around the same time.

But amid all the moaning about Google Reader not existing anymore, I rarely hear much willingness to look for alternatives. Sure, there was huge noise about options back then — what I previously called the “Google Reader Exodus” — but I rarely hear of much else. I see tweets go by of people wishing Reader still existed, but I don’t think I’ve seen many willing to go out of their way to do something about it.

Important aside here: while I did, in effect, work at Google when Reader was shut down, the plan was announced between me signing my contract and my start date. And obviously it was not something decided there and then, but rather a long-term decision taken who knows how long before. So while I was at Google for the “funeral”, I had no say in, or knowledge of, any of it.

Well, the good news is that NewsBlur, which I started using right before the Reader shutdown, is still my favourite tool for this; it’s open source, and it has a hosted service that costs a reasonable $36/year. And it doesn’t even have a referral program, so if you had any suspicion of me shilling, you can drop it now.

So first of all, NewsBlur has enough layout options to look very much like the Google Reader “of back then” — before Google+, and before the loss of the “Shared Stories” feature. Indeed, it supports both its own followers/following lists and global sharing of stories on the platform. And you don’t even need to be a user to follow what I share on it, since it also automatically creates a blurblog, which you can subscribe to with whatever you want.

I have in the past used IFTTT to integrate further features, including saving stories to Pocket, and sharing stories on Twitter, Facebook, and LinkedIn. Unfortunately, while NewsBlur has great integration, IFTTT is now a $4/month service, which does not offer nearly enough for me to consider subscribing, sorry. So for now I’m talking about built-in features only.

In addition to the sharing features, NewsBlur has what is, for me, one of the killer features: the “Intelligence Trainer”. It’s not any type of machine learning system, but rather a way for you to tell NewsBlur to hide, or highlight, certain content. This is very similar to a feature I would have wanted twelve years ago: filtering. Indeed, this allowed me to hide my own posts from Gentoo Universe – back when I was involved in the project – and to only read Matthew’s blog posts in one of the many Planets he’s syndicated on, like I wanted. But there’s much more to it.

I use this to this day to hide repetitive posts (e.g. status updates for certain projects aggregated together with blogs), and to stop reading authors that didn’t interest me or wrote in languages I couldn’t read. But I have also used the “highlighting” feature to know when a friend posted on another Planet, or to get news of releases or tours from metal bands I followed, through some of the dedicated websites’ feeds.

But where this becomes extremely interesting is when you combine it with another feature that nowadays I couldn’t go without, particularly as so much content that used to be available as blogs, sites, and feeds is becoming newsletters: the ability to receive email newsletters and turn them into a feed. I do this for quite a few of them: the Adafruit Python for Microcontrollers newsletter (which admittedly is also available through their blog), the new ticket alerts from a bunch of different venues (admittedly not very useful this year), Tor.com, and Patreon.

And since the intelligence trainer does not need tags or authors to work with, but can match a substring in the title (subject), it makes an awesome tool to filter particular messages out of a newsletter. For instance, while I do support a number of creators on Patreon, a few of them share all their public videos as updates — I don’t need to see those in the Patreon feed, as I get them directly at the source, so I hide those particular series from the Patreon feed for myself. Instead, while I can wait for most Tor.com releases, I do want to know quickly if they are giving away a free book, or if there’s a new release from John Scalzi that I missed. And again, the highlighting helps me there: it makes a green counter appear next to the “feed”, telling me there’s something I want to look at sooner rather than later.

As I said, the intelligence trainer doesn't have to use tags — but it can use them if they are there. So for instance, for this very blog, if I were to post something in Italian and you couldn't read it, you could train NewsBlur to hide posts in Italian. Or if you think my opinions are useless, you can just hide those, too.

But this is not where it ends. Besides having an awesome implementation of HTTP, which supports all the bandwidth-saving optimizations I know of, NewsBlur thinks about the user a lot more than Google Reader would have. Whenever you decide to do some spring cleaning of your subscriptions, NewsBlur will email you an OPML file with all of your subscribed feeds as they were before you made the first change (of the day, I think). That way you never risk deleting a subscription without having a way to find it again. And it supports muting sites, so you don't need to unsubscribe just to avoid a high count of unread posts from, say, a frequent flyers' blog during a pandemic.

Plus it’s extremely tweakable and customizable — you can choose to see the stories as they appear in the feed, or load into a frame the original website linked by the story, or try to extract the story content from the linked site (the “reader mode”).

Overall, I can only suggest, to those who keep complaining about Google Reader's demise, that it's always a good time to join NewsBlur instead.

Moving Notes #2: Sigh, Agents

This is part two in a series. See the previous post for a bit more context.

When we embarked on this whole process, I had very little experience with moving and flathunting: I had lived in my mother's house back in Italy; in Ireland I found an apartment fairly quickly thanks to a colleague "passing on" a viewing he didn't need; and in London I found the flat through the relocation consultants that were assigned to me after the move. The same was true for my wife, who had mostly lived in flatshares before.

And in the middle of a pandemic, the flathunting process seemed even more annoying, as it had a number of immediate and delayed effects. The first one was restricting how far we were willing to move. While the whole situation meant that work was not expecting me back at the office for quite a while longer, so we could have looked at options further away from London, such as Birmingham (which we had briefly considered before, particularly as I was looking for a new job earlier in the year), going to find a place would still have involved significant travel on mass transit (trains) and time spent in shared accommodation (hotels). Plus the risk of being stuck there if a new lockdown was announced before we found a place.

So at the end of the day, we decided to focus on the same area of West London where we had been living. This had the non-negligible advantage of letting us keep the "support network" of friends we found here – most of them while playing Pokémon Go, of all things – and of sensible takeaways, shops, delivery services, …

It also had an effect that I hadn't figured out when we started. As we knew that virtual viewings weren't going to be particularly useful to gauge a new place, including the feel of the area or neighbourhood, we had to take a difficult decision: as my health issues make me particularly vulnerable to Covid-19, my wife would take the vast majority of the viewings. What we didn't realize then is that the real estate agents wouldn't be able to drive her to the apartments they selected — and they totally failed to account for the walking time between different flats.

While she’s perfectly capable of walking miles, and she did – including hatching a number of Pokémon eggs! – when an agent books two flats that are 40 minutes walk apart to be viewed within 20 minutes of each other, you know that something is wrong. If something could have managed to make me more annoyed at car users who can’t figure out not everyone wants to be in a car all the time.

Eyes On The Prize

Before I start rambling on about the horrible service provided by most of the agencies I dealt with, let me explain what it was that we were looking for.

When we started the process, we weren't sure we would leave the apartment. We had just been informed that our landlord was trying to sell it, and that if he did, we would have some time after the sale process started to find a new place. Then again, as I said in the previous part, we got to the point where the agency dropped so many balls that we felt compelled to leave anyway.

And while the apartment we were living in was doing okay for us, besides the noise and the agency, there were a few things we were happy to change when moving. The flat we had been living in was the one I chose myself when I moved to London: a bachelor working at an office, with an occasional need to work from home, with the far-fetched possibility of hosting guests for board games (it only happened a handful of times in three years), and an even less likely chance of hosting friends visiting from abroad (I did technically have space for one person to sleep over, but it turns out that, living sandwiched between three hotels, it's much easier to just let them have their own space).

As by then it was clear that there wouldn't be a commute to the office in my plans for at least another year, the office needed more space (particularly, storage space), and it would be used nearly exclusively for working rather than gaming. It turns out that after spending eight hours in the same room having meetings and writing docs, you don't feel very good about sitting in the same chair to fire up a game, even one you like a lot.

What we definitely wanted was to keep Hyperoptic as our ISP or, if that wasn't possible, to at least have another gigabit fiber provider. It turns out to be very useful not to have to worry about my wife's streaming lectures while I'm in a meeting. Plus, Hyperoptic's support has been among the best I've ever dealt with, and I know how annoying ISPs can be.

So our aim was, if possible, to find a three-bedroom flat – that way we could each have our own home office, and we would have more space to "change the view" – with Hyperoptic. But we would also have been happy to settle for a more spacious, or more comfortable, two-bedroom, particularly one where the master bedroom is not shaped like an S-tetromino like in our previous flat.

It’s 2020, Learn Your ISPs!

I hate the words "unskilled labour", because they fail to convey the importance of a variety of skills, but I would be lying if I said I hadn't chuckled at people calling real estate agents that in the past. The reason is that three years ago I had significantly different experiences with the best and the worst agents I interacted with. This time wasn't an improvement. But before I go on ranting, let me say that there's plenty of skill in being a successful real estate agent — we could tell fairly quickly who was safe to deal with and who to run away from. So, kudos to the good ones; it's not an easy job.

The first problem with pretty much all of the agencies (except one) was that, when contacted through an aggregator such as Rightmove, they would ignore the details provided in the contact form. I had explicitly sent a message stating that I'd like to book a viewing for the shown property, and possibly a selection of similar flats with Hyperoptic or a similar level of connectivity. I also stated that I was busy with work and meetings and wouldn't be able to take phone calls easily, so email would be my preferred contact method.

Only one agency read the message and followed up on it. And it turned out to be the most professional one we dealt with. So let me praise them: riverhomes, and Tamir Gotfried in particular, did an exceptional job of taking in our requirements and not wasting our time showing us unrelated or unsuitable flats. Unfortunately, they didn't have a flat that fitted our requirements (Virgin Media being the best ISP they had available at that point — and I have personal reasons for sticking with Hyperoptic, at least).

From nearly every other agent, we got the same kind of excuses about not knowing which ISPs would be available — or not knowing how to check. Let me be clear here: I have no problem with checking that myself, but most of the agents refused to give us the addresses of the flats they wanted to show us until the day of the viewing, if not after it. So instead of giving us a list we could pre-filter, they insisted on showing us a lot of flats that had VDSL as the best connectivity option.

Now, the Rightmove website (though at the time not their mobile app) had a drop-down from CompareTheMarket showing the average speed available "at the postcode" — which for us would have been a good proxy, as we were looking for a flat in an apartment building, and buildings generally get their own postcodes. Unfortunately, most agencies lie through their teeth on Rightmove (we'll get back to that in a moment).

This is not particularly new. When I moved here, I had one agent insisting that a 25Mbit DSL line, which the landlord had subscribed for the flat and which couldn't be changed, was "fiber". She wouldn't accept my point that it simply wasn't. Sure, the marketing material may call it "fiber-powered" or "SuperFast", but it's not fiber in any way, shape, or form. And even in 2017, I expected that an agent who can tell me whether a flat has floor heating or radiators should also be able to tell me whether it has DSL or fiber.

On the other hand, the agent who showed us the apartment we eventually rented said it had floor heating, while the only heating we have is heat pump-based.

Do You Even Rent?

As I said, all of the agencies besides one ended up being a lost cause. Overall, the worst experience we had was with Foxtons — and it feels like we dodged a bullet with an agency even worse than Dexters. But similar problems appeared with many.

Among the selections of flats we saw with different agencies, there was one flat one floor up from a nursery, with the balcony overlooking their back lot. We're a childfree couple – as I noted when talking about Sarah Millican, who makes us feel quite a bit more welcome than others do – and that kind of flat would be a very bad fit. And, by the way, that's an important part: if I chose a flat knowing that there's a nursery literally under my feet, and then complained about the noise, I would be a horrible person. Instead, I just want my peace and quiet, and will avoid that location, full stop.

Another flat had a thermostat (or possibly AC control unit) that was enclosed on all sides by the back wall of a “built-in” cabinet. With no separate sensor. It’s a great way to have basically no control over the heating in your bedroom, but the agent couldn’t even tell that this would be a problem. Maybe not even the landlord. As we saw a different flat in the same building with Foxtons, we also found that the built-in wardrobe was not part of the first flat at all — it was probably added to look like the flats in the upper floors, but for those, the thermostat is by the door, and outside of the cabinet.

Speaking of Foxtons, the first few options they showed us were not exactly what we were looking for. When they asked us our “approximate budget”, I gave them a bit of leeway in what to show us, and said there would be a bit of room to stretch. The stuff they showed us at first was well within the budget, even conservative I’d say… but smaller (and significantly so) than the place we were living in. So I explicitly pointed at one of their properties and said “Here’s more of what we’ve been looking for — this one seems well out of our budget right now, but if there’s any chance for it to drop by 10%, we’d be happy to stretch our budget to meet it.” And that kind-of helped.

Aside: the reason why I pointed at a flat outside our budget and asked if it could come down is to apply a bit more of the techniques discussed in Getting More. We did our homework: we knew that the rent asked for the property was on the high side of the market at that point, and we could tell from Rightmove that the flat hadn't moved in a number of months. There was a chance that the 10% discount would still cost the landlord less than not finding anyone to rent the flat at all.

I say kind of, because the agent then did propose showing us a few more flats that, overall, did fit our needs a bit better — except that only the one we had pointed at had Hyperoptic available. One of them was still tempting, and we were very disappointed by the lack of ISP options, given that we knew the building right next door was Hyperoptic-ready. But it was also a "duplex" (which in this country means on two floors, but is a word that would confuse most Italians), and my wife was (reasonably) worried about me trying to go downstairs to grab sugar during a sugar low. I had already nearly fallen on the stairs during the viewing.

But we did end up seeing the flat we pointed at at first. It turned out to be even more spacious than the pictures showed, but it was also… dirty. I can't use any other word: the wall over the cabinets was full of black spots that looked like mold, the extraction fans had dust bunnies visible inside, and in general it seemed to have quite the layer of grime all over — partially understandable, given that it still had tenants living in it. We still put in an offer for a rent a bit higher than we were hoping for, but still within the "stretchable" part of the budget, and on the advice of the agent, we suggested a three-year contract — the landlord was supposedly looking for someone to stay long term.

“I don’t feel comfortable renting from your agency”

After we put in the offer, the trouble started — the first call (from another agent at a different office) was to tell us that the landlord wouldn't accept a three-year contract, and wanted a single-year renewable contract instead. They also wouldn't accept our first rent offer, and asked what our best would be. I said we could go up £50/month but no more; since that wasn't enough, they tried to convince me by reminding me that I wouldn't have to pay for heating — because that's part of the service charge, and so paid by the landlord, and according to them the law had changed so that charging it back to tenants was no longer possible. It started smelling fishy, but I relented, and agreed to go up to £100 above our original offer.

The second call informed us that the tenants at the time wouldn't be leaving on the 1st of October as originally intended, so we wouldn't be able to move into the property on the 15th as discussed. Instead the tenants would be leaving (hopefully) on the 1st of November. This is, unsurprisingly, Covid-19 related: the tenants were going to fly back to their country, but the October flights were cancelled, so hopefully they'd make it in November. I was sceptical at the time, but I never bothered to check whether those flights resumed at all. That had us a bit worried, but since at that point we hadn't yet given notice for our flat, we were okay with pushing it back one more month. Cue call number three, asking us to move in on the 1st of November rather than the 15th — despite the fact that our tenancy was terminating on the 26th, so the options would have been no overlap, or a much longer overlap than expected.

The fourth, but not final, call was to let us know that once the agents explained to the landlord that they wouldn't be able to charge the hot water to us anymore, the landlord decided that not just our offer, but even the advertised rent, was too low! Indeed, they decided to ask for more than 10% above the originally advertised rent. We said we were no longer interested in the property, and thought we had left it at that.

Yet another call, this time from the agent who had shown us around, had her just short of begging us to reconsider — saying that the rent would be "all bills inclusive, except council tax". I said I wouldn't trust it, but that we'd think about it, while I did the math. The only way the increase in rent would even cover the cost of the bills would be if the heating cost more than double what we were paying for the two-bed (which sounded unlikely), and if they also paid for the same Hyperoptic service we had. But it would also have meant that we wouldn't have control over the bills, and the whole thing sounded very unlikely.

In particular, what they said about the hot water not being chargeable to the tenants was totally a lie. While the management company for the development (which is still the same for the old apartment, our current apartment, and the apartment we were discussing) did make things more complicated by not issuing separate hot water bills, hot water is counted as a utility and can be charged to the tenants. So, I really doubt that it was going to be "all bills included".

Anyway, at that point we started looking further afield, and since we had done the math for stretching the budget, we started looking at slightly higher rents too. That turned up other snags, which you can continue reading about afterwards, but it also meant we found the flat we currently live in, through another agency altogether. That agency, by the way, requires you to pay a deposit when you make an offer, which is refundable only if the offer is not accepted, and not if you withdraw it.

After that, the Foxtons agent who had shown us around contacted us asking to show us three more flats: one in the same development, one across the street, and one… well, the last one we don't know, because between the night before and the day we were supposed to see it, the flat was taken off the market. But this time, we were promised no more back-and-forth: the flats were managed from the same office as the agent, and her own manager would be the point of contact.

One of the flats was actually interesting. While the total square footage was no higher than the one we did end up renting, it was a three-bedroom apartment — so smaller rooms, but more space for privacy. And supposedly we could have had it for a bit less than what we ended up paying (even accounting for the lost offer deposit). We considered it, and put in an offer with a couple of requirements (namely, to remove the furniture that would be redundant for us, and to get the Hyperoptic socket installed — the flat was "ready", but the socket was never installed).

Then we got another one of those calls we had started dreading from them: the landlord appeared to have accepted an offer from another couple some time before, through a different agency, but then some money didn't change hands, and so it wasn't clear whether the place was officially rented or not. She would call us back by the afternoon to confirm. We heard nothing until 8pm, by which time we had sent an email pointing out that we weren't interested in the property anymore, and that we would take our offer elsewhere.

The day after, the agent tried to call me (I was in a meeting and couldn't pick up), texted me, spoke with my wife, and texted her, trying to convince us to see a few more properties. I had to be rude and state explicitly that, by that point, we wouldn't feel comfortable renting a property from Foxtons, since two of the flats we had considered with them came with so much drama.

Agents, Lies, and Rightmove

Rightmove is probably the most commonly used website to look for housing, to rent or buy, in this country. It aggregates listings from any agency that wants to publish them (for a fee, I assume), and provides a way to contact the agencies without exposing too much personal information up front.

Unfortunately, it’s also a nest of liars.

Since we were looking for properties not too far from where we were living, we already knew quite a bit about the area. So when we saw a listing with a GPS point attached to one of the fancy, posh buildings of the development, but with a name referencing one of the older buildings still waiting for their cladding to be fixed, we knew we were being taken for fools.

Some of the listings are just slightly confusing. The flat we used to live in was advertised as having a "residents' gym", which turned out to be a half-truth: there is a residents' gym, and technically we could have gotten access to it, but when my wife went to check, the management company asked her to pay around £200. Turns out that being a resident is a necessary condition, but not a sufficient one. Leaseholders get free access by virtue of paying the service charge, but tenants need to pay separately. Except some tenants might have access already, because the landlord already got their fob on the access list, and nobody is checking. Our fobs were not on the access list.

The same flat was advertised for sale (and still is at the time of writing) as having a concierge service. There is no concierge service for the building we lived in. I think it was meant to be there, as there is a strange door on the corner that looks like it could have been a concierge, but it wasn’t — only two buildings in the development have a concierge, and that was not one of them.

But the biggest lie is for properties that are not actually on the market at all! We found lots of those, and I complained to Rightmove about it. The first one we saw with Foxtons was from a building that’s still being finished, so they are releasing flats in “drops” — when my wife went there with the agent, the building’s concierge told them that they didn’t have any available flats to show. When we contacted another agency because of a very nice looking, spacious apartment down the road, we found out that it was not available at all.

Turns out that my complaints to Rightmove fell on deaf ears: according to them, even if a flat is already off the market because someone signed up for it, the agencies are not required to take it off the site at all. They may mark it as "let agreed", but they are not required to by the Terms of Service. The only point at which they are required to remove it from the site is when the new tenants move in.

So it seems like most agencies have an incentive to let their best properties months before anyone moves in, and to keep them on Rightmove as a way to catch contacts. That way they can show you something else, which might not be what you're looking for, but on which they might have better margins.

Final Results

In the end we settled for a two-bedroom apartment, like we had before. We stretched our budget, I want to say, significantly, in part considering the likelihood of spending at least one year, possibly more, working from home, and thus wanting a more comfortable living situation for the time being. We didn't move very far — we are literally in the next building over, and the savings from doing the move mostly ourselves (more on that in a future post) probably made up for the first year of extra rent.

The agency we found the flat with was one of those with the trap listings, but they acted more professionally than Foxtons overall (again, there will be more to say about that), and we no longer have to deal with a property management agency.

But of course the trouble, or the annoyances, didn’t just disappear after finding a flat, so you’ll be able to read more notes and more trouble later on.

Software Defined Remote Control

A number of months ago I spoke about trying to control a number of TV features in Python. While I did manage to get some of the adapter boards I thought I would use printed, I hadn't had the time to work on the software to control them before we started looking for a new place, which meant I shelved the project until we could get to the new place; and once we got there, it was a matter of getting settled in, and then… you get the idea.

As it turns out, I had one week free at the end of November — my employer decided to give us three extra days around the (US) Thanksgiving week, and since my birthday fell at the end of that week, I took the remaining two days off myself to make a nice nine contiguous days off. A perfect timeframe to go and hack on a project such as this.

Also, one thing changed significantly since the time I started thinking about this: I started using Home Assistant. And while it started mostly as a way for me to keep an eye on the temperature of the winter garden, I found that with a bit of configuration, and a pull request, changing the input on my receiver with it was actually easier than using the remote control and trying to remember which input was mapped to what.

That finally gave me the idea of how to implement my TV input switch tree: expose it as one or more media players in Home Assistant!

Bad (Hardware) Choices

Unfortunately, as soon as I went to start implementing the switching code, I found out that I had made a big mistake in my assumptions: the Adafruit FT232H breakout board does not support PWM outputs, including the general time-based pulsing (without a carrier frequency). Indeed, while the Blinka library can technically support some of these features, it seems like none of the Linux-running platforms can actually manage it. So there went my option of just using a computer to drive the "fake remote" outputs directly — at least without rewriting it all in some other language and finding a different way to send that kind of signal.

I looked around for a few more options, but all of it ended up being some compromise: MicroPython doesn’t have a very usable PulseOut library as far as I could tell; Arduino-based libraries don’t seem to allow two outputs to happen at roughly the same time; and as I’m sure I already noted in passing, CircuitPython lacks a good “secondary channel” to be instructed from a computer (the serial interface is shared with the REPL control, and the HID is gadget-to-host only).

After poking around a few options, and very briefly considering writing my own C version for an ATmega, I decided to go for the path of least resistance: back to CircuitPython, trying to work with the serial interface and its "standard input" to the software.

The problem with doing that is that Ctrl-C is intended to interrupt the running program, which means you cannot send the byte 0x03 un-escaped. In the end I thought about it, and decided that CircuitPython is powerful enough that just sending the commands in ASCII wouldn't be an issue. So I decided to write a simplistic Flask app that would take a request over HTTP and send the corresponding command via the serial port. It worked, sort of. Sometimes while debugging I would end up locking the device (a Trinket M0) in the REPL, and that meant the commands wouldn't be sent.

The solution I came up with was to reset the board every time I started the app, by sending Ctrl-C and Ctrl-D (0x03, 0x04) to force the board to reset. It worked much better.
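
To give an idea of the shape of it, here is a minimal sketch (not my actual code): the serial device path, the endpoint, and the "INPUT" command syntax are all made up for illustration.

```python
# A minimal sketch of the HTTP-to-serial bridge. The device path, endpoint,
# and ASCII command format are illustrative assumptions.
import serial  # pyserial
from flask import Flask

app = Flask(__name__)

# The Trinket M0 shows up as a USB CDC serial device; the path is an assumption.
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)

# Reset the board on startup: Ctrl-C (0x03) breaks into the REPL,
# Ctrl-D (0x04) reloads code.py, so we always start from a known state.
port.write(b"\x03\x04")

@app.route("/input/<name>", methods=["POST"])
def select_input(name: str):
    # Commands are plain ASCII lines, so no byte ever needs escaping.
    port.write(f"INPUT {name}\r\n".encode("ascii"))
    return {"status": "ok"}
```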

Not-Quite-Remote Controlled HDMI Switch

Once that worked, the next problem was making sure the commands actually did something. The first component I needed to send commands to was the HDMI switch: a no-brand AliExpress-special. It has one very nice feature for what I need to do right now: while it obviously comes with an infrared remote control – one of those thin, plasticky dome ones – the receiver for it is on a cord, connected with a pretty much standard 3.5mm "audio jack".

This is not uncommon. Randomly searching Amazon or AliExpress for "HDMI switch remote" will find you a number of different, newer switches that use the same remote receiver, or something very similar to it. I'm not sure if the receivers are compatible with each other, but the whole idea is the same: by using a separate receiver, you can stick the HDMI switch behind a TV, for instance, and just have the receiver poke out from below. And most receivers appear to be just a dome-encased TSOP17xx receiver — a 3-pin IC, which maps neatly onto a TRS connector.

When trying this out, I found that I could use a Y-cable to let both the original receiver and my board send signals to the switch — at which point I can send in my own pulses, without even bothering with the carrier frequency (refer to the previous post for details on this, it's long). The way the signal is sent, the pulses need to ground the "signal" line (which usually sits at 5V); to avoid messing up the different supplies, I went through an opto-coupler, since they are shockingly cheap when bought in bulk.

But when I tried setting this up to select an input, I couldn't get the switch to see my signal at all. This turned out to require an annoying physical debugging session with the Saleae and my TRRS-to-Saleae adapter (which I have still not released, sorry folks!), and it showed that I was a bit off on the timing of the NEC protocol the switch uses for its remote control. This is now fixed in the pysirc library that generates the pulses.
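
For reference, the framing I got wrong looks roughly like this. This is a sketch of the NEC protocol itself, not of pysirc's actual API:

```python
# A sketch of NEC frame generation: returns alternating mark/space durations
# in microseconds, ready to be sent as IR pulses.
def nec_pulses(address: int, command: int) -> list:
    durations = [9000, 4500]  # 9ms leading mark, 4.5ms space
    # NEC sends address, inverted address, command, inverted command, LSB first.
    for byte in (address, address ^ 0xFF, command, command ^ 0xFF):
        for bit in range(8):
            durations.append(562)  # each bit starts with a ~562µs mark...
            # ...followed by a long space for a 1, or a short space for a 0.
            durations.append(1687 if (byte >> bit) & 1 else 562)
    durations.append(562)  # trailing mark to terminate the last space
    return durations
```

Get any of those durations sufficiently wrong, and the receiver simply ignores the frame, which is effectively what was happening to me.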

Once I got the input selection working for the switch with the Flask app, I turned to Home Assistant and added a custom component that exposes the switch as a "media_player" platform. Shown in a constant state of "Idle" (since the switch has no concept of on or off), it allowed me and my wife to change the input while seeing the names of the devices, without hunting for the tiny remote, and without having to dance around to be seen by the receiver. It was already a huge improvement.
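
For the curious, the gist of such a platform, heavily simplified, looks something like the sketch below. The sources, entity name, and HTTP endpoint are hypothetical, and the Home Assistant API has changed since then:

```python
# custom_components/hdmi_switch/media_player.py — a simplified sketch of a
# media_player platform that proxies source selection to the Flask app.
import requests

from homeassistant.components.media_player import MediaPlayerEntity
from homeassistant.components.media_player.const import SUPPORT_SELECT_SOURCE
from homeassistant.const import STATE_IDLE

SOURCES = ["Portal", "Chromecast", "Kodi", "Switch"]  # hypothetical inputs

def setup_platform(hass, config, add_entities, discovery_info=None):
    add_entities([HdmiSwitchPlayer("http://localhost:5000")])

class HdmiSwitchPlayer(MediaPlayerEntity):
    def __init__(self, base_url):
        self._base_url = base_url
        self._source = None

    @property
    def name(self):
        return "HDMI Switch"

    @property
    def state(self):
        # The switch has no concept of on/off, so it always reports idle.
        return STATE_IDLE

    @property
    def supported_features(self):
        return SUPPORT_SELECT_SOURCE

    @property
    def source_list(self):
        return SOURCES

    @property
    def source(self):
        return self._source

    def select_source(self, source):
        # Forward the request to the Flask app driving the board.
        requests.post(f"{self._base_url}/input/{source}")
        self._source = source
```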

But it wasn’t quite enough where I wanted it to be. In particular, when our family calls on Messenger, we would like to be able to just turn on the TV selected to the right input. While this was partially possible (Google Assistant can turn on a TV with a Chromecast), and we could have tried wiring up the Nabu Casa integration to select the input of the HDMI switch, it would have not worked right if the last thing we used the TV for was the Nintendo Switch (not to be confused with the HDMI switch) or for Kodi — those are connected via a Yamaha receiver, on a different input of the TV set!

Enter Sony

But again, this was supposed to be working — the adapter board included a connection for an infrared LED, and that should have worked to send out the Sony SIRC commands. Well, except it didn’t, and that turned out to be another wild goose chase.

First, I was afraid that when I fixed the NEC timing I had broken the SIRC one — but no. To confirm this, and to make the rest of my integration easier, I took the Feather M4 to which I had hard-soldered a Sony-compatible IR LED, and wrote what is the eponymous software defined remote control: a CircuitPython program that includes a few useful commands, and abstractions, to control a Sony device. For… reasons, I have added VCR as the only option besides TV; if you happen to have a Sony Blu-ray player and want to figure out which device ID it uses, please feel free.
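
The core of it is small enough to sketch here. This is a minimal approximation rather than the actual program: the pin is an assumption, the PulseOut-wrapping-PWMOut construction matches the CircuitPython releases of that era, and the command numbers are the commonly documented ones:

```python
# A minimal CircuitPython sketch of sending a 12-bit SIRC frame on a 40kHz
# carrier. The pin choice is an assumption; on CircuitPython 5.x/6.x,
# PulseOut wraps a PWMOut that provides the carrier.
import array
import time

import board
import pulseio

carrier = pulseio.PWMOut(board.D5, frequency=40000, duty_cycle=2 ** 15)
pulses = pulseio.PulseOut(carrier)

def send_sirc(device: int, command: int):
    """Send a 12-bit SIRC frame: 7 command bits, then 5 device bits, LSB first."""
    durations = [2400, 600]  # 2.4ms start mark, 0.6ms space
    bits = (command & 0x7F) | ((device & 0x1F) << 7)
    for i in range(12):
        durations.append(1200 if (bits >> i) & 1 else 600)  # 1=long, 0=short mark
        durations.append(600)  # every bit ends with a 0.6ms space
    frame = array.array("H", durations)
    for _ in range(3):  # SIRC commands are conventionally repeated three times
        pulses.send(frame)
        time.sleep(0.025)

send_sirc(1, 21)  # device 1 is TV; 21 is the commonly documented power toggle
```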

It might sound silly, but I remember seeing a UX research paper from the '90s about using gesture recognition on a touchpad to allow for more compact remote controls. Well, if you wanted, you could easily turn this CircuitPython example into a touchscreen remote control for any Sony device, as long as you can find all the right device IDs, and hard-code a bunch of additional commands.

So, once I knew that at least on the software side I was perfectly capable of controlling the Sony TV, I had to go and do more hardware debugging with the Saleae, but this time with the probes directly on the breadboard, as I had no TRS cable to connect to. And that was… a lot of work, to rewire stuff and try.

The first problem was that the carrier frequency was totally off. The SIRC protocol specifies a 40kHz carrier, which is supposedly easier to generate than the 38kHz used by NEC and others, but somehow the Saleae was recording a very variable frequency, oscillating between 37kHz and 41kHz. So I was afraid that trying to run two PWM outputs on the Trinket M0 was a bad idea, even if one of them was set to nought hertz — as I said, the HDMI switch doesn't need a carrier frequency.

I did toy briefly with the idea of generating the 40kHz carrier wave separately, and just gating it with the same type of signal I used for the HDMI switch. Supposedly, 40kHz generators are easy, but at least the circuits I found at first glance require a part (a 640kHz resonator) that is nearly impossible to find in 2020. It probably fell out of use. But as it turns out, it wouldn't have helped.

Instead, I took another Feather. Since I had run out of M4s, except for the one I had already hardwired an IR LED to, I pulled out the nRF52840 that I had bought and barely played with. It should have been plenty capable of giving me a clean 40kHz signal, and it indeed was.

At that point I noticed another problem, though: I had totally screwed up the adapter board. On my Feather M4, the IR LED was connected directly between 3V and the transistor switching it. A bit out of spec, but not uncommon, given that it's flashed only for very brief impulses. When I designed the adapter, on the other hand, I connected it to the 5V rail. Oops, that's not what I was meant to be doing! And I did indeed burn out the IR LED with it, so I had to solder a new one onto the cable.

Once I fixed that, I hit yet another issue: I could now turn the TV on and off with my app, but the switch stopped responding to commands, whether from the app or from the original remote! Another round with the Saleae (probably one of my favourite tools — yes, I splurged when I bought it, but it's turning out to be an awesome tool to have around after all), and I found that the signal line was being held low — because the output pin was stuck high…

I have not tried debugging this further yet — I can probably reproduce it without my whole TV setup, so I should do that soonish. It seems like opening both lines for PWM output causes some conflict, and one or the other ends up not actually working. The workaround I settled on was to allow only one command before restarting the Feather. It means the commands take longer to complete, but it allowed me to continue with my life without further pain.

One small note here: since I wasn't sure how Flask's concurrency would interact with accessing a serial port, I decided to try something a bit out of the ordinary, and set up the access to the Feather via an Actor, using pykka. It basically means leaving one thread with exclusive access to the serial port, and queuing commands as messages to it. It seems to be working fine.
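
A sketch of what that looks like, with illustrative names and an assumed device path:

```python
# A sketch of serializing serial-port access through a pykka actor. Only the
# actor's own thread ever touches the port; Flask handlers just send messages.
import pykka
import serial  # pyserial

class SerialActor(pykka.ThreadingActor):
    def __init__(self, device: str):
        super().__init__()
        self._port = serial.Serial(device, 115200, timeout=1)

    def on_receive(self, message):
        # Messages are processed one at a time, so commands never interleave.
        self._port.write(message["command"].encode("ascii") + b"\r\n")

# Started once at app setup; the returned ActorRef is safe to share
# across Flask's request threads.
feather = SerialActor.start("/dev/ttyACM0")
feather.tell({"command": "POWER ON"})  # fire-and-forget from a request handler
```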

Wrapping It All Up

Once the app was able to send arbitrary commands to the TV via infrared, as well as change the input on the HDMI switch, I extended the Home Assistant integration to expose the TV as a "media_player" entity as well. The commands I implemented were Power On and Off (discrete, rather than toggle, which means I can send a "Power On" to the TV when it's already on without bothering it), and discrete source selection for the three sources we actually use (HDMI switch, receiver, Commodore 64). There are a lot more commands I could theoretically send, including volume control, but I can already access those via the receiver, so there's no good reason to.

After that, it was a matter of scripting some more complicated acts: direct selection of Portal, Chromecast, Kodi, and Nintendo Switch (the four things we use the most). This was easy at that point: turn on the TV (whether it was on or not), select the right input on either the receiver or the switch, then select the right input on the TV. The reason the order seems a bit strange is that it takes a few seconds for the TV to accept commands after turning on, but by doing it this way we can switch between Chromecast and Portal, or Nintendo Switch and Kodi, in pretty much no time.

And after that worked, we decided the $5/month to Nabu Casa were worth it, because that allows us to ask Alexa or Google Assistant to select the input for us, too.

Eventually, this led me to replace Google's "Turn off the TV" command in our nightly routine with a Home Assistant script trigger, too. Previously, it would issue the command to the Chromecast, routing through the whole of Google's cloud services between the device that took the request and the Chromecast. And then the Chromecast would send the CEC command to power off… except it wouldn't reach the receiver, which would stay on for another two hours until it finally decided it was time to turn off.

With the new setup, Google triggers the Home Assistant script, and appears to do so asynchronously. Then Home Assistant sends the request to my app, which sends it to the Feather, which sends the power-off to the TV… and that is also read by the receiver. I didn't even need to send the power-off command to the receiver itself!

All in all, the setup is satisfying.

What remains to be done is to try exposing a “Media Player” to Google Home, that is not actually any of the three “media_player” entities I have, but is a composite of them. That way, I could actually just expose the different input trees as discrete inputs to Google, and include the whole play, pause, and volume control that is currently missing from the voice controls. But that can wait.

Instead, I should probably get going on designing a new board to replace the breadboard mess I'm using right now. It's hidden away enough that it's not in our faces (unlike the Birch Books experiments), but I would still like to have a more… clean setup. And speaking of that, I would really love it if someone had already contributed an Adafruit Feather component for EAGLE, providing the space for soldering in the headers, but keeping the design referencing the actual lines as defined in it.