Rose Tinted Glasses: On Old Computers and Programming

The original version of this blog post was going to be significantly harder to digest: it was actually much more of a rant than a blog post. I decided to discard that and try to focus on the positives, although please believe me when I say that I’m not particularly happy with what I see around me, and sometimes it takes strength not to add to the annoying amount of negativity out there.

In the second year of the Coronavirus pandemic, I (and probably a lot more people) have turned to YouTube content more than ever, just to keep myself entertained in lieu of having actual office mates to talk with day in and day out. This meant, among other things, noticing the retrocomputing trend a lot more: a number of channels are either dedicated to talking about games from the 80s and 90s and computers from the same era, or at least seem to spend a significant amount of time on those. I’m clearly part of the target audience, having grown up with some of those games and systems, and now being in my 30s with disposable income, but it does make me wonder sometimes about how we are treating the nostalgia.

One of the things that I noted, and that actually does make me sad, is when I see some video insisting that old computers were better, or that the people who used them were smarter, because many (Commodore 64, Apple II, BBC Micro) only came with a BASIC interpreter, and you were incentivised to learn programming to do pretty much anything with them. I think that this thesis is myopic, and lacks not just empathy, but also understanding of the world at large. Which is not to say that there couldn’t be good ways to learn from what worked in the past, and make sure the future is better.

A Bit Of Personal History

One of the things that is clearly apparent watching different YouTube channels is that there are chasms between different countries when it comes to having computers available at an early age, particularly in schools. For instance, it seems like a lot of people in the USA had access to a PET in elementary or junior high school. In the UK, instead, the BBC Micro was explicitly designed as a learning computer for kids, and clearly the ZX Spectrum became the symbol of an entire generation. I’m not sure how much bias there is in this storytelling — it’s quite possible that for most people all of these computers were not really within reach, and only a few expensive schools would have had access to them.

In Italy, I have no idea what the situation was when I was growing up, outside of my own experience. What I can say is that until high school, I hadn’t seen a computer in school. I know for sure that my elementary school didn’t have any computers, not just for the students, but also for the teachers and admins, and it was in that school that one of the teachers took my mother aside one day and told her to make me stop playing with computers because «they won’t have a future». In junior high, there definitely were computers for the admins, but no student was given access to anything. Indeed, I knew that one of the laboratories (which we barely ever saw, and really never used) had a Commodore (either 64 or 128) in it. These were the same years I finally got my own PC at home: a Pentium 133MHz. You can see there is a bit of a difference in generations there.

Indeed, it might sound strange that I even had a Commodore 64. As far as I know, I was the only one in my school who did: a couple of other kids had a family PC at home (which later I kind of did too), and a number of them had a NES or Sega Master System, but the Commodore’s best years were long gone by the time I could read. So how did I end up with one? Well, as it turns out, not as a legacy from anyone older than me, which would be the obvious explanation.

My parents bought the Commodore 64 around the time I was seven, or at least that’s the best I can date it. It was, to the best of my knowledge, after my grandfather died, as I think he would have talked a bit more sense into my mother. Here’s the thing: my mother has always had a quirk for encyclopaedias and other book collections, so when my sisters and I were growing up, the one thing we never missed was access to general knowledge. Whether it was a generalist encyclopedia with volumes dedicated to the world, history, and science, or a “kids’ encyclopedia” that pretty much only covered stuff aimed at preteens, or a science one that went into the details of the state of the art of scientific thinking in the 80s.

So when a company selling a new encyclopedia, supposedly compiled and edited locally, called my parents up and offered a deal of 30 volumes, bound in a nice green cover and printed in full colour, together with a personal computer, they lapped it up fairly quickly. Well, my mother did mostly; my father was never someone for books, and generally couldn’t give a toss about computers.

Now, to be honest, I have fond memories of that encyclopedia, so it’s very possible that this was indeed one of the best purchases my parents undertook for me. Not only was most of it aimed at elementary-to-junior-high ages, including a whole volume on learning grammar rules and two on math, but it also came with some volumes full to the brim of questionable computer knowledge.

In particular, the first one (Volume 16, I still remember the numbers) came with a lot of text describing computers, sometimes in detail so silly that I still don’t understand how they put it together: it is here that I first read about core memory, for instance. It also went into long detail about videogames of the time, including text and graphical adventures. I really think it would be an interesting read for me nowadays, now that I understand and know a lot more about the computers and games of that era.

The second volume focused instead on programming in BASIC. Which would have been a nice connection to the Commodore 64, if it wasn’t for the fact that the language it described was not the one the Commodore 64 actually used, and it didn’t really go into the details of how to use the hardware, with POKE and PEEK and the like. Instead it tried to describe some support for printers and graphics that never worked on the computer I actually had. Even when my sister got a (second) computer, it came with GW-BASIC, which was also not compatible.

What the second volume did teach me, though, was something more subtle, which would take me many years to understand fully: that programming is a means to an end, for most people. The very first example of a program in the book is a father-daughter exercise in writing a BASIC program to calculate the area of the floor of a room, by splitting it into triangles and applying Heron’s formula. This was a practical application, rather than concepts taught first for their own sake, and that may be the reason why I liked learning from it to begin with.
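Just to give an idea of the shape of that exercise, here’s a minimal sketch in Python rather than the book’s BASIC (the measurements are made up, since I don’t have the original listing at hand):

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle given its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# An irregular floor can be measured as a set of triangles:
# measure the three sides of each, then sum up the areas.
triangles = [(3.0, 4.0, 5.0), (4.0, 5.0, 6.4)]
print(sum(heron_area(*sides) for sides in triangles))
```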

Now let me take a moment for a rant of an aside: the last time I wrote something about teaching, I ended up tuning out of some communities, because I got tired of hearing someone complain that I cannot possibly have an opinion on teaching materials without having taught in academia. I have a feeling that this type of behaviour is connected with the hatred for academia that a number of us have. Just saying.

You may find it surprising that these random volumes of an encyclopedia my mother brought home when I could barely read would stay with me this long, but the truth is that I pretty much carried them along with me for many years. Indeed, the book had two examples that I nearly memorized, and that were connected to each other. The first was a program that calculated the distance in days between two dates — explaining how the Gregorian calendar worked, including the rules for leap years around centuries. The second used this information to let you calculate a “biorhythm”, which was sold as some ancient Greek theory but was clearly just a bunch of “mumbo-jumbo”, as Adam Savage would say.
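The date-distance part stuck in my mind mostly because of the leap year rules; here’s a quick sketch of the same logic in modern Python (with arbitrary dates, since I obviously don’t remember the BASIC listing verbatim):

```python
from datetime import date

def is_leap(year: int) -> bool:
    # The Gregorian rule the book explained: every fourth year is a leap
    # year, except for centuries, unless they are divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Nowadays the distance in days is a one-liner, no manual calendar math:
print((date(2021, 5, 1) - date(1984, 3, 21)).days)
```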

The thing with this biorhythm idea, though, is that it’s relatively straightforward to implement: the way they describe it, there are three sinusoidal functions that define three “characteristics” with different period lengths, so you calculate the “age in days”, apply a simple mathematical formula, et voilà! You have some personalised insight that is worth nothing, but that some people believe in. I can’t tell for sure if I ever really believed in it, or if I was just playing along like people do with horoscopes. (One day I’ll write my whole rant on why I expect people may find horoscope sign traits to be believable. That day is not today.)
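For the curious, the whole thing boils down to a handful of lines; the classic formulation (as far as I remember it) uses periods of 23, 28, and 33 days for the three “characteristics”:

```python
import math
from datetime import date

# The usual pseudo-scientific periods, in days.
PERIODS = {"physical": 23, "emotional": 28, "intellectual": 33}

def biorhythm(birth: date, day: date) -> dict:
    age_days = (day - birth).days  # the "age in days" from the other example
    return {name: math.sin(2 * math.pi * age_days / period)
            for name, period in PERIODS.items()}

print(biorhythm(date(1985, 10, 28), date.today()))
```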

So, having something to build on, I pretty much reimplemented this same idea over, and over, and over again. It became my “go to” hello world example, and with enough time it allowed me to learn a bit more about different systems. For example, when I got my Pentium 133 with Windows 95, and one of the Italian magazines made Visual Basic 5 CCE available, I reimplemented it for that. When the same magazine eventually included a free license of Borland C++ Builder 1.0, as I was learning C++, I reimplemented it there. When I started spending more of my time on Linux and wanted to write something, I did the same.

I even had someone complain that my application didn’t match the biorhythm calculated by some other app, and I had to find a diplomatic way to point out that there’s nothing scientific about either of those, so why would they even expect two apps to agree?

But now I’m digressing. The point I’m making is that I have, over the years, kept the lessons learned from those volumes with me, in different forms and in different contexts. As I said, it wasn’t until a few years back that I realized that for most people, programming is not an art or a fun thing to do in their spare time: it’s just a means to an end. They don’t care how beautiful, free, or well designed a certain tool is, as long as the tool works. But it also means that knowing how to write some level of software empowers. It gives people the power to build the tools they don’t have, or to modify what is already there but doesn’t quite work the way they want.

My wife trained as a finance admin, used to be an office manager, and has some experience with CAFM (Computer Aided Facilities Management) software. Most CAFM suites allow extensions in Python or JavaScript, to implement workflows that would otherwise be manual and repetitive. This is the original reason she had to learn programming: even in her line of work, it is useful knowledge to have. It also comes with the effect of making it easier to understand spreadsheets and Excel — although I would say that there’s plenty of people who may be great at writing Python and C, but would be horrible Excel wranglers. Excel wrangling is its own set of skills, and I defer to those who actually have them.

So Were Old Computers Better?

One of the often repeated lines is that old computers were better, because either they were simpler to hold in one’s mind, or because they all provided a programming environment out of the box. Now, this is a particularly contentious point to me, because pretty much every Unix environment has always provided a programming environment as well. But also, I think that the problem here is what I would call a “bundling of concerns”.

First of all, I definitely think that operating systems should come with programming and automation tools out of the box. But in fact that has (mostly) been the case for me personally since the time of the Commodore 64. On my sister’s computer, MS-DOS came with GW-BASIC first (4.01), and QBasic later (6.22). Windows 98 came with VBScript, and when I first got to Mac OS X it came with some ugly options, but options nonetheless. The only operating system that didn’t have a programming environment for me was Windows 95, but as I said above, Visual Basic 5 CCE covered that need. It was even better with Active Desktop!

Now, as it turns out, even Microsoft appears to be working to make it easier to code on Windows, with Visual Studio Code being free, Python being available in the Microsoft Store, and all those trimmings. So it’s hard to argue that there aren’t more opportunities to start programming now than there were in the early ’90s. What might be arguable is that nowadays you do not need to program to use a computer. You can use a computer perfectly fine without ever having learnt a programming language, and you don’t really need to know the difference between firmware and operating system, most of the time. The question becomes whether you find this good or bad.

And personally, I find it good. As I said, I find it natural that people are interested in using computers and software to do something, and not just for the experience of using a computer. In the same way, I think most people would use a car to go to the places they need to go to, rather than just for the sake of driving. And in the same spirit as the car, just as there are people who enjoy the feeling of driving even when they don’t have a reason to, there are people who enjoy playing with computers and technology beyond what is strictly necessary.

I wish I found it surprising, but I just find it saddening that so many developers seem to fall into the trap of thinking that, just because they became creative by writing programs (or games, or whatever), the fact that computer users stopped having to learn programming means those users are less creative. John Scalzi clearly writes it better than me: there’s a lot of creativity in modern devices, even those that are attacked for being “passive consumption devices”. And a lot of that creativity is not about programming in the first place.

What I definitely see is a pattern of repeating the behaviour of the generation that came before us, or maybe the one that came before them, I’m not sure. I see a number of parents (though thankfully by no means all of them) insisting that, since they learnt their trade and their programming a certain way, their kids should have the same level of tools available, no more and no less. It saddens me, and sometimes even angers me, because it feels so similar to the way my own father kept telling me I was wasting my time inside, and wanted me to go and play soccer as he did in his youth.

This is certainly not only my experience: I have talked and compared stories with quite a few people over the years, and there’s definitely a huge number of geeks in particular who have been made fun of by their parents, and left scarred by that. And some of them are going to do the same to their kids, because they think their kids’ choice of hobbies is not as good as the ones we had in the good old days.

Listen, I have said already in the past that I do not want to have children. Part of it has always been the fear of repeating the behaviour my father had with me. So of course I should not be the one to judge what others who do have kids do. But I do see a tendency from some to rebuild the environment they grew up in, expecting that their kids will just pick up the same strange combination of geekiness they have.

At the same time I see a number of parents feeding the geekiness in their children with empowerment, giving them tools and where possible a leg up in life. Even this cold childfree heart warms up to see kids being encouraged to learn Scratch, or Minecraft.

What About All The Making, Then?

One of the constant refrains I hear is that older tools and apps were faster and more “creative”. I don’t think I have much in terms of qualifications to evaluate that. But I’m also thinking that, for the longest time, creativity tools and apps were only free if you pirated them. This is obviously not to dismiss the importance of FLOSS solutions (otherwise why would I still be writing on the topic?), but a lot of the FLOSS solutions for creativity appear to have a similar spirit to the computers of the ’80s: build the tools you need to be creative.

I’m absolutely sure that there will be people arguing that you can totally be creative with Gimp and Inkscape. I have also heard a lot more professionals laughing in the face of such suggestions, given the important features that those tools have lacked in comparison with proprietary software for many years. They are not bad programs per se, but they do find their audience in a niche compared to Photoshop, Illustrator, or Affinity Designer. And that’s not to say that FLOSS tools can’t become that good: I have heard the very same professionals who sneered (and still sneer) at Inkscape point out how Krita (which has a completely different target audience) is a fascinating tool.

But when we look back at the ’90s, not even many FLOSS users would have considered Gimp a useful photo-editing tool. If you didn’t have the money for the creativity, your options were most likely a pirated copy of Photoshop, or maybe, if you were lucky and an Italian magazine gave one away, a license for Macromedia xRes 2.0. Or maybe FreeHand. Or Micrografx Windows Draw!

The thing is, a lot of free-but-limited tools online are actually the first chance a wide range of people have had to be creative. Without having to be “selected” as a friend of Unix systems. Without having to pirate software to be able to afford it, and without having to pony up a significant investment for something that they may not be able to make good use of. So I honestly welcome that, when it comes to creativity.

Again: the fact that someone cannot reason about code, or about the way Inkscape or Blender work, does not mean that they are less creative, or less skilled. If you can’t see how people using other tools are being just as creative, you’re probably missing a lot of the points I’m making.

But What About The Bloated Web?

I’ve been arguing for less bloat in… pretty much everything, for the past 17 years, on blogs and other venues. I wrote tools to optimize (even micro-optimize, in some cases) programs and libraries so that they perform better on tiny systems. I have worked on Gentoo Linux, which pretty much allows you to turn off everything you can possibly turn off, so you can build the most minimalistic system you can think of. So I really don’t like bloat.

So is the web bloated? Yes, I’d say so. But not all of it is bloat, even when people complain about it. I see people suggesting that UTF-8 is bloat. That dynamic content is bloat. That emojis are bloat. Basically, anything they don’t directly need is bloat.

So it’s clearly easy to see how your stereotypical 30-something, US-born-and-raised, English-only-speaking “hacker” would think that an unstyled, white-on-black-background (or worse, green-on-black) website in ASCII is the apotheosis of the usable web. But that is definitely not what everyone would find perfect. People who speak languages needing more than ASCII exist, and are out there. Heck, the people for whom UTF-8’s optimization for ASCII (versus UTF-16) is the actual wasteful bloat are probably the majority of the world! People who cannot read on black backgrounds exist, and they are even developers themselves at times (I’m one of them, which is why all my editors and terminals use light backgrounds: I get migraines from black backgrounds and dark themes).
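If you want to see that trade-off for yourself, a couple of lines of Python show how UTF-8’s savings for ASCII turn into extra bytes for, say, Japanese text, where UTF-16 is actually the leaner encoding:

```python
for text in ("hello", "però", "こんにちは"):
    print(text,
          len(text.encode("utf-8")),      # bytes in UTF-8
          len(text.encode("utf-16-le")))  # bytes in UTF-16 (without BOM)
# "hello" is 5 bytes in UTF-8 and 10 in UTF-16;
# "こんにちは" is 15 bytes in UTF-8 and only 10 in UTF-16.
```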

Again, I’m not suggesting that everything is perfect and nothing needs to change. I’m actually suggesting that a lot needs to change, but not everything. So if you decide to tell me that Gmail is bloated and slow, and use that as the only comparison to ’90s mail clients, I would point out to you that Gmail has tons of features that are meant to keep users from shooting themselves in the foot, that it is a lot more reliable than Microsoft Outlook Express or Eudora (which I know has lots of loyal followers, but I could never get behind it myself), and also that there are alternatives.

Let me beat this dead horse a bit more. Over on Twitter, when this topic came up, I was given the example of ICQ vs Microsoft Teams. Now the first thing is, I don’t use Teams. I know that Teams is an Electron app, and I know that most Electron apps are annoyingly heavy and use a ton of resources. So, fair, I can live with calling them “bloated”. I can see why they chose this particular route, and I disagree with it, but there is another important thing to note here: ICQ in 1998 is barely comparable with a tool like Teams, which is pretty much a corporate beast.

So instead, let’s try to compare something that is a bit closer: Telegram (which I’m already known to use — rather than talking about anything I would have a conflict of interest on). How fast is Telegram to launch on my PC? It’s pretty much a single click to start, and it takes less than a second on the beast that is my Gamestation. It also takes less than a second on my phone. How long did ICQ take to load? I don’t remember, but quite a lot longer, because I remember seeing a splash screen. Which may as well have been timed to stay on the screen for a second or so because the product manager requested it, like it happened at one of my old jobs (true story!).

And in that, would ICQ provide the same features as Telegram? No, not really. First of all, it was just messages. Yes, it’s still instant messaging, and in that it didn’t really change much, but it didn’t have the whole “send and receive pictures” experience we have on modern chat applications: you ended up having to do peer-to-peer transfers, and good luck with that. It also had pretty much *no* server-side support for anything, at least when I started using it in 1998: your contact list was entirely client-side, and even the “authorization” to add someone to your friend list was a simple local check. There were plenty of ways to avoid these checks, too. Back in the day, I got in touch with a columnist from the Italian The Games Machine, Claudio Todeschini (who I’m still in touch with, but only because life is strange and we met in person in a completely different situation many, many years later); the next time I re-installed my computer, having forgotten to back up my ICQ data, I didn’t have him in my contacts anymore, and, unsure whether he would remember me, I actually used a cracked copy of ICQ to re-add him to my contacts.

Again, this was the norm back then. It was a more naive world, where we didn’t worry that much about harassment, we didn’t worry so much about swatting, and everything was just, well, simpler. But that doesn’t mean it was good. It only meant that if you did have to worry about harassment, if someone was somehow trying to track you down, if the technician at your ISP was actually tapping your TCP sessions, they would be able to. ICQ was not encrypted for many years after I started using it: not even client-to-server, let alone end-to-end like Telegram secret chats (and other chat clients) are.

Someone joked about trying to compare the two pieces of software running on the same machine to measure the performance fairly, but that is an absolute non sequitur. Of course we use a lot more resources in absolute terms, compared to 1998! Back then I still had my Pentium 133MHz, with 48MiB of RAM (I upgraded!), a Creative 3D Blaster Banshee PCI (because no AGP slots, and the computer came with a Cirrus Logic card that was notorious for not working well with the Voodoo 2), and a radio card (I really liked radio, ok?). Nowadays, my phone has an order of magnitude or two more resources, and you can find 8051s just as fast.

Old tech may be fascinating, and easier to get into when it comes to learning how it all fits together, but usable modern tech is meant to make trade-offs in favour of the users, more and more. That’s why we have UIs, that’s why we have touch inputs, that’s even why we have voice-controlled assistants, much as a number of tech enthusiasts appear to want to destroy them all.

Again, this feels like a number of people yelling “kids these days”, and repeating how “in their days” everything was better. But also, I fear there are a number of people who just don’t appreciate how much of the content you see on YouTube, particularly in the PC space of the ’90s and early ’00s, is not representative of what we experienced back then.

Let me shout out two YouTubers that I find are doing it right: LGR and RetroSpector78. The former is very open about pointing out when he’s looking at a ludicrous build of some kind that would never have been affordable back in the day; the latter always talks about what would be appropriate for the vintage and usage of a machine.

Just take all of the videos that use CF2IDE or SCSI2SD to replace the “spinning rust” hard drives of yore. This alone is such a speed boost for loading stuff that most people wouldn’t even imagine it. If you were to load a program like Microsoft Works on a system that is period-perfect except for the storage, you would experience a significantly different loading time than back in the day.

And, by the way, I do explicitly mean Microsoft Works, not Office, because, as Avery pointed out on Twitter, the latter was optimized for load speed — by starting a ton of processes early on, trading memory usage for startup speed. The reason why I say that is because, short of pirated copies of Office, most people I knew in the ’90s could at best use Works, because it came pre-installed on their system.

So, What?

I like the retrocomputing trend, mostly. I love Foone’s threads, because one of the most important things he does is explain stuff. And I think that, if what you want is to learn how a computer works in detail, it’s definitely easier to do that with a relatively uncomplicated system first, and build up to more modern ones. But at the same time, I think there are plenty of abstractions that don’t need to be explained if you don’t want them to be. This is the same reason why I don’t think that using C to teach programming and memory is a great idea: you need to know too many details that newcomers are not really meant to understand yet.

I also think that understanding the techniques used in both designing, and writing software for, constrained systems such as the computers we had in the ’80s and ’90s does add to the profession as a whole. Figuring out which trade-offs were and were not possible at the time is one step; finding, and possibly addressing, some of the bugs is another. And finally there is the point we’re getting to a lot lately: we can now build replacement components with tools that are open to everyone!

And you know what? I do miss some of the constrained systems, because I have personal nostalgia for them. I did get myself a Commodore 64 a couple of years ago, and I loved the fact that, in 2021, I can get the stuff I could never have afforded (or that didn’t even exist) back when I was using it: fast loaders, SD2IEC, a power supply that wouldn’t double as a bludgeoning instrument, and a SCART cable for a nice, sharp image, rather than the fuzzy one from the RF input I had to use.

I have been toying with the idea of trying to build some constrained systems myself. I think it’s a nice stretch for something I can do, but with the clear note that it’s mostly art, and not something that is meant to be consumed widely. It’s like Birch Books to me.

And finally, if you take only a single thing away from this post, it’s that you should always remember that a usable “bloated” option will always win over a slim option that nobody but a small niche of people can use.

Interns in SRE and FLOSS

In addition to the usual disclaimer (what I’m posting here is my opinion and my opinion only, not that of my employers, teammates, or anyone else), I want to start with an additional disclaimer: I’m not an intern, a hiring manager, or a business owner. This means that I’m talking from my limited personal experience, which might not match someone else’s. I have no definite answers; I just happen to have opinions.

Also, the important acknowledgement: this post comes from a short chat on Twitter with Micah. If you don’t know her, and you’re reading my blog, what are you doing? Go and watch her videos!

You might remember that a long time ago I wrote (complaining) about how people were viewing Google Summer of Code as a way to get cash rather than a way to find and nurture new contributors for a project. As hindsight is 2020 (or at least 2019 soon), I can definitely see how my complaint sounded not just negative, but outright insulting to many. I would probably be more mellow about it nowadays, but from the point of view of an organisation, I stand by my original idea.

If anything, I have solidified my idea further over the past five and a half years, working for a big company with interns around me almost all the time. I even hosted two trainees for the Summer Trainee Engineering Program a few years ago, and I was genuinely impressed with their skill — which admittedly is something they shared with nearly all the interns I’ve ever interacted with.

I have not hosted interns since, but not because of bad experiences. It had more to do with me changing teams much more often than the average Google engineer — not always by my own request. That’s a topic for another day. Most of the teams I have been in, including the current one, had at least one intern working for them. For some teams, I’ve been involved in the brainstorming to find ideas for interns to work on the following year.

Due to my “team migration”, and the fact that I insist on not moving to the USA, I often end up in those brainstorms with new intern hosts. And because of that, I have noticed a few trends and patterns over time.

The one that luckily appears to be actively suppressed by managers and previous hosts is that of thinking of interns as the go-to option for tasks that we would call “grungy” — that’s a terrible experience for interns, and it shouldn’t ever be encouraged. Indeed, my first manager made it clear that if you come up with a grungy task to be worked on, what you want is a new hire, not an intern.

Why? There are multiple reasons. Start with the limited time an intern has to complete a project: even if the grungy task is useful for learning how a certain system works, does an intern really need to get comfortable with it that way? For a new hire, instead, time is much less limited, so giving them somewhat boring tasks while they go through whatever other training they need is fine.

But that’s only part of the reason. The much more important part is understanding where the value of an intern is for the organisation. And that is not in their output!

As I said at the start, I’m not a hiring manager and I’m not a business person, but I used to have my own company, and I have been working in a big org for long enough that I can tell a few patterns here and there. So for a start, it becomes obvious that an intern’s output (the code they write, the services they implement, the designs they produce) is not their strongest value proposition, from the organisation’s point of view: while interns are usually paid less than full-time engineers, hosting an intern takes a lot of time away from the intern host, which means the cost of the intern is not just what they get paid, but also a part of what the host gets paid (it’s not by chance that Google Summer of Code reimburses the hosting project, and not just the student).

Also, given that interns need to be trained, and that they will likely have less experience with the environment they would be working in, it’s usually the case that having a full-time engineer produce the same output would take significantly less time (and thus, less money).

So no, the output is not the value of an intern. Instead, an internship is an opportunity both for the organisation and for the interns themselves. For the organisation, it’s almost like an extended interview: they get to gauge the intern’s abilities over a period of time, and not just with nearly-trick questions that can be learnt by heart — and it covers a lot more than just their coding skills: their “culture fit” (I don’t like this concept), and their ability to work in a team — and I can tell you that I myself, at the age of most of the interns I worked with, would have been a terrible team player!

And let’s not forget that if the intern is hired afterwards, their training schedule is streamlined, since they already know their way around the company.

For the intern, it’s the experience of working in a team, and figuring out if it’s what they want to do. I know of one brilliant intern (who I still miss having around, because they were quite the friendly company to sit behind, as well as a skilled engineer) who decided that Dublin was not for them, after all.

This has another side effect for the hosting teams that I think really needs to be considered. An internship is a teaching opportunity, so whatever project is given to an intern should be meaningful to them. It should be realistic; it shouldn’t be just a toy idea. At the same time, there’s usually the intention to have the intern work on something of value for the team. This is great in the general sense, but it runs into two further problems.

The first is that if you really need something, assigning it as a task to an intern is a big risk: they may not deliver, or they may underdeliver. If you really need something, you should assign it to an engineer; as I said, it would also be cheaper.

The second is that the intern is usually still learning. Their code quality is likely not at the level you want your production code to be. And that’s okay. Any improvement in the intern’s code quality over their internship is of value to them, so helping them improve is good… but it might not be your primary target.

Because of that, my usual question during the brainstorms is “Do you have two weeks to put the finishing polish on your intern’s work after they are gone?” — because if not, the code is unlikely to make it into production. There are plenty of things that need to be done after a project is “complete” to make it long-lasting, whether it’s integration testing and releasing, or “dotting the i’s and crossing the t’s” in the code.

And when you don’t do those things, you end up with “mostly done” code that feels unowned (because the original author is gone by that point), and that can’t be easily integrated into production. I have deleted those kinds of projects from codebases (not just at Google) too many times already.

So yes, please, if you have a chance, take interns. Mentor them, teach them, show them what their opportunities could be. Make sure that they find a connection with the people as well as the code. Make sure that they learn things like “asking your colleagues when you’re not sure is okay”. But don’t expect that getting an intern to work on something means they’ll finish off a polished product or service that can be used without a further investment of time. And the same applies to GSoC students.

Software systems and institutional xenophobia

I don’t usually write about politics, because there are people out there with more sophisticated opinions and knowledge than me, who am playing at the easiest level (to quote John Scalzi) and rarely have to fear for my future (except when it comes to health problems). But today I need to point out something that worries me a lot.

We live in a society that, for good or bad (and I think it’s mostly for good), is more and more tied to computer systems. This makes it very easy for computer experts of one kind or another (like me!) to find a job, particularly a good paying job. But at the same time it should give us responsibilities for what we do with our jobs.

I complained on Twitter about how most credit card application forms here in the UK effectively say «F**k you, immigrant scum», by not allowing you to complete the application process if you have less than three years of addresses in the UK. In the case of a form I tried today, even though the form allows you to specify an “overseas address” as a previous address, and allows you to select Ireland as the country, it still validates the provided post code against UK standards, and refuses to let you continue the process without it.

This is not the first such form. Indeed, I ended up getting an American Express credit card because they were the only financial institution that could be convinced to take me on as a customer with just two months of living in this country, and a full history of addresses for the previous five years and more. And even with them, it was a bit of an issue to find an online form that did allow me to type that in.

Yet another credit card company rejected my request because “[my] file is too thin” — despite my being able to prove to them that I’m currently employed full time by a very well paying company, with no expectation of that changing any time soon. This is nearly as bad as the NatWest employee who wanted my employer’s HR representative to tell them how long they expected me to live in the UK.

But it’s not just financial institutions; it’s pretty much any place where you provide information, where limitations that are obviously fine for the developers’ own data might not be fine for someone else’s. Sign-up forms where putting a space in a name or surname field is an error. Data processing that expects all names to fit in 7-bit ASCII. Electoral registries where names are read either as Latin-1 or Latin-2.
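To make one of those concrete, this is the kind of naive check I keep running into, sketched here in Python: it happily accepts a developer-shaped idea of a name and rejects a good chunk of the world.

```python
import re

# A "letters only" surname check, of the kind many sign-up forms use.
naive_name = re.compile(r"^[A-Za-z]+$")

for surname in ("Smith", "O'Brien", "van der Berg", "Ní Bhriain", "Pettenò"):
    print(surname, "accepted" if naive_name.match(surname) else "rejected")
# Only "Smith" makes it through: apostrophes, spaces, and accented
# letters are all treated as errors.
```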

All of these might be considered the small data issues of nearsighted developers, but they also show how easily such issues turn into real discrimination.

When a system that has no reason to discard your request on the basis of a previous address has a mistake that causes the postcode validation to trigger on the wrong format, you’re causing a disservice, and possible harm, to someone who might really just need a credit card to be able to travel safely.
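For what it’s worth, the fix is not complicated; here’s a sketch of validation that only applies the UK postcode format when the previous address is actually in the UK (the patterns are simplified for illustration, the real rules have more exceptions):

```python
import re

# Rough shape of a UK postcode, for illustration only.
UK_POSTCODE = re.compile(r"^[A-Z]{1,2}[0-9][0-9A-Z]? ?[0-9][A-Z]{2}$", re.I)

def previous_address_ok(country: str, postcode: str) -> bool:
    if country == "GB":
        return bool(UK_POSTCODE.match(postcode.strip()))
    # Other countries use different formats, and some addresses
    # (Irish ones before Eircode, for instance) have no postcode
    # at all: don't hold them to the UK rule.
    return True
```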

When you force people to discard part of their name, you’re going to cause them disservice and harm when they need a full history of what they did — I had that problem in Ireland, applying for a driving learner permit, not realising that the bills from Bord Gáis Energy had my name written wrong (using Elio as my surname).

The fact that my council appears to think they need to use Latin-2 to encode names suggests they may expect their residents to all be either English or Eastern European, which in turn leads to the idea of some level of segregation of those residents away from the Italian, French, or Irish ones, whose names depend on Latin-1 encodings instead.
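This is not theoretical, either; it takes one line of Python to see what happens to an Italian name stored as Latin-1 and read back as Latin-2:

```python
# "Pettenò" written out as Latin-1 bytes, then misread as Latin-2:
print("Pettenò".encode("latin-1").decode("iso8859-2"))  # -> "Pettenň"
```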

The funniest one in Ireland was a certain bank allowing you to sign up online with no problems… as long as you had a PPS number (tax ID) issued before 2013: after that year, a new format for the number came into use, and their website didn’t consider it valid. Of course, it’s effectively only immigrants who, in 2014, would be trying to open a bank account with such codes.

Could all of these situations be considered problems of incompetence? Possibly, yes. Lots of people in our field are incompetent. But it also means that there was no coverage for these not-so-corner cases in the validation. So it’s not just an incompetent programmer: it’s an incompetent programmer paired with an incompetent QA engineer. And an incompetent product manager. And an incompetent UX designer… that’s a lot of incompetence put together for one product.

Or the alternative is that there is a level of institutional xenophobia in software development. In the UK just as in Ireland, Italy, and the United States. The idea that the only inputs worth testing are those explicitly known to the person doing the development is so minimalist as to be useless. You may as well not validate anything.

Not having anyone, from the stakeholders to the developers and testers, consider “Should a person from a different culture, with different naming, addressing, or {whatever else} norms, be able to use this?” (or worse, considering it and answering themselves “no”) is something I consider xenophobia¹.

I keep hearing calls to pledge ethics in the fields of machine learning (“AI”) and data collection. But I have a feeling that those fields have much less impact on the “median” part of the population. Which is not to say you shouldn’t have ethical considerations in them at all; rather, we should start teaching ethics in everyday data processing too.

And if you’re looking for some harsh laugh after this mood-killing post, I recommend this article from The Register.

¹ Yes, I’m explicitly not using the word “racism” here, because then people would focus on that, rather than on the problem. A form does not look at the colour of your skin, but it does look at whether you comply with its creators’ idea of what’s “right”.

The importance of teams, and teamwork

Today, on Twitter, I received a reply with a phrase that, taken on its own and without connecting back to the original topic of the thread, I found representative of the dread I feel with working as a developer, particularly in many opensource communities, nowadays.

Most things don’t work the way I think they work. That’s why I’m a programmer, so I can make them work the way I think they should work.

I’m not going to link back to the tweet, or name the author of the phrase. This is not about them in particular; it’s more about the feeling expressed in this phrase, which I would have agreed with many years ago, but which now feels so off key.

What I feel now is that programmers don’t make things work the way they think they should. And this is not intended as a nod to the various jokes about how bad programming actually is, given APIs and constraints. This is about something that becomes clear when you spend your time trying to change the world, or to make a living alone (by running your own company): everybody needs help, in the form of a team.

A lone programmer may be able to write a whole operating system (cough Emacs), but that does not make it a success in and of itself. If you plan on changing the world, and possibly changing it for the better, you need a team that includes not only programmers, but experts in quite a lot of different things.

Whether it is a Free Software project or a commercial product, if you want to have users, you need to know what they want — and a programmer is not always the most suitable person to go through user stories. Hands up, all of us who have, at one point or another, facepalmed at an acquaintance taking a screenshot of a web page to paste it into Word, and tried to teach them how to print the page to PDF instead. While changing workflows so that they make sense may sound like the easiest solution to most tech people, that’s not what people who are just trying to do their job care about. Particularly not if you’re trying to sell them (literally or figuratively) a new product.

And similarly to what users want to do, you need to know what the users need to do. While effectively all Free Software comes with no warranty attached, even for it (and most definitely for commercial products) it’s important to consider the legal framework the software will be used within. Except for the more anarchist developers out there, I don’t think anyone would be particularly interested in breaching laws for the sake of breaching them, for instance by providing a ledger product that allows “black book accounting” in an encrypted parallel file. Or, to reprise my recent example, by providing a software solution that does not comply with the GDPR.

This is not just about pure software products. You may remember, from last year, the teardown of Juicero. In that case the problems appeared to stem from the lack of control over the BOM (bill of materials). While electronics is by far not my speciality, I have heard more expert friends and colleagues cringe at seeing the specs of projects that tried to go mainstream with a BOM easily twice as expensive as the minimum.

An aside here, before someone starts shouting: minimising the BOM for an electronics project may not always be the main target. If it’s a DIY project, making it easier to assemble could be an objective, so choosing bulkier, more expensive parts might be warranted. Similarly, if it’s being done for prototyping, using more expensive but widely available components is generally a win too. I have worked on devices that used multi-GB SSDs for a firmware of less than 64MB — but asking for on-board flash for the firmware would have cost more than the extremely overprovisioned SSDs.

And in my opinion, if you want to have your own company, and are in it for the long run (i.e. not with the startup mentality of getting VC capital and being acquired before even shipping), you definitely need someone to follow up on the business plan and the accounting.

So no, I don’t think that any one programmer, or a group of sole programmers, can change the world. There’s a lot more to building software than writing code. And a lot more to changing society than building software.

Consider this the reason why I will plonk-file any recruitment email that is looking for “rockstars” or “ninjas”. Not that I’m looking for a new gig as I type this, but I would at least give it thought if someone was looking for a software mechanic (h/t @sysadmin1138).

Does it come with includes or?

I’m not sure if I have written about this at length before, but I should probably try to write about it here again, because, once again, it helps remove some useless .la files from packages (and again, this is just something done right, not something I’m pulling out of thin air; if you think I’m pulling it out of thin air, you have no clue about libtool to begin with — and I’m not referring to leio’s complaints, that’s another topic entirely).

Shared objects are, generally, a good thing, because they allow different programs to share the same code in memory; this is why we consider shared libraries better than static archives. Unfortunately, simply using a shared object as boilerplate is a bad thing: you need to know what you’re doing.

For instance, if your software ships a single executable, proposes no API for other software to use, and doesn’t use plugins, then you really should not be using libraries at all, and should rather link everything together into the executable. This avoids both the execution overhead of PIC code and the memory overhead of relocated data.

[Figure: library install flowchart]

And again, here are some explanations (with a build-system sketch after the list):

  • if you’re installing a plugin, you usually just need the shared object, but if the software using it supports external built-ins (I can’t think of a single example of that, but it’s technically possible), then you might want to consider making the static archive version optional;
  • you only install header files if your package provides a public API (it’s a library) or if it uses plugins (plugins need an interface to talk with the main program’s body);
  • if you’re going to share code between different executables, like inkscape does for instance (it does it wrong, by the way), what you want is to install a shared object (there is an alternative technique, but that’s a different matter and hopefully I’ll discuss it in the near future);
  • if you’re installing a single executable, you probably want to install no library at all; this might not be the case if you use plugins, though, so you might have to think about it; applications can easily make use of plugins with no library at all (zsh does that, for instance), it just takes away at least some error checking at linking time, and is, anyway, simply a matter of development practice;
  • if you’re installing a library (that is, anything with a public API in the form of header files), then you’re obviously going to install a shared object copy of it; the static archive version might be actively discouraged (for plugin-based libraries such as PAM, xine, Gtk+, …), or might simply be made optional for the remaining libraries.
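To make the flowchart a bit more concrete, here is how the two main cases map onto Automake and libtool, in an illustrative Makefile.am fragment (all the names are made up):

```makefile
# A public library: shared object plus installed headers; whether the
# static archive is also built is left to --enable-static/--disable-static.
lib_LTLIBRARIES = libfoo.la
include_HEADERS = foo.h
libfoo_la_SOURCES = foo.c
libfoo_la_LDFLAGS = -version-info 1:0:0

# A plugin: a dlopen()ed module, no version suffix, no static archive.
plugindir = $(libdir)/foo/plugins
plugin_LTLIBRARIES = bar.la
bar_la_SOURCES = bar.c
bar_la_LDFLAGS = -module -avoid-version -shared
```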

Who Pays the Price of Pirated Programs

I have to say sorry before anything else, because you’ll most likely find typos and grammar mistakes in this post. Unfortunately I have yet to receive my new glasses, so I’m typing basically blind.

Bad alliteration in the title: it should have been “pirated software”, but it didn’t sound as good.

I was thinking earlier today about who really pays the price of pirated software in today’s world. We all know that the main entity losing out from pirated software is, of course, the software’s publisher and developer. And of course most of them, starting from Microsoft, try their best to reverse the game, saying that the cost falls mostly on the users themselves (remember Windows Genuine Advantage?). I know this is going to start a flame war, but I happen to agree with them nowadays.

Let me explain my point: when you use pirated software, you end up not updating the software at all (because you either have no valid serial code, or you have a crack that would go away); and this includes security fixes, the lack of which often enough, for Windows at least, leads to viruses infecting the system. And of course, the same problem applies, recursively, to antivirus software. And this is without counting the way most of that software is procured (eMule, torrents, and so on… — note that I have ethical uses for torrent sites, for which I’d like at least some of them to be kept alive), which is often the main highway for viruses to infect systems.

So there is already a case for keeping legit with all your software. And there is one more reason why you, a Linux enthusiast, should also make sure that your friends and family don’t use pirated software: Windows botnets (as well as Linux ones, but that’s another topic) send spam to you as well!

Okay, so what’s the solution? Microsoft – obviously – wants everybody to spend money on their licenses (and in Italy they cost twice as much; I had to buy a Microsoft Office 2007 Professional license – don’t ask – and in Italy it was €622 plus VAT, while from Amazon UK it was €314, with the VAT reimbursed; and Office is multi-language enabled, so there isn’t even the problem of Italian vs. English). I don’t entirely agree with that; I think that those who really need to use proprietary software that costs money should probably pay for it: this will give them one more reason to want a free alternative. All the rest should be replaced with Free (or at least free) alternatives.

So, for instance, when a friend or customer is using proprietary software, I tend to replace it along these lines: Nero with InfraRecorder (I put this first because it’s the least known); Office with the well-known OpenOffice; and Photoshop with Gimp (when there is no need for professional editing, at least).

The main issue here is that I find a lot of Free Software enthusiasts who seem to accept, and even foster, pirated software; sorry, I’m not along those lines at all. And this is because I loathe proprietary software, not because I like it! I just don’t like being taken for a hypocrite.

No more WD for me, I’m afraid

So I finally went to get the new disks I ordered, or rather I sent my sister, since I’m at home sick again (it seems my health hasn’t fully recovered yet). I ordered two WD SATA disks, two Samsung SATA disks, and an external WD MyBook Studio box with two disks, with USB, FireWire and eSATA interfaces. My idea was to vary the type and brand of the disks I use, so that I don’t end up having problems when exactly one model goes crazy, like it happened with Seagate’s recent debacle.

The bad surprise started when I tried to set up the MyBook; I wanted to set it up as RAID1, to store my whole audio/video library (music, podcasts, audiobooks, TV series and so on and so forth), then re-use the space that is now filled with the multimedia stuff to store my archive of downloaded software (mostly Windows software, which is what I use to set up Windows systems, something that I unfortunately still do), ISO files (for various Windows versions, LiveCDs and the like), and similar. I noticed right away that, contrary to the Iomega disk I had before, this disk does not have a physical hardware switch to select RAID0, RAID1, or JBOD. I was surprised and a bit appalled, but the marketing material suggests that the thing works fine with Mac OS X, so I just connected it to the laptop and looked for the management software (which is stored on the disk itself, rather than on a separate CD; that’s nice).

Unfortunately, once the software was installed, it failed to install itself in the usual place for applications under OS X, and it also failed to detect the disk itself. So I went online and checked the support site; there was an update to both the firmware of the drive (which means the thing is quite a bit more complex than I’d expect it to be) and to the management software. Unfortunately, neither solved my issue, so I decided it had to be a problem with Leopard, and thus I tried with my mother’s iBook, which is still running Tiger: still no luck. Not even installing the “turbo” drivers from WD solved the problem.

Now I’m stuck with a 1TB single-volume disk set, which I don’t intend to use that way. I’ll probably ask a friend to lend me a Windows XP system to set it up, and then hope that I’ll never have to touch it again, but the thing upsets me. Sure, from a purely external hardware side it seems quite nice, but the need for software to configure a few parameters, and the fact that there is no such software for Linux, really makes the thing ludicrous.

I can’t stand secrecy

I say this as if it were a defect, but I’m actually not sure whether it is one indeed.

I can’t stand secrecy. This is something I felt like saying some time ago, but I ended up not saying anything; now that I can’t sleep, I suppose I might as well say it, so I can get rid of another weight on my thoughts.

Of course this is not about “secrecy” in general: I suppose there are things that have to remain secret, like personal feelings and the like, which shouldn’t be opened up too easily, to avoid injuring yourself when other people misunderstand them.

There is also the secrecy needed for police operations and similar, which of course can’t simply be ignored or underestimated. But those things I leave to the CIA, MI6, and “our national” SISMI/SISDE (so secret that you don’t actually hear them talked about that much here, well, of course, unless one of their men is killed by friendly fire).

But again, those are not my concerns. What concerns me is secrecy in Free, Open Source projects. By definition, why should there be secrecy in Free, Open projects? It does not make much sense, does it?

Why should someone develop something in a closed circle, whether it’s software or an idea, a project or whatever else? Why do people have to “surprise the world”? One of the good things about Open Source and Free Software is that you can check things yourself: no more “factory secrets”, no more “surprise moves”, at least in some respects.

It’s still understandable that companies like Sun and Apple release their code as a “surprise”, after developing it secretly, but that is just because those are commercial projects that live on selling a product, rather than on its development.

I’m not referring to a single issue; I think I’ve seen this happen too many times before, and every time, I still wonder why oh why people have to be so egocentric. Why do I say egocentric? Because I can’t find any other reason for secrecy.

Let me be precise: there are things that have to be handled in limited groups, like on gentoo-core when it’s a matter of organising Gentoo itself, or developer conferences, or the like; or the private Summer of Code mailing list, where opinions, ideas and bug reports are exchanged between mentors and Google staff. There are channels that are correctly +s (secret) because they are “service channels” (like #gentoo-infra), but most of the secret channels are pointless; actually, they are dangerous for Free Software (unless they have a publicly accessible “mainline” channel, and the secrecy is just for low-noise development).

I don’t like secrecy, and I’m somehow having problems even accepting the secrecy behind Summer of Code mentoring. I don’t trust secrecy, whether by actions or by obfuscation: I don’t trust code that’s written in ways that are not understandable by the public.

Maintenance is as important as information availability: if a piece of software is written in such a way that only a guru can understand it, then however free (on paper), quick, performant, or optimised it may be, I find it a bad idea to use and support.

Myself, I always try to give out as much information as possible on what I do, and even on what I plan to do. My blog is also for this: I actually often blog about things before even starting them, and then I try to keep everyone up to date on my status with them. You’ll never hear me talking about secret stuff. I like to think that this is something people like about me (although I still find it hard to think that people like me in any way :P), and if in the past I’ve fallen into accepting exceptions to this rule, I’ll always do my best in the future not to end up stuck in “secret plans”.

Freedom (in software) is something you conquer with ideas and actions, and with freedom itself. Freedom conquered with secrecy is not true Freedom.