Can you run a brick and mortar store on Free Software?

I have written before about the CRM I wrote for a pizzeria, and I am happy to see that even the FSFE has started looking into Free Software for SMEs. I have also noted the need for teams to develop healthy projects. Today I want to give an example of why I think these things are not as easy as most people expect them to be, and how many different moving parts need to align to make Free Software work for SMEs.

As I’m no longer self-employed, and I have no intention of going back to being an MSP in my lifetime, what I’m writing here is more a set of “homework pointers”, should a community of SME-targeted Free Software projects ever form.

I decided to focus my thoughts on the needs of a brick and mortar store (or high street store, if you prefer), mostly because it has only a subset of the requirements I could think of, compared to a restaurant like the pizza place I actually worked with.

These notes are also probably a lot more scattered and incomplete than I would like, because I have only worked retail for a short while, between high school and my two miserable weeks of university, nearly fifteen years ago, in a bookstore to be precise.

To most people who have not worked retail, it might seem like the most important piece of software/hardware for a store is the till, because that is what they interact with most of the time. While till systems (also called POS, point of sale) are fairly important, as they are in direct contact with the customer, they are only the tip of the iceberg.

But let’s start with the POS: whether you plan on integrating it directly with a credit card terminal or not, right now there are a number of integrated hardware/software solutions for these, which include a touchscreen to input the receipt components and a (usually thermal) printer for the receipts, while sometimes allowing the customer to be emailed the receipt instead. As far as I know, there’s no Free Software system for this. I do see an increasing number of Clover tills in Europe, and Square in the United States (but these are not the only ones).

The till software is more complicated than one would think, because in addition to the effects the customers can see (selecting line items, printing the receipt, eventually taking payment), it has to be able to keep track of the cash flow, whether that is in the form of actual cash or in the form of card payments. Knowing the cash flow is a prerequisite for any business, as without that information you cannot plan your budgets.
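To make the bookkeeping requirement concrete, here is a minimal sketch of what the till has to record, beyond the customer-visible receipt. Everything here is illustrative (names and structure are made up, no real POS works exactly like this): each sale carries its payment method, so the end-of-day totals can be reconciled against the cash in the drawer and the card terminal’s batch.

```python
"""Minimal sketch of the cash-flow side of a till (illustrative only)."""

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Sale:
    amount_cents: int   # integer cents avoid floating-point rounding on money
    method: str         # "cash" or "card"


def daily_totals(sales: list[Sale]) -> dict[str, int]:
    """Sum the day's takings per payment method, in cents."""
    totals: dict[str, int] = defaultdict(int)
    for sale in sales:
        totals[sale.method] += sale.amount_cents
    return dict(totals)
```

The point is not the arithmetic, which is trivial, but that this ledger has to exist and feed the rest of the accounting, which is where most hobby till projects stop short.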

In bigger operations, this would feed into a dedicated ERP system, which would often include inventory management software — because you need to know how much stock you have and how fast it is moving, to know when to order more.

There is also the need to handle invoices, which usually don’t get printed by the till (you don’t want an invoice printed on thermal paper, particularly in countries like Italy, where you’re meant to keep the original of an invoice for over ten years).

And then there is the filing of payable invoices and, well, their payment. This is part of the accounting procedures, and I know of very few systems that integrate with a bank closely enough to automate this part. PSD2 is meant to require financial institutions to provide APIs to make this possible, at least in Europe, but it has barely been transposed into national law yet, and we’ll have to see what the solutions will look like.

Different industries have different expected standards, too. When I worked in the bookstore, there was a standard piece of software used to consult the online stock of books at various depots, which was required to handle orders for people looking for something that was not in the store. While Amazon and other online services have for the most part removed the need to custom-order books in a store, I still know a few people who do so, simply to make sure the bookstore stays open. And I assume that very similar, yet different, software and systems exist for most other fields, such as computer components, watches, and shoes.

Depending on the size of the store, the number of employees, and in general the hours of operation, there may also be a need for roster management software, so that the different workers get fair (and legal) shifts, while still being able to take days off. I don’t know how well solutions like Workday work for small businesses, but in general I feel this is likely going to be one area in which Free Software won’t make an easy dent: keeping up with all the possible legal frameworks to actually be compliant with the law is the kind of work that requires a full-time staff, and unless something changes drastically, I don’t expect any FLOSS project to keep up with that.

You could say that this post gives no answers and just adds more questions. And that is indeed the case. I don’t have the time or energy to work on this myself, and my job does not involve working with retailers, or even developing user-focused software. I wanted to write this as a starting point, should someone be interested in picking up the project.

In particular, I think this would be prime territory for a multi-disciplinary university project, starting with asking store owners about their needs, and understanding the whole user journey. Which seems to be something the FSFE is now looking into fostering, and I’m very happy about that.

Please, help the answer to the question “Can you run a brick and mortar store on Free Software?” be Yes!

The importance of teams, and teamwork

Today, on Twitter, I received a reply with a phrase that, taken on its own and without connecting back to the original topic of the thread, I found emblematic of the dread I feel working as a developer, particularly in many open source communities nowadays.

Most things don’t work the way I think they work. That’s why I’m a programmer, so I can make them work the way I think they should work.

I’m not going to link back to the tweet, or name the author of the phrase. This is not about them in particular, but about the feeling expressed in the phrase, which I would have agreed with many years ago, but which now feels very much off-key.

What I feel now is that programmers don’t make things work the way they think they should. And this is not intended as a nod to the various jokes about how bad programming actually is, given APIs and constraints. This is about something that becomes clear when you spend your time trying to change the world, or make a living alone (by running your own company): everybody needs help, in the form of a team.

A lone programmer may be able to write a whole operating system (cough Emacs), but that does not make it a success in and of itself. If you plan on changing the world, and possibly changing it for the better, you need a team that includes not only programmers, but experts in quite a lot of different things.

Whether it is a Free Software project or a commercial product, if you want to have users, you need to know what they want — and a programmer is not always the most suitable person to go through user stories. Hands up all of us who have, at one point or another, facepalmed at an acquaintance taking a screenshot of a web page to paste it into Word, and tried to teach them to print the page to PDF instead. While changing workflows so that they make sense may sound like the easiest solution to most tech people, that’s not what people who are just trying to do their job care about. Particularly not if you’re trying to sell them (literally or figuratively) a new product.

And similarly to knowing what users want to do, you need to know what users need to do. While effectively all Free Software comes with no warranty attached, even for it (and most definitely for commercial products) it’s important to consider the legal framework the software will be used in. Except for the more anarchist developers out there, I don’t think anyone would be particularly interested in breaching laws for the sake of breaching them, for instance by providing a ledger product that allows “black book accounting” in an encrypted parallel file. Or, to reprise my recent example, by providing a software solution that does not comply with the GDPR.

This is not just about pure software products. You may remember, from last year, the teardown of Juicero. In that case the problems appeared to stem from the lack of control over the BOM (bill of materials). While electronics is by far not my speciality, I have heard more expert friends and colleagues cringe at the specs of projects that tried to go mainstream with a BOM easily twice as expensive as the minimum.

An aside here, before someone starts shouting about that: minimising the BOM for an electronics project may not always be the main target. If it’s a DIY project, making it easier to assemble could be an objective, so choosing bulkier, more expensive parts might be warranted. Similarly, if it’s being done for prototyping, using more expensive but widely available components is generally a win too. I have worked on devices that used multi-GB SSDs for a firmware of less than 64MB — but asking for on-board flash for the firmware would have cost more than the extremely overprovisioned SSDs.

And in my opinion, if you want to have your own company, and are in it for the long run (i.e. not with the startup mentality of taking VC capital and getting acquired before even shipping), you definitely need someone to follow up on the business plan and the accounting.

So no, I don’t think that any one programmer, or a group of sole programmers, can change the world. There’s a lot more to building software than writing code. And a lot more to changing society than building software.

Consider this the reason why I will plonk-file any recruitment email that is looking for “rockstars” or “ninjas”. Not that I’m looking for a new gig as I type this, but I would at least give it some thought if someone were looking for a software mechanic (h/t @sysadmin1138).

We Need More Practical, Business-Oriented Open Source — Case Study: The Pizzeria CRM

I’m an open source developer, because I think that open source makes for safer, better software for the whole community of users. I also think that, by making more software available to a wider audience, we improve the quality, safety and security of every user out there, and as such I will always push for more, and more open, software. This is why I support the Public Money, Public Code campaign by the FSFE for opening up the software developed explicitly for public administrations.

But there is one space that I found quite lacking when it comes to open source: business-oriented software. The first obvious thing is the lack of good accounting software, as Jonathan has written extensively about, but there is more. When I was consulting as a roaming sysadmin (or, with a more buzzwordy, marketing-friendly term, a Managed Services Provider — MSP), a number of my customers relied heavily on nearly off-the-shelf software to actually run their business. And in at least a couple of cases, they commissioned custom-tailored software from me.

In a lot of cases, there isn’t really a good reason not to open-source this software: while it is required to run certain businesses, it is clearly not enough to run them. And yet there are very few examples of such software in the open, and that includes my own work: my customers didn’t really like the idea of releasing the software to others, even after I offered a discount on the development price.

I want to show the details of an example of one such custom software, something that, to give a name to it, would be a CRM (Customer Relationship Manager), that I built for a pizzeria in Italy. I won’t be opening the source code for it (though I wish I could do so), and I won’t be showing screenshots or provide the name of the actual place, instead referring to it as Pizza Planet.

This CRM (although the name sounds more professional than what it really was) was custom-designed to suit the work environment of the pizzeria — that is to say, I did whatever they asked me, despite it disagreeing with my sense of aesthetics and engineering. The basic idea was very simple: when a customer called, they wanted to know who the customer was even before picking up the phone — effectively inspecting the caller ID and connecting it to the easiest database editing facility I could write, so that they could attach a name to it, plus a freeform text box to write down addresses, notes, and preferences.

The reason why they called me to write this is that they originally bought a hardware PBX (for a single room pizzeria!) just so that a laptop could connect to it and use the Address Book functionality of the vendor. Except this functionality kept crashing, and after many weeks of back-and-forth with the headquarters in Japan, the integrator could not figure out how to get it to work.

As the pizzeria was wired with ISDN (legacy technology, heh) to be able to take at least two calls at the same time, the solution I came up with was building a simple “industrial” PC with an ISDN line card and Asterisk, getting them a standard SIP phone, and writing the “CRM” so that it would initiate a SIP connection to the same Asterisk server (but never answer calls). Once an inbound call arrived, it would look up whether there was an entry for the phone number in a simple storage layer, and display it in very large fonts, to be easily readable from around the kitchen.
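If I were sketching the lookup today, I would probably hook into Asterisk’s Manager Interface (AMI) rather than a never-answering SIP client. The Python below is illustrative only: the parsing follows AMI’s wire format (blocks of “Key: Value” lines), but the customer database is a stand-in dictionary, the names are invented, and the actual socket handling is omitted.

```python
"""Illustrative sketch of the caller-ID lookup, reimagined over Asterisk AMI.

Assumptions: AMI events arrive as "Key: Value" header blocks; CUSTOMERS
stands in for the real storage layer the pizzeria staff edited."""

CUSTOMERS = {
    "0451234567": "Mario Rossi, via Roma 1: no anchovies!",
}


def parse_ami_block(block: str) -> dict:
    """Parse one AMI event block into a dict of its headers."""
    headers = {}
    for line in block.splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip malformed lines without a colon
            headers[key.strip()] = value.strip()
    return headers


def lookup_caller(block: str) -> str:
    """Given a raw AMI event, return what the big-font display should show."""
    event = parse_ami_block(block)
    number = event.get("CallerIDNum", "")
    return CUSTOMERS.get(number, f"Unknown caller: {number or '?'}")
```

In practice you would filter for the new-channel events on the inbound trunk and push the resulting string to whatever renders the large-font display.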

As things moved and changed, a second pizzeria was opened, and it required a similar setup. Except that, ISDN being legacy technology, the provider was going to charge through the nose for connecting a new line. So we decided to set up a VoIP account instead, and rather than a PC on premises, Asterisk ran on a server (in close proximity to the VoIP provider). And since at that point the ISDN limit on concurrent calls no longer applied, the scope of the project expanded.

First of all, up to four calls could be queued, “your call is very important to us”-style. We briefly discussed letting callers reserve a spot and be called back, but at the time calls to mobile phones were still expensive enough that they wanted to avoid that. Instead, callers would get a simple message telling them to wait in line to reach the pizzeria. The CRM started showing the length of the queue (in a very clunky way), although it never showed the “next call” as the customer wanted (the relationship between the customer and the VoIP provider went south, and we all ended up withdrawing from the engagement).

Another feature we ended up implementing was opening hours: when a call arrived outside of the advertised opening hours, an announcement would play (recorded by a paid friend who used to act in theatre, and thus had good pronunciation).
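The opening-hours check itself is the trivial part; here is a hedged Python sketch of the decision between ringing the phone and playing the announcement. The hours are invented (a real pizzeria would likely split lunch and dinner service), and in the real setup the decision lived in the Asterisk dialplan rather than in application code.

```python
"""Sketch of the opening-hours gate (hours invented for illustration)."""

import datetime

# weekday -> (opening, closing); evenings only, closed on Mondays (weekday 0)
OPENING_HOURS = {
    day: (datetime.time(18, 0), datetime.time(23, 30))
    for day in (1, 2, 3, 4, 5, 6)
}


def is_open(now: datetime.datetime) -> bool:
    """True if the pizzeria is taking calls at the given moment."""
    hours = OPENING_HOURS.get(now.weekday())
    if hours is None:  # closing day
        return False
    opening, closing = hours
    return opening <= now.time() <= closing
```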

I’m fairly sure that none of this would actually comply with the new GDPR requirements. At the very least, the customers should be advised that their data (phone number, address) will be saved.

But why am I talking about this in the context of Open Source software? Well, while a lot of the components used in this setup were open source, or even Free Software, it still required a lot of integration to become usable. There’s no “turnkey pizzeria setup” — you can build the system from components, but you need not just an integrator, you need a full developer (or development team) to make sure all the components fit together.

I honestly wish I had open-sourced more of this. If I were to design it again right now, I would probably make sure there was a direct, real-time API between Asterisk and a Web-based CRM. It would definitely make it easier to secure the data for GDPR compliance. But there is more than just that: having an actual integrated, isolated system where you can make configuration changes gives the user (customer) the ability to set things up without having to know how the configuration files are structured.

Setting up Asterisk took me a week or two of reading through documentation and books on the topic, and a significant amount of experimentation with a VoIP number and a battery of testing SIM cards at home. To make the recordings work, I had to convert the files to G.729 beforehand, or the playback would use a significant amount of CPU transcoding on the fly.

But these are not unknown needs. There are plenty of restaurants (not necessarily pizza places) out there that probably need something like this. And indeed, services such as Deliveroo appear to now provide a similar all-in-one solution… which is good for restaurants in cities big enough to sustain Deliveroo, but probably not great for smaller restaurants in smaller cities, which would not have much of a chance of hiring developers to build such a system themselves.

So, rambling aside, I really wish we had more ready-to-install Open Source solutions for businesses (restaurants, hotels, … — I would like to add banks to that, but I know regulatory compliance is hard). I think these would have a very positive social impact on all those towns and cities that lack the critical mass of tech influence that comes with its own collection of mobile apps, for instance.

If you’re the kind of person who complains that startups only appear to want to solve problems in San Francisco, maybe think of what problems you can solve in and around your town or city.

Inside the life of an embedded developer

Now that the new ultrabook is working almost fine (I’m still tweaking the kernel to make sure it works as intended, especially for what concerns brightness, and tweaking the power usage, for once, to give me some more power), I think it’s time for me to get used to the new keyboard, which is definitely different from the one I was used to on the Dell, and more similar to the one I used on the Mac. To do so, I think I’ll resume writing a lot about different things that are not strictly related to what I’m doing at the moment, but rather random topics I know about. Here’s the first of the series.

So, you probably all know me as a Gentoo Linux developer; some of you will know me as a multimedia-related developer thanks to my work on xine, libav, and a few other projects; a few would know me as a Ruby and Rails developer, though that’s not something I usually boast about or am happy about. In general, I do my best not to work on Rails if I can avoid it. One of the things I’ve done over the years has been working on a few embedded projects, a couple of which have been Linux-related. Despite what some people have thought of me, I’m not an electrical engineer, so I’m afraid I haven’t had the hands-on experience that many other people I know have had, which, I have to say, sometimes bothers me a bit.

Anyway, the projects I worked on over time have never been huge, well-funded, or well-managed, which means that, despite Greg’s hope as expressed on the gentoo-dev mailing list, I never had a company train me in the legal landscape of Free Software licenses. Actually, in most cases I ended up being the one who had to explain to my manager how these licenses interact. And I take licenses very seriously, doing my best to make sure they are respected, even when the task falls on the shoulders of someone else — and that someone might not do everything by the book.

So, while I’m not a lawyer and I really don’t want to go near that option, I always take the chance to understand licenses correctly and, when my ass is on the line, I make sure to verify the licenses of what I’m using for my job. One such example is the project I’ve been working on since last March, for which I’ve prepared the firmware, based on Gentoo Linux. To make sure all the licenses were respected properly, I had to come up with the list of packages the firmware is composed of, and then verify the license of each. Doing this, I ended up finding a problem with PulseAudio due to it linking in the (GPL-only) GDBM library.

What happens is that if the company you’re working for has no experience with licenses, and/or does not want to pay lawyers to review what is being done, it’s extremely easy for mistakes to happen unless you are very careful. And in many cases, if you, as a developer, pay more attention to licenses than your manager does, that’s also seen as a negative, as it’s not how they’d like you to spend your time. Of course you can say that you shouldn’t be working for such a company then, but sometimes it’s not like you have tons of options.

But this is by far not the only problem. Sometimes what happens is a classic 801: instead of writing custom code for the embedded platform you’re developing for, the company wants you to re-use previous code, which has a high likelihood of being written in a language completely unsuitable for the embedded world: C++, Java, COBOL, PHP…

Speaking of which, here’s an anecdote from my previous experience in the field: at one point I was working on the firmware for an ARM7 device that had to run an application written in Java. Said application was originally written to use PostgreSQL and Tomcat, with a full-fledged browser, but had to run on a tiny resistive display with SQLite. Since at the time IcedTea was nowhere to be found, and the device wouldn’t have had enough memory for it anyway, the original implementation used a slightly patched GCJ to compile the application to ELF, and JNI hooks to link to SQLite. The time (and money, when my time was involved) spent making sure the system wouldn’t run out of memory would probably have sufficed to rewrite the whole thing in C. And before you ask, the code bases of the Tomcat and GCJ versions started drifting almost immediately, so code sharing was not a good option anyway.

Now, to finish this mostly anecdotal post of mine, I’d like to write a few words of commentary about embedded systems, systemd, and OpenRC. Whenever I hear one side or the other claiming that embedded people love their system, I think they don’t know how different embedded development can be from one company to the next; even between good companies there are such big variations that they stand lightyears apart from bad companies like some of the ones I described above. Both sides have good points for the embedded world; what you choose depends vastly on what you’re doing.

If memory and boot timing are your tightest constraints, then it’s very likely that you’re looking into systemd. If you don’t have those kinds of constraints, but you’re building a re-usable or highly customizable platform, it’s likely you’re not going to choose it. The reason? While cross-compilation shouldn’t be a problem if your development practices are half-decent, the reality is that in many places it is. What happens then is that you want to be able to make last-minute changes, especially in the boot process, for debugging purposes, and using shell scripts makes that vastly easier; for some people, doing it more easily is the target, rather than doing it right (and this is far from saying that I find the whole set of systemd ideas and designs “right”).

But this is a discussion that is more flamebait than anything else, and if I wanted to go into details I should probably spend more time on it than I’m comfortable doing now. In general, the only thing I’m afraid of is that too many people make assumptions about how people do things, or take for granted that companies, big and small, care about doing things “right” — in my experience, that’s not the case all that often.

FreeRADIUS and 802.1x authentication

Sometimes my work requires me to do stuff so interesting that I work overtime without telling anybody, just to make it work better, like I’ve done for Munin and Icinga — most of the time, though, my work is just boring and repetitive, but that’s okay: excitement goes a long way towards ruining a person’s health. Especially when said excitement hits you on the jawbone with a network where Ethernet ports are authenticated with 802.1x…

You might not know it, but there is a way to authenticate clients on actual wired Ethernet more or less the same way you do on WiFi. This is done through 802.1x and RADIUS. What is RADIUS? Well, it’s basically an authentication and accounting protocol, which I guess was originally developed for dial-up Internet access… remember those 56k modem days? No? Sigh, I feel old now. At any rate, for that reason, you’ll find the FreeRADIUS server in net-dialup/freeradius as of this moment… it really should be moved to sys-auth/freeradius-server, but I don’t want to bother with that right now.

So what happens during 802.1x is simple: the switch (called the authenticator) acts as a proxy between the client and the RADIUS server, passing through the authentication messages, in most cases EAP-based. Until authentication succeeds, all other packets sent over the network are simply dropped. Depending on how you set up the network and the capabilities of your switch, you can make it so that if authentication does not happen within a given time the port falls back to a guest VLAN, or you just keep dropping packets while waiting for authentication. Unfortunately, if you go with a default DHCP configuration, with default timeouts, it’s likely that you won’t get any network at all, which is the problem we hit with our device (for comparison, OS X had the same issue, and you had to renew the DHCP lease after about a minute from connecting the Ethernet cable).

So I bought the cheapest 802.1x-capable switch I could find on Amazon (an eight-port Cisco SG-200, if you’re interested) and started setting it up to simulate an 802.1x network — this switch does not support the guest VLAN as far as I can tell, which actually bothers me quite a bit, but on the whole it looks okay for our testing. I actually found out, after the whole work was done, that it was technically possible to authenticate against a local database instead of having to deal with an external RADIUS server, but I’ve not been able to get it running that way, so be it.

For what concerns the main topic of this discussion from the Gentoo point of view, I was quite lucky actually; there is good documentation for it on nothing less than TLDP — it’s a 2004 howto, but it’s still almost perfect. The only difference in syntax for FreeRADIUS’s configuration is the way the password is defined in the users configuration file. I’ve not checked the client-side configuration, of course, since that is probably completely out of date nowadays thanks to WPA Supplicant and NetworkManager.
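For reference, the change is in how the “known good” password is declared as a check item; a hedged example (the user name is invented), with the howto-era syntax shown commented out:

```
# Modern FreeRADIUS syntax in the users file: declare the known-good
# password as a Cleartext-Password check item
testuser  Cleartext-Password := "s3cret"

# The 2004 howto would instead have used the older, deprecated form:
# testuser  Auth-Type := Local, User-Password == "s3cret"
```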

The big hurdle was getting FreeRADIUS into decent shape: simply emerging it and trying to start it would fail silently, so I had to kick it into submission, hard. It so happened that a new version of the server had just been released in September, so I decided to update to that version and get it working as a proper ebuild. The new ebuild in tree should work quite nicely; the problem is that if you look at it, it’s hideous. The big issue is that their build system is a complete fsck-up, and you have to resort to deleting whole subdirectories to configure the features to build — and it has quite a few of them, with over half a dozen database backends and so many integrations that dealing with all the optional dependencies is really not funny.

If you used the older ebuilds, or compare the new one to the old ones, you can probably notice that I dropped a number of USE flags, especially those that were so specific to FreeRADIUS that they had an fr- prefix. This is because I’ve followed my usual general idea: USE flags are great if you’re turning on or off features that are heavy, or that have external dependencies, but if they just enable or disable code paths that are barely noticeable, they only add to the noise. For this reason there is now only one such USE flag left, pcap (which is actually a global flag with a more detailed description).

Also, when you build with SSL (which you want when doing 802.1x!) you need a CA to sign the users’ certificates. While you can set up your own CA relatively easily, like you already do for OpenVPN, I’ve made it much easier by wiring the originally-provided script to the package’s --config option (so you just need to run emerge --config freeradius for it to work).

As I said, the build system is extremely bad, to the point that they actually commit the whole set of autotools-generated files to their Git repository, which is not a good idea. At least this time around I was able to empty the files directory, as all the patches are handled as tarballed patchsets on my devspace; if you want to see the patches in a friendlier way, I also keep a copy of the repository on Gentoo’s GitHub account — where you can also find a number of other projects I patched the same way, including Munin.

Due to security issues, the new version of FreeRADIUS I put in tree is now stable on the two arches that were stable before, and all the old versions are gone, together with their patches (it cleaned up nicely), for the love of our rsyncs. Hopefully that doesn’t screw with anybody’s plans — if somebody has a problem with my changes, feel free to prod me.

How far down can you strip a Gentoo system?

In my previous post I noted that I’m working on a project here in Los Angeles, the details of which I don’t want to get much into. What I will tell you about it, for now, is that it’s a device, and it’s running Gentoo as part of its firmware.

You can also guess, if you know us, that since both me and Luca are involved, there is some kind of multimedia work going on.

I came here to strip down the production firmware as much as possible, so that the image could be as small as possible while still allowing all the features we need on the device. This is not a new task for me: I’ve done my best to strip down my own router so that it would require the least amount of space possible, and I’ve also built some embedded firmwares based on Gentoo before.

The first obstacle, if you want to reduce the size of Gentoo, is almost certainly the set of init scripts that come with OpenRC; for a number of reasons, the init scripts for things like loadkeys and hwclock are not installed by the packages that install the commands (sys-apps/kbd and sys-apps/util-linux respectively), but rather by OpenRC itself. They are also both enabled by default, which is okay for a general desktop system, but fails badly on embedded systems, especially when they don’t even have a clock.
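Since the image is assembled offline, dropping the default-enabled scripts amounts to removing their symlinks under OpenRC’s /etc/runlevels layout. A sketch, with the image root as a made-up scratch path here (on a live target you would just run rc-update del instead):

```shell
# ROOT is the firmware image being assembled (scratch path for illustration)
ROOT="${ROOT:-/tmp/firmware-image}"
mkdir -p "$ROOT/etc/runlevels/boot"

# pretend OpenRC's default install created these runlevel links:
touch "$ROOT/etc/runlevels/boot/hwclock" "$ROOT/etc/runlevels/boot/loadkeys"

# equivalent of `rc-update del hwclock boot; rc-update del loadkeys boot`:
rm -f "$ROOT/etc/runlevels/boot/hwclock" "$ROOT/etc/runlevels/boot/loadkeys"
```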

Then you have to deal with the insane number of packages that form our base system; without going into the details of having man, tar and so on as part of the untouchable base system (which luckily is much easier to override with the new Portage 2.2, even if it insists on bothering you about an overridden system set), and focusing on what you’re going to use to boot the system, OpenRC currently requires a mixture of packages including coreutils, which (yes, the which command: a single program that lives in its own package… for ESR’s sake, why has it not been implemented within coreutils yet?), grep, findutils, gawk and sed (seriously, four packages for these? I mean, I know they are more widely used than coreutils, as they are often used on non-Linux operating systems, but do they really deserve their own package, each of them?).

The most irritating part nowadays, for me, is the psmisc vs procps battle: with the latter now maintained by Debian, I wonder why they haven’t merged the former yet, given that they implement utilities for the same areas of the system… of course one could wonder why they are not all part of util-linux anyway — yes, I know that Debian also supports GNU/kFreeBSD with their package. At any rate, there is another consideration to be made: only the newer procps allows you to drop support for the ncurses library (earlier versions depended on it forcefully), and the same is still true for psmisc.

For what it’s worth, what I decided to do was to replace as much as possible with just busybox, including the troublesome psmisc, so that I could drop ncurses from our firmware altogether — interestingly enough, OpenRC depends explicitly on psmisc even though it is not bringing in most of the rest of its dependencies.

Public Service Announcement: if you're writing an init script and you're tempted to use which, please replace it with command -v instead (I originally said type -p here) … depending on an extra program when sh already has a built-in for the job is bad, 'mkay?

*Edit: people made me notice that type -p is not in POSIX so it does not work in Dash. I’m afraid that my only trials to run OpenRC without bash before have used BusyBox, which supports it quite fine; the option to use command -v is valid though, thanks Tim.*
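A minimal POSIX-sh sketch of the difference — command -v is a shell built-in, so no extra process is spawned:

```shell
#!/bin/sh
# Probe for a program using the built-in instead of which(1).
# sh itself is the probe target here, since it is always present.
if command -v sh >/dev/null 2>&1; then
    echo "sh found"
else
    echo "sh missing"
fi
```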

Oh right, of course to replace coreutils usage with BusyBox you have to be able to drop it out of the dependency tree. Sounds easy, doesn't it? Well, the problem is that even if you're not dealing with PulseAudio (which we are), which brings in eselect-esd, as of yesterday at least every package that could use Python would bring eselect-python in — even when you were setting USE=-python!

Fortunately, after I bitched a bit about it to Luca, he made a change which at least solves the issue at hand until the stupid eclass is gone from the tree. And yes, I'm no longer even trying to consider it sane; the code is just so hairy and so cryptic that you can't make heads or tails of it.

There are more issues with a project like this that I can discuss, outside of those parts that are "trade secret" (or rather business logic), so expect to hear a bit more about it before it's announced in full. And many of these have to do with how easy (or not) it is to use Gentoo as a basis for device firmware.

My problem with networking

After my two-parter on networking, IPv6 and wireless, I got a few questions on why I don't just use a cable connection rather than dealing with wireless bridges. The answer is, unfortunately, that I don't have a clean way to run a cable from the point where my ADSL is to where my office is, on the floor above.

This is mostly due to bad wiring in the house: too little space to get cables through, and too many cables already in there. One of the projects we have going on in the house now (we've been working through a relatively long list of chores, since neither I nor my mother foresees leaving this house soon) is to rewire the burglar alarm system, in which case I should get more space for my cables — modern burglar alarms do not require the equivalent of four Ethernet cables running throughout the house.

Unfortunately that is not going to be the end of the trouble. While I might be able to get the one cable running from my office to the basement (where the cable distribution ties up) and from there to the hallway (where the ADSL is), I'm not sure how many metres of cable that would be. When I wired cat5e cable between my office and bedroom (for the AppleTV to stream cleanly), I already had to sacrifice Gigabit speed. And I'm not even sure whether passing the cable through there will let the signal through cleanly, as it'll be running together with the mains wiring — the house is almost thirty years old, and I have no chance of getting a separate conduit for the data cable away from the power; I'm lucky enough that the satellite cable fits. And I should shorten that.

To be honest, I already knew a way to pass a cable around my house to reach here. The problem with that is that it would require me to take the widest route possible: while my office sits directly on top of the hallway (without a direct connection — that would have been too easy), to get from one to the other, without the alarm rewiring, I would have to get to the opposite side of the house, bring the cable upstairs and then back, using a mixture of passageways designed for telephone, power and aerial wiring, and crawling outside the wall for a few metres as well.

The problem with that solution, besides the huge amount of time it would require me to invest in it, is that the total cable length is almost certainly over a hundred metres, which is the official physical limit for cat5e Ethernet cabling. Of course many people would insist on telling me that "it's okay, there's a high chance it would still work" … sure, and what if it doesn't? I would have to actually make a hole in the wall in one place, then spend more than a day on it (I'm sure I wouldn't be able to do this in just a day, having dealt with my wiring before), with the risk of not getting a clear enough signal for the connection to be established. No thanks.

I also considered the option of going fibre optic. I have no clue about the cabling itself, and I know it requires even more specialised tools than RJ45 plugs to terminate, but I have looked at the prices of the hardware capable of converting the signal between fibre and good old RJ45 cabling… and it's way out of my range.

Anyway, back on topic of the current plan for getting the cable running. As I said, the current "cable hub" is in the basement, which is mostly used as a storage room for my mother's stuff. She's also trying to clean that up, so in a (realistically, remote) future I might actually move most of my hardware down there rather than keeping it in the office — namely Yamato, the router itself (forwarding the ADSL connection rather than the whole network) and Archer, the NAS. Our basement is not prone to floods, and is generally cool in the summer, definitely cooler than my office. Unfortunately, for that to work out I'll probably need a real rack and rackmount chassis, neither of which is really cheap.

Unfortunately, with that being, as I said, in the future, if I were to pass the cable next month and the signal weren't strong enough, the only option I'd have would be to add a repeater. Adding a repeater there, though, is troublesome. As I said in the other posts, and before as well, my area is plagued by a very bad power supply situation — to the point that I have four UPS units in the house, for a total of 3750 VA (which is, technically, probably more than the power provided by the supplier). I don't really like the idea of having to make room for yet another UPS unit just for a repeater; even less so considering that the cables would end up over my head, in the stairs' passage (yes, it is a stupid position for a control panel in the first place), and while most repeaters seem to be wall-mountable, UPS units are a different story.

So the only solution I can think of for such a situation would be to add a PoE-powered repeater there, if needed, feeding it power from a PoE switch placed either in my office (unlikely) or in the hallway near the router (most likely), behind the UPS. Once again, the deciding factor here is cost.

Honestly, even though I decided not to get an office after seeing the costs jump higher and higher – having an office would increase my deductibles of course, but between renting the office, daily transportation, twice the power bill, and so on and so forth, it's not the taxes that worry me – I wonder whether keeping on working from home is really as cheap as I projected it to be.

Sigh. I guess it’s more paid work, less free time next year as well.

Who does the anti-corporatism feeling serve?

I have, as a Free Software developer and enthusiast, a particular dislike for the anti-corporate websites, and the general anti-corporate feeling that seems to transpire from some of the communities that form around so-called “Free Software Advocates”. You probably know that already if you read me frequently.

In the past few days I have again been in open contrast with those trying to spread "hyperboles" which I'd sincerely call "sensationalistic name-calling". Similarly to another recent discussion, this started with one statement by Carlo Piana, who asked people to stop calling "piracy" what is actually unauthorised copying. I do agree with his rationale that it shouldn't be called that way, but I'm a pragmatist: I live in this world, and like it or not, the word "piracy" as a synonym for "unauthorised copying" is an unfortunate reality. Given that, you have two choices:

  • keep trying to get people to use the "right term" over and over — the so-called GNU/Linux method;
  • use their own weapons against them and (as I suggested) call piracy the disregard for copyleft licenses like the GNU GPL (note my use of words here: copyleft licenses; disregarding MIT and BSD is definitely much harder and yet they are Free Software licenses).

As I said, I'm a pragmatist, so it's nothing new that I'd go with the second choice. But too many people either still think they can change the world with negative activism, or at least pretend to, and suggested calling everything proprietary "piracy"… facepalm moment, gals and guys.

I still think that most, if not all, of the people involved in anti-corporatism who pretend to care for Free Software have no idea what kind of effort is needed to create and maintain Free Software. Sure, they might not want to be paid for what they do, and they might have a different kind of job, so that they can do their work without "dirtying their hands" with proprietary software and proprietary vendors; but most of us write software for a living, and usually the money comes not from writing pure Free Software alone — you rather have to compromise.

This does not mean that there is no business case for Free Software; we do know that a number of companies out there do mainly Free Software and can make money and pay developers to do their work, but they don't make enough money to pay all of the people out there without at least partly compromising, leaving part of their business logic out of Free Software. Nokia and Intel, Sun before and Oracle both before and now, Canonical and RedHat, SuSE and even Apple… they all contribute a lot to Free Software, and yet their main businesses vary widely, in only a couple of cases being mainly Free Software! Google, Yahoo and Facebook also work on Free Software, publish new projects, and pay for maintenance of existing ones… yet for the most part they are not even software houses (or at least did not start as such).

If Free Software required people not to be employed by companies producing any kind of proprietary software, the number of developers would be much, much smaller. Not everyone lives alone; many have a family to support, some have further complications, and most don't live like a hippie the way Richard Stallman seems happy to. So what's the solution? A few people, including the FSF last I checked, insist that if Free Software won't pay for your living you can get another job, or settle for a lower wage… but again, that is not always possible!

Do these activists put their money where their mouth is? I sincerely doubt it, as they most likely have no idea how people sustain themselves in this environment while still keeping up their work on Free Software. I'll offer myself as an example, but I'm sure there are situations more complex than mine (and quite a few that are easier, but that's beside the point).

I don’t have to pay a rent, I’m lucky, but I’m still not working for myself alone, as I live with my mother and she’s not working. I have bills to pay each months, unhelped, that comprise phones, Internet and power, all three of which are needed for my Free Software work, as well as my “daily job” and my general living. I have obviously to buy food and general home supplies, and at the same time I have hardware to maintain, again for all the three cases. I have had a few health troubles, and I still have to both keep myself in check and be ready in case something else happens to me. I could do without entertainment expenses, but that would most likely burn myself out so I count those as an actual need as well.

In all of this, how much of the money I get derives directly from Free Software? I'll be honest: over the past five years, donations would probably have covered three or four months of basic needs, without any savings. And mostly, that was covered by a handful of regular contributors. And before you tell me I should feel ashamed for having said this, I wish to say that I'm still very thankful to everybody who ever sent me something, be it a five-euro donation, a flattr click, a book, a hardware component, or a more substantial money donation. Thank you all! Those are the things that let me keep doing what I do, as I feel it's important to somebody.

I have written a few articles for LWN, but even that only covered part of what I needed; the main reason is that, being a non-native speaker, the time I need to write a proper article is disproportionate. Again, this is not to say that LWN does not pay properly – they actually pay nicely – it's my own problem that I can't make a proper living from it. I actually tried finding a magazine in Italy I could be paid to write for, getting rid of the language barrier, but the only one that ever published something (and the first article was an unpaid try) was the Italian edition of Linux Journal, which stopped publishing a couple of months later. Oh, and by the way, this kind of work is also considered "proprietary work", as articles, and most books, are as far as I know not usually licensed under Creative Commons or other Free licenses.

So if my pure Free Software work is not paying the bills, and neither is my writing about it, what am I to do? I considered for a while getting a job at the nearest Mediaworld (the Italian name of the German chain Mediamarkt), selling consumer electronics. I could do that, but then I probably wouldn't be willing to contribute to Free Software in my spare time. What I actually do instead is work for companies that either make proprietary software (web software, firmware, or whatever else) or that commercialise Free Software (sort of — that's the case for LScube for the most part). When I do, though, it often ends up with me working at least on the side for Gentoo, or Free Software in general.

I have already described my method a few months ago; I would like to add that a lot of my work on Ruby ebuilds in Portage has been done on paid time for some of my jobs, and the presence of gdbserver in the tree is due to a customer of mine having migrated to a Gentoo-based build system (to replace buildroot), with gdbserver to be loaded into their firmware. A lot of the documentation I wrote is also related to that, as is my maintenance of Amazon EC2 software, …

And before this can be mistaken… I have received more than a few job offers to do Free Software work. Most I had to turn down, either because they required me to go too far out of my way, or because of bad timing (I'm even currently in the middle of something). I also turned down Google, repeatedly, because I have no intention of ever moving to the USA, because of my health troubles. The best offer I had was from a very well known Rails-based hosting company; I was actually very interested in the position and would have accepted even a lower wage than what I was offered, especially because Gentoo was very much part of the responsibilities, but they never followed through; twice.

So anyway, what has all of this to do with the original statement, and my problem with anti-corporatism? Well, as I said, most of my customers are using Free Software to develop appliances and software whose business logic is still proprietary. It's better than nothing, in that they are still giving me money to keep doing what I've done for the past five years and counting. But at the same time, they are wary of Free Software: if they were to think (as a few already do) that Free Software is either too amateurish, or trying to undermine their very existence entirely, they might decide that their money should not be spent furthering those ideas.

And nothing is more dangerous than that, because if there is something that Free Software in general needs more of, it is competent people being paid to work mainly on Free Software. And the money is often in the hands of the very companies you're scaring away with your "fight da man" attitude; the same companies at which Microsoft aimed their best FUD regarding Linux and the Free Software and Open Source movements. I'd be surprised if there is nobody in Microsoft's offices right now gloating at how the so-called "Advocates" are doing their best to isolate Free Software from the money it needs.

Ah yes, I was forgetting to say: if you don't think that money is important for Free Software… take a hike and don't even try commenting; I will be deleting such inane and naïve comments.

Health, accounting and backups

For those who said that I have anger management issues regarding last week's post, I'd like to point out that what I had was actually a nervous breakdown, not strictly (but partly) related to Gentoo.

Since work, personal life, Gentoo and (the last straw) taxes all came to a head this week, I ended up having to take a break from a lot of stuff; this included putting all kinds of work on hold for the week, and spending most of my time making sure I have proper accounting, both for my freelance activity and for home expenses (this is getting particularly important because I'm almost living alone – even if technically I am not – and thus have to make sure that everything fits into the budget). Thankfully, GnuCash provides almost all the features I need. I ended up entering all the accounting information I had available, dating back to January 1st, 2009 (my credit card company's customer service site hasn't worked for the past two weeks — since it's a subsidiary of my own bank, I was able to get the most recent statements through them, but not the full archive of statements since the cards were issued, which is a problem for me), and trying to get some data out of it.

Unfortunately, it seems that while GnuCash already provides a number of reports, it does not have the kind of reports I need, such as "how much money did the invoices from 2009 consist of?" (which is important to make sure I don't go over the limit I'm given), or "how much money did I waste on credit card interest?"… I'll have to check the documentation and learn whether I can build some customised reports that produce the kind of data I need. And maybe there's a way to set the payment terms I have with a client of mine (30 days from the end of the month the invoice was issued in… which means if I issue the invoice tomorrow, I'll be paid on May 1st).

On a different note, picking up from Klausman's post, I decided to also fix up my backup system, which was previously based on single snapshots of the system on external disks and USB sticks, and moved to a single rsnapshot setup backing everything up to one external disk: the local system, the router, the iMac, the two remote servers, and so on. This worked out fine when I tried the previous eSATA controller again, but unfortunately it failed once more (d'oh!), so I fell back to FireWire 400 — but that's way too slow for rsnapshot to do a full backup hourly. I'm thus trying to find a new setup for the external disk. I'm unsure whether to look for a FireWire 800 card or a new eSATA controller. I'm not sure about Linux's support for the former, though; I know that FireWire used to be poorly maintained, so I'm afraid it might just fall back to FireWire 400 speeds, which would be pointless. I'm not sure about eSATA because I'm afraid the problem might not be the controller's fault but rather a problem with the (three different kinds of) disks or the cables; and if the problem is in the controller, I'm worried about the chip on it: the one I have here is a JMicron-based controller, but with a memory chip that is not flashable with the JMicron-provided ROM (and I think there might be a fix in there for my problem) nor with flashrom as it is now.
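For reference, an rsnapshot setup like the one described boils down to a configuration along these lines (a hedged sketch with made-up paths and hostnames; note that rsnapshot.conf requires tabs, not spaces, between fields):

```
# /etc/rsnapshot.conf (fragment)
snapshot_root	/mnt/backup/
retain	hourly	6
retain	daily	7
retain	weekly	4
# local system
backup	/etc/	localhost/
backup	/home/	localhost/
# remote hosts over rsync+ssh
backup	root@router:/etc/	router/
backup	root@server:/etc/	server/
```

A cron entry such as `0 * * * * rsnapshot hourly` then drives the hourly rotation.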

So if you have an idea to suggest about this, I'd be happy to hear it; right now the only possibly interesting (price/features) card I found is from Alternate (business-to-business): a "HighPoint RocketRAID 1742", which is PCI-based (I have a free PCI slot right now, and could move it to a different box that has no PCI-E if needed) and costs around €100. I'm not sure about driver support for it, though, so if somebody has experience with it, please let me know. Interestingly enough, my two main suppliers in Italy seem not to carry any eSATA cards at all, and of course high-grade, dependable controllers aren't found at the nearest Saturn or Mediamarkt (actually, Mediaworld here, but it's the very same thing).

Anyway, after this post I’m finally back to work on my job.

Remote debugging with GDB; part 2: GDB

When I first wrote the original part 1 of this tutorial, I was working on a project for a customer of mine; time passed without me writing the second part, and it stayed that way for over a year. Talk about long-term commitment. On the other hand, since this is going to be used almost literally as documentation for the same customer (a different project, though), I'm resuming the work by writing this.

On the bright side, at least one of the Gentoo changes that I have described as desired in the previous post is now solved: we have a dev-util/gdbserver ebuild in the tree (maintained by me, written because of the current project, so can be said to be contributed by the current customer of mine). So from the device side, you only need to install that to have it ready for debugging.

On the build-machine side, you need the cross-debugger for the target architecture and system; you can install it with crossdev just as you do the cross-linker and cross-compiler, but with a twist: for GDB 7.0 or later you need to build it with expat support (it seems they now use XML as part of the protocol spoken between the client/master and the server/slave). So, for instance, you run this command: USE=expat crossdev -t $CHOST --ex-gdb.

If you’re not using Gentoo on either side, then you’re going to have to deal with both builds by hand… tough luck. On the other hand, the gdbserver executable is very small in size, on the latest version it’s just a bit over 100KiB, so building it takes very little time and takes up very little space even on embedded Linux systems.

On the remote system, gdbserver supports three modes of operation: starting a single program, attaching to an already-running process, and acting as a multi-execution daemon. The latter is my favourite: gdbserver behaves like a standard Unix daemon, waiting for incoming requests and handling them one by one for as long as needed (which makes me consider writing an init script for gdbserver on not-too-embedded debug and test systems). In all three cases it needs a communication medium to work; on modern, less embedded systems that medium is almost always a TCP/IP connection (just give it an ip:port address), but the software supports using a serial port for the task as well. The gdbserver executable is also very small — a bit over 100KiB in the latest version — so it builds quickly and takes up very little space even on embedded Linux systems, and uploading it to the target is fast.
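For completeness, the other two modes look like this (illustrative program path and PID):

```
arm # gdbserver 192.168.0.38:12345 /bin/mytool     # start a single program
arm # gdbserver --attach 192.168.0.38:12345 1234   # attach to running PID 1234
```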

To start the daemon-like instance, just use the --multi option:

arm # gdbserver --multi 192.168.0.38:12345

Now you can connect to the server through the cross-debugger built earlier:

% arm-linux-gnu-gdb
(gdb) target extended-remote 192.168.0.38:12345

This drops us inside the server, or rather inside the target board, ready to debug. So let’s upload this crashy program:

% cat crash.c
#include <string.h>

int main() {
        char *str = NULL;
        return strcmp(str, "foo");
}
% arm-linux-gnu-gcc -ggdb crash.c -o crash
% arm-linux-gnu-strip crash -o crash_s
% wput crash_s ...

At this point it is worth noting one very important thing: we're building the program with the -ggdb option to produce debug information, so that GDB can tell which variable is which and which address corresponds to which function; this is essential to get meaningful backtraces and is even more important if you plan on using further features such as break- and watch-points. You could technically even use -g3 to embed the source files into the final executable, which is particularly useful if you plan on having multiple firmware images around, but that doesn't always work correctly, so leave it at that for now.

But even though we need the debug information in the compiled executable object, we don't need the (bigger) binary to be present on the target system: the debug information consists only of extra sections, and does not add or remove any code from the runtime-executable parts. This is why -ggdb does not make your system slower, and why stripping an executable of its debug information is enough to make it slimmer. At the same time, the extra sections added by -ggdb and removed by strip are never mapped from disk into memory anyway, at least when executing through ld.so, so there is no extra memory usage from having non-stripped binaries. The only difference is the size of the file, which might still matter if you're parsing it for some reason, and might leave it fragmented into multiple pieces on disk.
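A quick way to see this for yourself, using a native toolchain for illustration (a sketch; substitute your cross tools and file names):

```shell
# Build a tiny binary with debug info, strip a copy, and compare:
# the stripped file is smaller, but the code sections are identical.
printf 'int main(void){return 0;}\n' > tiny.c
gcc -ggdb tiny.c -o tiny
strip tiny -o tiny_s
ls -l tiny tiny_s                  # file sizes differ
readelf -S tiny | grep -c debug    # counts the .debug_* sections
readelf -S tiny_s | grep -c debug  # prints 0: all stripped away
```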

Anyway, since the debug information, as just stated, is neither needed nor used at runtime, there is no reason for it to stay on the target: the gdbserver program just executes whatever the master GDB instance asks it to, and has no need to know anything about the executable file or its debugging information. So you can just copy a stripped version of the file and upload it to the target; you can then use it normally, or run it through gdbserver.

After uploading the file you need to set up GDB to run the correct executable, and to load the symbols from the local copy of it:

(gdb) symbol-file crash
(gdb) set remote exec-file /uploaded/to/crash_s

and you’re completely set. You can now use the standard GDB commands (break, run, handle, x, kill, …) like you were running the debugger on the target itself! The gdbserver program will take the action, and the main GDB instance will direct it as per your instructions.

Have fun remote debugging your code!