Falsehoods in Tutorials: Database Schemas

It’s quite possible that a number of people reading this post have already stumbled across a few of the “Falsehoods Programmers Believe…” documents. If not, there appears to be a collection of them, although I have honestly only read through the ones about names, addresses, and time. The short version of all of this is that interfacing software with reality is complicated, and in many cases programmers don’t know just how complicated it is. And sometimes this effectively turns into institutional xenophobia.

I have already mused that tutorials and documentation are partially to blame, by spreading code memes and reality-hostile simplifications. But now I have some more evidence of this being the case, without building an explicit strawman like I did last time, and that brings me to another interesting point, regarding the rising importance of getting things right beforehand, as the cost of correcting these mistakes keeps rising.

You see, with lockdown giving us a lot of spare time, I spent some of it on artsy projects and electronics, while my wife spent it learning about programming, Python, and more recently databases. She found a set of tutorials on YouTube that explain the basics of what a database is, and how SQL works. And they were full of those falsehoods I just linked above.

The tutorials use what I guess is a fairly common example of using a database for employees, customers, and branches of a company. And the example includes fields for first name and last name. Which frankly is a terrible mistake — with very few exceptions (banks and airlines among them), there’s no need to distinguish between the components of a name; a simple full name field works just as well, and doesn’t end up causing headaches for people from cultures that don’t split names the same way. The fact that I recently ranted about this on Twitter against VirusTotal is not totally coincidental.

It goes a bit beyond that though, by trying to explain ON DELETE triggers by attaching them to the deletion of an employee from the database. Now, I’m not a GDPR lawyer, but it’s my understanding that employee rosters are one of those things that you’re allowed to keep for essential business needs — and you most likely don’t want to ever delete employees, their commission payment history, or their tax records.

I do understand that a lot of tutorials need to use simple examples, as setting up a proper HR-compatible database would probably take a lot more time, particularly with compartmentalizing information so that your random sales analyst doesn’t have access to the home phone numbers of their colleagues.

I have no experience with designing employee-related database schemas, so I don’t really want to dig myself into a hole I can’t come out of, by running with this example. I do have experience with designing database schemas for product inventory, though, so I will run with that example. I think it was a different tutorial that was talking about those, but I’ll admit I’m not sure, because I didn’t pay too much attention as I was getting annoyed at the quality.

So this other tutorial focused on products, orders, and sales totals — its schema was naïve and not the type of database any real order history system would use — notably, it assumed that an order would just need to connect with the products, with the price attached to the product row. In truth, most databases like those need to attach the price for which an item was sold to the order — because products change prices over time.

And at the same time, it’s fairly common to want to keep the history of price changes for an item, which includes the ability to pre-approve time-limited discounts, so a table of products is fairly unlikely to have the price for each item as a column. Instead, I’ve commonly seen these databases have a prices table that references the items, and provides start and end dates for each price. This way, it’s possible to know at any time what the “valid price” for an item is. And as some of my former customers had to learn on their own, it’s also important to separately track which VAT rate applies at which time.

ER diagram showing an example of a more realistic shop database.

There are five tables. * indicates the primary key.

Order (*ID, Customer_ID, Billing_Address, Shipping_Address)
Order_Products(*Order_ID, *Product_ID, Gross_Price, VAT_Rate)
Product(*ID, Name)
Product_VAT(*Product_ID, *Start_Date, End_Date, VAT_Rate)
Product_Price(*Product_ID, *Start_Date, End_Date, Gross_Price)
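
For the sake of concreteness, here is a minimal DDL sketch of the diagram above. The column types are my own guesses (the diagram only names the fields), and Order needs quoting because ORDER is a reserved word in SQL:

CREATE TABLE Product (
  ID   INTEGER PRIMARY KEY,
  Name TEXT NOT NULL
);

CREATE TABLE Product_Price (
  Product_ID  INTEGER NOT NULL REFERENCES Product(ID),
  Start_Date  DATE NOT NULL,
  End_Date    DATE NOT NULL,          -- use a far-future date when no change is planned
  Gross_Price NUMERIC(10, 2) NOT NULL,
  PRIMARY KEY (Product_ID, Start_Date)
);

CREATE TABLE Product_VAT (
  Product_ID INTEGER NOT NULL REFERENCES Product(ID),
  Start_Date DATE NOT NULL,
  End_Date   DATE NOT NULL,
  VAT_Rate   NUMERIC(4, 2) NOT NULL,  -- percentage, e.g. 22.00
  PRIMARY KEY (Product_ID, Start_Date)
);

-- ORDER is a reserved word, hence the quoting; the Customer table itself
-- is out of scope for the diagram.
CREATE TABLE "Order" (
  ID               INTEGER PRIMARY KEY,
  Customer_ID      INTEGER NOT NULL,
  Billing_Address  TEXT NOT NULL,
  Shipping_Address TEXT NOT NULL
);

-- The price and VAT rate actually charged are frozen onto the order line,
-- so later changes to the product don't rewrite past orders.
CREATE TABLE Order_Products (
  Order_ID    INTEGER NOT NULL REFERENCES "Order"(ID),
  Product_ID  INTEGER NOT NULL REFERENCES Product(ID),
  Gross_Price NUMERIC(10, 2) NOT NULL,
  VAT_Rate    NUMERIC(4, 2) NOT NULL,
  PRIMARY KEY (Order_ID, Product_ID)
);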

This is again fairly simplified. Most of the shopping systems you might encounter use what might appear to be redundant structures, particularly when you’re taught that SQL requires databases in normal form, but that’s just in theory — practice is different. Significantly so at times.

Among other things, if you have an online shop that caters to multiple countries within the European Union, then your table holding products’ VAT information might need to be extended to include the country for each one of them. Conversely, if you are limited to accounting for VAT in a single country you may be able to reduce this to VAT categories — but keep in mind that products can and do change VAT categories over time.
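
As a hedged illustration of the first option (the Country column and its two-letter ISO code format are my own assumptions, and the rest of the post sticks with the single-country version), the VAT table from the diagram above would simply grow one more key column:

Product_VAT(*Product_ID, *Country, *Start_Date, End_Date, VAT_Rate)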

Some people might start wondering now why you would go through this much trouble for an online store, which only needs to know what the price is right now. That’s a good point, if you happen to have hundreds of megabytes of database to go through just to query the current price of a product. In the example above you would probably need a query such as

SELECT Product.ID, Product.Name, Product_Price.Gross_Price, Product_VAT.VAT_Rate
FROM Product
  LEFT JOIN Product_Price ON Product_Price.Product_ID = Product.ID
  LEFT JOIN Product_VAT ON Product_VAT.Product_ID = Product.ID
WHERE
  Product.ID = '{whatever}' AND
  Product_Price.Start_Date <= CURRENT_DATE AND
  Product_Price.End_Date > CURRENT_DATE AND
  Product_VAT.Start_Date <= CURRENT_DATE AND
  Product_VAT.End_Date > CURRENT_DATE;

It sounds like an expensive query, doesn’t it? And it seems silly to go and scan the price and VAT tables over and over throughout the same day. It also might be entirely incorrect, depending on its placement — I do not know the rules of billing, but it may very well be possible for an order to be placed close to a VAT change boundary, in which case the customer could have to pay the gross price from the time of order, but the VAT from shipping time!
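
To make the earlier point concrete, that the price an item was sold for belongs on the order (and setting aside the question of whether VAT should be assessed at order or shipping time), here is a hedged sketch of writing an order line with the currently valid price and VAT rate frozen in; the order ID 42 is a placeholder of my own:

-- Copy the price and VAT rate valid right now onto the order line, so
-- later price or VAT changes don't affect past orders.
INSERT INTO Order_Products (Order_ID, Product_ID, Gross_Price, VAT_Rate)
SELECT 42, Product.ID, Product_Price.Gross_Price, Product_VAT.VAT_Rate
FROM Product
  JOIN Product_Price ON Product_Price.Product_ID = Product.ID
    AND Product_Price.Start_Date <= CURRENT_DATE
    AND Product_Price.End_Date > CURRENT_DATE
  JOIN Product_VAT ON Product_VAT.Product_ID = Product.ID
    AND Product_VAT.Start_Date <= CURRENT_DATE
    AND Product_VAT.End_Date > CURRENT_DATE
WHERE Product.ID = '{whatever}';

Freezing the values at order time sidesteps the history problem, but the storefront still has to answer “what does this cost right now?” on every page view, which is where the cost of the query above starts to matter.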

So what you often end up using for online ordering is a different database, which is not the canonical copy. The term usually used for this is ETL, which stands for Extract, Transform, Load: it basically means you build new, read-only tables once a day, and select out of those in the web frontend. For instance, the above schema could be ETL’d to include a new, disconnected WebProduct table:

The same ER diagram as before, but this time with an additional table:

WebProduct(*ID, *Date, Name, Gross_Price, VAT_Rate)

Now with this table, the query would be significantly shorter:

SELECT ID, Name, Gross_Price, VAT_Rate
FROM WebProduct
WHERE ID = '{whatever}' AND Date = CURRENT_DATE;

The question that comes up when seeing this schema is “Why on Earth do you have a Date column as part of the primary key, and why do you need to query for today’s date?” I’m not suggesting that the new table is generated to include every single day in existence, but it might be useful to let an ETL pipeline generate more than just one day’s worth of data — you almost always want to generate today’s and tomorrow’s, so that you don’t need to take down your website for maintenance around midnight. But also, if you don’t expect prices to fluctuate on a daily basis, it would be more resource-friendly to run the pipeline every few days instead of daily. It’s a compromise of course, but that’s what system design is for.
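
As a rough sketch of what that pipeline step could look like, assuming the schema above, PostgreSQL-style date arithmetic, and that the read-only table is simply truncated and refilled rather than swapped in atomically, the materialisation boils down to a single INSERT … SELECT covering today and tomorrow:

-- Rebuild the read-only web table with today's and tomorrow's prices.
TRUNCATE TABLE WebProduct;

INSERT INTO WebProduct (ID, Date, Name, Gross_Price, VAT_Rate)
SELECT Product.ID, d.day, Product.Name,
       Product_Price.Gross_Price, Product_VAT.VAT_Rate
FROM (VALUES (CURRENT_DATE), (CURRENT_DATE + 1)) AS d(day)
  CROSS JOIN Product
  JOIN Product_Price ON Product_Price.Product_ID = Product.ID
    AND Product_Price.Start_Date <= d.day
    AND Product_Price.End_Date > d.day
  JOIN Product_VAT ON Product_VAT.Product_ID = Product.ID
    AND Product_VAT.Start_Date <= d.day
    AND Product_VAT.End_Date > d.day;

In a real ETL setup the SELECT would run against the canonical database and the resulting rows would be loaded into the separate, web-facing one; the single-statement version above just keeps the sketch compact.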

Note that in all of this I have ignored the issue of stock. That’s a harder problem, and one that might not actually be suited to be solved with a simple database schema — you need to come to terms with compromises around availability and the fact that you need a single source of truth for how many items you’re allowed to sell… consistency is hard.

Closing my personal rant on database design, there’s another problem I want to shine a spotlight on. When I started working on Autotools Mythbuster, I explicitly wanted to be able to update the content quickly. I have had multiple revisions of the book on the Kindle Store and Kobo, but even those lagged behind the website a few times. Indeed, I think the only reason why they are not lagging behind right now is that most of the changes on the website in the past year or two have been cosmetic only, and don’t apply to the ePub.

Even for a project like that, which uses the same source of truth for the content, there’s a heavy difference in the time cost of updating the website versus the “book”. When talking about real books, that’s an even bigger cost — and that’s without going into the realm of print books. Producing content is hard, which is why I realised many years ago that I wouldn’t be able to carve out enough time to be a good author.

Even adding diagrams to this blog post has a slightly higher cost than just me ranting “on paper”. And that’s why sometimes I could add more diagrams to illustrate my ideas, but I don’t, because the cost of producing them, and keeping them current, would be too high. The Glucometers Protocols site has a few rough diagrams, but they are generated with blockdiag so that they can be edited quickly.

When it comes to online tutorials, though, there’s an even bigger problem: possibly the vast majority of them are nowadays on YouTube, as videos shot with a person in frame, more like a teacher in a classroom who can explain things. If something in the video is only minimally incorrect, it’s unlikely that the video would be re-shot — it would be an immense cost in time. Also, you can’t just update a YouTube video like you do a Kindle book — you lose comments, likes, and view counts, and those things matter for monetization, which is what most of those tutorials out there are made for. So unless the mistakes in a video tutorial are Earth-shattering, it’s hard to expect the creators to go and fix them.

Which is why I think that it’s incredibly important to get the small things right — Stop using first and last name fields in databases, objects, forms, and whatever else you are teaching people to make! Think a bit harder about what a product inventory database would look like! Be explicit in pointing out that you’re simplifying to an extreme, rather than providing a real-world-capable database design! And maybe, just maybe, start using examples that are ridiculous enough that they don’t risk being used by a junior developer in the real world.

And let me be clear on this: you can’t blame junior developers for making mistakes such as using a naïve database schema, if that’s all they are taught! I have been saying this at my previous dayjob for a while: you can’t complain about the quality of code of newbies unless you have provided them with the right information in the documentation — which is why I spent more time than average on example code and tutorials, fixing up the trimmings to make it easier to copy-paste the example code into a working change that follows best practices. In the words of a colleague wiser than me: «Example code should be exemplar.»

So save yourself some trouble in the future, by making sure the people you’re training get the best experience, and can build your next tool to the best possible spec.

It’s all in the schema

When it comes to file formats, I’m old school, and I probably prefer XML over something like YAML. I have also had multiple discussions with people over the years that could be summarised as “If only we used $format, we wouldn’t have these problems” — despite the problems being fairly clearly a matter of schema rather than format.

These discussions didn’t even stop in the bubble, since while protocol buffers are the de-facto file format, for the whole time I worked there, there had been multiple options for expanding the format, with templates, additional languages built on top of them, and templates for those additional languages.

Schema is metadata: data describing data. In particular, it describes what the data looks like in general terms: which fields it has, what the format of the fields is, what their valid values are, and so on. Don’t assume that by schema I refer to XML Schemas only! There’s a significant amount of information that is not usually captured by schema description languages (and XML Schema is only one of them) — things like whether that epoch time represents a date in UTC or in the local timezone.

The reason why I don’t have a particularly strong opinion on data formats, as much as I do on data schemas, is that once you have a working abstracted interface for them, you don’t need to care what the format is. This is clearly easier said than done, of course. DOM and SAX are complicated enough, and the latter is so specific to XML that there is practically zero hope of reusing a design depending on it for anything but XML. And you may have a preference for one format over another for other reasons.

For example, if your configuration were stored in XML, the SAX API would allow you to parse the configuration file and fill in a structure in a single pass, which may be more memory-efficient than parsing the file into key/value pairs and requesting them by string. I did something like that with other file types through Ragel, but let’s be honest: in most cases, configuration file parsing speed is not your bottleneck (except when it is, in which case you probably know how to handle that already).

The big problem for me with choosing a schema is that unless you have an easy way to expand it, you’ll find yourself stuck at some point. Just look at the number of specifications around for complex file formats such as pcapng. Or think of the various revisions of RFCs just for HTTP/1.1 (without considering the whole upgrade to HTTP/2 and later). Committing to a schema is scary, because if you get it wrong, you’re likely going to be stuck for a long while, or you end up with the compatibility issues of changing the format every other release of whatever tool uses the format.

This is not far from what happens with office formats as well. If you look at the various formats used by Microsoft Word, they seem to change with each of the early releases, but then kind of settled down by the time Word 97 came along, before standardizing on the OOXML format. And even in the open source world, OpenDocument took quite a while before being stable enough to be usable, but is now fairly stable.

I wish I now had an answer to give everyone about how to handle schemas and updates to them. Unfortunately, I don’t. In the bubble, the answer is not to worry too hard about the schema as long as your protocol buffer definitions are valid, because the monorepo will enforce their usage. It’s a bit of a misconception as well, since even with a single format and a nearly-globally enforced schema there can be changes that need to be propagated first, but it solves a lot of problems.

I had thought about schemas before, and I’m thinking of them again, in the context of glucometerutils because I would really like to have an easy way to export the data dumped from various meters into a format that can be processed with different software. This way, I only need to care about building the protocol tooling, and leave it to someone else who has better ideas about visualisation and analysis to build tools for that part.

My best guess right now about that is to just keep a tool that can upgrade the downloaded format from one version to the next — and make sure that there’s a library in-between the exporter and the consumer, so that as long as they both have the same library version, there’s no need to keep the two tools in sync. But I have not really written any code for that.

Again on libvirt’s XML configuration files

So, Daniel Veillard – among other things, the maintainer of libvirt – dared me to give further voice to my concerns about libvirt’s configuration files being “aXML, almost XML”, given that (and I quote him here):

I’m also on the XML standard group at W3C and the main author of libxml2, I would have no troubles debunking your argument very quickly

I guess I’ll then have to cross-post this entry to the libvir-list to make sure that I’m not, to paraphrase him, “hiding” on my blog (do I hide on a blog with at least 15K visitors a month, with no comment moderation, and syndicated on the homepage of Gentoo’s website?).

First of all, I’m not really questioning Daniel’s technical expertise; I’m pretty sure he generally knows what he’s doing. But being the author of libxml2 or part of W3C’s standards group does not mean that he’s perfect. Nor does it mean that somebody should refrain from commenting on his ideas. This really reminds me of what people said about Ryan when I criticised his idea, and just wanted me to shut up because he had done a good job at porting games before. My original title for this post was “Being part of groups or having written libraries does not make you infallible”, but it wasn’t catchy enough.

So, this is the preamble for my blog’s readers only, since it’s definitely not relevant to the libvirt project. The rest is for the list as well.

Hello,

In a recent post on my blog I ranted about libvirt, and in particular I complained that the configuration files look like what I call “almost XML”. There are multiple reasons why I say that; let me try to explain some.

In the configuration files, at least those created by virt-manager, there is no specification of what the file should be (no document type, no namespace, and, IMHO, a too-generic root element name); given that some kind of distinction is needed for software like Emacs’s nxml-mode to know how to deal with the file, I think that’s pretty bad for interaction between different applications. While libvirt knows perfectly well what it’s dealing with, other packages might not. It might not sound like a major issue, but it starts tickling my senses when this happens.

The configuration seems somewhat contrived in places, like the disk configuration: if the disk is file-backed it requires the file attribute on the <source> element, while it needs the dev attribute if it’s a block device; given that it’s a path in both cases, it would have been easier on the user if a single path attribute were used. But this is a matter of opinion.

The third problem I called out in the blog post is the lack of a schema for the files; Daniel corrected me, pointing out that the schemas are distributed with the sources and installed. Sure thing, I was wrong. On the other hand, I maintain that there are problems with those schemas. The first is that both the version distributed with 0.7.4 and the git version as of today suffer from bug #546254 (secret.rng being not well formed), so it seems nobody has tested them lately; then there is the fact that they are never referenced by the human-readable documentation, which is why I didn’t find them the first time around; add to that some contrived syntax in those schemas as well, which causes trang to produce a non-valid rnc file out of them (nxml-mode uses rnc rather than rng).

But I guess the one big problem with the schemas is that they don’t seem to properly encode what the human-readable documentation says, or what virt-manager does. For instance (please follow me with selector-like syntax), virt-manager creates /domain/os/type[@machine='pc-0.11'] in the created XML; the same attribute seems to be documented: “There are also two optional attributes, arch specifying the CPU architecture to virtualization, and machine referring to the machine type”. The schema does not seem to accept that attribute though (“element type: Relax-NG validity error : Invalid attribute machine for element type” with xmllint; and just to make sure that it’s not a bug in any other piece of software, this is Daniel’s libxml2).

Now, after voicing my opinions here, as Daniel dared me to do, I’d like to explain briefly why I didn’t post this on the list in the first place: of the things I wrote here, my beefs for calling this aXML, the only one that can be solved easily is the schemas; schemas that, at the time I wrote the blog post, I was unable to find. The syntax, and the lack of a “safe” identification of the files as libvirt’s, are the kind of legacy problems one has to deal with to avoid wasting users’ time with migrations and corrections, so I don’t really think they should be addressed unless a redesign of the configuration is intended.

Just my two cents, and you’re free to take them as you wish; I cannot boast a curriculum like Daniel’s, but I don’t think I’m stepping out of place by pointing out these things.