
Progress Logging and Results Logging

There is one thing that my role as a software mechanic seems to draw me towards, and that's the importance of logging information. Logging is one of those areas that tend to bring up strong opinions, and, extended into the wider area of observability, it has spawned entire businesses (shout out to friends at Honeycomb.io). But even in smaller realities, I found myself caring about logging, setting up complex routing with metalog, or hoping to get a way to access Apache logs in a structured format.

Obviously, when talking about logging in bubbles, there's a lot more to consider than just which software you send the logs to — even smaller companies nowadays need to be careful with PII, since GDPR makes most data toxic to handle. I can definitely tell you that some of the analysis I used to do for User-Agent filtering would not pass muster for a company in the age of GDPR — in a very similar fashion to the pizzeria CRM.

But leaving aside the whole complicated legal landscape, there's a distinction in logs that I have not seen well understood by engineers – no matter where they are coming from – and that is the difference between what I call progress logging and results logging. I say "what I call" because I have found a number of other categorizations of logs, but none that matches my thoughts on the matter, so I needed to give these concepts names of my own.

Distinctions that I did hear people talk about are more like “debug logs” versus “request logs”, or “text logs” versus “binary logs”. But this all feels like it’s mixing media and message, in too many cases — as I said in my post about Apache request logs, I would love for structured (even binary would do) request logs, which are currently “simple” text logs.

Indeed, Apache request logs (and any other server's) to me fit neatly in the category of results logging. They describe what happened when an action completed: the result of the HTTP request includes some information about the request, and some information about the response.

If you were to oversimplify this, you could log each full request and each full response, and call that results logging: a certain request resulted in a certain response. But I would expect that there is a lot more information available on the server, which does not otherwise make it to the response, for many different reasons (e.g. it might be information that the requestor is not meant to find out, or simply doesn't need to know, and the response is meant to be as small as possible). In the case of an HTTP request to a server that acts as a reverse proxy, the requestor should not be told which backend the request was handled by — but it would be a useful thing to log as part of the result.

When looking at the practicality of implementing results logging, servers tend to accrue the information needed for generating the result logs in data structures that are kept around throughout the request (or whatever other process) lifetime, and then extract all of the required information from them at the time of generating the log.
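To make that pattern a bit more concrete, here is a minimal sketch in Python (not how Apache actually implements it) of a result log that accrues fields for the lifetime of a request and only emits them once the response is complete; the field names and the handling code are hypothetical.

```python
import json
import logging
import sys

logging.basicConfig(stream=sys.stderr, level=logging.INFO)
results_log = logging.getLogger("results")


class RequestResult:
    """Accrues result fields for the whole request lifetime, emitted once at the end."""

    def __init__(self, method, path):
        self.fields = {"method": method, "path": path}

    def note(self, key, value):
        # Server-side details (e.g. which backend handled the request) can be
        # recorded here even though they never appear in the response itself.
        self.fields[key] = value

    def emit(self, status):
        self.fields["status"] = status
        results_log.info(json.dumps(self.fields))


# Hypothetical request handling: if the process dies before emit() runs,
# no entry is written at all.
result = RequestResult("GET", "/feed.xml")
result.note("backend", "app-server-3")
result.emit(200)
```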

This does mean that if the server terminates (either because it's killed, the power goes off, or the request caused it to crash), and the result is not provided, then you don't get a log about the request happening at all — this is the dirty secret of Apache request logs (and many other servers'): they're called request logs but they actually log responses. There are ways around this, by writing parts of the results logs as they are identified – this helps both in terms of persistence and in terms of memory usage (if you're otherwise keeping something in memory just so you can log it later) – but that ends up getting much closer to the concept of tracing.

Progress logs, instead, are closer to what is often called shotgun debugging or printf() debugging. They are log statements emitted as the code executes, and they are usually free-form for the developer writing the code. This is what you get with libraries such as Python's logging, and it can assume more or less structured form depending on a number of factors. For instance, you can have a single formatted string with maybe the source file and line, or you may have a full backtrace of where the log event happened and what the local variables in each of the function calls were. What usually makes you choose between the two is cost and, of course, signal-to-noise ratio.
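As a rough illustration of the two ends of that spectrum, here is a sketch using Python's logging: a cheap one-line progress message carrying the source location, and a more expensive variant that also captures the traceback or call stack (capturing local variables per frame takes more machinery than the standard library provides, so it's not shown). The handle() function and its contents are made up.

```python
import logging

# Cheap end of the spectrum: one formatted line with the source location.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(levelname)s %(filename)s:%(lineno)d %(message)s",
)
log = logging.getLogger(__name__)


def handle(item):
    log.debug("starting to process %r", item)  # free-form progress log
    try:
        result = 1 / item  # stand-in for the real work
    except ZeroDivisionError:
        # More expensive end: attach the full traceback to the event.
        log.error("processing failed for %r", item, exc_info=True)
        return None
    log.debug("finished processing %r", item, stack_info=True)  # include the call stack
    return result


handle(2)
handle(0)
```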

For example, Apache’s mod_rewrite has a comprehensive progress log that provides a lot of details of how each rewrite is executed, but if you turn that on, it’ll fill up your server’s filesystem fairly quickly, and it will also make the webserver performance go down the drain. You do want this log, if you’re debugging an issue, but you most definitely don’t want it for every request. The same works for results logs — take for instance ModSecurity: when I used to maintain my ruleset, I wouldn’t audit-log every request, but I had a special rule that, if a certain header was provided in the request, would turn on audit-logging. This allowed me to identify problems when I was debugging a new possible rule.
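The ModSecurity rule itself isn't reproduced here, but the same idea can be sketched in a few lines of Python: keep the verbose progress log off by default, and let a request opt into it through a header. The header name and the handler are, of course, hypothetical.

```python
import logging
import sys

logging.basicConfig(stream=sys.stderr, level=logging.INFO)
log = logging.getLogger("rewrite")

# Hypothetical header: only requests carrying it pay the cost of verbose logging.
DEBUG_HEADER = "X-Debug-Logging"


def handle_request(headers, path):
    # Flip the verbosity based on the header, instead of keeping the
    # expensive progress log enabled for every single request.
    log.setLevel(logging.DEBUG if headers.get(DEBUG_HEADER) == "1" else logging.INFO)

    log.debug("considering rewrite rules for %s", path)  # normally filtered out
    rewritten = path.rstrip("/") or "/"
    log.debug("rewrote %s -> %s", path, rewritten)
    return rewritten


handle_request({}, "/blog/")                   # quiet
handle_request({DEBUG_HEADER: "1"}, "/blog/")  # full progress log on stderr
```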

Unfortunately, my experience straddling open-source development and industry bubbles means I don’t have overall good hopes for an easy way to implement logging “correctly”. Both because “correctly” is subjective, and because I really haven’t found a good way to do this that scales all the way from a simple tool like my pdfrename to a complex Cloud-based solution. Indeed, while the former would generally care less about structured logs and request tracing, a Cloud software like my planned-and-never-implemented Tanuga would get a significant benefit from using OpenTelemetry to connect feed fetching and rendering.
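For what it's worth, connecting the two stages through OpenTelemetry doesn't take much code. Since Tanuga never got implemented, the function names below are invented, and a real setup would also need an SDK and exporter configured, which I'm leaving out.

```python
from opentelemetry import trace

tracer = trace.get_tracer("tanuga")


def fetch_feed(url):
    with tracer.start_as_current_span("fetch_feed") as span:
        span.set_attribute("feed.url", url)
        return "<rss>...</rss>"  # stand-in for the actual HTTP fetch


def render_feed(document):
    with tracer.start_as_current_span("render_feed"):
        return document.upper()  # stand-in for the actual rendering


def update(url):
    # Both child spans end up in the same trace, so fetching and rendering
    # stay connected when the resulting telemetry is inspected later.
    with tracer.start_as_current_span("update"):
        return render_feed(fetch_feed(url))
```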

Flexible and configurable logging libraries, such as those available for Python, Ruby, Erlang, and many more, provide a good “starting point”, but in my experience they don’t scale well across the boundaries of an organization or unit. It’s a combination of problems similar to the schema issue and the RPC issue: within an organization you can build a convention of what you expect logs to be, and you can pay the cost of updating the configuration for all sorts of tools to do the right thing; but if you’re an end user, that’s unlikely — and besides, sometimes those configurations are simply untested.

So it makes sense that, up to this day, we still have a significant reliance on “simple”, unstructured text logs. They are the one universally accessible way to provide information to users. But I would argue that we would be better off building an ecosystem of pluggable, configurable backends, where the same tool, without being recompiled or edited, can be made to output simple text on the standard error stream, or to send events to a more structured log. Unfortunately, judging by how the FLOSS world took the idea of standardizing services’ behaviours with systemd, I doubt that’s going to happen any time soon in the wide world… but you can probably get away with it in big enough organizations that control what they run.
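Just to give an idea of what I mean, here's a toy version of such a pluggable backend in Python: the same tool switches between plain text on standard error and a structured JSON event log based on an environment variable. The variable name is made up, and a real ecosystem would need this agreed upon across tools rather than reinvented in each one.

```python
import json
import logging
import os
import sys


class JsonFormatter(logging.Formatter):
    """Render each record as a structured JSON event instead of plain text."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })


# Hypothetical switch: LOG_FORMAT=json selects the structured backend,
# anything else falls back to plain text on stderr.
handler = logging.StreamHandler(sys.stderr)
if os.environ.get("LOG_FORMAT") == "json":
    handler.setFormatter(JsonFormatter())
else:
    handler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))

logging.basicConfig(level=logging.INFO, handlers=[handler])
logging.getLogger("tool").info("processed %d files", 3)
```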

Also, as a fun related tidbit, verbose (maybe excessively so) progress logging is what made my reverse engineering of the OneTouch Verio so easy: on Windows the standard error is not usually readable… unless you run the application through a debugger. So once I did that, I could see every single part of the code as it processed the requests and responses for the device. Now, you could think that just hiding the logs by default, without documenting the flag to turn them on, would be enough — but as it turns out, as long as the logging calls are built into a binary, it’s not too hard to understand them while reverse engineering.

What this is meant to say is that, while easy access to logs is a great feature for open source tools, and for most internal tools in companies and other institutions, the same cannot be said for proprietary software: indeed, the ability to obfuscate logs, or even to generate “encrypted” logs, is something that proprietary software (and hardware) thrives on, because it makes it harder to reverse engineer. So it’s no surprise if logs are a complicated landscape with requirements that are not only divergent, but at times opposite, between different stakeholders.
