Why do we still use Ghostscript?

Late last year, I had a bit of a Twitter discussion about the fact that I can’t think of a good reason why Ghostscript is still a standard part of the Free Software desktop environment. The comments started from a security issue related to file access from within a PostScript program (i.e. a .ps file), and at the time I actually started drafting some of the content that has become this post. I then shelved most of it because I was busy and it was not topical.

Then Tavis had to bring this back to the attention of the public, and so I’m back writing this.

To be able to answer the question I pose in the title we have to first define what Ghostscript is — and the short answer is, a PostScript renderer. Of course it’s a lot more than just that, but for the most part, that’s what it is. It deals with PostScript programs (or documents, if you prefer), and renders them into different formats. PostScript is rarely, if ever, used on modern desktops — not just because it’s overly complicated, but because it’s just not that useful in a world that has mostly settled on PDF, which is essentially “compiled PostScript”.

Okay, not quite. There are plenty of qualifications that go around that whole paragraph, but I think it matches the practicalities of the situation fairly well.

PostScript has found a number of interesting niche uses though, a lot of which revolve around printing, because PostScript is the language that older (early?) printers spoke. I have not seen a modern printer speak PostScript, though, at least since my Kyocera FS-1020, and even those that do tend to support alternative “languages” and raster formats. On the other hand, because PostScript was the “lingua franca” for printers, CUPS and other printer-related tooling still use it as an intermediate language.

In a similar fashion, quite a few pieces of software that deal with faxes (yes, faxes) tend to make use of Ghostscript itself. I would know, because I wrote one, under contract, a long time ago. The reason is frankly pragmatic: on the client side, you want Windows to “print to fax”, and providing a virtual PostScript printer is very easy — at that point you want to convert the document into something that can be easily shoved down the fax software’s throat, which ends up being TIFF (because TIFF is, as I understand it, the closest encoding to physical faxes). And Ghostscript is very good at doing that.
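That PostScript-to-TIFF step is a single Ghostscript invocation. Here is a minimal sketch of it, calling the `gs` binary through Python’s subprocess module; the function names are mine, and I’m assuming `gs` is on the PATH and that the standard `tiffg4` output device is built in, as it is in normal builds:

```python
import subprocess

def ps_to_fax_tiff_args(ps_path: str, tiff_path: str) -> list:
    # Build the Ghostscript command line for a CCITT Group 4 TIFF,
    # the encoding fax software usually expects.
    return [
        "gs",
        "-dSAFER",               # restrict file access from the PS program
        "-dBATCH", "-dNOPAUSE",  # run non-interactively, exit when done
        "-sDEVICE=tiffg4",       # Group 4 compressed TIFF output
        "-r204x196",             # the usual "fine" fax resolution
        f"-sOutputFile={tiff_path}",
        ps_path,
    ]

def ps_to_fax_tiff(ps_path: str, tiff_path: str) -> None:
    # Actually run the conversion; raises if gs exits non-zero.
    subprocess.run(ps_to_fax_tiff_args(ps_path, tiff_path), check=True)
```

Note the `-dSAFER` switch: given the file-access issues mentioned above, you really don’t want to run Ghostscript on untrusted input without it.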

Indeed, I have used (and seen used) Ghostscript in many cases to basically combine a bunch of images into a single document, usually in TIFF or PDF format. It’s very good at doing that, if you know how to use it, or if you copy-paste from other people’s implementations.

Often, this is done through the command line, too, and the reason for that is to be found in the licenses used by various Ghostscript implementations and versions over time. Indeed, while many people think of Ghostscript as an open source Swiss Army Knife of document processing, it is actually dual-licensed. The Wikipedia page for the project lists eight variants, with at least four different licenses over time. The current options are AGPLv3 or the commercial paid-for license — and I can tell you that a lot of people (including the folks I worked under contract for) don’t really want to pay for that license, preferring instead the “arm’s length” aggregation of calling the binary rather than linking it in. Indeed, I wrote a .NET library to do just that. It’s optimized for (you guessed it) TIFF files, because it was a component of an Internet Fax implementation.
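As an illustration of that arm’s-length pattern, this is roughly what such a wrapper boils down to, reduced to a Python sketch (the names are mine; the only Ghostscript specifics assumed are the standard `pdfwrite` device and the `-dSAFER`/`-dBATCH`/`-dNOPAUSE` switches):

```python
import subprocess

def gs_merge_command(inputs: list, output: str) -> list:
    # Merge several PostScript/PDF inputs into a single PDF
    # through Ghostscript's pdfwrite device.
    return ["gs", "-dSAFER", "-dBATCH", "-dNOPAUSE",
            "-sDEVICE=pdfwrite", f"-sOutputFile={output}", *inputs]

def merge_to_pdf(inputs: list, output: str) -> None:
    # Executing the binary, rather than linking libgs, keeps the
    # AGPLv3 code at arm's length from the calling program.
    result = subprocess.run(gs_merge_command(inputs, output),
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"gs failed: {result.stderr}")
```

The whole “library” is little more than argument assembly and error reporting, which is exactly why so many projects choose this route over the commercial license.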

So where does this leave us?

Ten years ago or so, when effectively every Free Software desktop PDF viewer was forking the XPDF source code to adapt it to whatever rendering engine it needed, it took a significant list of vulnerabilities, fixed time and time again, for the Poppler project to take off and create One PDF Rendering To Rule Them All. I think we need the same for Ghostscript. With a few differences.

The first difference is that I think we need to take a good look at what Ghostscript, and PostScript, are useful for on today’s desktops. Combining multiple images into a single document should _not_ require processing all the way through PostScript. There’s no reason to! Particularly not when the images are just JPEG files, which PDF can embed directly. Having a tool that is good at combining multiple images into a PDF, with decent options for page size and alignment, would probably replace many of the usages of Ghostscript in my own tools and scripts over the past few years.
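To make the point concrete: embedding a JPEG into a PDF does not require re-rasterizing anything, because PDF can carry the JPEG stream as-is under the /DCTDecode filter. A deliberately bare-bones sketch follows; the function name is mine, and a real tool would parse the image dimensions out of the JPEG headers rather than take them as arguments:

```python
def jpeg_to_pdf(jpeg_bytes: bytes, width: int, height: int) -> bytes:
    # Wrap an already-compressed JPEG into a one-page PDF without
    # re-encoding: the image stream is embedded verbatim via /DCTDecode.
    content = f"q {width} 0 0 {height} 0 0 cm /Im0 Do Q".encode()
    objs = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        (f"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 {width} {height}] "
         f"/Resources << /XObject << /Im0 4 0 R >> >> "
         f"/Contents 5 0 R >>").encode(),
        (f"<< /Type /XObject /Subtype /Image /Width {width} "
         f"/Height {height} /ColorSpace /DeviceRGB /BitsPerComponent 8 "
         f"/Filter /DCTDecode /Length {len(jpeg_bytes)} >>").encode()
        + b"\nstream\n" + jpeg_bytes + b"\nendstream",
        b"<< /Length %d >>\nstream\n%s\nendstream" % (len(content), content),
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for num, body in enumerate(objs, start=1):
        offsets.append(len(out))           # record byte offset for the xref
        out += b"%d 0 obj\n" % num + body + b"\nendobj\n"
    xref_pos = len(out)
    out += b"xref\n0 %d\n0000000000 65535 f \n" % (len(objs) + 1)
    for off in offsets:
        out += b"%010d 00000 n \n" % off
    out += (b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n"
            % (len(objs) + 1, xref_pos))
    return bytes(out)
```

The output is a single page sized to the image; the point is simply that no PostScript interpreter is involved anywhere in the pipeline.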

And while rendering PostScript for display and for print are similar enough tasks, I have some doubt the same code would work right for both. PostScript and Ghostscript are often used in networked printing as well, in which case there’s a lot of processing of untrusted input — both for display and for printing. Sandboxing, and possibly writing this in a language better suited than C to dealing with untrusted input, would go a long way toward preventing problems there.

But there are a few other interesting topics that I want to point out here. I can’t think of any good reason for desktops to support PostScript out of the box in 2019. While I can still think of a lot of tools, particularly from the old-timers, that use PostScript as an intermediate format, most people _in the world_ would use PDF nowadays to share documents, not PostScript. It’s kind of like sharing DVI files — which I have done before, but I now wonder why. While both formats might have advantages over PDF, in 2019 they have definitely lost the format war. macOS might still support both (I don’t know), but Windows and Android definitely don’t, which makes them pretty useless for sharing knowledge with the world.

What I mean by that is that it’s probably due time for PostScript to become an optional component of the Free Software desktop, one that users need to enable explicitly if they ever need it, just to limit the risks that come with accepting, displaying and thumbnailing full, Turing-complete programs masquerading as documents. Even Microsoft stopped running macros in Office documents by default, once it realized what kind of footgun they had become.

Of course talk is cheap, and I should probably try to help directly myself. Unfortunately I don’t have much experience with graphics formats, besides maintaining unpaper, and that is not a particularly good example either: I tried using libav’s image loading, and it turned out to be a mess. So I guess I should either invest my time in learning enough about building libraries for image processing, or poke around to see if someone has written a good multi-format image processing library in, say, Rust.

Alternatively, if someone starts working on this and wants some help with either reviewing the code, or with integrating the final output in places where Ghostscript is currently used, I’m happy to volunteer my time. I’m fairly sure I can convince my manager to let me do some of that work as a 20% project.

Linux desktop and geek supremacists

I wrote about those I refer to as geek supremacists a few months ago, discussing the dangerous prank at FOSDEM — as it turns out, they overlap with the “Free Software Fundamentalists” I wrote about eight years ago. I found another one of them at the conference I was attending the day this draft was written. I’m not going to name the conference, because it does not deserve to be associated with my negative sentiment here.

The geek supremacist in this case was the speaker of one of the talks. I did not sit through the whole thing (which also ran over its allotted time), because after the basic introduction, I was hearing so many alarm bells that I just had to leave and find something more interesting and useful to do. The final straw for me was when the speaker insisted that “Western values didn’t apply to [them]” and thus they felt they could “liberate” hardware by mixing leaked sources of the proprietary OS with its pure Free (obsolete) counterpart. Not only is this clearly illegal (as they knew and admitted), it’s unethical (free software relies on licenses that are based on copyright law!) and toxic to the community.

But that’s not what I want to complain about here. The problem came a bit earlier than that. The speaker defined themselves as a “freedom fighter” (their words, not mine!), and insisted they couldn’t see why people still use Windows and macOS when Linux and FreeBSD are perfectly good options. I take big issue with this.

Now, having spent about half my life using, writing and contributing to FLOSS, you can’t possibly expect me to say that Linux on the desktop is a waste of time. But at the same time I’m not delusional, and I know there are plenty of reasons not to use Linux on the desktop.

While there have been huge improvements in the past fifteen years, and SuSE or Ubuntu are somewhat usable as desktop environments, there is still no comparison with macOS or Windows, particularly in terms of applications working out of the box, and support from third parties. Plenty of applications don’t work on Linux, and even when you can replace them, sometimes that is not acceptable, because you depend on some external ecosystem.

For instance, when I was working as a sysadmin for hire, none of my customers could possibly have used a pure-Linux environment. Most of them were Windows-only companies, but even the one with a mixed environment (the print shop I wrote about before) could not do without macOS and Windows. For one thing, the macOS environment was their primary workspace: Adobe software is not available for Linux, nor is QuarkXPress, nor the Xerox print queue software (ironic, since it interfaces with a Linux system on board the printer, of course). The accounting software, which handled everything from ordering to invoicing to tax reporting, was developed by a local company – and they had no intention of building a version for Linux – and because tax regulations in Italy are… peculiar, no off-the-shelf open source software is available for that. As it happens, they also needed a Mandriva workstation – no other distribution would do – because the software for their large-format inkjet printer was only available for either that or PPC Mac OS X, and getting it running on a modern PC with the former is significantly less expensive than trying to recover the latter.

(To make my life more complicated, the software they used for that printer was developed by Caldera. No, not the company acquired by SCO, but Caldera Graphics, a French company completely unrelated to the other family of companies, and one which was recently acquired again. It was very confusing when the people at the shop told me they had a “Caldera box running Linux”.)

Of course, there are people who can run a Linux-only shop, or can run only Linux on their systems, personal or not, because they do not need to depend on external ecosystems. More power to them, and I thank them for their support in improving desktop features (because they are helping, right?). But they are clearly not the majority of the population, as is clear from the fact that people vastly use Windows, macOS, Android and iOS.

Now, this does not mean that Linux on the desktop is dead, or will never happen. It just means that it’ll take quite a while longer, and in the meantime, all the work on Linux on the desktop is likely going to benefit other endeavours too. LibreOffice and KDE are clearly examples of “Linux on the desktop”, but at the same time they give Free Software visibility (and energy, up to a point) even when used by people on Windows. The same goes for VLC, Firefox, Chrome, and a long list of other FLOSS software that many people rely upon, sometimes without realising it is Free Software. But even that is not why I’m particularly angry after encountering this geek supremacist.

The problem is that, again in the introduction to the talk, which was about mobile phones, they said they didn’t expect things had changed significantly in proprietary phones over the past ten years. Ten years is forever in computing, let alone mobile! Ten years ago, the iPhone had just launched, and it still had no SDK or apps! Ten years ago the state of the art in smartphones you could develop apps for was Symbian! And this is not the first time I have heard something like this.

A lot of people in the FLOSS community appear to have closed their eyes to what the proprietary software environment has been doing, in every area. Because «Free Software works for me, so it has to work for everyone!» And that is dangerous from multiple points of view. Not only is this shortsightedness what, in my opinion, is making distributions irrelevant, it’s also making Linux on the desktop worse than Windows, and it’s why I don’t expect the FSF will come up with a usable mobile phone any time soon.

Free desktop environments (KDE and GNOME, effectively) have spent a lot of time over the past ten (and more) years first trying to catch up to Windows, then to Mac, then trying to build new paradigms, with mixed results. Some people loved them, some people hated them, but at least they tried. And, ignoring most of the breakages, or the fact that they still push semantics nobody really cares about (like KDE’s “Activities” — or the fact that KDE-the-desktop is no more, and KDE is now a community that includes stuff that has nothing to do with desktops, or even, barely, Linux, but let’s not go there), a modern KDE system is fairly close in usability to Windows… 7. There is still a lot of catching up to do, particularly around security, but I would say that, for the most part, the direction is still valid.

But to keep going, to catch up, and if possible to go beyond those limits, you also need to accept that there are reasons why people use proprietary software, and it’s not just a matter of lock-in, or the (disappointingly horrible) idea that people using Windows are “sheeple” and you hold the universal truth. Which is what pissed me off during that talk.

I could also add a note here about the idea that non-smart phones are a perfectly valid option nowadays. As I wrote already, there are plenty of reasons why a smartphone should not be considered a luxury. For many people, a smartphone is the only access they have to email, and the Internet at large. Or the only safe way to access their bank account, or other fundamental services they rely upon. Being able to use a different device for those services, while keeping a ten-year-old dumbphone, is a privilege, not a demonstration that there is no need for smartphones.

Also, I sometimes really wonder if these people have any friends at all. I don’t have many friends myself, but if I were stuck on a dumbphone only able to receive calls and SMS, I would probably have lost the few I have as well. Because even with European, non-silly tariffs on SMS, sending SMS is still inconvenient, and most people will use WhatsApp, Messenger, Telegram or Viber to communicate with their friends (and most of these applications are also more secure than SMS). That may be perfectly fine (if you don’t want to be easily reachable, it is a very easy way to achieve that), but it’s once again a privilege, because it means you either don’t have people who would want to contact you in different ways, or you can afford to limit your social contacts to people who accept your quirk — and once again, a freelancer could never do that.

Back to … KDE‽

Now this is one of the things you wouldn’t expect — years after I left KDE behind me, today I’m back using it… and honestly, I feel at home again. I guess the whole set of backend changes that went into KDE 4 was a bit too harsh for me at the time, and I could feel the rough edges. But after staying with GNOME 2, Xfce, then Cinnamon… this seems to be the best fit for me at this point.

Why did I decide to try this again? Well, even if it doesn’t happen that often, Cinnamon tends to crash from time to time, and that’s obnoxious when you’re working on something. Then there is another matter: I’m using a second, external monitor together with my laptop, as I’m doing network diagrams and other things for my day job, and for whatever reason the lockscreen provided by gnome-screensaver no longer works reliably: sometimes the second display is kept running, other times it doesn’t resume at all and I have to type blindly (with the risk that I’m actually typing my password into a different application), and so on and so forth.

Then, since I wanted a decent photo application, and Shotwell didn’t prove good enough for what I want to do (digiKam is much better), I decided to give it another try…

So first of all I owe lots of congratulations to all the people working on KDE: you did a tremendously good job. There are, though, a few things I’d like to point out. The first is that I won’t be using the term “KDE SC” anytime soon; it still sounds dumb to me, sorry. The second is that I wonder whether some people made it a point to exaggerate in the graphics department, only to have to roll it back later. I remember how much hype was created around Oxygen, and now it feels that, just like Keramik in KDE 3 times, they had to tone it down.

Another sore spot is the Italian translation — it’s not just awkward, in some places it’s definitely wrong! They translated “Plastique” (the theme name) and “Plastik” (the reference to the KDE 3 theme) as if they were “Plastic” — this is not the case! Keep the names the same as the original, please. There are a few more problems too, including the fact that they translated terms like “Serif”, “Sans Serif” and “Monospace”, which… well, they don’t really sound that good in Italian. In the end I simply changed the system language back to English.

There are still quite a few things I have to take care of to set this up properly; right now it’s just a messy first setup, and there are a few bugs I have to report (including the fact that, to upload files at their original size to Flickr, KIPI actually copies the original file to /tmp, which is very unpleasant, especially when you have that directory in tmpfs).

At any rate, you’ll probably read some more comments on KDE4 from me in the next few weeks, so be ready for them.