Software Defined Remote Control

A number of months ago I wrote about trying to control a number of TV features from Python. While I did manage to get some of the adapter boards I thought I would use printed, I hadn’t found the time to work on the software side before we started looking for a new place, which meant shelving the project until we could move in; once we got there it was a matter of getting settled, and then… you get the idea.

As it turns out, I had one week free at the end of November — my employer decided to give three extra days on the (US) Thanksgiving week, and since my birthday fell at the end of the week, I decided to take the remaining two days off myself to make it a nice nine contiguous days off. The perfect timeframe to go and hack on projects such as this one.

Also, one thing changed significantly since the time I started thinking about this: I started using Home Assistant. And while it started mostly as a way for me to keep an eye on the temperature of the winter garden, I found that with a bit of configuration, and a pull request, changing the input on my receiver with it was actually easier than using the remote control and trying to remember which input was mapped to what.

That finally gave me the idea of how to implement my TV input switch tree: expose it as one or more media players in Home Assistant!

Bad (Hardware) Choices

Unfortunately, as soon as I went to start implementing the switching code, I found out that I had made a big mistake in my assumptions: the Adafruit FT232H breakout board does not support PWM outputs, not even the general time-based pulsing (without a carrier frequency). Indeed, while the Blinka library technically supports some of these features, it seems like none of the Linux-running platforms can actually manage them. So there goes my option of just using a computer to drive the “fake remote” outputs directly. Well, at least without rewriting it in some other language and finding a different way to send that kind of signal.

I looked around for a few more options, but all of them ended up being some kind of compromise: MicroPython doesn’t have a very usable PulseOut library as far as I can tell; Arduino-based libraries don’t seem to allow two outputs to happen at roughly the same time; and as I’m sure I already noted in passing, CircuitPython lacks a good “secondary channel” to be instructed from a computer (the serial interface is shared with the REPL control, and the HID is gadget-to-host only).

After poking around at a few options and very briefly considering writing my own C version on an ATmega, I decided to just go for the path of least resistance: go back to CircuitPython, and try to work with the serial interface and its “standard input” to the software.

The problem with doing that is that Ctrl-C is intended to interrupt the running program, which means you cannot send the byte 0x03 un-escaped. In the end I thought about it, and decided that CircuitPython is powerful enough that just sending the commands in ASCII wouldn’t be an issue. So I decided to write a simplistic Flask app that would take a request over HTTP and send the command over the serial port. It worked, sort of. Sometimes while debugging I would end up locking the device (a Trinket M0) in the REPL, and that meant the commands wouldn’t be sent.

The solution I came up with was to reset the board every time I started the app, by sending Ctrl-C and Ctrl-D (0x03, 0x04) to force the board to reset. It worked much better.
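To give an idea of what this bridge amounts to, here is a minimal sketch of the approach: a Flask app that owns the serial port and forces a reset at startup. The device path, endpoint and ASCII command strings are made up for illustration; they are not the actual protocol my board speaks.

```python
# Minimal sketch of the HTTP-to-serial bridge described above.
# Device path, endpoint and command strings are illustrative only.
import serial
from flask import Flask, abort

app = Flask(__name__)

# Open the board's USB serial port and force a reset: Ctrl-C (0x03)
# interrupts whatever is running (e.g. a stuck REPL), Ctrl-D (0x04)
# makes CircuitPython soft-reset and re-run code.py.
port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)
port.write(b"\x03\x04")

COMMANDS = {"hdmi1": b"SWITCH 1\r\n", "hdmi2": b"SWITCH 2\r\n"}  # hypothetical

@app.route("/switch/<name>", methods=["POST"])
def switch(name):
    payload = COMMANDS.get(name)
    if payload is None:
        abort(404)
    port.write(payload)
    return {"status": "sent", "input": name}
```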

Not-Quite-Remote Controlled HDMI Switch

After that worked, the problem was ensuring that the commands sent actually did something. The first component I needed to send commands to was the HDMI switch. It’s a no-brand, AliExpress-special HDMI switch, but it has one very nice feature for what I need to do right now: it obviously comes with an infrared remote control – one of those thin, plasticky dome ones – but, crucially, the receiver for it sits on a cord, connected with a pretty much standard 3.5mm “audio jack”.

This is not uncommon. Randomly searching Amazon or AliExpress for “HDMI switch remote” will find you a number of different, newer switches that use the same remote receiver, or something very similar to it. I’m not sure if the receivers are compatible with each other, but the whole idea is the same: by using a separate receiver, you can stick the HDMI switch behind a TV, for instance, and just have the receiver poke out from below. And most receivers appear to be just a dome-encased TSOP17xx receiver, a 3-pin IC, which maps neatly onto a TRS connector.

When trying this out, I found that I could use a Y-cable to let both the original receiver and my board send signals to the switch — at which point I can send in my own pulses, without even bothering with the carrier frequency (refer to the previous post for details on this, it’s long). The way the signal is sent, the pulses need to ground the “signal” line (which normally sits at 5V); to avoid messing up the different supplies, I ran it through an opto-coupler, since those are shockingly cheap when bought in bulk.

But when I tried setting this up to select an input, I found myself unable to get the switch to see my signal. This turned out to require an annoying physical debugging session with the Saleae and my TRRS-to-Saleae adapter (which I have still not released, sorry folks!), which showed I was a bit off on the timing of the NEC protocol the switch uses for its remote control. This is now fixed in the pysirc library that generates the pulses.
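For the curious, this is roughly what getting those timings right amounts to. The sketch below builds a basic (non-extended) NEC frame as a list of mark/space durations in microseconds, based on the published protocol timings; it is an illustration of the idea, not pysirc’s actual code.

```python
# Back-of-the-envelope NEC frame builder: 9ms/4.5ms leader, then
# address, ~address, command, ~command, each sent LSB first with
# nominal 562.5us marks. Durations are in microseconds.
def nec_frame(address: int, command: int) -> list:
    def bits(value):
        for i in range(8):
            yield (value >> i) & 1  # LSB first

    durations = [9000, 4500]  # leader mark, leader space
    for byte in (address, address ^ 0xFF, command, command ^ 0xFF):
        for bit in bits(byte):
            durations += [562, 1688 if bit else 562]
    durations.append(562)  # trailing mark
    return durations
```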

Once I got the input selector working for the switch with the Flask app, I turned to Home Assistant and added a custom component that exposes the switch as a “media_player” platform. In a constant state of “Idle” (since it doesn’t have a concept of on or off), it allowed me and my wife to change the input while seeing the names of the devices, without hunting for the tiny remote, and without having to dance around to be seen by the receiver. It was already a huge improvement.
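In spirit, the component boils down to something like the sketch below: a media_player entity whose only real feature is source selection, forwarding the choice to the local Flask app. This is a stripped-down illustration (the real component has more setup plumbing), and the endpoint and source names are hypothetical.

```python
# Stripped-down sketch of a Home Assistant media_player entity that
# only supports source selection, by calling the local Flask bridge.
import requests
from homeassistant.components.media_player import MediaPlayerEntity
from homeassistant.components.media_player.const import SUPPORT_SELECT_SOURCE
from homeassistant.const import STATE_IDLE


class HdmiSwitchPlayer(MediaPlayerEntity):
    # Friendly names shown in the UI, mapped to the bridge's input names.
    _sources = {"Chromecast": "hdmi1", "Portal": "hdmi2", "HTPC": "hdmi3"}

    def __init__(self):
        self._current = None

    @property
    def name(self):
        return "HDMI Switch"

    @property
    def state(self):
        return STATE_IDLE  # the switch has no concept of on or off

    @property
    def supported_features(self):
        return SUPPORT_SELECT_SOURCE

    @property
    def source_list(self):
        return list(self._sources)

    @property
    def source(self):
        return self._current

    def select_source(self, source):
        requests.post(f"http://localhost:5000/switch/{self._sources[source]}")
        self._current = source
```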

But it wasn’t quite where I wanted it to be. In particular, when our family calls on Messenger, we would like to be able to just turn on the TV already set to the right input. While this was partially possible (Google Assistant can turn on a TV with a Chromecast), and we could have tried wiring up the Nabu Casa integration to select the input of the HDMI switch, it would not have worked right if the last thing we used the TV for was the Nintendo Switch (not to be confused with the HDMI switch) or Kodi — those are connected via a Yamaha receiver, on a different input of the TV set!

Enter Sony

But again, this was supposed to be working — the adapter board included a connection for an infrared LED, and that should have worked to send out the Sony SIRC commands. Well, except it didn’t, and that turned out to be another wild goose chase.

First, I was afraid that when I fixed the NEC timing I broke the SIRC ones — but no. To confirm this, and to make the rest of my integration easier, I took the Feather M4 to which I hard-soldered a Sony-compatible IR LED, and wrote what is the eponymous software defined remote control: a CircuitPython program that includes a few useful commands, and abstractions, to control a Sony device. For… reasons, I have added VCR as the only option beside TV; if you happen to have a Bluray player by Sony, and you want to figure out which device ID it uses, please feel free.
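At its core, such a program is little more than a lookup table and a pulse generator. The sketch below is a bare-bones illustration of that shape, not the actual code: on recent CircuitPython releases pulseio.PulseOut takes the pin and carrier parameters directly (older releases wrapped a PWMOut instead), and the pin and command codes here are examples only.

```python
# Bare-bones CircuitPython SIRC sender: reads a command name from the
# USB serial console and blasts the corresponding 12-bit SIRC frame.
import array
import time

import board
import pulseio

# 40kHz carrier; pin choice is an example.
ir = pulseio.PulseOut(board.D5, frequency=40000, duty_cycle=2**15)

TV_COMMANDS = {"power": 21, "input_next": 37}  # example codes, device 1 = TV


def sirc_pulses(command: int, device: int) -> array.array:
    durations = [2400, 600]  # header mark + space
    for i in range(7):       # 7-bit command, LSB first
        durations += [1200 if (command >> i) & 1 else 600, 600]
    for i in range(5):       # 5-bit device address, LSB first
        durations += [1200 if (device >> i) & 1 else 600, 600]
    return array.array("H", durations)


while True:
    line = input().strip()
    if line in TV_COMMANDS:
        for _ in range(3):   # SIRC frames are sent (at least) three times
            ir.send(sirc_pulses(TV_COMMANDS[line], device=1))
            time.sleep(0.025)  # crude padding toward the 45ms frame period
```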

It might sound silly, but I remember seeing a ’90s UX research paper on using gesture recognition on a remote control’s touchpad to allow for more compact remote controls. Well, if you wanted, you could easily turn this CircuitPython example into a touchscreen remote control for any Sony device, as long as you can find all the right device IDs, and hard-code a bunch of additional commands.

So, once I knew that at least on the software side I was perfectly capable of controlling the Sony TV, I had to go and do more hardware debugging with the Saleae, but this time with the probes directly on the breadboard, as I had no TRS cable to connect to. And that was… a lot of work, rewiring stuff and trying.

The first problem was that the carrier frequency was totally off. The SIRC protocol specifies a 40kHz carrier frequency, which is supposedly easier to generate than the 38kHz used by NEC and others, but somehow the Saleae was recording a very variable frequency that oscillated between 37kHz and 41kHz. So I was afraid that trying to run two PWM outputs on the Trinket M0 was a bad idea, even if one of them was set to nought hertz — as I said, the HDMI switch didn’t need a carrier frequency.

I did toy briefly with the idea of generating the 40kHz carrier wave separately, and just gating it with the same type of signal I used for the HDMI switch. Supposedly, 40kHz generators are easy, but at least the circuits I found at first glance require a part (a 640kHz resonator) that is nearly impossible to find in 2020; it probably fell out of use. But as it turns out, it wouldn’t have helped anyway.

Instead, I took another Feather. Since I had run out of M4s, except for the one I had already hardwired an IR LED to, I pulled out the nRF52840 that I had bought and barely played with. It should have been plenty capable of giving me a clean 40kHz signal, and indeed it was.

At that point I noticed another problem, though: I had totally screwed up the adapter board. On my Feather M4, the IR LED was connected directly between 3V and the transistor switching it. A bit out of spec, but not uncommon, given that it’s flashed for very brief impulses. When I designed the adapter, on the other hand, I connected it to the 5V rail. Oops, that’s not what I meant to do! And I did indeed burn out the IR LED with it, so I had to solder a new one onto the cable.

Once I fixed that, I found myself hitting another issue: I could now turn the TV on and off with my app, but the switch stopped responding to commands from either the app or the original remote! Another round with the Saleae (that’s probably one of my favourite tools — yes, I splurged when I bought it, but it’s turning out to be an awesome tool to have around after all), and I found that the signal line was being held low — because the output pin is stuck high…

I have not tried debugging this further yet — I can probably reproduce it without my whole TV setup, so I should do that soonish. It seems like opening both lines for PWM output causes some conflict, and one or the other ends up not actually working. My workaround was to only allow one command before restarting the Feather. It means taking longer to complete the commands, but it allowed me to continue with my life without further pain.

One small note here: since I wasn’t sure how Flask’s concurrency would interact with accessing a serial port, I decided to try something a bit out of the ordinary, and set up the access to the Feather via an Actor using pykka. It basically means leaving one thread with direct access to the serial port, and queuing commands as messages to it. It seems to be working fine.
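In practice that is only a handful of lines. Here is a sketch of the pattern, with the message format being my own convention for illustration rather than anything pykka prescribes:

```python
# One actor owns the serial port; Flask handlers never touch it directly.
import pykka
import serial


class FeatherActor(pykka.ThreadingActor):
    def __init__(self, device="/dev/ttyACM0"):
        super().__init__()
        self._port = serial.Serial(device, 115200, timeout=1)

    def on_receive(self, message):
        # Messages are processed one at a time, in arrival order.
        self._port.write(message["payload"])
        return self._port.readline()  # returned to .ask() callers


# From the Flask handlers:
feather = FeatherActor.start()
reply = feather.ask({"payload": b"power_on\r\n"})
```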

Wrapping It All Up

Once the app was able to send arbitrary commands to the TV via infrared, as well as changing the input of the HDMI, I extended the Home Assistant integration to include the TV as a “media_player” entity as well. The commands I implemented were Power On and Off (discrete, rather than toggle, which means I can send a “Power On” to the TV when it’s already on and not bother it), and discrete source selection for the three sources we actually use (HDMI switch, Receiver, Commodore 64). There would be a lot more commands I could theoretically send, including volume control, but I can already access those via the receiver, so there’s no good reason to.

After that it was a matter of scripting some more complicated acts: direct selection of Portal, Chromecast, Kodi, and Nintendo Switch (which are the four things we use the most). This was easy at that point: turn on the TV (whether it was on or not), select the right input on either the receiver or the switch, then select the right input on the TV. The reason why the order seems a bit strange is that it takes a few seconds for the TV to receive commands after turning on, but by doing it this way we can switch between Chromecast and Portal, or Nintendo Switch and Kodi, in pretty much no time.

And after that worked, we decided the $5/month to Nabu Casa were worth it, because that allows us to ask Alexa or Google Assistant to select the input for us, too.

Eventually, this led me to replace Google’s “Turn off the TV” command in our nightly routine with a trigger for a Home Assistant script, too. Previously, it would issue the command to the Chromecast, routing through the whole of Google’s cloud services between the device that took the request and the Chromecast. And then the Chromecast would send the CEC command to power off… except that it wouldn’t reach the receiver, which would stay on for another two hours until it finally decided it was time to turn off.

With the new setup, Google triggers the Home Assistant script, and appears to do so asynchronously. Home Assistant then sends the request to my app, which sends it to the Feather, which sends the power-off to the TV… which is also read by the receiver. I didn’t even need to send the power-off command to the receiver itself!

All in all, the setup is satisfying.

What remains to be done is to try exposing a “Media Player” to Google Home, that is not actually any of the three “media_player” entities I have, but is a composite of them. That way, I could actually just expose the different input trees as discrete inputs to Google, and include the whole play, pause, and volume control that is currently missing from the voice controls. But that can wait.

Instead, I should probably get going on designing a new board to replace the breadboard mess I’m using right now. It’s hidden away enough that it’s not in our face (unlike the Birch Books experiments), but I would still like a… cleaner setup. And speaking of that, I would really love it if someone had already contributed an Adafruit Feather component for EAGLE, providing the space for soldering in the headers, but keeping the design referencing the actual lines as defined in it.

Controlling Your TV via (Circuit)Python

This is yet another of the pre-announced projects, and possibly one of the most procrastinated ones. I own a Sony Bravia TV I bought in Ireland in 2013, and it is still working very well for our needs (we don’t watch that much TV). It’s connected to a Yamaha RX-V475 receiver on one input and a cheap HDMI switch on the other, because there are too many devices, although we only use three or four of them most of the time: Chromecast, Portal TV, HTPC, and PlayStation 4. They are split evenly between the two inputs. So far, so good.

The problem starts with the fact that sometimes if the TV is turned on by the Chromecast or the Portal, the ARC does not initialize properly, and we hear no audio. The solution is worthy of The IT Crowd: tell the TV to use the internal speakers, then tell it to use the external speakers again — turn off and on the ARC itself. It’s annoying and takes a few keypresses.

What I have been wanting for a while is a way to ask Assistant (or Alexa) to “unfuck the TV” — that is to reset the audio channel for us, recording a macro to do that ARC off/on dance. It was for this reason I bought the Feather M4 last year, but I only ended up starting to work on it just this month.

To make this possible, the first thing I needed was to know the commands sent by the remote, and to have a way to replay them back to the TV. I already had some experience with infrared receivers: a long time ago, in a different life, I maintained the LIRC patchset for Linux 2.6 for a while. I even tried writing Qt (3) bindings for LIRC; I wonder if I can find the source code anywhere. But that was not as useful as I expected.

Thankfully, Ken Shirriff wrote good documentation on the protocol, and linked to further details, and even to a full archive of all the Sony command codes. Which made my life much easier, but not completely easy. While Adafruit has an IRRemote library, it does not correctly interpret Sony SIRC commands. I considered adding the support directly into it, but it turned out to be a bit more invasive than expected, so I ended up instead writing a separate package that included both the decoder and the encoder (this was before I started the new job, so releasing it was easy — but now I’m having to wait a little bit to continue on it.)

Now, once I managed to decode the commands, I needed to be able to send them. And here’s where things get interesting. What we usually refer to as commands are a bunch of bits. These are encoded, based on the protocol, as a series of pulses, which are then modulated on top of a carrier wave of a certain frequency.

Unfortunately, it turns out that talking to a Sony TV is nowhere near as easy as it might sound. Let’s try to figure out what’s going on by establishing a bit of vocabulary. An infrared signal as used by most remote controls usually carries a command and, in most cases, an address to specify which device should take the command (since they are usually used in places where multiple devices have infrared remote controls). These are encoded into bits according to the rules of the protocol, and then converted to pulses. The pulses are then applied to a carrier wave of a certain frequency, which defines the speed at which the infrared LED is “blinking” on and off. The LED itself has a wavelength, which represents the “colour” of the light in the infrared spectrum.
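To make the layering a bit more concrete, here is a rough sketch of the decoding direction for Sony’s SIRC: turning a captured list of mark/space durations back into a command and device address. It is an illustration of the logic, not pysirc’s actual implementation; real captures need more tolerance for jitter and for the longer 15- and 20-bit variants.

```python
# Decode a captured 12-bit SIRC frame: durations are microseconds,
# alternating mark/space, starting with the 2.4ms header mark.
def decode_sirc(durations):
    def close(value, target, tolerance=200):
        return abs(value - target) <= tolerance

    marks = durations[0::2]
    if not close(marks[0], 2400):
        raise ValueError("missing SIRC header mark")

    bits = []
    for mark in marks[1:]:
        if close(mark, 1200):
            bits.append(1)
        elif close(mark, 600):
            bits.append(0)
        else:
            raise ValueError("unexpected mark length: %d" % mark)

    # 12-bit frame: 7-bit command first, then 5-bit device, LSB first.
    command = sum(bit << i for i, bit in enumerate(bits[:7]))
    device = sum(bit << i for i, bit in enumerate(bits[7:12]))
    return command, device
```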

For Sony’s SIRC, the TV expects a 40kHz carrier and, it seems, a 950nm wavelength. You can apparently use 940nm LEDs, but they have worse reception and only work if they are bright enough. The first batch of LEDs I bought, as well as the pre-soldered module from DFRobot, turned out not to be bright enough for my TV to recognize — so I tried desoldering the LED from a replacement remote control from eBay, which worked fine, and I thought that meant I needed 950nm LEDs — but no, it works with good enough 940nm LEDs, just not with the tiny ones I originally bought from Mouser.

So once I had a way to send arbitrary Sony commands to my TV, I started looking for options to trigger the ARC reset — unfortunately this is proving more complicated than I expected: there’s no command I could send that would bring up the ARC menu. Instead I can only bring up the Sync menu reliably — but that menu has different entries depending on whether the selected input has working HDMI CEC, which is extremely annoying.

On the other hand, I did find commands that select the different inputs directly, instead of bringing up the input selection menu and choosing from there. Which gave me a different idea to start with: while I haven’t given up on the macro for fixing the audio, what I can do is script input selection across the two-level mux.

I started by wondering if I could send the IR command to the HDMI switcher as well, so that I could select between those two easily — and that turned out to be another pile of yaks to shave. The switcher uses the NEC protocol, which calls for a 38kHz carrier wave, but that turned out not to matter much (the decoder it uses seems to accept 40kHz just as well) — instead I had a hard time getting it to receive the command, because it expected a NEC “repeat signal” to seal the command. I guess that’s going to be a blog post in and of itself.
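For reference, that repeat signal is a tiny frame of its own. In the same mark/space-durations representation as before, and going by the published NEC timings (a sketch, not necessarily the exact tolerances the switch wants), it looks like this:

```python
# The NEC "repeat" frame; repeats follow at roughly 110ms intervals
# after the data frame. Durations are in microseconds.
def nec_repeat_frame():
    return [9000, 2250, 562]  # leader mark, short space, trailing mark
```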

Now, my original plan was to get something running on the Feather, attach an AirLift wing to give it WiFi, and control that… somehow. I also considered re-coding this with an ESP32 and ESPHome, despite it not having an obvious way to send SIRC commands sensibly — it doesn’t represent the commands the way the protocol expects, and the most reasonable way I could find was to generate the pulse sequence and just send that raw.

But then I thought it over and realised that, at least for the moment, it makes more sense for me to use a USB-to-GPIO bridge, and control all of this from a full-blown computer — the reason being that I want to be able to open a webpage on my phone, or my wife’s, and select the right input altogether. And while there’s a Python module for controlling the receiver, using that from CircuitPython or MicroPython would probably take some work. And even though I could just control the Feather remotely, via MQTT or USB-serial, it would still likely be more work than driving it all from the same Flask app locally.

Unfortunately, I don’t have code to show you yet. While my new dayjob has much simpler policies than the last, I need to complete the training before I can start releasing new projects. Hopefully next update will come with code and demos.

More smartphones shenanigans: Ireland and the unlocked phones

In my previous rant I noted that in Ireland it’s next to impossible to buy unlocked phones. Indeed, when I went looking for a phone for travelling to China at Carphone Warehouse (which at least in the UK is owned by Samsung), while they had a good selection of phones, they all came with contracts.

Contracts are useful for most people, since effectively the carrier is giving you a discount on a phone so that you commit to staying their customer for a certain amount of time. When you do this, they lock you to their network, so that you can’t just switch to another carrier without either giving them their due in subscriptions or paying back the discount they gave you on the phone. In general, I see this approach as reasonable, although it has clearly created a bit of a mess in the market, particularly at the cheaper end of the phone scale.

I have to admit that I have not paid enough attention to this in Ireland up to now simply because I have been using my company-provided phone for most of my day to day travel. Except in China, where it would not be really appropriate. So when I had to go back to Shanghai, I found myself in need of a new phone. I ended up buying one at Argos because they could source one for me by the following day, which is what I needed, and they also had last year’s Sony flagship device (Xperia X) at a decent discount, particularly when compared to the not-much-better Xperia XZ. Alternatively, Amazon would have worked, but that would have taken too long, and the price was actually lower at Argos, for this particular model.

As is usual for most Android phones, the device started running through a number of system software updates as it was set up. Indeed, after three cycles the device, which started off on Android 6.0, ended up on 7.0. Not only that, but by now I know that Sony appears to care about the device quite a bit. While they have not updated it to 7.1, they have pushed a new system software version — I noticed because my phone started downloading it while in Changi airport, in Singapore, connected to a power pack and the airport’s WiFi! With this update, the phone is running the Android security update of May 1st, 2017.

That made me compare it with the Xperia XA, the locked phone I bought from Three, and which I have now managed to unlock. The phone came “branded” by Three Ireland, which for the most part appeared to just mean it splashed their custom logo at boot. Unlocking the phone did not make it update to a newer version, or de-brand itself. And despite being the cheaper version of the X, and theoretically of the same generation, it was still stuck on Android 6.0.

Indeed, before the last update, probably released at the same time as the latest Xperia X firmware, the security patch level was reported as April 1st, 2016, over a year ago! Fortunately the latest update at least brings it into this year, as the patch level is now January 5th, 2017. As it turns out, even the non-branded version of the phone is only available up to Android 6.0. At least I should tip my hat to Sony for actually caring about users, at least enough to provide these updates. My Samsung Tab A is at security level June 1st, 2016, and it has had no software updates in nearly as long.

There is officially no way to de-brand a phone, but there are of course a number of options out there for doing it anyway, although a significant number of them relied on CyanogenMod, and nowadays they will rely on… whatever the name of the new project that forked from it is. I did manage to bring the phone to a clean slate with somewhat sketchy instructions, but as I said, even the debranded version did not update to Android 7.0, and I’m not sure if I would now have to manage software updates manually. But since the phone does not seem to remember that it was ever branded, and there is no Three logo, I guess it might be alright. And since I did not have to unlock the bootloader, I’m relatively confident that the firmware was signed by Sony to begin with.

What I found interesting from using the tool to download Sony’s firmware is that most of their phones are indeed sold in Ireland, but there is no unbranded Irish firmware. There are, though, a number of unbranded firmwares for other countries, including the UK. My (unbranded, unlocked) Xperia X is indeed marked down as having UK firmware. Effectively it looks like Ireland is once again acting like “UK lite” by not having its own devices, and instead relying on the UK versions. Because who would invest time and energy to cater to the 4.5M-people market we have here? Sigh.

When (multimedia) fiefdoms crumble

Mike coined the term multimedia fiefdoms recently. He points to a number of different streaming, purchase and rental services for video content (movies, TV series) as the new battleground for users (consumers in this case). There are of course a few more sides in this battle, including music and books, but the idea is still perfectly valid.

What he didn’t get into the details of is what happens when one of those fiefdoms capitulates, declares defeat, and goes away. It’s not a fun situation to be in, but we actually have plenty of examples of it, and these, more than anything else, should drive the discourse around and against DRM, in my opinion.

For some reason, the main examples of failed fiefdoms are to be found in books, and I have lived through (and recounted) a few of those instances. For me personally, it all started four years ago, when I discovered that Sony had given up on their LRF format and decided to adopt the “industry standard” ePub, by supporting the Adobe Digital Editions (ADEPT) DRM scheme on their devices. I was slow on the uptake; the announcement had come two years earlier. For Sony, this meant tearing down their walled garden, even though they kept supporting the LRF format and their store for a while – they may still do, I stopped following two years ago when I moved on to a Kindle – for the user it meant being free to buy books from a number of stores, including some publishers, bookstores with an online presence, and dedicated ebookstores.

But things didn’t always go smoothly: two years later, WHSmith partnered with Kobo, and essentially handed the latter their whole online ebook market. When I read the announcement I was actually happy, especially since I could no longer buy books off WHSmith, as they had started requiring UK billing addresses. Unfortunately it also meant that only a third of the books I had bought from WHSmith were going to be ported over to Kobo, due to an extreme cock-up with global rights, even for digital books. If I had not already stripped the DRM off all my ebooks for the sake of it, I would have lost four books and had to buy them anew. Given this was not a case of the seller going bankrupt, but of a sell-out of their customers, their refusal to compensate people was not understandable. Luckily, it did port The Gone-Away World, which is one of my favourite books.

Fast forward another year, and the Italian bookstore LaFeltrinelli decided to go the same way, with one major exception: they decided to keep users on both platforms — that way, if you want to buy a digital version of a book you still buy it on the same website, but it is provided by Kobo and lands in your Kobo library. And it seems like they at least have a better deal regarding book rights, as they seem to have ported over most books anyway. But of course it did not work out as well as it should have, throwing an error in my face and forcing me to call up Kobo (Italy) to have my accounts connected and the books ported.

The same year, I ended up buying a Samsung Galaxy Note 10.1 2014 Edition, which is a pretty good tablet with a great digitizer. Samsung ships Google Play in full (Store, Movies, Music, Books), but at the same time installs its own App, Video, Music and Book store apps, which is not surprising. But it did not take six months for them to decide that it was not their greatest idea: in May this year, Samsung announced the shutdown of their Music and Books stores — outside of South Korea at least. In this case there is no handover of the content to other providers, so any content bought on those platforms is just gone.

Not completely in vain, though: if you still have access to a Samsung device (and if you don’t, well, you had no access to the content anyway), a different kind of almost-compensation kicks in: the Korean company partnered with Amazon, of all bookstores — surprising, given that they are behind the new “Nook Tablet” by Barnes & Noble. Besides a branded «Kindle for Samsung» app, they provide one book out of a choice of four every month — the books are taken from Amazon’s KDP Select pool as far as I can tell, which is the same pool used as a base for the Kindle Owners’ Lending Library and the Kindle Unlimited offerings; they are not great, but some of them are enjoyable enough. Amazon is also keeping it honest and does not force you to read the books on your Samsung device — I indeed prefer reading on my Kindle.

Now the question is: how do you loop all this back to multimedia? Sure, books are entertaining, but they are by definition a single medium, unless you count the Kindle Edition of American Gods. Well, for me it’s still the same problem of fiefdoms that Mike referred to; indeed, every store used to be a walled garden for a long while, then Adobe came and conquered most of them with ePub and ADEPT — but then, between Apple with their iBooks (which uses its own, incompatible DRM) and Amazon with the Kindle, the walls started crumbling down. Nowadays plenty of publishers let you buy a book in ePub, and usually many other formats at the same time, without DRM, because the publishers don’t care which device you want to read your book on (a Kindle, a Kobo, a Nook, an iPad, a Sony Reader, an Android tablet …); they only want you to read the book, and get hooked, and buy more books.

Somehow the same does not seem to work for video content, although it did work to an extent, for a while at least, with music. But this is a different topic.

The reason why I’m posting this right now is that just today I got an email from Samsung saying they are shutting down their video store too — now their “Samsung Hub” platform only gets to push you games and apps, unless you happen to live in South Korea. It’s interesting to see how the battles between giants are causing smaller players to just leave the playing field… but at the same time they take their toys with them.

Once again, there is no compensation: if you rented something, watch it by the end of the year; if you bought something, sorry, you won’t be able to access it after the new year. It’s a tough world. There is a lesson, somewhere, to be learnt from this.

The misery of the ePub format

I often assume that most of the people reading my blog have been reading it for long enough to know a few of my quirks, one of which is my “passion” for digital distribution, and in particular my preference for eBooks over printed books. This passion actually stems from the fact that I’d like to be able to move out of my current home soonish, and the less “physical” stuff I have to bring with me, the better.

I started buying eBooks back in 2010, when I discovered that my Sony Reader PRS-505 (originally only capable of reading Sony’s own format) had been updated to read the “standard” ePub format, protected with Adobe’s Digital Editions DRM (ADEPT). One of my first suppliers of books in that format was WHSmith, the British bookstore chain. In the end I bought six books from them: Richard Dawkins’s The God Delusion, Nick Harkaway’s The Gone-Away World (which I had already read, but wanted a digital copy of, after giving away my hardcopy to a friend of mine), and four books of The Dresden Files.

After a while, I had to look at other suppliers for a very simple reason: WHSmith started requiring a valid UK post code, which I obviously don’t have. I then moved on to Kobo, since they seemed to have a more varied selection, and weren’t tied to the geographical distribution of the UK vendor.

Here I got one of my first disappointments with the ePub “standard”: one of the books I bought from Kobo early on, Douglas Adams’s The Salmon of Doubt, I still haven’t been able to read!

(I really wish Kobo would at least replace the book in their catalogue, since even their own applications can’t read it — or otherwise give me some store credit for a different book.)

Over time, I came to understand that the ePub specifications are just too lax to make it a good format: there are a number of ePub files that are broken simply because the ZIP file is “corrupted” (the names within the records don’t match); a few required me to re-package them to be readable by the Reader; and a few more are huge just because they bundle their own copy of the DejaVu font family in the zip file itself. Of course, to fix any of these issues you also have to take the DRM out of the picture, which is luckily very easy for this format.
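To give an idea of what that re-packaging can look like in practice, here is a generic sketch (assuming the DRM is already out of the way): rewrite the archive so that the mimetype entry comes first and uncompressed, as the ePub container spec requires, with everything else deflated. It is an illustration, not the exact procedure I used.

```python
# Quick-and-dirty ePub re-pack: mimetype first and stored uncompressed,
# everything else deflated. Input file must already be DRM-free.
import zipfile


def repack_epub(broken_path, fixed_path):
    with zipfile.ZipFile(broken_path) as src, \
         zipfile.ZipFile(fixed_path, "w") as dst:
        names = src.namelist()
        if "mimetype" in names:
            dst.writestr("mimetype", src.read("mimetype"),
                         compress_type=zipfile.ZIP_STORED)
        for name in names:
            if name != "mimetype":
                dst.writestr(name, src.read(name),
                             compress_type=zipfile.ZIP_DEFLATED)
```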

Today, Kobo is once again the protagonist of a disappointment, a huge one, in terms of digital distribution; together with WHSmith. But first let’s take a step back to last week.

While in the United States with Luca, I got my hands on a Kindle (the version with the keyboard). Why? Well, on one side I was starting to be irked by the long list of issues I noted earlier about ePub books; on the other hand, a few books, such as Ian Fleming’s classic Bond novels, were not available on Kobo or other ePub suppliers, while they were readily available on Amazon… plus a few of the books I could find on both Kobo and Amazon were slightly cheaper on the latter. I had already started reading Fleming’s novels on the iPad through Amazon’s app, but I don’t like reading on a standard LCD.

Coming back home, we passed through London Heathrow; Luca went to look for a book to read on the way home, and we went to the WHSmith shop there… and I was surprised to see it was now selling Kobo’s own reader device (the last WHSmith shop I had been to, a couple of years ago, was selling Sony exclusively). This sounded strange, considering that WHSmith and Kobo were rivals, for me in particular but in general as well.

I wasn’t that far off when I smelled something fishy; indeed, tonight I received an email from WHSmith telling me they have joined forces with Kobo, and that they will no longer supply eBooks on their webshop. The format being what it is, if they no longer kept the shop you’d be left without a way to re-download your eBooks, which is why it is important to me for a digital distributor to be solid… turns out that WHSmith is not as solid as I supposed. So they suggest you make an account at Kobo (unless you have one already, like I did) so that they can transfer your books to that account.

Lovely! For me that was very good news, since having the books on my Kobo account means not only being able to access them as ePub (which I had already), but also that I could read them on their apps for Android and iPad, as well as on their own website (very Amazon-y of them). Unfortunately there is a problem: out of the six books I bought at WHSmith, they only let me transfer… two!

It seems that, even though WHSmith decided to give (or sell) its customers to Kobo, as well as leaving the ebook offering to them, their partnership does not cover the distribution rights of the books they used to sell. This means that, for instance, the four Dresden Files novels I bought from WHSmith, which were published, even digitally, by the British publisher, are not available on the Canadian store Kobo, which only lists the original RoC offerings.

This brings up two whole issues: the first is that unless your supplier is big enough that you can rely on it to exist forever, you shouldn’t trust DRM; luckily for me, on the ePub side the DRM is so shallow that I don’t really care about its presence, and on the other hand I expect Amazon’s DRM to be broken way before they start to crumble. The second issue is that even in the market of digital distribution, which is naturally a worldwide, global market, regional limitations are making it difficult to have fair offerings; again, Amazon seems to sidestep this issue, as it appears to me that no book in their Kindle offerings is available in only one region: the Italian Kindle store covers all the American books as well.

How you can tell you’re dealing with a bunch of fanboys

In my previous post, where I criticised Linus’s choice of bumping the kernel’s version to 3 without thinking through the kind of problems we, as distributors, would face with broken build systems that rely on the output of the uname command, I expected mixed reactions, but I mostly thought it would bring in technical arguments.

Turns out that the first comment was actually in support of the breakage for the sake of finding bugs, while another (the last at the time of writing) shows the presence of what, undeniably, is a fanboy. A Linux (or Linus) one at that, but still a fanboy. And yes, there are other kinds of fanboys besides Apple’s. Of the two comments, the former is the one I actually respect.

So how do you spot fanboys of all trades? Well, first look for people who stick with one product, or one manufacturer. Be it Apple, Lenovo, Dell, or, in the case of software, Canonical, the Free Software Foundation, KDE or Linus himself, sticking with a single supplier without even opening up to the idea that others have done something good is an obvious sign of being a fanboy.

Now, it is true that I don’t like having things from many different vendors, as they tend to work better together when they come from the same one, but that’s not to say I can’t tell when another vendor does something good. For instance, after two Apple laptops and an iMac, I didn’t have to stay with Apple… I decided to get a Dell, and that’s what I’m using right now. Similarly, even though I liked Nokia’s phones, my last two phones were a Motorola and, nowadays, an HTC.

Then notice whether they can accept flaws in the product or in its maker’s decisions. This is indeed one of the most obnoxious behaviours of Apple’s fanboys, who tend to justify all the choices of the company as something done right. Well, here is the catch: not all of them are! Part of this is underscored in the next trait, but it is important to understand that for a fanboy, even what would be a commercial failure, able to bring a company near bankruptcy, is a perfect move that was just misunderstood by the market.

Again, this is not limited to Apple fanboys; it shouldn’t be so difficult to identify a long list of Nokia fanboys who keep supporting their multi-headed workforce investment strategy of maintaining a number of parallel operating systems and classes of devices, in spite of a negative market response… and I’m talking about those who are not to gain directly from said strategy — I’m not expecting the people being laid off, or those whose tasks are to be reassigned from their favourite job, to be unsupportive of said strategy of course.

But while they are so defensive of their love affair, fanboys also can’t see anything good in what their competitors do. And this is unfortunately way too common in the land of Free Software supporters: for them Sony is always evil, Microsoft never does anything good, Apple is only out to make crappy designs, and so on.

This is probably the most problematic situation: since you can’t accept that the other manufacturers (or the other products) have some good sides to them, you will not consider improvements in the same way. This is why just saying that anybody claiming Apple did something good is a fanboy is counterproductive: let’s look at what they do right, even if it’s not what we want (they are after all making decisions based on their general strategy, that is certainly different from the Free Software general strategy).

And finally, you’re either with them or against them. Which is what the comment that sparked the discussion shows. You’re either accepting their exact philosophy or you’re an enemy, just an enemy. In this case, I merely had to suggest that Linus’s decision was made without thinking of our (distributors’) side, and I became an enemy who should go use some other project.

With all this on the table, can you avoid becoming a fanboy yourself? I’m always striving to make sure I avoid that; I’m afraid many people don’t seem to accept that.

My take about the Sony EPIC FAIL

First of all let me be clear: it was an EPIC FAIL, and anybody saying otherwise is pretty much deluded. On the other hand, even though I’m one of those users who have every right to be upset with Sony (I just renewed my Plus subscription a couple of weeks ago), I don’t feel this is everything people pretend it to be. A lot of the sentiment you see out there seems to come from people who aren’t in that database at all, and who are just trying to take a shot at Sony, either because they hate all corporations alike, or because they still feel Sony should have kept Linux an option on the PS3… or, even more likely, because they would like Sony to invest more in developing consoles but expect not to pay for games.

Am I being cynical? Probably. But I have also read enough posts over the last year or so that seem to pretend that each and every PS3 owner should have felt robbed of the opportunity of running Linux on their systems… and as a PS3 owner myself, I don’t really see the point. Sure, there were a couple of things that, as a Linux enthusiast and hacker, I would have liked to be able to do, but with the exception of the clustering efforts to crunch numbers (a field which nowadays seems to be in the hands of high-end graphics cards), the most useful thing I have seen done with Linux on a PlayStation 3 has been testing BluRay movies with Linux, like Steve “beandog” posted on Planet Gentoo a long time ago.

But my take is more about putting into perspective what the EPIC FAIL was about. I definitely don’t count an intrusion in general as an EPIC FAIL by itself: most systems out there are going to fail one way or another… of course, we don’t expect them to fail as badly as putting this many users at risk. But then again, my main reason to think that Sony misdesigned their whole network is a simple one: the intruders got hold of the users’ passwords.

Given that I expect most people commenting on or reading about this are non-technical gamers (and people who don’t play but, as I said above, want to feel smug about it), I don’t expect most of them to put this into context: “Obviously Sony knows my password! I tell it to them every time I connect!” — which, for anybody who has ever worked on securing web applications, is a very naïve statement.

When you design a secure login system you do not store the password, but rather a function (hash) of it; when the login request comes in, you take the received string, apply the same function to it, and compare the result with the one you stored. Bonus points for salting such a hash, so that the same password, on two different users’ records, is stored differently. This is why good systems have “Reset password” options, and not “Recover password” ones (and why I loathe those systems which do send me back my password).
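To make the point concrete, here is roughly what that scheme looks like in Python, using only the standard library; a real deployment would reach for a dedicated password-hashing function such as bcrypt or scrypt, but the shape is the same: store salt and digest, never the password.

```python
# Toy salted-hash login storage: only (salt, digest) hit the database.
import hashlib
import hmac
import secrets


def store_password(password):
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest


def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return hmac.compare_digest(candidate, digest)
```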

The fact that Sony declared passwords and (interestingly) security questions compromised makes it look likely that they didn’t store hashes, but rather the cleartext passwords. I’m not sure about this myself, to be honest: it sounds very stupid of them to make such a puny mistake, but Occam’s Razor calls for the most obvious explanation, and that is definitely it. A more complex (but still feasible) explanation is that the intrusion was a long-term one, and that the intruders were able to snoop the passwords between the user and the authentication chain, during the time they are left in cleartext from the application’s point of view.

I’ll leave a point for discussion for those who have had to deal with credit card handling: I know there are security protocols that need to be followed to be allowed to process credit cards; is a “hash the passwords” requirement missing from them? If so, it might be as much the fault of the credit card companies as it is Sony’s.

Speaking of long-term, I’m still not sure why everybody assumes that the (apparent) DoS on Sony’s infrastructure was related to the intrusion. Most complainers also seem to ignore Sony’s statement about only finding out later that the database had been compromised to this extent. From experience, it sounds like oversimplifying the situation. Until further pointers, it cannot be entirely ruled out that the intrusion was an inside job, maybe ongoing for months or more by now, and that the DoS only served, unintentionally, as a way to catch the auditors’ eyes. Personally I’d believe that, on account of the fact that Sony is not telling you that your currently listed credit card is compromised, but that any credit card you ever used is compromised. Which is scarier.

So, to add at least a bit of a point to this whole mess: I think that commentators from the Free Software area, at least, should stop trying to find fault with corporations that don’t wholly share their ideals, and should rather try showing users another viable way. Asking them to stay in the past decade is not viable, and yet if we keep bickering among ourselves, that is definitely what’s going to happen. Anybody say “fragmentation”?

A year with my Reader

Okay, so it’s not a full year in any sense – I bought it well over a year ago, and I only found out that it could be used with a modern technology like ePub this past April – but if I have to remember one as such, 2010 has been the year of the Reader for me. And not just for me, as it happens.

First of all, thanks to the Reader I was able to read a whole lot more than in past years; I’m not sure if it’s just novelty that will wear off, but I’m pretty sure I wouldn’t have brought as many books to read during travels as I did, mostly because of the small form factor (and the fact that it fits neatly into my usual bag). Anobii’s statistics report that I read 31 books this year, ten thousand pages’ worth of content — and that says nothing about the sheer variety of them compared to the past.

While I never limited my reading to a particular genre, the Reader, with its much cheaper ebooks, allowed me to choose from a wider range of books. The convenience of getting a book right away is also not something to ignore; I actually read Cyber War mostly because I could get it right after hearing about it on Real Time with Bill Maher. Besides that particular book, I went on to read classics like The Picture of Dorian Gray that I had never found the time to look up before, and economics/business books such as Too Big To Fail and Free, which actually interested me greatly.

Surprisingly, what I found most difficult to read on the Reader was the very reason I originally looked back at the Reader: O’Reilly books. Since they are generated with DocBook, they have one big issue that makes them hard to read on these devices: they split too much. Take for instance Being Geek, which I’d like to read next, if I can find a way to do so without irks; in the PDF or print edition, there are page breaks only between “sections”, rather than chapters. Chapters, which are actually often just a couple of paragraphs long, are simply printed one after the other continuously; this is quite okay because otherwise the padding added at the end of each would waste a lot of paper, and would turn a 200ish-page book into a 500ish-page one. As I said, DocBook’s ePub generation is imperfect in this regard, as it splits the output HTML files (yes, it uses HTML internally, let’s move on) on chapter markers, which means that every three paragraphs I have to wait for the Reader to fetch the next file to render separately, slowing my reading down enough to make it difficult to continue.

Reading the PDF version of books on the Reader is also not the brightest idea; since the screen of the PRS-505 is relatively small, you can’t read a PDF file full-size. While the newest firmware allows zooming and reflowing the text, this too becomes unusable as a way to read O’Reilly books, because the page number markers are not ignored. It’s even worse when complex diagrams are involved, as the Reader is pretty much useless for those — for those technical books I probably wouldn’t mind a tablet with a bigger screen; I’ve been considering the Archos 101, but I don’t currently have the dough to afford one, and by the time I do, they’ll probably be sold out already.

Speaking of tablets, once again I think that Apple, even though it can’t really be praised for a tremendously good job with the iPad, had a primary role in making 2010 the year of eBooks, not only for me but for the whole situation, together with Amazon — the latter finally launched the Kindle in Europe (and once again, it’s not something I’d buy, considering Amazon’s “it’s all ours!” approach). With those two companies driving consumer attention (even though rarely consumers themselves) toward eBooks, I was somewhat curious about the Italian branches of Mediamarkt and Saturn starting to carry eInk devices in-store, especially since I knew there was no real Italian eBook pool to draw from for the customers buying the devices.

Turns out that, while Amazon entered the Italian market, IBS, which has many times been considered the Italian answer to Amazon, and Mediamarkt itself opened eBook stores, carrying Italian content in ePub format (mostly locked with Adobe Digital Editions DRM). I’m happy to note that while the backcatalog is still not available, they at least carry both the big take-it-all publisher Mondadori and the smaller Chiarelettere with its mostly political books — especially nice since the books from the latter I bought before were both pretty expensive and quite huge in the purely physical dimension.

At any rate, the bottom line for me is that the Reader now looks like a pretty good buy, more than it ever did at the time. But please, make it possible to skip the wireless, 3G, Bluetooth, touchscreen… the two weeks of charge that the PRS-505 both promises and delivers makes all of those look like waste, especially since I only end up loading new stuff onto it once every two weeks, which is also when I end up charging its battery.

Rygel: replacing MediaTomb?

I’ve ranted about overlays before, leaving notes about the Gnome overlay; I had to fight with that because of Rygel, which reportedly needed the new (testing) version of Gtk+.

Now, my interest in Rygel is so that we can get rid of MediaTomb in Portage; I added it myself, when I tried to make use of my PlayStation 3 for streaming video (mostly anime and Real Time with Bill Maher). It actually never worked as well as I intended for very long; it needed proper transcoding support, and what was there was incomplete. Also, the code itself was messy and hacky, with commented-out code still there, and bundled libraries. When I replaced my Samsung TV with a Sony Bravia last year, I was also hoping MediaTomb would work with that (because it supports UPnP by itself), but that wasn’t the case.

At any rate, with MediaTomb failing to keep up with releases, clean up its code, and so on and so forth, I largely gave up on the UPnP idea; even using the XBMC instance on my AppleTV, the best option seems to be using Samba instead. That was until a couple of weeks ago, when I started worrying about my bedroom’s media outlets. I have three UPnP-enabled devices (Bravia TV, PlayStation 3 and XBox 360), but I use an always-on AppleTV to play my stuff; that really sounds like a waste.

Even more so given that the AppleTV doesn’t really play Full HD content, not with XBMC at least; and my hopes for it to become useful, actually trusting Apple’s declaration that they would bring TV series to the iTunes Store in Europe, vanished quite a long time ago. And to reinforce the fact that I made a totally shitty deal when I bought this AppleTV, rumors have it that the new version will be a totally different product, cheaper and with no on-board storage… now I can guess the reason for that; as I said, I stream my video from my main storage (Yamato itself), but I am actually glad that the AppleTV I have has 160GB of storage, so I can keep a copy of my photos, and of all the music I have, in a lossless format (ALAC).

At any rate, I wanted to give UPnP and the PlayStation 3 another chance; and the current way to do that is using Rygel, developed by the Gnome community and tied to GStreamer (even though I have a number of personal reservations about it). Now, thankfully, most of the needed code was already around in Portage, and Petteri had a partial ebuild for an older Rygel version, so I spent a night trying to work it out. It needs the GUPnP stack that is developed together with it, obviously, and it relies on Vala for a big part of it, including the GUI.

The “stable” version is from quite a long time ago; and if you know software well enough, you know that “stable” means “unmaintained” when its release version ends with a “.0”. So I went for the development series, 0.7. And updating the dependencies, it turned out to need Gtk+ 2.21, with all the related trouble. Funnily, Arun notified me that the configure.ac script actually outright lies: it requires 2.21 just because it can, but it does not need it, and works fine with 2.20; I haven’t had much time to update the ebuilds so that they ignore the dependency, but I was able to test it for a little while with 2.21.

I’m sincerely not overly impressed with it; of course it works definitely better than MediaTomb, and I guess UPnP/DLNA are messed up enough that it’s hard to actually get them working properly, but… it seems like for the European version of the Bravia TV it always transcodes to mpeg2/mp3 (which, I’ll tell you, is crappy quality; the TV can do DVB-T HD out of the box, so I guess it has support for decoding H.264/AAC), and even the PlayStation 3 has trouble identifying some files, even when they should play correctly from local storage; and the PS3 is declared to be their platform of choice.

The interface itself is quite difficult to work with, and there is no way to monitor the scan status; it also only indexes files if the extension is one of those it recognizes and… funnily enough, it recognizes .mp4 files but not .m4v files, which are just the same thing; rename the latter to the former, and voilà, it works… so you’ve got a possible bug there. I haven’t reported it other than on IRC, where I was told to check the config files, which… are quite a bit messed up.

I’ll fix up some ebuilds for Rygel in tree at some point this weekend, if I can find the time; it’s still a pretty good replacement for MediaTomb, though it’s not something I’d use rather than XBMC-over-Samba.

Of course this still does not solve my problem with video playback, especially since it does not work with softsubs, which are awfully common with J-Drama…

How Sony gets even more of my money

As I said before, I’m a very pragmatic person when it comes to software and hardware, as they are just tools. While I can understand the ethics, I find there are a number of much more important things to fight for the higher ground on (politics, the environment, and even more so, health care). In particular, when it comes to videogames, I’m very much just looking for the entertainment value. So I really don’t care whether a game is free software or not.

*If you disagree with that and find me a bad person for actually playing any proprietary game, you’re free to stop reading this entry, or even my blog entirely, now.*

Sometimes this works out for the greater good anyway, given that id Software, at least, used to release the engines of their older games to the public, and you can actually learn from them and create something new. How many games are there based on the Quake 3 engine that was opened up?

But at any rate, this said, I think it’s interesting to look at the strategies that proprietary software developers use to get money, as some of them can be used in the Free Software world to find the funds to work on new projects. And this includes, to some extent, proprietary games developers.

This said, I have already written about Hustle Kings, which I positively adored: a pool game, much more complex than foobillard, for what it’s worth (not that I’ve been able to play foobillard in a very long time; it doesn’t seem to work that well on an r700 with KMS and Compiz, let alone the older r300 I was using). It’s a pretty inexpensive game by itself, and they provided “crazy tables” expansions for free, and then an (even cheaper) snooker table extension — and I adore snooker.

Also, just like I’m quite happy to buy hardware from Free Software-friendly producers, and to subsidize projects I rely upon but cannot help with technically, I’m quite fine with paying for games if it means the studios that made them can make better, nicer games for me to play and be entertained by.

At the same time, I don’t think that trying to squeeze money out of me with “premium content” such as avatars and themes is something that works at all; both Sony and Microsoft do this with their consoles, though. But there are people paying for that stuff, and if it works for them… it’s just like people paying for novelty ringtones… and if somebody feels like I’m quoting some witty British author whose unfortunate departure a few years ago is still hurting us… well, that is the case.

Anyway, last July Sony started a subscription model for their PlayStation Network, called PlayStation Plus. While it is priced not much lower than Microsoft’s subscription for XBox Live Gold, it feels less oppressive: anything that you could do before is still free; it just adds discounts, some premium content, a feature or two and, more importantly, four free games each month, divided into a standard PS3 game, one original PlayStation game, and two minis that can be played on both PS3 and PSP.

I subscribed to the option (I’m not subscribed to XBox Live), and I actually find it not too bad an idea. And it shows that they are not just dumping a whole lot of stuff on users this way; first of all, by subscribing within the first two months of the offering, you got one extra bonus: Little Big Planet. The game, which I won’t go into the details of, is still selling at full price in Italy (€60), so it’s not a bad gift, with a one-year subscription priced at €50. But even so, it’s a game that is just a start for them, as it has a huge amount of “premium” downloadable content in the form of costumes for your character (a slightly more decent offering than avatars, I’d say), and level creation kits. And new levels altogether!

With the basic offering for July, they also gave out the definitely-nice Wipeout HD, which, I have to say, I loved, as it reminds me of the Podracer game I played as a demo sooo many times when I was a boy. What is important here is that while the game does not have as much downloadable content as LBP, it already has an expansion available in the store… and even more cleverly, they made the expansion (Wipeout HD Fury) available at a discount for PlayStation Plus subscribers in August. I’m quite sure a lot of people will be getting it if they didn’t already.

And this month’s main game is instead Zen Pinball. Now, I have to say I was very happy with this choice, because I’m a closeted virtual pinball player. I used to play a lot of pinball games when I was younger: one I had as an MS-DOS game, at the time when copying or passing on a game was just a matter of calling DISKCOPY (in all caps, obviously, since it was DOS), and another demo I played for a veeery long time (if you couldn’t tell, I wasn’t buying many games back when I was actually playing them as a kid and boy, under Windows) was Balls of Steel… so Zen Pinball had actually interested me for a while; I even came close to buying it at some point, but dropped the idea, and thus I’m quite happy with their choice.

What does this have in common with the other choices, though? Well, there is a series of extra tables, with different themed features, available to download for a price small enough to be interesting. So even here, by giving away the basic game, they are making money by distributing the extra downloadable content.

It is interesting how even Sony is moving toward a free(ish)-platform, paid-content idea, and that is not altogether a bad thing. The problem is, can we provide any similar strategy in Free Software without compromising? If so, how?