Software Defined Remote Control

A number of months ago I spoke about trying to control a number of TV features in Python. While I did manage to get some of the adapter boards that I thought I would use printed, I hadn’t had the time to work on the software to control this before we started looking for a new place, which meant I shelved the project until we could move. Once we got there, it was a matter of getting settled in, and then… you get the idea.

As it turns out, I had one week free at the end of November — my employer decided to give three extra days off on the (US) Thanksgiving week, and since my birthday was at the end of the week, I decided to take the remaining two days off myself to make it a nice nine contiguous days off. The perfect timeframe to go and hack on projects such as this one.

Also, one thing changed significantly since the time I started thinking about this: I started using Home Assistant. And while it started mostly as a way for me to keep an eye on the temperature of the winter garden, I found that with a bit of configuration, and a pull request, changing the input on my receiver with it was actually easier than using the remote control and trying to remember which input was mapped to what.

That gave me finally the idea of how to implement my TV input switch tree: expose it as one or more media players in Home Assistant!

Bad (Hardware) Choices

Unfortunately, as soon as I went to start implementing the switching code, I found out that I had made a big mistake in my assumptions: the Adafruit FT232H breakout board does not support PWM outputs, including general time-based pulsing (without a carrier frequency). Indeed, while the Blinka library can technically support some of the features, it seems like none of the Linux-running platforms would be able to manage that. So there goes my option of just using a computer to drive the “fake remote” outputs directly. Well, at least without rewriting it in some other language and finding a different way to send that kind of signal.

I looked around for a few more options, but all of them ended up being a compromise: MicroPython doesn’t have a very usable PulseOut library as far as I could tell; Arduino-based libraries don’t seem to allow two outputs to happen at roughly the same time; and as I’m sure I already noted in passing, CircuitPython lacks a good “secondary channel” to be instructed from a computer (the serial interface is shared with the REPL control, and the HID is gadget-to-host only).

After poking around a few options and very briefly considering writing my own C version on an ATmega, I decided to take the path of least resistance: go back to CircuitPython, and try to work with the serial interface and its “standard input” to the software.

The problem with doing that is that Ctrl-C is intended to interrupt the running program, which means you cannot send the byte 0x03 un-escaped. In the end I thought about it, and decided that CircuitPython is powerful enough that just sending the commands in ASCII wouldn’t be an issue. So I decided to write a simplistic Flask app that would take a request over HTTP and send the command via the serial port. It worked, sort of. Sometimes while debugging I would end up locking the device (a Trinket M0) in the REPL, and that meant the commands wouldn’t be sent.

The solution I came up with was to reset the board every time I started the app: sending Ctrl-C and Ctrl-D (0x03, 0x04) interrupts whatever is running and forces a soft reboot. It worked much better.
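As a sketch of how this boils down in code — the helper names and the ASCII-line-plus-CRLF framing here are my own assumptions, not the actual app’s protocol — the reset-then-send dance is just a matter of writing two control bytes at startup, and then framed ASCII lines, to anything with a `write()` method (such as a pySerial `Serial` object):

```python
# Hypothetical sketch; the command framing is an assumption, not the app's
# actual wire protocol.
REPL_INTERRUPT = b"\x03"  # Ctrl-C: stop whatever program is running
SOFT_REBOOT = b"\x04"     # Ctrl-D: make CircuitPython reload code.py

def reset_sequence() -> bytes:
    """Bytes to send at startup to kick the board out of a stuck REPL."""
    return REPL_INTERRUPT + SOFT_REBOOT

def encode_command(command: str) -> bytes:
    """Frame a command as an ASCII line; 0x03/0x04 must never appear in it."""
    data = command.encode("ascii") + b"\r\n"
    assert REPL_INTERRUPT not in data and SOFT_REBOOT not in data
    return data

def send(port, command: str) -> None:
    """Write a framed command to any file-like port (e.g. serial.Serial)."""
    port.write(encode_command(command))
```

In the Flask handler this would amount to something like `send(serial_port, "input 2")`, with `reset_sequence()` written once when the app starts.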

Not-Quite-Remote Controlled HDMI Switch

After that worked, the problem was ensuring that the commands sent actually worked. The first component I needed to send the commands to was the HDMI switch. It’s a no-brand AliExpress-special HDMI switch, with one very nice feature for what I need to do right now. It obviously has an infrared remote control – one of those thin, plasticky dome ones – but notably, its receiver is on a cord, connected with a pretty much standard 3.5mm “audio jack”.

This is not uncommon. Randomly searching Amazon or AliExpress for “HDMI switch remote” finds a number of different, newer switches that use the same remote receiver, or something very similar to it. I’m not sure if the receivers are compatible with each other, but the whole idea is the same: by using a separate receiver, you can stick the HDMI switch behind a TV, for instance, and just have the receiver poke out from below. And most receivers appear to be just a dome-encased TSOP17xx receiver — a 3-pin IC, which works great over a TRS connector.

When trying this out, I found that I could use a Y-cable to allow both the original receiver and my board to send signals to the switch — at which point, I can send in my own pulses, without even bothering with the carrier frequency (refer to the previous post for details on this, it’s long). The way the signal is sent, the pulses need to ground the “signal” line (which usually sits at 5V); to avoid messing up the different supplies, I put it behind an opto-coupler, since those are shockingly cheap when bought in bulk.

But when I tried setting this up with input selection, I found myself unable to get the switch to see my signal. This turned out to require an annoying physical debugging session with the Saleae and my TRRS-to-Saleae adapter (which I have still not released, sorry folks!), which showed I was a bit off on the timing of the NEC protocol the switch used for the remote control. This is now fixed in the pysirc library that generates the pulses.
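For reference — pysirc takes care of this for me, so the function below is just a from-scratch sketch of the timing that bit me — a standard NEC frame is a 9ms leader mark and a 4.5ms space, followed by 32 bits (address, inverted address, command, inverted command, each LSB first), where every bit is a ~562µs mark followed by a ~562µs space for a 0 or a ~1687µs space for a 1, closed by one final mark:

```python
def nec_pulses(address: int, command: int) -> list:
    """Return alternating mark/space durations (µs) for one NEC frame."""
    frame = bytes([address & 0xFF, (address ^ 0xFF) & 0xFF,
                   command & 0xFF, (command ^ 0xFF) & 0xFF])
    pulses = [9000, 4500]          # leader: 9 ms mark, 4.5 ms space
    for byte in frame:
        for bit in range(8):       # LSB first
            pulses.append(562)     # every bit starts with a ~562.5 µs mark
            pulses.append(1687 if (byte >> bit) & 1 else 562)
    pulses.append(562)             # final mark terminates the last space
    return pulses
```

Getting any of these durations a few hundred microseconds off is enough for a picky decoder to drop the whole frame, which is exactly what the Saleae session showed.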

Once I got the input selector working for the switch with the Flask app, I turned to Home Assistant and added a custom component that exposes the switch as a “media_player” platform. In a constant state of “Idle” (since it doesn’t have a concept of on or off), it allowed me and my wife to change the input while seeing the names of the devices, without hunting for the tiny remote, and without having to dance around to be seen by the receiver. It was already a huge improvement.

But it wasn’t quite where I wanted it to be yet. In particular, when our family calls on Messenger, we would like to be able to just turn on the TV with the right input selected. While this was partially possible (Google Assistant can turn on a TV with a Chromecast), and we could have tried wiring up the Nabu Casa integration to select the input of the HDMI switch, it would not have worked right if the last thing we used the TV for was the Nintendo Switch (not to be confused with the HDMI switch) or Kodi — those are connected via a Yamaha receiver, on a different input of the TV set!

Enter Sony

But again, this was supposed to be working — the adapter board included a connection for an infrared LED, and that should have worked to send out the Sony SIRC commands. Well, except it didn’t, and that turned out to be another wild goose chase.

First, I was afraid that when I fixed the NEC timing I broke the SIRC one — but no. To confirm this, and to make the rest of my integration easier, I took the Feather M4 to which I had hard-soldered a Sony-compatible IR LED, and wrote what is the eponymous software defined remote control: a CircuitPython program that includes a few useful commands, and abstractions, to control a Sony device. For… reasons, I have added VCR as the only option besides TV; if you happen to have a Blu-ray player by Sony, and you want to figure out which device ID it uses, please feel free.

It might sound silly, but I remember seeing a UX research paper from the ’90s about using gesture recognition on a touchpad to allow more compact remote controls. Well, if you wanted, you could easily turn this CircuitPython example into a touchscreen remote control for any Sony device, as long as you can find all the right device IDs, and hard-code a bunch of additional commands.

So, once I knew that at least on the software side I was perfectly capable of controlling the Sony TV, I had to go and do more hardware debugging with the Saleae, but this time with the probes directly on the breadboard, as I had no TRS cable to connect to. And that was… a lot of work, rewiring stuff to try things out.

The first problem was that the carrier frequency was totally off. The SIRC protocol specifies a 40kHz carrier frequency, which is supposedly easier to generate than the 38kHz used by NEC and others, but somehow the Saleae was recording a very variable frequency that oscillated between 37kHz and 41kHz. So I was afraid that trying to run two PWM outputs on the Trinket M0 was a bad idea, even if one of them was set to nought hertz — as I said, the HDMI switch didn’t need a carrier frequency.

I did toy briefly with the idea of generating the 40kHz carrier wave separately, and just gating it with the same type of signal I used for the HDMI switch. Supposedly, 40kHz generators are easy, but at least the circuits I found at first glance require a part (a 640kHz resonator) that is nearly impossible to find in 2020 — it probably fell out of use. But as it turns out, it wouldn’t have helped.

Instead, I took another Feather. Since I had run out of M4s, except for the one I had already hardwired an IR LED to, I pulled out the nRF52840 that I had bought and barely played with. It should have been plenty capable of giving me a clean 40kHz signal, and it indeed was.

At that point I noticed another problem, though: I had totally screwed up the adapter board. On my Feather M4, the IR LED was connected directly between 3V and the transistor switching it. A bit out of spec, but not uncommon given that it’s flashed for very brief impulses. On the other hand, when I designed the adapter, I connected it to the 5V rail. Oops, that’s not what I was meant to be doing! And I did indeed burn out the IR LED with it, so I had to solder a new one onto the cable.

Once I fixed that, I found myself hitting another issue: I could now turn on and off the TV with my app, but the switch stopped responding to commands either from the app or from the original remote! Another round of Saleae (that’s probably one of my favourite tools — yes I splurged when I bought it, but it’s turning out to be an awesome tool to have around, after all), and I found that the signal line was being held low — because the output pin is stuck high…

I have not tried debugging this further yet — I can probably reproduce it without my whole TV setup, so I should do that soonish. It seems like opening both lines for PWM output causes some conflict, and one or the other ends up not actually working. What I settled on was allowing only one command before restarting the Feather. It meant taking longer to complete the commands, but it allowed me to continue with my life without further pain.

One small note here: since I wasn’t sure how Flask concurrency would interact with accessing a serial port, I decided to try something a bit out of the ordinary, and set up the access to the Feather via an Actor using pykka. It basically means leaving one thread with direct access to the serial port, and queuing commands as messages to it. It seems to be working fine.
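The same serialize-through-one-thread idea can be sketched with just the standard library — pykka wraps this pattern up much more nicely; the class and method names below are mine, not pykka’s API:

```python
import queue
import threading

class SerialPortActor:
    """One thread owns the port; callers queue commands as messages."""

    def __init__(self, port):
        # `port` is any object with a .write(bytes) method, e.g. serial.Serial.
        self._port = port
        self._inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def tell(self, command: bytes) -> None:
        """Fire-and-forget: enqueue a command without blocking the caller."""
        self._inbox.put(command)

    def _run(self) -> None:
        # Only this thread ever touches the port, so Flask's worker threads
        # can never interleave writes on the wire.
        while True:
            command = self._inbox.get()
            self._port.write(command)
            self._inbox.task_done()
```

With pykka, the equivalent is a `ThreadingActor` whose `actor_ref.tell()` delivers messages to a single `on_receive()` loop; the benefit either way is that the serial port never sees concurrent writes.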

Wrapping It All Up

Once the app was able to send arbitrary commands to the TV via infrared, as well as changing the input of the HDMI, I extended the Home Assistant integration to include the TV as a “media_player” entity as well. The commands I implemented were Power On and Off (discrete, rather than toggle, which means I can send a “Power On” to the TV when it’s already on and not bother it), and discrete source selection for the three sources we actually use (HDMI switch, Receiver, Commodore 64). There would be a lot more commands I could theoretically send, including volume control, but I can already access those via the receiver, so there’s no good reason to.

After that it was a matter of scripting some more complicated acts: direct selection of Portal, Chromecast, Kodi, and Nintendo Switch (which are the four things we use the most). This was easy at that point: turn on the TV (whether it was on or not), select the right input on either the receiver or the switch, then select the right input on the TV. The reason why the order seems a bit strange is that it takes a few seconds for the TV to accept commands after turning on, but by doing it this way we can switch between Chromecast and Portal, or Nintendo Switch and Kodi, in pretty much no time.

And after that worked, we decided the $5/month to Nabu Casa were worth it, because that allows us to ask Alexa or Google Assistant to select the input for us, too.

Eventually, this led me to replace Google’s “Turn off the TV” command in our nightly routine with one that triggers a Home Assistant script, too. Previously, it would issue the command to the Chromecast, routing through the whole Google cloud services between the device that took the request and the Chromecast. And then the Chromecast would send the CEC command to power off… except that it wouldn’t reach the receiver, which would stay on for another two hours until it finally decided it was time to turn off.

With the new setup, Google is triggering the Home Assistant script, and appears to do that asynchronously. Then Home Assistant sends the request to my app, that then sends it to the Feather, that sends the power off to the TV… which is also read by the receiver. I didn’t even need to send the power off command to the receiver itself!

All in all, the setup is satisfying.

What remains to be done is to try exposing a “Media Player” to Google Home, that is not actually any of the three “media_player” entities I have, but is a composite of them. That way, I could actually just expose the different input trees as discrete inputs to Google, and include the whole play, pause, and volume control that is currently missing from the voice controls. But that can wait.

Instead, I should probably get going at designing a new board to replace the breadboard mess I’m using right now. It’s hidden away enough that it’s not in our face (unlike the Birch Books experiments), but I would still like a more… clean setup. And speaking of that, I would really love it if someone had already contributed an Adafruit Feather component for EAGLE, providing the space for soldering in the headers, but keeping the design referencing the actual lines as defined in it.

Controlling Your TV via (Circuit)Python

This is yet another of the pre-announced projects, and possibly one of the most procrastinated ones. I own a Sony Bravia TV that I bought in Ireland in 2013, and it is still working very well for our needs (we don’t watch that much TV). It’s connected to a Yamaha RX-V475 receiver on one input and a cheap HDMI switch on the other, because there are too many devices, although we only use three or four of them most of the time: Chromecast, Portal TV, HTPC, and PlayStation 4. They are split equally between the two inputs. So far, so good.

The problem starts with the fact that sometimes, if the TV is turned on by the Chromecast or the Portal, the ARC does not initialize properly, and we hear no audio. The solution is worthy of The IT Crowd: tell the TV to use the internal speakers, then tell it to use the external speakers again — turning the ARC itself off and on. It’s annoying and takes a few keypresses.

What I have been wanting for a while is a way to ask Assistant (or Alexa) to “unfuck the TV” — that is, to reset the audio channel for us, recording a macro to do that ARC off/on dance. It was for this reason that I bought the Feather M4 last year, but I only ended up starting to work on it this month.

To make this possible, the first thing I needed was to know the commands sent by the remote, and have a way to replicate them back to the TV. I already had some experience with infrared receivers as, a long time ago in a different life, I maintained the LIRC patchset for Linux 2.6 for a while. I even tried writing Qt 3 bindings for LIRC — I wonder if I can find the source code anywhere. But that experience was not as useful as I thought.

Thankfully, Ken Shirriff wrote good documentation on the protocol, and linked to further details, and even to a full archive of all the Sony command codes. Which made my life much easier, but not completely easy. While Adafruit has an IRRemote library, it does not correctly interpret Sony SIRC commands. I considered adding the support directly into it, but it turned out to be a bit more invasive than expected, so I ended up instead writing a separate package that includes both the decoder and the encoder (this was before I started the new job, so releasing it was easy — but now I have to wait a little bit before continuing on it).

Now, once I managed to decode the commands, I needed to be able to send them. And here’s where things get interesting: what we usually refer to as commands are a bunch of bits. These are encoded, based on the protocol, as a series of pulses, which are modulated on top of a carrier wave of a certain frequency.

Unfortunately, it turns out that talking to a Sony TV is nowhere near as easy as it might sound. Let’s try to figure out what’s going on by providing a bit of vocabulary. An infrared signal as used by most remote controls usually carries a command and, in most cases, an address to specify which device should take the command (since they are usually used in places where multiple devices use infrared remote controls). These are encoded in bits according to the rules of the protocol, and then converted to pulses. These pulses are then applied to a carrier wave of a certain frequency, which defines the speed at which the infrared LED is “blinking” on and off. The LED itself has a wavelength, which represents the “colour” of the light in the infrared spectrum.
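To make the command-to-pulses step concrete — this mirrors what my package implements, but is a standalone sketch, so take the exact durations as textbook values rather than my shipped code — the 12-bit SIRC variant starts with a 2.4ms mark, then sends the 7-bit command followed by the 5-bit address, LSB first, where a 1 bit is a 1200µs mark and a 0 bit a 600µs mark, each followed by a 600µs space:

```python
def sirc12_pulses(address: int, command: int) -> list:
    """Alternating mark/space durations (µs) for one 12-bit SIRC frame."""
    pulses = [2400, 600]                            # start burst + space
    bits = [(command >> i) & 1 for i in range(7)]   # 7-bit command, LSB first
    bits += [(address >> i) & 1 for i in range(5)]  # 5-bit address, LSB first
    for bit in bits:
        pulses.append(1200 if bit else 600)         # mark length encodes the bit
        pulses.append(600)                          # fixed space between bits
    return pulses
```

This list of durations is then used to gate the 40kHz carrier: the LED blinks at 40kHz during each mark, and stays dark during each space.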

For Sony’s SIRC, the TV expects a 40kHz carrier and, it seems, a 950nm wavelength. It seems like you can use 940nm LEDs, but they have worse reception, and only work if they are bright enough. The first batch of LEDs I bought, as well as the pre-soldered module from DFRobot, turned out not to be bright enough for my TV to recognize — so I tried desoldering the LED from a replacement remote control from eBay, which worked fine, so I thought I needed 950nm LEDs. But no: it works with good enough 940nm LEDs, just not with the tiny ones I originally bought from Mouser.

So once I had a way to send arbitrary Sony commands to my TV, I started looking for options to trigger the ARC reset — unfortunately this proved more complicated than I expected: there’s no command I could send that would bring up the ARC menu. I can only bring up the Sync menu reliably — but that menu has different entries depending on whether the selected input has working HDMI CEC, which is extremely annoying.

On the other hand, I did find commands that select the different inputs directly, instead of bringing up the input selection menu and choosing from there. That gave me a different idea to start with: while I haven’t given up on the macro for fixing the audio, what I can do is script input selection across the two-level mux.

I started by wondering if I could send the IR command to the HDMI switcher as well, so that I could select between those two easily — that turned out to be another pile of yaks to shave. The switcher uses the NEC protocol, which has a 38kHz carrier wave, but that turned out not to matter much (its decoder seems to accept 40kHz just as well) — instead I had a hard time getting it to receive the command, because it expected a NEC “repeat signal” to seal the command. I guess that’s going to be a blog post in and of itself.
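For the curious, the repeat signal in question is tiny — as a sketch, using the same alternating mark/space convention in microseconds, it is a 9ms mark, a 2.25ms space, and a terminating ~562µs mark, normally sent about every 110ms while a button is held:

```python
def nec_repeat_pulses() -> list:
    """NEC repeat frame: the receiver reads this as "previous command again"."""
    return [9000, 2250, 562]  # mark, space, terminating mark (microseconds)
```

A decoder that insists on it will simply drop a lone full frame, which would explain the switcher ignoring an otherwise well-formed command.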

Now, my original plan was to get something running on the Feather, attach an AirLift wing to give it WiFi, and control that… somehow. I also considered re-coding this with an ESP32 and ESPHome, despite it not having an obvious way to send SIRC commands sensibly — it doesn’t represent the commands the way the protocol expects, and the most reasonable option I could find was to generate the pulse sequence and just send that raw.

But then I thought it over and realised that, at least for the moment, it makes more sense for me to use a USB-to-GPIO bridge, and control this from a full-blown computer — the reason being that I want to be able to open a webpage on my phone, or my wife’s, and select the right input altogether. And while there’s a Python module for controlling the receiver, using it from CircuitPython or MicroPython would probably take some work. And even though I could control the Feather remotely, via MQTT or USB-serial, it would still likely be more work than driving this all from the same Flask app locally.

Unfortunately, I don’t have code to show you yet. While my new dayjob has much simpler policies than the last, I need to complete the training before I can start releasing new projects. Hopefully next update will come with code and demos.

Book review: Amusing Ourselves to Death

This is a tricky review to write, because I’m having a very hard time finishing this book. Indeed, while it did start well, and I was actually interested in the idea behind it, it quickly turned nasty, to my mind. But let’s start from the top, and let me try to write a review of a book I’m not sure I’ll be able to finish without feeling ill.

I found the book, Amusing Ourselves to Death, through a blog post on one of the Planets I follow, and I found the premise extremely interesting: has the coming of the show-business era so submerged people in entertainment that they lose sight of the significance of news? Unfortunately, as I said, the book itself, to me, does not make the point properly, as it exaggerates to the point of no return. While the book was written in 1985 – which means it has no way to know how the Web changed media once again – the introduction, written by the author’s son, proposes that it is still relevant today. I find that proposition unrealistic. It goes as far as stating that most of the students who were assigned the book agreed with it — I would venture a guess that most of them didn’t want to disagree with their teacher.

First of all, the author is a typography snob, and that can easily be seen when he spends pages and pages extolling the printed word — while, at the same time, taking swipes at the previous “medium” of the spoken word. And while I do agree with one of the big points of the book (the fact that different forms make discourse “change” — after all, my blog posts have a different tone from Autotools Mythbuster, and from my LWN articles), I do not think that a different tone makes it more or less “valid”. Indeed, this is why I find it extremely absurd that, for Wikipedia, I’m unreliable when writing on this blog, but perfectly reliable the moment I write Autotools Mythbuster.

Now, if you were to take the first half of the book and title it something like “History of the Printed Word in Early America”, it would be a very good and enlightening read. It helps a lot to put the history of America into context, especially compared to Europe — I’m not much of an expert in history, but it’s interesting to note how in America the religious organisations themselves sponsored literacy, while in Europe, Catholicism tried its best to keep people within the confines of illiteracy.

Unfortunately, he then starts telling us how evil the telegraph was for bringing in news from remote places that people, in the author’s opinion, have no interest in, and should have no right to know… and the same kind of evil is found in photography, including the idea that photography has no context because there is no way to take a photograph out of context — which is utterly false, as many of us have seen during the reporting of recent wars. Okay, it has all gotten much easier thanks to Photoshop, but it was in no way impossible in the ’80s.

Honestly, while I can understand having a foregone conclusion in mind, after explaining how people changed the way they speak with the advent of TV, no longer caring about syntax frills and the like, trying to say that on TV the messages are drowned in a bunch of irrelevant frills is… a bit senseless. Just as it is senseless, to me, to say that typography is “pure message” — without even acknowledging that presentation is an issue for typography as much as for TV; after all, we wouldn’t have font designers otherwise.

While some things are definitely interesting to read – like the note about the use of pamphlets in early American history, which can easily be compared to blogs today – the book itself is a bust, because there is no pretence of objectivity: it’s just a long text finding reasons to reach the conclusion the author already had in mind… and that’s not what I like to read.

Hopefully it’ll go better with my next read.

My green fetish

Okay, maybe this post’s title is not the safest for work I’ve ever written, but the content is definitely nothing untoward. And if you’re wondering why this post will be shorter than usual and with more grammar errors, that’s because I’m again using the tablet to write, and my thumbs haven’t gotten used to the key layout yet. Taking the so-called Smart Cover off makes it much nicer to write on, by the way — even though I am using the Tucano Magico cover, which keeps it attached to the back by itself.

In the past month I decided it was time to get a subscription to Sky, the satellite TV provider, once again. I dropped it when I “took over” the house, but nowadays I just wish to be able to watch something before sleeping, in English if at all possible, and they make that possible indeed. Besides the obvious series, there is something I love to watch, and that is the National Geographic programme World’s Greenest Houses.

The reason I love it is that it mainly shows how it is possible to have a cool house, with all the accessories and trimmings, and yet be energy efficient. Indeed, that is something I wish I could do in my house as well. The obvious first problem, of course, is the money needed to do the work, and most of the houses they show were built with green in mind from the start, rather than adapted from an existing, lived-in house.

Okay, maybe it also tickles the part of me that used to create new scenarios and buildings for his unofficial Ultima OnLine shard, which is likely the same part that likes to play with The Sims 3.

There is, unfortunately, something that airs from time to time in the same slots, with the same title (in Italian at least), but that shows a “challenge”. Said challenge is a more Big Brother-like programme, where two families take forced steps into what they define as green living. While the target is indeed a greener life, I dislike this one with all my heart, for two main reasons: the first is that the whole idea of going from nothing to 100% green is the kind of challenge most people will look away from rather than ever pick up; the second is that I think the authors are not into green at all!

Indeed, the one episode I tried to watch, I had to change the channel in disgust right at the first ad break. Why, you ask? Well, in the teasers, they show the little child of one family on the verge of crying, as the parents tell him or her (sorry, I forgot which) that they would have to “cancel his birthday”, as the programme forced them not to use their car. Don’t get me wrong, I’m not a huge fan of cars either — heck, I don’t even have a driving licence — but that simple sound bite is so negative…

Okay, possibly the rest of the episode showed the family overcoming the difficulty, either by planning a different kind of birthday party or by organising something using public transport or bikes… but let’s be honest, do you find such a tease before the ad break positive at all?

I’m not sure if the child in that episode got over the scare of the “bad green”, but I wonder, if other kids were to watch part of that episode, what their impression of green would be. And honestly, if I were to let my nephew watch something on TV besides the cartoons he watches every day, World’s Greenest Houses, the classic variant, would be my first choice. If nothing else, he would see that it can be cool to help the environment. But especially considering how difficult it is for most children I can think of — including me and some of my friends when we were young — to watch a show to the end, I’m afraid the challenges noted before would do nothing but scare them away…

To be honest, I think that most of my personal feelings towards green trace their source to the Walt Disney Italia comics, with their scouts knockoff… which is why I feel that kids should be shown that you can be green and cool at the same time.

Ah well… To conclude, I would also like to point out that similar scare tactics are applied in more or less the same way with free software. It’s up to you to see whether you can spot the parallel, and whether you agree with me or not…

Gaming can be seriously cool

Sometimes I wonder how I spent my childhood, considering I was never much of a gamer. While my classmates spent most of their days playing Baldur’s Gate, Quake 3 or GTA, I only played RTSes (and not so many), and Ultima OnLine (if you were an Italian player and you played on Dragons’ Land or Heaven, you might know me as GM Eorl or GM Unicorn). I only really re-discovered playing last year, when I bought the Nintendo DS as a stress reliever…

I have to say, I’m starting to think that if I had tried to play more in the past years, I might have spared myself the stress-induced pancreatitis. But you can’t change the past; I can only try not to fall into that again.

Anyway, tonight I finally tried Devil May Cry 4, and I have to say Mark was right: it’s cool. Nothing less than I expected from CAPCOM, actually. One of the few games I can say marked my teens is absolutely Street Fighter Alpha 3 (I was unbeatable with Chun-Li), and through emulation I loved a lot of classic CAPCOM games. One thing they share: splendid character designs, and combat that feels like a dance. DMC4 brings this to an epic point.

I’ve tried just the first three missions, but the gameplay makes it very clear that I’ll have a lot of hours of fun in the near future ;)

And this makes it difficult for me to choose between playing that and Genji, which I paid very little for, given the quality of the game I’ve seen so far. If you’re a fan of fantasy anime, you should really try it out. I think the last game that got me this immersed was Throne of Darkness.

It’s relaxing me quite a bit, sincerely, although now I think I’ll go back to watching The Weakest Link on BBC Prime before sleeping; tomorrow is another long day, as I’ll be going to look for a (quite expensive, I’m afraid) pair of bifocals. I read way too much during the day, and I can’t really keep switching glasses all day long. At this pace I’ll have quite a few pairs of glasses: one big pair with Transitions lenses for going around (and for driving, when I get my licence), one small pair, also with Transitions lenses, so I can look cool when I’m out ;), a not-so-big pair with medium-distance lenses for working at my workstation(s), a light pair for when I watch TV, the bifocals for when I’m going around the house… how annoying!

To the rest of the Gentoo developers with a PlayStation 3: we should get together and find a game we can run a tournament on!

Sometimes one run is not enough

I mean an emerge run to install a system.

Now that both disks work fine, I finally started reinstalling Gentoo on my Enterprise. This involved creating a new LVM volume, extending the software RAID so that the new disks are used, and finally taking the stage3 and starting the real work.

The real work was quite simplified by the fact that what I lost on the old disks was mostly /usr/lib, so the Portage configuration (make.conf and the /etc/portage directory) was still available to me, as well as the kernel configuration (to use as a base, as the kernel version changed on me: I last used .22 and now I’m upgrading to .23), and also the old world file.

I actually trimmed the old world file, removing stuff I had installed just as a test, removing Kyocera’s PPD since they changed the packaging once again (I can’t even find it on the US site, only on the Italian one… and in a new multiple-language package, which means I’ll have to write a totally new ebuild for it), and removing kde-color-schemes, which started spawning errors about kde-config.

The result was a single emerge command that installed about 800 packages, basically my whole world with all its dependencies, besides whatever was already in stage3.

Unfortunately there were a few breaks in the middle: the kernel modules obviously failed, due to the known buggy interaction between GCC, Linux .23 and sandbox; on top of that I found an ICE (probably caused by my borderline CFLAGS, I’ll have to investigate), a failure in MySQL (linux-headers related, my gut says), a missing dependency on pmount, and yet another case of people using autotools without knowing how they work: mpeg4ip is missing a few m4 files.

Anyway, if all goes well, the installation should be completed in a few days, and then I can get back to working on Gentoo, cleaning up the PAM mess to begin with, and then moving on to lighter stuff for a while, as I need to keep my workload much lighter than it was before, after what I went through in the hospital.

At least last night I was able to sleep well, as I finally have the furniture for my room (even though there are a few glitches that will be fixed next week), which means I can finally sleep alone, and in a decent bed.

And a note to self: double-check the type of outlet you have on the wall for the aerial next time. Modern aerial outlets sold in Italy have a male connector, while in the past a female connector was used. The old style allowed you to swap the two ends of the cable, like you do with EuroAV/SCART cables, HDMI cables and so on, but it caused confusion for many people with VCR cables (I know more than a few people confused by that). Unfortunately I was used to the old style, and bought two male connectors for the cabling, when I should have bought a female L-shaped connector. (For those wondering, it’s an IEC 169-2 connector.)


I bet you started reading this entry expecting to find some quite interesting technical discussion about C or C++ types and casts. Sorry, this is actually a vague, mostly useless and almost certainly boring post about another kind of typecasting.

This post stemmed from a discussion last night with a friend of mine about the almost non-existent reuse of good actors in TV series (if you couldn’t tell, I’m quite a TV series kind of guy, rather than a movie guy, at least).

There are certainly a lot of good actors who worked on TV series over the past years and who haven’t appeared since their respective series ended.

Star Trek is one of the gold mines of typecasting; the late DeForest Kelley and James Doohan never really overcame their typecasting as “Bones” McCoy and Scotty; Leonard Nimoy went as far as titling his autobiographies I Am Not Spock and I Am Spock. Patrick Stewart has been a bit luckier, as he’s now identified with Xavier from the X-Men movies (although I also remember him for his small part in Robin Hood: Men in Tights… what can I say, I’m a Brooks fan). Two exceptions are certainly William Shatner and Rene Auberjonois, both starring in Boston Legal (although the writers seem to either be Star Trek fans, or use the same exact casting agency, as you can also find the actors who played Quark, Neelix and Seven of Nine as guest stars).

And more recent series like Buffy the Vampire Slayer and Friends have the same problem too. Although Sarah Michelle Gellar, Alyson Hannigan and Michelle Trachtenberg have had their own roles in many different movies and TV series since BtVS finished, to a lot of people they’ll probably remain just Buffy, Willow and Dawn. And they don’t seem to appear as regulars in any series for now. A similar fate befell James Marsters (who, I admit, I haven’t seen in any other role yet). Jennifer Aniston got enough roles not to be “just” Rachel Green from Friends, but still doesn’t appear regularly in any series.

On the other hand, there are a few actors who got at least two main roles in two different series: this is the case for Richard Dean Anderson (MacGyver in the eponymous series and Jack O’Neill in Stargate SG-1) and Michael Weatherly (Eyes Only in Dark Angel and DiNozzo in NCIS), as well as Eliza Dushku, who played Faith in the above-mentioned BtVS and had the main role in Tru Calling (which was cancelled). These last two might have been somewhat “lucky” that their shows were cancelled, as they avoided the typecasting trap. Or we might be the lucky ones (see later in this post).

In Boston Legal, besides James T. Kirk and Odo, there is also Candice Bergen, who portrayed Murphy Brown. That series is a nice demonstration that actors will eventually escape their typecasting, if they live long enough: people will forget about them (how many people nowadays actually know who Candice Bergen is? I had to look her up, as I had probably only seen her show while channel-hopping as a kid), and with a new generation they can take on a new role.

But why do I care about this? There’s no actual reason; I was just wondering why I couldn’t see the protagonists of TV series I liked in the recent past more often, and started thinking about how many of them were stuck in typecasting. I feel a bit sorry for them, because they often have quite a lot of skill that is left unused after a single role (which might admittedly be a long one). And although this might not be a great loss for them (they get other roles too, and they certainly don’t end up on the street), it’s a loss for viewers like me who would like to see them more often.

On the other hand, this is the total opposite of the Italian way of doing TV series: here the same actors are reused over and over and over, to the point that you can’t really tell them apart, as their roles all blend into one another. People like Marco Columbro, Elena Sofia Ricci, Gastone Moschin and so on, while good actors, end up being so omnipresent that after a while you just can’t get to like them, even when they play the good guys. Sometimes they even appear in two different series at the same time. And it’s not a good thing.

Okay, so sorry for this boring reflection; I hope you won’t mind its presence :)