Updates on Silicon Labs CP2110

One month ago I started the yak shave of supporting the Silicon Labs CP2110 with a fully open-source stack that I can even re-use for glucometerutils.

The first step was deciding how to implement this. While the device itself supports quite a wide range of interfaces, including a GPIO one, I decided that, since the serial interface is the only one I’m practically going to be able to test and use, I would at least start with just that. So you’ll probably see the first output as a module for pyserial that implements access to CP2110 devices.

The second step was to find an easy way to test this in a more generic way. Thankfully, Martin Holzhauer, who commented on the original post, linked to an adapter by MakerSpot that uses that chip (the link to the product was lost in the migration to WordPress, sigh), which I then ordered and received a number of weeks later, since it had to come to the US and clear customs through Amazon.

All of this was the easy part; the next part was actually implementing enough of the protocol described in the specification so that I could send and receive data. That also made it clear that, despite the protocol being documented, it’s not as obvious as it might sound. For instance, the specification says that reports 0x01 to 0x3F are used to send and receive data, but it does not say why there are so many of them; it turns out the report number actually encodes the length of the buffer: if you send two bytes you use report 0x02, for ten bytes 0x0A, and so on, up to the maximum of 63 bytes as 0x3F. This became very clear when I tried sending a long string and the output was impossible to decode.
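To make the report-number-as-length scheme concrete, here’s a minimal sketch using the cython hidapi bindings (the hid module). The VID/PID pair is the chip’s default, which a specific product may well override, and the sketch skips the UART enable and configuration reports entirely; it only shows the chunking.

    # Minimal sketch: sending serial data through a CP2110 with the cython
    # hidapi bindings ("hid" module). The report ID is not fixed: it doubles
    # as the payload length, so a five-byte string goes out as report 0x05.
    import hid

    CP2110_VID, CP2110_PID = 0x10C4, 0xEA80  # default IDs for the chip

    def cp2110_write(dev, payload):
        # Data reports are 0x01..0x3F, so split anything longer into
        # 63-byte chunks and use each chunk's length as the report number.
        for offset in range(0, len(payload), 63):
            chunk = payload[offset:offset + 63]
            dev.write(bytes([len(chunk)]) + chunk)

    dev = hid.device()
    dev.open(CP2110_VID, CP2110_PID)
    cp2110_write(dev, b'hello serial world')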

Speaking of decoding, my original intention was to just loop together the CP2110 device with a CH341 I bought a few years ago, and have them pass data between each other to validate that they work. Somehow this plan failed: I can get data from the CH341 into the CP2110 and it decodes fine (using picocom for the CH341, and Silicon Labs’ own binary for the CP2110), but I can’t seem to get the CH341 to pick up the data sent through the CP2110. I thought it was a bad adapter, but then I connected the output to my Saleae Logic16 and it showed the data fine, so… no idea.

The current status is:

  • I know the CH341 sends out a good signal;
  • I know the CP2110 can receive a good signal from the CH341, with the Silicon Labs software;
  • I know the CP2110 can send a good signal to the Saleae Logic16, both with the Silicon Labs software and my tiny script;
  • I can’t get the CH341 to receive data from the CP2110.

Right now the state is still very much up in the air, and since I’ll be travelling quite a bit without a chance to bring the devices with me, there probably won’t be any news about this for another month or two.

Oh and before I forget, Rich Felker gave me another interesting idea: CUSE (Character Devices in User Space) is a kernel-supported way to “emulate” in user space devices that would usually be implemented in the kernel. And that would be another perfect application for this: if you just need to use a CP2110 as an adapter for something that needs to speak with a serial port, then you can just have a userspace daemon that implements CUSE and provides a ttyUSB-compatible device, without having to short-circuit the HID and USB-serial subsystems.

Reverse Engineering and Serial Adapter Protocols

In the comments to my latest post on the Silicon Labs CP2110, the first comment got me more than a bit upset because it was effectively trying to mansplain to me how a serial adapter (or more properly a USB-to-UART adapter) works. Then I realized there’s one thing I can do better than complain, and that is providing even more information on this for the next person who might need it. Because I wish I knew half of what I know now back when I tried to write the driver for the CH341.

So first of all, what are we talking about? UART is a very wide definition, covering any interface that implements serial communication between a host and a device. The words “serial port” probably bring different ideas to mind depending on the background of a given person, whether it is mice and modems connected to PCs, or servers’ serial terminals, or programming interfaces for microcontrollers. For the most part, people in the “consumer world” think of serial as RS-232, but people who have experience with complex automation systems, whether it is home, industrial, or vehicle automation, have RS-485 as their main reference. None of that actually matters much here, since these standards mostly deal with electrical and mechanical characteristics.

As physical serial ports stopped appearing on computers many years ago, most users moved to USB adapters. These adapters are all different from each other, and that’s why there are around 40KSLOC of serial adapter drivers in the Linux kernel (according to David’s SLOCCount). And that’s without counting the remaining 1.5KSLOC for implementing CDC ACM, which is the supposedly-standard approach to serial adapters.

Usually the adapters are placed either directly on the “gadget” that needs to be connected, which then exposes a USB connector, or on a cable used to connect to it, in which case the device usually has a TRS or similar connector. The TRS-based serial cables appear to have become more and more popular thanks to osmocom, as they are relatively inexpensive to build, both as cables and as connectors onto custom boards.

Serial interface endpoints in operating systems (/dev/tty{S,USB,ACM}* on Linux, COM* on Windows, and so on) do not only transfer data between host and device, but also provide configuration of parameters such as transmission rate and “symbol shape” — you may or may not have heard references to something like “9600n8”, which is a common way to express the transmission protocol of a serial interface: 9600 symbols per second (“baud rate”), no parity, 8 bits per symbol. You can call these “out of band” parameters, as they are transmitted to the UART interface, but not to the device itself, and they are the crux of the matter of interacting with these USB-to-UART adapters.
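For comparison, this is how those out-of-band parameters are spelled out with pyserial on an adapter the kernel already knows about; the device path is just an example.

    # "9600n8" spelled out with pyserial, on a kernel-supported adapter.
    # /dev/ttyUSB0 is only an example path.
    import serial

    port = serial.Serial(
        '/dev/ttyUSB0',
        baudrate=9600,                 # 9600 symbols per second
        parity=serial.PARITY_NONE,     # "n": no parity
        bytesize=serial.EIGHTBITS,     # "8": 8 bits per symbol
        stopbits=serial.STOPBITS_ONE,
    )
    port.write(b'\x04')                # the actual in-band data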

I already wrote notes about USB sniffing, so I won’t go too much into detail there, but most of the time when you’re trying to figure out what the control software sends to a device, you start by taking a USB trace, which gives you a list of USB Request Blocks (effectively, transmission packets), and you get to figure out what’s going on there.

For those devices that use USB-to-UART adapters and actually use the OS-provided serial interface (that is, COM* under Windows, where most of the control software has to run), you could use specialised software to only intercept the communication on that interface… but I don’t know of any such modern software, while there are at least a few well-defined interfaces to intercept USB communication. And that would not work for software that accesses the USB adapter directly from userspace, which is always the case for the Silicon Labs CP2110, but is also the case for some of the FTDI devices.

To be fair, for those devices that use TRS, I have actually considered just intercepting the serial protocol using the Saleae Logic Pro, but besides being overkill, it’s actually just a tiny fraction of the devices that can be intercepted that way — as the more modern ones just include the USB-to-UART chip straight onto the device, which is also the case for the meter using the CP2110 I referenced earlier.

Within the request blocks you’ll have not just the serial communication, but also all the related out-of-band information, which is usually terminated on the adapter/controller rather than being forwarded onto the device. The amount of information changes widely between adapters. Out of those I have had direct experience with, I found one (TI3420) that requires a full firmware upload before it starts working, which means recording everything from the moment you plug in the device provides a lot more noise than you would expect. But most of those I dealt with had very simple interfaces, using Control transfers for out-of-band configuration, and Bulk or Interrupt1 transfers for transmitting the actual serial interface.

With these simpler interfaces, my “analysis” scripts (if you allow me the term, I don’t think they are that complicated) can produce a “chatter” file quite easily by ignoring the whole out-of-band configuration. Then I can analyse those chatter files to figure out the device’s actual protocol, and for the most part it’s a matter of trying between one and five combinations of transmission protocol to figure out the right one to speak to the device — in glucometerutils I have two drivers using 9600n8 and two drivers using 38400n8. In some cases, such as the TI3420 one, I actually had to figure out the configuration packet (thanks to the Linux kernel driver and the datasheet) to find out that it was using 19200n8 instead.

But again, for those, the “decoding” is just a matter of filtering away part of the transmission to keep the useful parts. For others it’s not as easy.

0029 <<<< 00000000: 30 12                                             0.

0031 <<<< 00000000: 05 00                                             ..

0033 <<<< 00000000: 2A 03                                             *.

0035 <<<< 00000000: 42 00                                             B.

0037 <<<< 00000000: 61 00                                             a.

0039 <<<< 00000000: 79 00                                             y.

0041 <<<< 00000000: 65 00                                             e.

0043 <<<< 00000000: 72 00                                             r.

This is an excerpt from the chatter file of a session with my Contour glucometer. What happens here is that instead of buffering the transmission and sending a single request block with a whole string, the adapter (FTDI FT232RL) sends short bursts, probably to reduce latency and keep a more accurate serial protocol (which is important for devices that need accurate timing, for instance some in-chip programming interfaces). This would also be easy to recompose, except it also comes with

0927 <<<< 00000000: 01 60                                             .`

0929 <<<< 00000000: 01 60                                             .`

0931 <<<< 00000000: 01 60                                             .`

which I’m somewhat sceptical actually come from the device itself. I have not paid enough attention yet to figure out from the kernel driver whether this data is marked as coming from the device or is some kind of keepalive or synchronisation primitive of the adapter.
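Assuming those pairs do turn out to be adapter status or keepalives rather than payload, reassembling the string would just be a matter of dropping them; here is a rough sketch, working over a made-up (packet number, direction, data) structure like the one my scripts build internally.

    # Hypothetical reassembly of device-to-host chatter, assuming the
    # recurring b'\x01\x60' packets are adapter status and not payload.
    # "packets" stands in for whatever structure the parsing step builds:
    # (packet_number, direction, data) tuples.
    SUSPECTED_STATUS = b'\x01\x60'

    def reassemble(packets):
        buffer = b''
        for number, direction, data in packets:
            if direction != '<<<<':          # only device-to-host traffic
                continue
            if data == SUSPECTED_STATUS:     # drop the suspected keepalives
                continue
            buffer += data
        return buffer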

In the case of the CP2110, the first session I captured starts with:

0003 <<<< 00000000: 46 0A 02                                          F..

0004 >>>> 00000000: 41 01                                             A.

0006 >>>> 00000000: 50 00 00 4B 00 00 00 03  00                       P..K.....

0008 >>>> 00000000: 01 51                                             .Q

0010 >>>> 00000000: 01 22                                             ."

0012 >>>> 00000000: 01 00                                             ..

0014 >>>> 00000000: 01 00                                             ..

0016 >>>> 00000000: 01 00                                             ..

0018 >>>> 00000000: 01 00                                             ..

and I can definitely tell you that the first three URBs are not sent to the device at all. That’s because HID (the higher-level protocol that the CP2110 uses on top of USB) uses the first byte of the block to identify the “report” it sends or receives. Checking these against AN434 gives me a hint of what’s going on:

  • report 0x46 is “Get Version Information” — the CP2110 always returns 0x0A as the first byte, followed by a device version, which is unspecified; probably only used to confirm that the device is right, and possibly for debugging purposes;
  • report 0x41 is “Get/Set UART Enabled” — 0x01 just means “turn on the UART”;
  • report 0x50 is “Get/Set UART Config” — and this is a bit more complex to parse: the first four bytes (0x00004b00) define the baud rate, which is 19200 symbols per second; then follows one byte for parity (0x00, no parity), one for flow control (0x00, no flow control), one for the number of data bits (0x03, 8-bit per symbol), and finally one for the stop bit (0x00, short stop bit); that’s a long way to say that this is configured as 19200n8.
  • report 0x01 is the actual data transfer, which means the transmission to the device starts with 0x51 0x22 0x00 0x00 0x00 0x00.

This means that I need a smarter analysis script that understands this protocol (which may be as simple as just ignoring anything that does not use report 0x01) to figure out what the control software is sending.
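As a sketch of what that smarter script could look like, based on my reading of AN434 (the field layout of report 0x50 is the one decoded above, and the report structure passed in is an assumption about my own parser’s output):

    # Sketch of a report-aware filter for CP2110 chatter: reports
    # 0x01..0x3F carry data (the report ID is the length), report 0x50
    # carries the UART configuration.
    import struct

    PARITIES = {0: 'n', 1: 'o', 2: 'e', 3: 'm', 4: 's'}

    def decode_uart_config(payload):
        baudrate, parity, flow, data_bits, stop_bits = struct.unpack(
            '>IBBBB', payload[:8])
        # data_bits 0x03 means 8 bits per symbol, so 19200n8 comes out
        # of the capture above.
        return '%d%s%d' % (baudrate, PARITIES.get(parity, '?'), data_bits + 5)

    def filter_chatter(reports):
        # "reports" is assumed to be (direction, raw_bytes) pairs, with
        # the report ID still in the first byte.
        for direction, raw in reports:
            report_id, payload = raw[0], raw[1:]
            if 0x01 <= report_id <= 0x3F:
                yield direction, payload[:report_id]
            elif report_id == 0x50:
                print('UART configured as', decode_uart_config(payload))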

And at the same time, it needs code to know how to “talk serial” to this device. Usually the out-of-band configuration is done by a kernel driver: you ioctl() the serial device to the transmission protocol you need, and the driver sends the right request block to the USB endpoint. But in the case of the CP2110 device, there’s no kernel driver implementing this, at least by Silicon Labs’ design: since HID devices are usually exposed to userland, and in particular to non-privileged applications, sending and receiving the reports can be done directly from the apps. So indeed there is no COM* device exposed on Windows, even with the drivers installed.

Could someone (me?) write a Linux kernel driver that exposes the CP2110 as a serial, rather than HID, device? Sure. It would require fiddling around with the HID subsystem a bit to have it ignore the original device, and that means it’ll probably break any application built with Silicon Labs’ own development kit, unless someone has a suggestion on how to have both interfaces available at the same time; on the other hand it would allow accessing those devices without special userland code. But I think I’ll stick with the idea of providing a Free and Open Source implementation of the protocol, for Python. And maybe add support for it to pyserial to make it easier for me to use it.


  1. All these terms make more sense if you have at least a bit of knowledge of how USB works behind the scenes, but I don’t want to delve too much into that.
    [return]

Yak Shaving: Silicon Labs CP2110 and Linux

One of my favourite pastimes in the past years has been reverse engineering glucometers for the sake of writing a utility package to export their data. Sometimes, in the quest of just getting data out of a meter, I end up embarking on yak shaves that are particularly bothersome, as they are useful only for me and no one else.

One of these yak shaves might be more useful to others, but it will have to be seen. I got my hands on a new meter, which I will review later on. This meter has software for Windows to download the readings, so it’s a good target for reverse engineering. What surprised me, though, was that when I connected the device to my Linux laptop first, it came up as an HID device, described as a “USB HID to UART adapter”: the device uses a CP2110 adapter chip by Silicon Labs, and it’s the first time I have seen this particular chip (or even class of chip) in my life.

Effectively, this device piggybacks the HID interface, which allows vendor-specified protocols to be implemented in user space without needing in-kernel drivers. I’m not sure if I should be impressed by the cleverness or disgusted by the workaround. In either case, it means that you end up with a stacked protocol design: the glucometer protocol itself is serial-based, implemented on top of a serial-like software interface, which converts it to the CP2110 protocol, which is encapsulated into HID packets, which are then sent over USB…

The good thing is that, as the datasheet reports, the protocol is available: “Open access to interface specification”. And indeed in the download page for the device, there’s a big archive of just-about-everything, including a number of precompiled binary libraries and a bunch of documents, among which figures AN434, which describes the full interface of the device. Source code is also available, but having spot-checked it, it appears to have no license specification, and as such it is to be considered proprietary, and possibly virulent.

So now I’m warming up to the idea of doing a bit more yak shaving and for once trying not to just help myself. I need to understand this protocol for two purposes: one is obviously having the ability to communicate with the meter that uses that chip; the other is being able to understand what the software is telling the device and vice-versa.

This means I need to have generators for the host side, but parsers for both. Luckily, construct should make that part relatively painless, and make it very easy to write (if not maintain, given the amount of API breakages) such a parser/generator library. And of course this has to be in Python because that’s the language my utility is written in.
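To give an idea of why construct appeals to me here, this is roughly what the UART configuration report could look like as a declarative structure with the current construct API; the field names are my own invention, not Silicon Labs’, and the byte layout is the one described in AN434.

    # A declarative take on the Get/Set UART Config report, using the
    # current construct API; field names are made up for the example.
    from construct import Struct, Int32ub, Byte, Enum

    UART_CONFIG = Struct(
        'baudrate' / Int32ub,
        'parity' / Enum(Byte, none=0, odd=1, even=2, mark=3, space=4),
        'flow_control' / Byte,
        'data_bits' / Byte,      # 0x03 means 8 bits per symbol
        'stop_bits' / Byte,
    )

    # Parsing the configuration bytes 00 00 4B 00 00 00 03 00 gives
    # 19200n8 back.
    config = UART_CONFIG.parse(bytes.fromhex('00004b0000000300'))
    assert config.baudrate == 19200

The same structure can be used to build the report to send, which is the whole point of using a parser/generator library rather than hand-rolled struct calls.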

The other thing that I realized as I was toying with the idea of writing this is that, done right, it can be used together with facedancer, to implement the gadget side purely in Python. Which sounds like a fun project for those of us into that kind of thing.

But since this time this is going to be something more widely useful, and not restricted to my glucometer work, I’m now looking to release this using a different process, as that would allow me to respond to issues and code reviews from my office as well as during the (relatively little) spare time I have at home. So expect this to take quite a bit longer to be released.

At the end of the day, what I hope to have is an Apache 2 licensed Python library that can parse both host-to-controller and controller-to-host packets, and also implement it well enough on the client side (based on the hidapi library, likely) so that I can just import the module and use it for a new driver. Bonus points if I can use this to implement a test fake, to write the tests for the glucometer driver against.

In all of this, I want to make sure to thank Silicon Labs for releasing the specification of the protocol. It’s not always that you can just google up the device name to find the relevant protocol documentation, and even when you do it’s hard to figure out if it’s enough to implement a driver. The fact that this is possible surprised me pleasantly. On the other hand I wish they actually released their code with a license attached, and possibly a widely-usable one such as MIT or Apache 2, to allow users to use the code directly. But I can see why that wouldn’t be particularly high in their requirements.

Let’s just hope this time around I can do something for even more people.

Reverse engineering notes: USB sniffing

You have probably by now read a number of the posts I wrote about reverse engineering glucometers. And while I have complained about the lack of documentation, and maintain a repository of reverse-engineered protocols, I have not really shared the tools I’m using for my work.

The first problem with this is that I’m using a closed-source USB sniffer. I’m not proud of it, but it proved itself useful and worth the price, since the alternative that Microsoft suggests (Message Analyzer) appears not to be working for me, and USBpcap is not supported on Windows 10.

The native file format of USBlyzer is a CFBF container, but it also includes the ability to export the sniff to text CSV. Originally, I had to fight quite a bit with that, because version 2.1 of the tool produced a mostly unserviceable CSV – in particular the actual packet data was in an unmarked column – but the current version (2.2) is actually decent enough.

I have been working on these CSVs, parsing them into a Python structure, and then manipulating them to produce what I refer to as “chatter” files, which is the format you usually see in my blog posts. These are just hexdumps (using the hexdump module) prefixed with the direction and the number of the packet, to make it easier to refer to the raw trace. The scripts I’ve used for this translation have evolved quite a bit, from a set of copy-pasted CSV parsing routines, to a dedicated module for the parsing, to the latest version that separates the idea of reassembling the higher-level protocol packets from actually producing the “chatter” file.
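The gist of the chatter format is simple enough to sketch; this is not the actual script, and it hand-rolls the hexdump to stay self-contained where the real scripts rely on the hexdump module. The packet structure it consumes is an assumption for the example.

    # The gist of the "chatter" format: packet number, direction, and a
    # hexdump of the payload, one 16-byte row at a time.
    def hexdump_line(offset, chunk):
        hexpart = ' '.join('%02X' % byte for byte in chunk)
        asciipart = ''.join(
            chr(b) if 0x20 <= b < 0x7F else '.' for b in chunk)
        return '%08X: %-47s  %s' % (offset, hexpart, asciipart)

    def chatter(packets):
        # "packets" is assumed to be (number, direction, payload) tuples
        # coming out of the CSV parser.
        for number, direction, payload in packets:
            arrows = '>>>>' if direction == 'out' else '<<<<'
            for offset in range(0, len(payload), 16):
                chunk = payload[offset:offset + 16]
                yield '%04d %s %s' % (
                    number, arrows, hexdump_line(offset, chunk))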

All of these scripts are yet to be released, but I hope to be able to do so very soon. I’m also planning to start working on a way to access the original CFBF (.ulz) files without requiring the UI to convert them to CSV, as that should make my life significantly easier by avoiding the most boring step in the process, which relies on the Windows UI. Luckily, there is an olefile module that allows you to access these files and the streams in them; I just have not started looking into what the structure of the content is.
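As a first poke at those files, olefile at least lets you enumerate the streams before knowing anything about their layout; the file name here is just a placeholder.

    # First look at a USBlyzer .ulz capture: list the streams in the
    # CFBF container with olefile, before knowing anything about their
    # internal layout.
    import olefile

    ole = olefile.OleFileIO('capture.ulz')
    for entry in ole.listdir():
        name = '/'.join(entry)
        print('%-40s %8d bytes' % (name, ole.get_size(name)))
    ole.close()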

I did contact the original developers of the software, and asked them to publish the file format specifications, since they should not contain any of the special sauce of their business case, but they told me they won’t do that, because it would require them to set the format in stone and never change it again. I told them I disagree with that stance, but it is their decision to make. So instead, I’ll be spending some time figuring this out in the future; feel free to keep reading this blog for more details as I get them.

One of the goals I’d like to have is the ability to convert the USBlyzer traces (either from CSV or their own format) to pcap format so that I can analyze them in Wireshark. This should allow me to make queries such as “Show me all the packets whose fourth byte has a value higher than 2” — which is indeed something I have been doing recently, in very convoluted ways. Unfortunately when I started looking into how to do this, I found out two things that made me very unhappy.

The first is that there isn’t a single way to store USB sniffs in pcap format: there are two. One is the “classic” usbmon one that you can get by building and loading the usbmon module in Linux, and starting Wireshark. The other is the one used by USBpcap to save the information from Windows. These have different formats and (as far as I can tell) there is no easy way to convert between the two. I’m also not sure if the standard Wireshark dissectors apply to that.

The other problem is that the USBpcap format itself is so Windows-specific that, despite the whole format being documented on its website, it relies on some constant values coming from the Windows SDK. And you can imagine that depending on the Windows SDK for a Python utility that converts between two file formats is not a great idea. Luckily for me, Wine also has a header with the same constants, but being able to copy that code into the conversion utility means there’s a bit of work to do to make sure I can publish it under the proper license — as I would like to keep every tool’s license as liberal as I can.

You could be asking why on Earth I’m not using virtual machines and just good old standard usbmon to solve my problem. And the answer is two-fold: the laptop I’m using to do the development is just not powerful enough to run Windows 10 in a virtual machine, particularly when having to run Java software, as is the case for the Contour; and on the other hand I don’t have enough USB ports to dedicate one to the virtual machine for attaching the devices.

I have an alternative plan, which involves using a BeagleBone Black and USBProxy, but I have not started on that project, among other things because it requires a bit of a complicated setup with USB Ethernet devices and external chargers. So it’s planned, but not sure when I’ll get to that.

Also, speaking of Wireshark, a quick check by using usbmon with my tool and the FreeStyle Libre (because I travel with that, so it’s easier to test on the road) tells me that there is something not quite correct in the HID dissector. In particular it looks like it only recognizes the first report sent by the device as an HID packet, when the response is fragmented into multiple packets. I need to spend some time to track that problem down, and possibly figure out how difficult it would be for me to build further dissectors that can reassemble the higher-level protocol, the way TCP and SCTP sessions can already be displayed in Wireshark.

Free Idea: a QEMU Facedancer fuzzer

This post is part of a series of free ideas that I’m posting on my blog in the hope that someone with more time can implement. It’s effectively a very sketched proposal that comes with no design attached, but if you have time you would like to spend learning something new, but no idea what to do, it may be a good fit for you.

Update (2017-06-19): see last paragraph.

You may already be familiar with the Facedancer, the USB fuzzer originally designed and developed by Travis Goodspeed of PoC||GTFO fame. If you’re not, in not so many words, it’s a board (and framework) that allows you to simulate the behaviour of any USB device. It works thanks to a Python framework which supports a few other boards and could – theoretically – be expanded to support any device with gadgetfs support (such as the BeagleBone Black I have at home, but I digress).

I found out about this through Micah’s videos, and I have been thinking for a while of spending some time on the gadgetfs extension, so I can use it to simulate glucometers with their original software — as in particular it should allow me to figure out which value represents what, by changing what is reported to the software and seeing how it behaves. While I have had no time to do that yet, this is anyway a topic for a different post.

The free idea I want to give instead is to integrate, somehow, the Facedancer framework with QEMU, so that you can run the code behind a Facedancer device as if it was connected to a QEMU guest, without having to use any hardware at all. A “Virtualdancer”, which would not only obviate the need for hardware in the development phase (if the proof of concept of a facedancer were to require an off-site usage) but also would integrate more easily into fuzzing projects such as Bochspwn or TriforceAFL.

In particular, I have an interest not only in writing simulated glucometers for debugging purposes (although a testsuite that requires qemu and a simulated device may be a bit overkill), but also in simulating HID devices. You may remember that recently I had to fix my ELECOM trackball, and this is not the first time I have had to deal with a broken HID descriptor. I have spent some more time looking into the Linux HID subsystem and I’m trying to figure out if I can make some simplifications here and there (again, a topic for another time), so having a way to simulate an HID device with strange behaviour and see whether my changes fix it would be extremely beneficial.

Speaking of HID, and report descriptors in particular, Alex Ionescu (of ReactOS fame) at REcon pointed out that there appear to be very few reported security issues with HID report descriptor parsing, in particular for Windows, which seems strange given how hard parsing those descriptors is, and given how seriously broken some of the descriptors out there are. This would be another very interesting surface for a QEMU-based dancer software: run through a number of broken HID report descriptors and send data to see how the system behaves. I would be very surprised if there were no bugs, in particular in the many small and random drivers that apply workarounds such as the one I did for ELECOM.

Anyway, as I said I haven’t even had time to make the (probably minor) modifications to the framework to support BBB (which I already have access to), so you can imagine I’m not going to be working on this any time soon, but if you feel like working on some USB code, why not?

Update (2017-07-19): I pointed Travis at this post over Twitter, and he showed me vUSBf. While this does not have the same interface as facedancer, it proves that there is a chance to provide a virtual USB device implemented in Python.

As a follow up, Binyamin Sharet linked to Umap2 which supports fuzzing on top of facedancer, but does not support qemu as it is.

While it’s not quite (yet) what I had in mind, it proves that it is a feasible goal, and that there is already some code out there getting very close!

Diabetes control and its tech, take 4: glucometer utilities

This is one of the posts I lost due to the blog problems with draft autosaving. Please bear with the possibly missing pieces that I might be forgetting.

In the previous post on the subject I pointed out that, thanks to a post in a forum, I was able to find out how to talk with the OneTouch Ultra 2 glucometer I have (the two of them) — the documentation assumes you’re using HyperTerminal on Windows and thus does not work when using either picocom or PySerial.

Since I had the documentation from LifeScan for the protocol, starting to write a utility to access the device was the obvious next step. I’ve published what I have right now in a GitHub repository, and I’m going to write a bit more on it today after a month of procrastination and other tasks.

While writing the tool, I found another issue with the documentation: every single line returned by the glucometer ends with a four-digit (hex) checksum, but the documentation does not describe how the checksum is calculated. By comparing some strings with the checksums I knew, I originally guessed it might have been what I found called “CRC16-Syck” — unfortunately that also meant that the only library implementing it was a GPL-3 one, which clashed with my idea of a loose copyleft license for the tools.

But after introducing the checksum verification, I found out that the checksum does not really match. So more looking around with Google and in forums, and I got told that the checksum is a 16-bit variation of Fletcher’s checksum, calculated in 32-bit but dropping the higher half… and indeed it would then match, but then, looking at the code, I found out that “32-bit Fletcher reduced to 16-bit” is actually “a modulo-16-bit sum of all the bytes”. It’s the most stupid and simple checksum.
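Spelled out in code, the whole “algorithm” is a one-liner, something along these lines (not necessarily the exact code in the repository):

    # The OneTouch Ultra 2 line checksum: a 16-bit truncated sum of all
    # the bytes, rendered as four uppercase hex digits at the end of the
    # line.
    def line_checksum(line: bytes) -> str:
        return '%04X' % (sum(line) & 0xFFFF)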

Interestingly enough, the newer glucometers from LifeScan use a completely different protocol: it’s binary-based and uses a standard CRC16 implementation.

I’ve been doing my best to design the utility in such a way that there is a workable library as well as a utility (so that a graphical interface can be built upon it), and at the same time I tried making it possible to have multiple “drivers” that implement access to the glucometer commands. The idea is that this way, if somebody knows the protocol for other devices, they can implement support without rewriting, or worse duplicating, the tool. So if you own a glucometer and want to add support for it to my tool, feel free to fork the repository on GitHub and submit a merge request with the driver.
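To give an idea of the shape of such a driver, here’s an illustrative sketch; the method names are made up for the example and not necessarily the ones used in the repository.

    # Illustrative sketch of the driver interface idea; names are made
    # up for the example.
    class Driver:
        def __init__(self, device_path):
            self.device_path = device_path

        def connect(self):
            raise NotImplementedError

        def get_serial_number(self):
            raise NotImplementedError

        def get_readings(self):
            # Expected to yield (timestamp, value) pairs.
            raise NotImplementedError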

A final note I want to leave about possible Android support. I have been keeping in mind the option of writing an Android app to be able to dump the readings on the go. Hopefully it’s still possible to build Android apps for the market in Python, but I’m not sure about it. At the same time, there is a more important problem: I could connect my phone (Nexus 4) to the glucometer with a USB OTG cable and the cable LifeScan sent me, but the LifeScan cable has a PL2303 in it, and I doubt that most Android devices would support it anyway.

The other alternative I can think of is to find a userland implementation of PL2303 that lets me access it as a serial port without the need for a kernel driver. If somebody knows of any software already made to solve this problem, I’ll be happy to hear about it.

More USB chargers doubts

It was slightly less than a year ago that I vented some doubts about USB chargers, and I have a few more now. As I said last week, I changed the ROM on my Milestone and, thanks to Robert, I have also re-calibrated the battery, with the phone now lasting over a day on a single charge (terrific!).

When doing the calibration, it was suggested to use Battery Monitor to check the status of the battery during the process. The widget itself is quite nice, actually, and has one nice feature that estimates current flow in the device: negative while discharging, positive while charging. This feature is what made me even more doubtful about general usefulness of USB chargers.

I mostly use two USB chargers for my phone: the original one from Motorola, rated at 800mA, and the one I got for my iPod when I bought it a few years back, rated at 1000mA (1A). When I use the Motorola one, the widget shows just shy of 500mA of positive flow… when I do the same on the iPod charger, it shows around 200-300mA. Given that the iPod charger should be able to provide more current than the Motorola one, something is clearly wrong.

I remember reading a technical article a few months ago about how Apple enforces their “Made for iPhone” brand on chargers by limiting the amount of current the phone will draw from a charger depending on specific resistance values over the data lines of the USB port, so that a number of chargers don’t even reach the power of a standard USB port (500mA) when used with an iPhone. Now I’m wondering whether the problem here is that Motorola did the same, or if it’s the iPod charger that also tries to “validate” the presence of an iPod on the USB connection. Either way, the option sucks.

It is funny to think that there are so many specifications nowadays that call for a universal charging solution – just look at this Wikipedia article – and yet nothing seems to stop manufacturers from imposing artificial limitations for the sole purpose of selling you their own charger!

Of course, simply relying on two chargers, and even more importantly on the readings of a software application’s estimates, is no way to draw proper conclusions. The proper course of action, which I wish I had the time to pursue already, would be to add an ammeter into the chain, discharge the phone, then look at what’s really going on in terms of current flow during the charging process. My original intention was to add the ammeter after the charger and before the adapter, using male and female USB Type A ports, but nowadays I’m doubtful. Since the European cEPS requirements don’t include the use of a USB Type A charger, but simply of a microUSB connector, it seems like Samsung took the opportunity to provide its users with an old-fashioned charger, where the cable is captive and microUSB is merely the connector used.

Given both Samsung and Motorola use Android these days, it wouldn’t be a fair comparison if the two chargers weren’t cross-tested with the other manufacturer’s phone, but that also requires that the ammeter is added into the microUSB chain… an option that would rule out testing the charging of iPhone and iPod devices, since they use the dock connector and not microUSB.

Any suggestion on how to realise the hardware needed is very welcome, as I’ve already demonstrated I’m not that good an electronics person.

Hardware doubts: USB-based chargers

It seems like a huge number of phones and general portable devices, nowadays, charge with the help of USB-based chargers; these usually consist of extremely compact devices with a power plug and a USB socket (type A), and come with a standard USB cable (usable for data as well as pure power) using either a mini-B plug (for older models) or micro-B (for almost all modern models, as they seem to have standardised with the help of the Open Mobile Terminal Platform).

I have to note that sometimes, even if the charging happens over the USB port, the manufacturer provides you not with a generic USB charger, but rather with a (sometimes model-specific!) charger that has the proper mini- or micro-USB plug on the terminal side, but is hardwired into the adapter. Motorola used to do that at the time of the V3 models (and many others, including their bluetooth headsets; I got a very old, very cheap one yesterday for my Milestone), and Samsung seems to be doing that, at least with the Corby.

At any rate, this is actually handy: I can leave a single charger in my bedroom, and one ready in my bag, and then have around just a few cables (one for the iPod, one for the Milestone, one for the BT headset). At my office, I use the USB ports on my computer to charge them; this wouldn’t work as well if I didn’t have this nice Belkin hub that provides each port with half an ampere (the maximum standardised by USB); without that it would split the 500mA between multiple ports, and then it would take æons to charge.

But while travelling the past weekend to be at a fair to help some friends out, I noticed that sometimes I really need to charge both the iPod and the cellphone, and bringing around a single charger, while handy, stops me from doing that. Luckily I remembered that a friend of mine suggested that there are many dual-USB chargers out there. I found some by Belkin as well, in a Saturn store near here, but unfortunately they seem to have a different problem.

As I said, the computer USB ports are rated at an output of 500mA; on the other hand, the charger that Motorola gave me with my Milestone is rated for 850mA, and the iPod charger is rated at 1A (1000mA). This is pretty useful, as the higher current should allow for a faster charge… and still not burn anything out. So, looking at dual-USB wall chargers, you’d expect them to be rated for a total output of 1.7A-2A, so that each port can output the equivalent current to the single-output chargers. But browsing through the above-mentioned store, I noticed that about half of them are rated for a total output of 1A (0.5A per port, like a computer) and half of them are not rated at all on the box, or not clearly. For instance, the Belkin ones say they have “two USB 1A ports”, and I can’t really tell whether the charger has two 1A ports or two ports sharing a total of 1A.

I hate it when the specifications on the boxes don’t really make explicit what you’re going to buy. Does anybody know these products, and can shed some light on the matter?

Consumer hardware considerations

I’ve already written about buying hardware from Germany for most of my non-urgent hardware needs, with the exclusion of hard drives, since they tend to fail on me quite often (by the way, with the cold and everything I didn’t get around to ordering my new hard disks; I’ll see to that next week since I really need them). Not only that, but sometimes I get friends asking me if I can fetch them something at a lower price than they’d buy it for here.

It obviously has disadvantages: it takes time for the stuff to arrive, and it takes even more if something is faulty, but all in all most of my friends are glad they can save some money, since they rarely have urgent needs for computers: they tend to play with them most of the time, and rarely need them for work or school. At any rate, the whole thing is sometimes distressing for me; I got used to shipment delays and the like, which come with the territory with this way of buying stuff. On the other hand, it keeps me up to date with consumer hardware, which I don’t otherwise deal with much lately, since I currently use Apple laptops and I got higher-grade hardware for my workstation.

In 2008 I had to build two computers for two friends of mine in the same household, and to cut down on the amount of work to do, I chose to use mostly similar hardware for the two of them, which consisted mainly of the same wireless card and the same motherboard. Unfortunately one bought a 32-bit XP Pro license and the other a 64-bit XP Pro license, which caused quite some differences between the two installations, but it wasn’t that bad. (Yes I know XP is evil, I know it all too well, so I usually provide installation media for Linux as well, but since they need it to play or do 3D rendering, I don’t make a fuss about it; their loss not to use a Free operating system, I guess.)

When the time came to order the two boxes that I received last week, I decided to use the same motherboard again, a Gigabyte GA-MA770-DS3, which turned out pretty neat for the other two boxes. Again one friend bought a 32-bit XP and the other a 64-bit XP. Although I ordered the same exact product code, the motherboards I received are quite different: hardware revision 2.0 rather than 1.0, with different drivers even for AHCI, to the point that they wouldn’t work with the install CDs I prepared for the other two. Okay, no big deal.

But the problems with XP (the huge amount of them) are not what I’m interested in talking about. Instead I’m interested in the way the hardware evolved, at a very surface level, between the two revisions. This motherboard has an interestingly huge number of USB ports, which makes USB hubs pretty much useless, except for having ports right on your desk rather than behind your computer; before, you had six leads for them internally, two connecting to the front in most computers and one on the back on a bracket over a PCI slot or similar.

The new revision instead has four leads internally, and exposes the rest of the ports on the backplate; to do that they removed the serial port, which is now a lead on the motherboard. It reminded me so much of old AT motherboards, where the only connection on the backplate was the DIN keyboard connector and you had to use brackets (or the chassis’s cutouts) for the other connections. The interesting bit is that they don’t give you a bracket with the serial port, nor do they give you one with the parallel port, which was already absent from the backplate on rev 1.0. It’s interesting to note how the old DB-25/DE-9 ports are disappearing step by step from our computers. Luckily neither of my friends wanted serial or parallel ports, but even if they did, I think I still have some connectors around.

When I was given my first ‘modern’ computer, a Pentium 133 MHz, in 1997, it was standard to have a gameport/MPU-401 port on board for MIDI and joysticks; since the introduction of joysticks with three axes, and gamepads with eight or more buttons, this started to be quite a problem though. Even more when Sony introduced the DualShock line, which had twelve buttons, a D-pad (which means another four buttons) and two analog sticks (which again changed with the coming of the PlayStation 3), and PC gamepad manufacturers had to find ways to do something as good, which turned out to be USB gamepads. Since the MIDI hardware was already not that common at the time, this was one of the first things to disappear, and software support for it started to bitrot; I remember having to patch my kernel for a while with my Athlon 1000, which had an integrated MPU-401 port, because it failed to initialize it properly. From AC’97 onward, I don’t think I have seen another gameport on computers I fixed, which said “good bye” to one format.

The second port to say bye has been the parallel port, which was used by the old line printers, with their more or less complex descendants and their nightmares to work with on Linux. I was never able to control the parallel port decently from Linux because most of the sample code for that was designed to be used with DOS or Windows 95 and used the I/O ports directly, which is not really a good thing. Interestingly enough, the 8255 interface that was used for those was used at my school for the electronics laboratory, although there it had a full 24-bit interface, rather than the register-based interface that EPP ports have. Enterprise had a parallel port; Yamato does not.

The third port to go is the serial port, and I can guess quite well why they are taking it away: on most modern desktop systems it’s useless, and nobody would ever think of making use of it. Even analog modems nowadays use USB interfaces through CDC. Luckily for me, Yamato still has two, one of which is wired to the SBMC board for IPMI, but even this way I have not one but three USB RS-232 adapters connected to Yamato (one of which only recently supported by Linux); admittedly, only two are connected on the USB side; the other is connected on the serial-port side, and I use it to connect to the serial console from the laptop, which has no serial port at all.

The only remaining D-subminiature connector you still find on modern computers is the VGA port, but even that is going away soon: it is already gone from most laptops, and it’s getting replaced by DVI and HDMI ports. One of the two computers has a dual-DVI video card, which means it won’t have any D-sub port on the back of it. Sheesh!

I know this post is mostly useless, but it really made me feel old, I have to say.

Updating a BIOS without floppies, Windows or CDs

One regression I have with Yamato is that the BIOS cannot be updated through the BIOS itself, like I could do with my previous ASUS motherboard (and Award BIOS). The AMI BIOS of the Tyan motherboard doesn’t support that.

But since I don’t want to use my Windows installation to do the job (I use it only for work and to play more recent games that don’t play in Wine), I decided to look up something that could allow me to do the upgrade using only Free Software, my Gentoo Linux install, and the files provided by Tyan.

Caution: I make no warranty that the procedure will work properly, if this post gets published it means I was able to get it working myself but it might not work for you! Please remember that you do this at your risk!

I found an interesting post about FreeDOS and USB drives, which made my life much easier. But as I’m using Linux, and Gentoo in particular, I revised the instructions a bit :)

  • first of all, merge the software we’re going to need, this means sys-boot/makebootfat (from my overlay) and app-arch/libarchive (to extract an ISO file later on);
  • proceed then to create the directory where to put the files that will go on the virtual floppy, let’s call it “bios-update”;
  • download the new BIOS from Tyan’s website, and put the content of the zip archive in “bios-update”;
  • download FreeDOS (I suggest you use the torrent download rather than ibiblio, which is so slow!);
  • extract the ISO, with bsdtar it’s just the same as extracting a standard tar file: xf and all;
  • copy freedos/setup/odin/{command.com,kernel.sys} from the FreeDOS ISO to the “bios-update” directory;
  • from freedos/packages/src_base/kernels.zip extract the files source/ukernel/boot/fat{12,16,32lba}.bin.
  • run the makebootfat command with a commandline similar to this: makebootfat -o /dev/sde -E 255 -1 fat12.bin -2 fat16.bin -3 fat32lba.bin -m /usr/share/syslinux/mbr.bin bios-update; note that /dev/sde has to be changed to your USB device.

Now just restart your system with the USB drive in it and run the flash utility as provided by your manufacturer.

As an alternative you may rename any batch file provided by your manufacturer to autoexec.bat so that it is run at boot; I have no idea (yet) how to stop FreeDOS from asking for the date and time, but whatever, just press enter at the prompts. You don’t usually have to worry about infinite loops of BIOS updates, as I’ve noticed that, once the update is done, all BIOSes require you to fill in the configuration again.

I’m tempted to streamline this into a script, or make the makebootfat ebuild fetch the needed files out of the FreeDOS ISO, or create an ebuild that installs the basic FreeDOS files needed to create the boot disk. But maybe another time.