So, as I suspended my work on a Valgrind frontend until I can decide whether I should be hacking at helgrind to produce XML output, or just focus for now on writing a memcheck frontend akin to Valkyrie, I decided to resume something I started quite a few months ago: ruby-elf.
For those of you who haven’t been reading my blog since it started, ruby-elf is an ELF file parser written in pure Ruby. I started writing it to have a script capable of identifying colliding symbols between different shared objects on the system.
Together with ruby-elf, I also implemented a very simple nm command and a readelf -d script. They are very basic commands and don’t follow the behaviour of the equivalent binutils tools 1:1, but they were a nice testcase while working on ruby-elf in the past.
What ruby-elf was really missing, though, was a proper testsuite. So I decided to start with that, considering that writing testcases for my Valgrind frontend showed me how the code was shaping up.
I decided that the proper way to test ruby-elf was to provide an actual set of ELF files to parse, and I started with Linux/amd64 and Linux/x86 files, as those were the ones I could compile without having to install a cross-compiler.
The first tests were trivial and passed fairly easily, but when I added a more complex test, which looked for a specific symbol that had to be missing from the file, I got a very nasty failure: an OutOfBound exception for the ELF symbol type value on the x86 executable. After looking at the code for a while it seemed correct to me, so I asked solar if he knew why I would find an impossible type on the symbol; it made no sense at all.
After checking the offsets of the read value, I came to see that there was a 64-bit read for the address, rather than a 32-bit read. Further debugging showed me that using alias to create the specific read functions (for addresses) on the Elf file depending on the class didn’t work quite as well as I hoped.
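The failure mode can be reproduced with a small, self-contained sketch. The field layout below follows the Elf32_Sym structure from the ELF spec, but the packed values and variable names are made up for illustration:

```ruby
require 'stringio'

# Two consecutive, made-up Elf32_Sym entries (16 bytes each):
# st_name(u32) st_value(u32) st_size(u32) st_info(u8) st_other(u8) st_shndx(u16)
syms = [1, 0x08048000, 4, 0x12, 0, 10].pack("VVVCCv") +  # first symbol
       [7, 0x08049000, 8, 0x11, 0, 10].pack("VVVCCv")    # second symbol

io = StringIO.new(syms)
st_name  = io.read(4).unpack("V").first   # ok: st_name
st_value = io.read(8).unpack("Q<").first  # BUG: 64-bit read on a 32-bit file
st_size  = io.read(4).unpack("V").first   # actually reads st_info..st_shndx
st_info  = io.read(1).unpack("C").first   # actually reads into the next symbol!

st_info & 0xf  # => 7, outside the range of the standard STT_* symbol types
```

One wrong-width read shifts every subsequent field, so the “symbol type” ends up being read from unrelated bytes, which is exactly how an impossible type value can surface.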
The thing goes this way: I open a 32-bit file, the class is Elf32, so the alias should create read_address as read_u32; then I open a 64-bit file, the class is Elf64, so the alias should create read_address as read_u64. Then I load the 32-bit file’s symbols, and read_address is called. I expected alias to create the alias on the instance, since I ran it from an instance method rather than class scope, but instead it’s created on the class. At that point read_address is still aliased to read_u64, so it reads a 64-bit address rather than a 32-bit one.
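A minimal sketch of the pitfall (hypothetical class and method names, not the actual ruby-elf code):

```ruby
class ElfReader
  def read_u32; 32; end  # stand-ins for the real byte readers
  def read_u64; 64; end

  def set_elf_class(bits)
    # alias takes effect on the class, not on this instance, even
    # though it runs inside an instance method:
    if bits == 32
      alias read_address read_u32
    else
      alias read_address read_u64
    end
  end
end

r32 = ElfReader.new; r32.set_elf_class(32)
r64 = ElfReader.new; r64.set_elf_class(64)  # re-aliases on the class

r32.read_address  # => 64: the 32-bit reader now does 64-bit reads
```

The second call to set_elf_class silently rewires read_address for every instance of the class, which is precisely what corrupted the 32-bit file’s symbol reads.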
Now, either this is a bug in Ruby, or I misunderstood the alias command… and I have to say that if it’s not a bug, alias is non-intuitive compared with a lot of other Ruby code, which does just what you might think it does.
Anyway, thanks to the fact that I started writing testcases, I was able to identify the problem. Tomorrow I’ll add some more executables, of different machine types and different OSes (FreeBSD to begin with), so that the testcases exercise as much of ruby-elf’s code as possible.
Too bad writing testcases for libxine is almost impossible.
I don’t understand the problem. You have two classes Elf32 and Elf64, with a base class Elf. And in each derived class, you implement read_address differently. If you defined both read_32 and read_64 in the base class, aliasing the right one to read_address in the subclasses should work. Where’s the problem?
When I talk about Elf32 and Elf64 classes, I mean ELF classes, not Ruby classes. There is just one Elf::File Ruby class, which detects the ELF class and, in theory, was supposed to alias the correct functions in its open method. The code now instead uses two functions that switch on the loaded ELF class.
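A sketch of that approach, again with hypothetical names (the real Elf::File is more involved):

```ruby
class ElfFile
  def read_u32; 32; end  # stand-ins for the real byte readers
  def read_u64; 64; end

  def initialize(elf_class)
    @elf_class = elf_class  # 32 or 64, stored per instance
  end

  # read_address switches on the instance's ELF class at every call;
  # there is no class-wide alias, so two open files of different
  # classes can no longer clobber each other.
  def read_address
    @elf_class == 32 ? read_u32 : read_u64
  end
end

ElfFile.new(32).read_address  # => 32
ElfFile.new(64).read_address  # => 64
```

The state lives in an instance variable instead of in the method table, so each open file keeps its own read width.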