Complex software testing

Yamato is currently ready to start a new tinderbox run, with tests enabled (and the test-fail-continue feature, so that a test failure does not stop the whole merge); I still have to launch it, and I’m still not sure whether I should: besides the quite long GCC tests, which also fail, the glibc tests not only fail but don’t even seem to fail reliably, and they stop the ebuild from continuing. I wonder if this is a common trait of test suites.

The main issue here is that without tests it is very difficult to tell whether the software is behaving as it should; as I said, not using gems helped me before, and I had plans to test otherwise untestable software (although that failed miserably). And because of the lack of testing in packages such as dev-ruby/ruby-fcgi, so-called “compatibility patches” get added that don’t really work as they are supposed to.

By having a testsuite you can easily track down issues with concurrency, arch-specific code and other similar classes of problems. Unfortunately, with software that gets complex pretty quickly, and with the need for performance overriding the idea of splitting code into functional units, testing can get pretty ugly.

I currently have two main projects that are in dire need of testing, both failing badly right now. The first is my well-known ruby-elf which, while already having an extensive (maybe too extensive) unit-testing suite, lacks some kind of integration testing for the various tools (cowstats, rbelf-size and so on) to ensure that the results they report are the ones expected of them. The other project is probably one of the most complex I have ever worked on: feng.
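The kind of integration testing I have in mind for the tools is conceptually simple: run the tool against a known input and compare its full output against a recorded expectation. A minimal sketch, using a stand-in command so it stays self-contained (the commented-out lines show how it would look with the real cowstats; the fixture paths there are hypothetical):

```ruby
require "open3"

# Run a command-line tool and return its standard output, raising
# if it exits with a non-zero status.
def tool_output(*cmd)
  out, status = Open3.capture2(*cmd)
  raise "#{cmd.first} exited with #{status.exitstatus}" unless status.success?
  out
end

# With the real tool this would be something like:
#   expected = File.read("expected/cowstats-libfoo.txt")
#   actual   = tool_output("cowstats", "fixtures/libfoo.so")
#   raise "output changed" unless actual == expected
# Stand-in example using a command available everywhere:
actual = tool_output("echo", "hello")
raise "unexpected output" unless actual == "hello\n"
```

The point of comparing whole outputs rather than single values is that it catches regressions in formatting and reporting, not just in the underlying analysis.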

Testing feng is a very interesting task, since you cannot stop at testing the functional units in it (which, by the way, do not exist: all the code depends one way or another on another piece of it!); you’ve got to test at the protocol level. Now, RTSP is derived from HTTP, so one could expect that the methods used to test HTTP would be good enough. Not the case, though: while testing an HTTP server or a web application can be tricky, it’s at least an order of magnitude easier than testing a streaming server. I can write basic tests that ensure the server responds correctly to a series of RTSP requests, but I’d also have to check that the RTP data being sent is correct, and that RTCP is sent and received correctly.
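The RTSP request/response half of this is the tractable part, since RTSP 1.0 shares HTTP’s text format. A minimal sketch of that layer, with a hand-written response rather than one captured from feng (a real test would send the request over a TCP socket to the server under test):

```ruby
# Build a bare RTSP request; RTSP, like HTTP, uses CRLF line endings
# and a blank line to terminate the header section.
def rtsp_request(method, url, cseq)
  "#{method} #{url} RTSP/1.0\r\nCSeq: #{cseq}\r\n\r\n"
end

# Parse the status line of an RTSP response into [code, reason].
def rtsp_status(response)
  line = response.each_line.first.to_s
  m = line.match(%r{\ARTSP/1\.0 (\d{3}) ([^\r\n]*)\r?\n\z})
  raise "malformed status line: #{line.inspect}" unless m
  [m[1].to_i, m[2]]
end

req = rtsp_request("OPTIONS", "rtsp://localhost:554/test", 1)
# A conforming server would answer with something along these lines:
fake_response = "RTSP/1.0 200 OK\r\nCSeq: 1\r\n" \
                "Public: OPTIONS, DESCRIBE, SETUP, PLAY\r\n\r\n"
code, reason = rtsp_status(fake_response)
raise unless code == 200 && reason == "OK"
```

What this sketch cannot cover is exactly what makes the streaming server harder: verifying the out-of-band RTP packets and the RTCP exchange that follow a successful SETUP/PLAY.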

As I said, it’s also pretty difficult to test the software with unit tests, because the various subsystems are not entirely isolated from one another, so testing the various pieces requires either faking the presence of the other subsystems, or heavily splitting the code. They rely not only on functions but also on data structures, and on the presence of certain, coherent data inside them. Splitting the code, though, is not always an option, because it might make good performance hard to achieve, or might require very ugly interfaces to pass the data around.
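Faking a subsystem is at least cheap to sketch in Ruby, where duck typing means the fake only has to answer the same methods as the real thing. The names here (a Session, a transport responding to send_packet) are hypothetical stand-ins, not feng’s actual interfaces:

```ruby
# A subsystem under test that depends on another subsystem (the
# transport) only through the methods it calls on it.
class Session
  def initialize(transport)
    @transport = transport
  end

  # Hand a payload to the transport and report how many bytes went out.
  def deliver(payload)
    @transport.send_packet(payload)
    payload.bytesize
  end
end

# A fake transport that records packets instead of touching the
# network, so Session can be tested in isolation.
class FakeTransport
  attr_reader :packets

  def initialize
    @packets = []
  end

  def send_packet(data)
    @packets << data
  end
end

fake = FakeTransport.new
session = Session.new(fake)
raise unless session.deliver("hello") == 5
raise unless fake.packets == ["hello"]
```

In C, the same trick needs function pointers or link-time substitution, which is part of why faking subsystems in a codebase like feng’s is so much more painful than it looks here.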

Between one thing and the other that I have seen lately, I’m really hoping to one day work on a project where extensive testing is a hard requirement, rather than something that I do myself, alone, and that is not essential to deliver the code. Sigh.
