When the tests are more important than the functions

I’ve got to admit that the whole “Test-Driven Development” hype never appealed to me much. It’s not that I think tests are wrong; it’s that while tests are important, focusing almost exclusively on them is just as dumb as ignoring them altogether.

In the Ruby world there is so much talk about tests that it’s still very common to find gems that don’t work at all with newer Ruby versions, even though their specs pass just fine. Or the tests pass fine for the original author but fail on everyone else’s system, because they depend heavily on custom configuration; sometimes they depend on a case-insensitive filesystem, because the gem was developed on Windows or Mac and never tested on Linux. Indeed, for the longest time Rails’ own tests failed at every chance, and the “vendored” code it brought in never had a working testsuite. Things have improved nowadays, but not significantly.

Indeed, RubyGems does not make it easy to run tests upon install, which means that many gems are distributed lacking part of the testsuite altogether. Sometimes this is an explicit choice: in the case of my own RubyElf gem, the tests are not distributed because they keep growing and by now amount to quite a few megabytes; if you want to run them, you fetch the equivalent snapshot from GitHub. The Gentoo ebuild uses that snapshot as its basis for exactly this reason.
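For those unfamiliar with the mechanics: the gemspec decides which files end up in the distributed gem, so leaving the testsuite out is a one-line decision. Here is a minimal, purely illustrative sketch; the names and file layout are assumptions of mine, not RubyElf’s actual gemspec:

```ruby
# Illustrative gemspec that ships only the library code; the test/ and
# spec/ trees stay behind in the repository and never enter the .gem file.
spec = Gem::Specification.new do |s|
  s.name    = 'example-gem'
  s.version = '1.0.0'
  s.summary = 'Sketch of a gem distributed without its testsuite'
  s.authors = ['example']
  # Only lib/ is packaged; anyone wanting the tests fetches the repository.
  s.files   = Dir['lib/**/*.rb']
end

puts spec.files.any? { |f| f.start_with?('test/', 'spec/') }  # => false
```

This is exactly why a distro ebuild that wants to run the tests has to pull a repository snapshot instead of the released gem.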

Sometimes even gems from people considered near Ruby gods, like rails_autolink by tenderlove, end up released with badly failing tests. The version we have in Portage is patched up, and the patch has been sent upstream. Only the best for our users.

Now unfortunately, as I noted in the post’s title, some projects care more about the tests than the functionality. The project in question is the very same Typo that I use for this blog, and which I have already mused about forking to implement fixes that are not important to upstream. Maybe I should have done that already; maybe I still will.

So I sent a batch of changes and fixes upstream: some of them fixing issues caused by their own changes, others implementing what is needed to use Typo properly over SSL vhosts (yes, my blog is now available over SSL; I still have to fix a few links and object load paths in some of the older posts, but it will soon work fine), and others again simply making it a bit more “SEO”-friendly, since that seems to be a big deal for the developers.

What kind of response did I get about the changes? “They fail specs.” Never mind that the first commit I was told breaks the specs actually fixes editing of blog posts after a change that went straight to master: it might break specs, but it solves a real-life issue that makes the software quite obnoxious to use. So why did I not check the specs myself?

group :development, :test do
  gem 'thin'
  gem 'factory_girl', '~> 3.5'
  gem 'webrat'
  gem 'rspec-rails', '~> 2.12.0'
  gem 'simplecov', :require => false
  gem 'pry-rails'
end

I have no intention of digging through this whole set of gems just to be able to run the specs for a blog engine whose specs I consider vastly messed up anyway. Why do I think so? Among other reasons, I’ve been told quite a few times before that they would never pass on PostgreSQL, which happens to be the database that has powered this very instance for the past eight years. I’m pretty sure it’s working well enough!

Well, after asking (a few times) for the spec output, it turns out that most of the broken specs are those that hardcode http:// in the URLs. Of course they break! My changes use protocol-relative URIs, which means the output now uses // instead. No spec exists that validates the output for SSL-delivered blogs, which would otherwise have been broken before my changes.
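To make the failure mode concrete: a protocol-relative URI is just an absolute URI with the scheme stripped, so the browser reuses whatever scheme the page itself was loaded over. A minimal sketch of the idea, with a helper name of my own invention rather than Typo’s actual code:

```ruby
# Turn an absolute http(s) URL into a protocol-relative one.
# "protocol_relative" is an illustrative name, not a Typo method.
def protocol_relative(url)
  url.sub(%r{\Ahttps?:}, '')
end

puts protocol_relative('http://example.com/post')   # => "//example.com/post"
puts protocol_relative('https://example.com/post')  # => "//example.com/post"
```

Any spec asserting a literal "http://example.com/post" in the rendered output will fail against this, even though the rendered page is correct over both HTTP and HTTPS; the fix belongs in the expectations, not in the feature.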

And what was upstream’s response to my changes? Not “it breaks here and there, would you mind looking into it?” Nope! Just “The commit breaks specs.” No offer (until I complained loudly enough on IRC) to look into it themselves and fix either the patches or the specs as needed. No suggestion that there might be something worth changing in the specs.

Not even a cherry-pick of the patches that do not break specs.

Indeed, as of this writing, even the first patch in the series, the only one I really care about getting merged (because I don’t want to fall out of sync with master’s database, at least until I decide to just fork), is still lingering there, even though there is no way in this world that it breaks specs, as it introduces new code altogether.

Am I going to submit a new set of commits with at least the visible spec failures fixed? Not sure. I couldn’t care less at this point: my blog is working and it has the features I want, the only one missing being the user-agent forwarding to Akismet. I don’t see friendliness coming from upstream, and I keep thinking that a fork might be the best option. Especially since, when I suggested using Themes for Rails to replace the current theme handling, so that it works properly with the asset pipeline (one of the best features of Rails 3), the answer was “it’s not in our targets.” Well, it would be in mine, if I had the time! Mostly because being able to use SCSS would make it easier to share the stylesheets with my website (even though I’m considering getting rid of my website altogether).

So my plea to the rest of the development community, of which I hope I can be considered a part, is this: don’t be so myopic that you care more about tests passing than about features working. Windows certainly didn’t reach its level of popularity by being completely crash-proof, and at the same time I’m pretty sure they did at least some basic testing on it. The trick is always in the compromise, not in the absolute care for, or negligence of, tests.
