IPv6 in 2020 — Nope, still dreamland

It’s that time of the year: lots of my friends and acquaintances went to FOSDEM, which is great, and at least one complained about something not working over IPv6, which prompted me to share once again my rant about the newcomer-unfriendly default network of a conference that is otherwise very friendly to new people. Which then prompted the knee-jerk reaction of people who expect systems to work in isolation, calling me a hater and insulting me. Not everybody, mind you — on Twitter I did have a valid and polite conversation with two people, and while it’s clear we disagree on this point, insults were not thrown. Less polite people got blocked, because I have no time to argue with those who can’t see anyone else’s viewpoint.

So, why am I insisting that IPv6 is still not ready in 2020? Well, let’s see. A couple of years ago, I pointed out how nearly all of the websites that people actually use, except for the big social networks, are missing IPv6. As far as I can tell, nothing has changed for those websites in the intervening two years. Even websites hosted on CDNs like Akamai (which does support IPv6!) or on service providers like Heroku are not served over IPv6. So once again, if you’re a random home user, you don’t really care about IPv6, except maybe for Netflix.
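
If you want to check this for yourself, the quickest test is whether a hostname resolves to any IPv6 address at all. Here’s a minimal sketch in Python; the hostnames are just placeholders, not a curated list:

    import socket

    # Placeholder hostnames; substitute whichever sites you care about.
    SITES = ["www.example.com", "www.example.org"]

    for host in SITES:
        try:
            # Ask the resolver for IPv6 results only; if no AAAA record
            # exists, getaddrinfo raises gaierror.
            infos = socket.getaddrinfo(host, 443, family=socket.AF_INET6)
            addresses = sorted({info[4][0] for info in infos})
            print(f"{host}: IPv6 OK ({', '.join(addresses)})")
        except socket.gaierror:
            print(f"{host}: no AAAA record")

Run it against your own bookmarks and you’ll most likely see the same pattern I describe above.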

Should the Internet providers be worried, what with IPv4 exhaustion getting worse and worse? I’d expect them to be, because as Thomas said on Twitter, the pain is only going to increase. But it clearly has not reached the point where ISPs, beside a few “niche” ones like Andrews & Arnold, serve their own websites over IPv6 — the main exception appears to be Free, which, if I understood correctly, is one of the biggest providers in France and does publish AAAA records for its website. They are clearly in the minority right now.

Even mobile phone providers, who everyone and their dog appear to always use as the example of consumer IPv6-only networks, don’t seem to care — at least in Europe. It looks like AT&T and T-Mobile US do serve their websites over IPv6.

But the consumer side is not the only reason why I insist that in 2020, IPv6 is still fantasy. Hosting providers don’t seem to have understood IPv6 either. Let’s put aside for a moment that Automattic does not have an IPv6 network (not even outbound), and let’s look at one of the providers I’ve been using for the past few years: Scaleway. Scaleway (owned by Iliad, same group as Online.net) charges you extra for IPv4. It does, though, provide you with free IPv6. It does not, as far as I understand, provide you with multiple IPv6 addresses per server, which is annoying but workable.

But here’s a quote from a maintenance email they sent a few weeks ago:

During this maintenance, your server will be powered off, then powered on on another physical server. This operation will cause a downtime of a few minutes to an hour, depending on the size of your local storage. The public IPv4 will not change at migration, but the private IPv4 and the IPv6 will be modified due to technical limitations.

Scaleway email, 2020-01-28. Emphasis theirs.

So not only is the IPv4 (which, as I said, is a paid extra) the only stable address the servers get to keep, but they can’t even tell you beforehand which IPv6 address your server will end up with. Indeed, I decided at that point that the right thing to do was to just stop publishing AAAA records for my websites, as clearly I can’t rely on Scaleway to keep them stable over time. A shame, I would say, but that’s the problem: nobody is taking IPv6 seriously right now but a few network geeks.

But network geeks also appear to like UniFi. And honestly, I do too. It has worked fairly well for me most of the time (except for the woes of updating MongoDB), and it does mostly support IPv6: I have a full IPv6 setup at home with UniFi and Hyperoptic. But at the same time, the dashboard is focused on IPv4 only, everywhere. A few weeks ago my IPv6 network appeared to have a sad (I only noticed because I was trying to reach one of my local machines by its AAAA hostname), and I had no way to confirm that from the UI. I eventually just rebooted the gateway and then it worked fine (and since I have a public IPv4, Hyperoptic gives me a stable IPv6 prefix, so I didn’t have to worry about that), but even then I couldn’t figure out from any of the UIs whether the gateway had any IPv6 connectivity at all.
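
For what it’s worth, the most reliable check I’ve found when the router UI tells you nothing is an end-to-end test from a machine on the network: try opening an IPv6-only TCP connection to some host you know is dual-stacked. A small sketch of what I mean, with a placeholder target hostname:

    import socket

    # Placeholder target; any dual-stacked host you trust will do.
    TARGET_HOST, TARGET_PORT = "www.example.com", 443

    def has_ipv6_connectivity(host, port, timeout=5.0):
        """Return True if a TCP connection over IPv6 to host:port succeeds."""
        try:
            for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, family=socket.AF_INET6, type=socket.SOCK_STREAM
            ):
                with socket.socket(family, socktype, proto) as sock:
                    sock.settimeout(timeout)
                    sock.connect(sockaddr)
                    return True
        except OSError:
            pass
        return False

    print("IPv6 looks", "fine" if has_ipv6_connectivity(TARGET_HOST, TARGET_PORT) else "broken")

If this fails while IPv4 keeps working, at least you know the problem is on the v6 path, whatever the dashboard says.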

I’m told OpenWRT got better about this: you’re no longer required to reverse engineer the source to figure out how to configure a relay. But at the same time, I’m fairly sure it is, again, a niche product. Virgin Media Ireland’s default router supported IPv6 — to a point. But I have yet to see any Italian ISP providing even the most basic DS-Lite setup by default.

Again, I’m not hating on the protocol, or denying the need to move to the new network in the short term. But I am saying that network folks need to start looking outside of their bubble, and try to find the reasons why nothing appears to be moving, year after year. You can’t blame it on the users not caring: they don’t want to geek out about which version of the Internet Protocol they are using, they want a working connection. And you can’t really expect them to understand the limits of CGNs — 64k connections (a single IPv4 address only has that many ports to share among everyone behind the NAT) might sound ludicrously few to a network person, but to your average user it sounds like more than enough: they’re only looking at one website at a time! (Try explaining to someone who has no idea how HTTP works that a single tab can open possibly thousands of connections.)

Project Memory

Through a series of events whose start I’m not entirely sure of, I ended up fixing some links on my old blog posts that had broken, most of which I had to track down on the Wayback Machine. While doing that, I found some content from my very old blog. One that was hosted on Blogspot, written in Italian only, and frankly written by an annoying brat who needed to learn something about life. Which I did, of course. The story of that blog is for a different time and post; for now I’ll focus on a different topic.

When I started looking into this, I ended up going through a lot of my blog posts and updating a number of broken links, either by pointing them at the Wayback Machine or by removing them. I focused on the links that can easily be grepped for, which turns out to be a very good side effect of having migrated to Hugo.
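
The scan itself really is just a grep, which is the nice part of a static site generator keeping everything as plain Markdown files. A sketch of the kind of scan involved (the list of hosts is illustrative, and in practice plain grep does the same job):

    import re
    from pathlib import Path

    # Illustrative list of dead or dying hosts whose links I wanted to flag.
    DEAD_HOSTS = ["berlios.de", "rubyforge.org", "gitorious.org", "identi.ca"]
    PATTERN = re.compile("|".join(re.escape(host) for host in DEAD_HOSTS))

    # Hugo keeps the posts as plain files under content/, so auditing links
    # is a matter of walking the tree and matching.
    for path in Path("content").rglob("*.md"):
        for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

Each hit then gets fixed by hand, either by pointing it at the Wayback Machine or by dropping the link entirely.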

This meant, among other things, removing references to identi.ca (which came to my mind because of all the hype I hear about Mastodon nowadays), removing links to my old “Hire me!” page, and so on. And that’s where things started showing a pattern.

I ended up updating or removing links to Berlios, Rubyforge, Gitorious, Google Code, Gemcutter, and so on.

For many of these, it turned out I don’t even have a local copy (at hand, at least) of some of my smaller projects (mostly the early Ruby stuff I’ve done). But I almost certainly have some of that data in my backups, some of which I actually have in Dublin and want to start digging into at some point soon. Again, this is a story for a different time.

The importance of the links to those project management websites is that, for many projects, those pages were all you had about them. And for some of them, all the important information was captured by those services.

Back when I started contributing to free software projects, SourceForge was the badge of honor of being an actual project: it would give you space to host a website, as well as source code repositories. And this was the era before Git, Mercurial and the other DVCSes, which means either you had SourceForge, or you likely had no source control at all. But SourceForge admins also reviewed (or at least claimed to review) every project that was created, so creating a project on the platform was not straightforward; you would only do that if you really had the time to invest in the project.

A few projects were big enough to have their own servers, and a few were hosted on other “random” project management sites, which for a while appeared to sprout up because the Forge software used by SourceForge was (for a while at least) released as free software itself. Some of those websites were specific in nature, others more general. Over time, BerliOS appeared to become the anti-SourceForge, with a streamlined application process and, most importantly, with Subversion years before SF would gain support for it.

Things got a bit more interesting later, when Bazaar, Mercurial, Git and so on started appearing on the horizon, because at that point having proper source control could be achieved without needing special servers (at least as long as you didn’t need to publish it, although there were ways around that too). Which at the same time made some project management websites redundant, and others more appealing.

But let’s take a look at the list of project management websites that I have used and that are now completely or partly gone, with or without history:

  • The aforementioned BerliOS, which teetered back and forth a couple of times. I had a couple of projects over there, which I ended up importing to GitHub, and I also forked unpaper there. The service and the hosting were taken down in 2014, but (all?) the projects hosted on the platform were mirrored on SourceForge. As far as I can tell they were mirrored read-only, so for instance I can’t de-duplicate the unieject projects, since I originally wrote it on SourceForge and then migrated it to BerliOS.

  • The Danish SunSITE, which hosted a number of open-source projects for reasons that I’m not entirely clear on. NoX-Wizard, an open-source Ultima OnLine server emulator, was hosted there, for reasons that are even murkier to me. The site got renamed to dotsrc.org, but they dropped all the hosting services in 2009. I can’t seem to find an archive of their data; NoX-Wizard was migrated to SourceForge during my time, so that’s okay by me.

  • RubyForge used the same Forge app as SourceForge, and was focused on Ruby module development. It was abruptly terminated in 2014, and as it turns out I made the mistake of not explicitly importing my few modules from it. I should have them in my backups if I start looking for them; I just haven’t done so yet.

  • Gitorious set itself up as an open, free software alternative to GitHub. Unfortunately it clearly was not profitable enough, and it got acquired, twice. The second time by the competing service GitLab, which had no interest in running the software. A brittle mirror of the project repositories only (no user pages) is still online, thanks to Archive Team. I originally used Gitorious for my repositories rather than GitHub, but I came around to the latter and moved everything over before they shut the service down. Well, almost everything: it turns out some of the LScube repos were not saved, because they were only mirrors… except that the domain for that project expired, so we lost access to the main website and Git repository, too.

  • Google Code was Google’s project hosting service, which started by offering Subversion repositories, downloads, issue trackers and so on. Very few of the projects I tracked used Google Code to begin with, and it was finally turned down in 2015, with all projects made read-only except for the ability to set up a redirection to a new homepage. The biggest project I followed on Google Code was libarchive, and they migrated it fully to GitHub, including migrating the issues.

  • Gemcutter used to be a repository for Ruby gems. I actually forgot why it was started, but for a while it was the alternative repository where a lot of the cool kids stored their libraries. Gemcutter got merged back into rubygems.org, and the old links now appear to redirect to the right pages. Yay!

With such a list of project hosting websites going the way of the dodo, an obvious conclusion to draw would be that hosting things on your own servers is the way to go. I would still argue otherwise. Despite the number of hosting websites going away, it feels to me like the vast majority of the information we lost over the past 13 years comes from blogs and personal websites badly used for documentation. With the exception of RubyForge, all the examples above were properly archived one way or another, so at least the majority of the historical memory is not gone at all.

Not using project hosting websites is obviously an option. Unfortunately it comes with the usual problems, and with even higher risks of losing data. Even GitLab’s snafu had a better chance of being fixed than whatever your one-person project faces when the owner gets tired, runs out of money, graduates from university, or even dies.

So what can we do to make things more resilient to disappearing? Let me suggest a few points of action, which I think are relevant and possible right now to make things better for everybody.

First of all, let’s all make sure that the Internet Archive sticks around, by donating to it. I set up a €5/month donation, which gets matched by my employer. Among other things, the Archive provides the Wayback Machine, which is how I can still fetch some of the content both from my past blogs and from the blogs of people who deleted or moved them, or even passed away. The Internet is our history; we can’t let it disappear for lack of effort.

Then, for what concerns the projects themselves, it may be a bit less clear cut. The first thing I’ll be much more wary about in the future is relying on the support sites when writing comments or commit messages. Issue trackers get lost, or renumbered, and so references to them break too easily. Be verbose in your commit messages, and if needed quote the relevant part of the issue, instead of just writing “Fix issue #1123”.

Even mailing lists are not safe. While Gmane is currently supposedly still online, most of the gmane links from my own blog are broken, and I need to find replacements for them.

This brings me to the following problem: documentation. Wikis made documenting things significantly cheaper, as you don’t need to learn much, either in terms of syntax or in terms of process. Unfortunately, backing up wikis is not easy because a database is involved, and it’s very hard, when taking over a project whose maintainers are unresponsive, to find a good way to import the wiki. GitHub makes things easier thanks to GitHub Pages, and that’s at least a starting point. Unfortunately it makes the process a little messier than a wiki, but we can survive that, I’m sure.
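
As for the unresponsive-maintainers case: when you can’t get a database dump, the least bad fallback I can think of is a crude read-only mirror of the rendered pages, which at least preserves the content even if it loses the history. A minimal sketch, where the page list and destination are placeholders and a real run would walk the wiki’s own index:

    import urllib.parse
    import urllib.request
    from pathlib import Path

    # Placeholder list of pages worth keeping; build it from the wiki's
    # index or sitemap in a real run.
    PAGES = [
        "https://wiki.example.org/Main_Page",
        "https://wiki.example.org/Building",
    ]
    DESTINATION = Path("wiki-mirror")
    DESTINATION.mkdir(exist_ok=True)

    for url in PAGES:
        name = urllib.parse.urlparse(url).path.strip("/").replace("/", "_") or "index"
        with urllib.request.urlopen(url, timeout=30) as response:
            (DESTINATION / f"{name}.html").write_bytes(response.read())
        print(f"saved {url} -> {name}.html")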

Myself, I decided to use a hybrid approach. Given that some of my projects, such as unieject, managed to migrate from SourceForge to BerliOS, to Gitorious, to GitHub, I have now set up a number of redirects on my website, so that their official homepage reads something like https://www.flameeyes.eu/p/glucometerutils, and it redirects to wherever I’m hosting them at the time.
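
Of course, a redirect layer like this only helps as long as the redirects themselves keep pointing at the right place, so it’s worth checking them every now and then. A small sketch of such a check (the mapping below is illustrative, not my actual list):

    import urllib.error
    import urllib.request

    class NoRedirect(urllib.request.HTTPRedirectHandler):
        """Stop urllib from following redirects so we can inspect them."""
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None

    # Illustrative mapping: vanity URL -> prefix it is expected to point to.
    REDIRECTS = {
        "https://www.flameeyes.eu/p/glucometerutils": "https://github.com/",
    }

    opener = urllib.request.build_opener(NoRedirect)

    for source, expected_prefix in REDIRECTS.items():
        try:
            response = opener.open(source, timeout=10)
            print(f"{source}: not a redirect (HTTP {response.status})")
        except urllib.error.HTTPError as error:
            location = error.headers.get("Location", "")
            verdict = "OK" if location.startswith(expected_prefix) else "WRONG"
            print(f"{source}: {verdict} -> {location or '(no Location header)'}")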

Ranting on about EC2

Yes, I’m still fighting with Amazon’s EC2 service for the very same job, and I’m still ranty about it. Maybe I’m too old-school, but I find the good old virtual servers much, much easier to deal with. It’s not that I cannot see the usefulness of the AWS approach (you can easily get something going without a huge initial capital investment in virtual servers, and you can scale it up further down the road), but I think more than half the interface is just an afterthought, rather than an actual design.

The whole software support for AWS is a bit strange. The original tools, which are available in Portage, are written for the most part in Java, but they don’t seem to be actively versioned and properly released by Amazon themselves, so you actually have to download the tools and then check the version from the directory inside the tarball to know the stable download URL for them (to package them in Gentoo, that is). You can find code to manage various pieces of AWS in many languages, including Ruby, but you cannot easily find an alternative console other than the ElasticFox extension for Firefox, which I have to say makes me doubt it a lot (my Firefox is already slow enough). On the other hand, I actually found some promising command-line utilities in Rudy (which I packaged in Gentoo with no small effort), but besides some incompatibility with the latest version of the amazon-ec2 gem (which I fixed myself), there are other troubles with it (like it not being straightforward to handle multiple AMIs for different roles, or it being impossible to handle snapshot/custom AMI creation through it alone). Luckily, the upstream maintainer seems to be around and quite responsive.

Speaking of the libraries, it seems like one of the problems with the various Ruby-based libraries is that one of the most commonly used ones (RightScale’s right_aws gem) is no longer maintained, or at least upstream has gone missing, and that caused an obvious stir in the community. There is a fork of it, which forks the HTTP client library as well (right_http_connection becoming http_connection — interestingly enough, for a single one-line change that I’ve simply patched in on the Gentoo package). The problem is that the fork is worse than the original gem as far as packaging is concerned: not only does the gem not provide the documentation, Rakefile, tests and so on, but releases were not even tagged in the git repository, last I checked. Alas.

Luckily, it seems like amazon-ec2 is much better in this regard; not that it was pain-free, but even here upstream is available and fast to release a newer version; the same goes for litc, and the dependencies of the above-mentioned Rudy (see also this blog post from a couple of days ago). This actually means that the patches I’m applying, and adding to Gentoo, get deleted or don’t even enter the tree to begin with, which is good for the users who have to sync, as it keeps the size of Portage down to acceptable levels.

Now, back to the EC2 support proper. I already ranted before about the lack of Gentoo support; it turns out that there is more support if you go with the American regions rather than the European one. And at the same time, the European zone seems to have problems: I spent a few days wondering why right_aws failed (and I thought it was because of the bugs that got it forked in the first place), but in the end I had to conclude that the problem was with AWS itself: from time to time, a batch of my requests falls into oblivion, with errors ranging from “not authorized” to “instance does not exist” (for an instance I’m still SSH’d into, by the way). In the end, I decided to move to a different region, US/East, which is where my current customer is doing their tests already.

Now this is not easy either, since there is no way to simply ask Amazon to take a volume from a given region (or zone) and copy it to another within their own systems (you can use a snapshot to recreate a volume on a different availability zone within a region, but that’s another problem). The official documentation suggests out-of-band transfer (which, for big volumes, becomes expensive), and in particular the use of rsync. Now this wouldn’t be too difficult, and it would be a good suggestion, if not for one detail. As far as I can tell, the only well-supported community distribution available with a decently recent kernel (one that works with modern udev, for instance) is Ubuntu; in Ubuntu you cannot log in as root directly, as you all probably well know, and EC2 is no exception (indeed, the copy-and-paste command they give you to connect to your instances is wrong for the Ubuntu case: they explicitly tell you to use the root user, when you have to use the ubuntu user instead, but I digress). This also means that you cannot use the root user as either origin or destination of an rsync command: you can sudo -i to get a root session on one side or the other, but not on both, and you need it on both to be able to rsync over the privileged files. Okay, the solution is easy to find: you just need to tar up the tree you want to transfer, and then scp that over. But it really strikes me as odd that their suggested approach does not work with the only distribution that seems to be updated and supported on their platform.
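
For the record, the workaround looks roughly like this; a sketch with made-up paths and hostnames rather than my exact commands, and it assumes you run the archiving side with enough privileges to read everything:

    import subprocess
    import tarfile

    # Made-up paths and host, for illustration only.
    SOURCE_TREE = "/srv/data"
    ARCHIVE = "/tmp/data.tar.gz"
    REMOTE = "ubuntu@ec2-host.example.com"

    # Create the archive locally; run this part as root (or under sudo)
    # so the privileged files are readable on this side.
    with tarfile.open(ARCHIVE, "w:gz") as archive:
        archive.add(SOURCE_TREE, arcname="data")

    # Copy it over as the unprivileged ubuntu user...
    subprocess.run(["scp", ARCHIVE, f"{REMOTE}:/tmp/data.tar.gz"], check=True)

    # ...and unpack it with sudo on the far side, which is exactly the step
    # a straight rsync cannot do when root logins are disabled on both ends.
    subprocess.run(
        ["ssh", REMOTE, "sudo", "tar", "-xzf", "/tmp/data.tar.gz", "-C", "/srv"],
        check=True,
    )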

Now, after the move to the US/East region, the problems seem to have disappeared and all commands finally succeed every time, yippee! I was finally able to work properly on the code for my project, rather than having to fight with deployment problems (this is why my work is in development and not system administration). After such an ordeal, writing custom queries in PostgreSQL was definitely more fun (no Rails, no ActiveRecord, just pure good old PostgreSQL — okay, I’m no DBA either, and sometimes I have difficulties getting big queries to perform properly, as demonstrated by my work on the collision checker, but a simpler and more rational schema I can deal with pretty nicely). Until I had to make a change to the Gentoo image I was working with, and decided to shut it down, restart Ubuntu, and make the changes to create a new AMI; then hell broke loose.

It turns out that for whatever reason, for the whole day yesterday (Wednesday 17th February), after starting Ubuntu instances, with both my usual keypair and a couple of newly-created ones (to rule out a problem with my local setup), the instance would refuse SSH access, claiming “too many authentication failures”. I’m not sure of the cause; I’ll have to try again tonight and hope that it works, as I’m late on delivery already. Interestingly enough, the system log (which only appears for one out of ten requests from the Amazon console) shows everything as okay, with the sole exception of Plymouth crashing with a segmentation fault (code 11) just after the kernel loads.

So, all in all, I think that as soon as this project is completed, and with the exception of possible future work on it, I will not be turning back to Amazon’s EC2 anytime soon. I’ll keep getting normal vservers, with proper Gentoo on them, without hourly fees, with permanent storage and so on and so forth. (I’ll stick with my current provider as well, even though I’m considering adding a fallback mirror somewhere else to be on the safe side; while my blog’s not that interesting, I have a couple of sites on the vserver that might require me to have higher uptime, but that’s a completely off-topic matter right now.)

xine, monkeys, branches and micro-optimisations

It has been a while since I last blogged about the technical progress of xine, because I mostly didn’t follow its technical progress in the past months, being too occupied trying to save myself from my own body. So I decided to dedicate myself first to some less technical issues.

The first of which was the bug tracker, as it seems that almost unanimously the developers of xine, me included, hate the SourceForge tracker and the ones like it (GForge’s and Savane’s). There are also enough people not interested in jumping on the Launchpad boat, or the Google boat, as they both have the same technical limitation that SourceForge has: they are not open solutions. SF was, but it was then closed up; GForge also was, but now is no more (this probably accounts for the fact that many new services provided by Alioth aren’t well integrated into the GForge installation); Savane still is free, and I admit its bug tracker is a bit better, although it’s not exactly the best; acceptable, though. The last decision was to choose between Scarab (JSP-based) and Roundup (Python-based); the first should be faster to set up as it’s pretty much ready out of the box, while Roundup requires quite some fiddling (which Luca actually already did for FFmpeg and MPlayer, so the problem does not even pose itself). Just lately a user proposed FlySpray to me in a comment, and that entered the competition too, especially since, being PHP-based, it’s less of a burden than JSP, and it’s certainly good enough out of the box when compared to Roundup.

The current status of the bug tracker is still uncertain. I’m waiting for Siggi (one of the oldest xine developers still active, and maintainer of the xinehq.de website) to see if he has time to manage and administer the Roundup install; if he does, then you’ll soon see the tracker on that site, maybe a bit rough until we can find someone with HTML and CSS skills to improve it. If Siggi is unable to take care of it, we have a Plan B, as IOS Solutions (the same provider where my blog is located now, and where Reinhard Tartler – the Debian and Ubuntu maintainer of xine packages – works) is ready to offer the xine project a vserver (which I’m ready to maintain myself; after all, it will probably share a good 80% of packages and configuration with my own).

So the bug tracker should be solved; it’s just a matter of deciding which plan to proceed with. But make no mistake: the bug tracker will open soon, and we’ll be able to ditch the SourceForge one soon after.

There is one other non-technical problem though, which I started looking at some days ago: the licences, as I already suggested in my blog post about OpenGL.

If you follow the recent changes in software licences due to the release of the GPLv3, you’ll see that there are indeed many projects moving to the new version of the licence released by the Free Software Foundation. Beside the GNU project’s packages (which of course were already supposed to move to GPLv3 when it was released, and this includes binutils and GCC, two major parts of a standard toolchain for free operating systems), there is the change in Samba’s licence, which is already causing headaches for KDE people: you break the licence if you distribute binary packages of software linking at the same time to the new Samba and to Qt (as Qt, although it allows being used alongside LGPL-licensed code, does not allow GPLv3 yet).

xine does use Samba, for the SMB input plugin; xine also uses libcdio, which was being discussed for a move to GPLv3, too. So, at the suggestion of Matthias Kretz (the Phonon maintainer for KDE4), I decided to make sure that the xine code was clean when it came to licensing. It wasn’t, and still isn’t.

The main problems were due to source files and scripts that were added by project members but lacked any header with licence information and copyright. Most of the time the cause is just laziness, and I can’t really blame them :) Other times it’s just the assumption that the code is so trivial that it makes no sense to put a licence on it. For those cases I suggested using the ISC licence (essentially an as-is licence), which explicitly permits using the code for almost anything.

Unfortunately there are also cases of licence incompatibilities: to play NSF files (the NES audio format), xine uses code taken from nosefart; although nosefart is declared GPL2, almost all of its source files have no header with copyright and licence, while some of them show a clear copyright referring to MAME, which I’m quite sure is not GPL-compatible. The Goom visualisation plugin, then, has some files with an “All rights reserved” notice, which makes it a very bad idea to distribute. These are the biggest cases, but of course there is also the libw32dll code, which, although taken from MPlayer and Wine, does not show any copyrights or licences, and the vidix code that… well, that’s just a mess in every way.

Up to now I’ve been able to replace some of the replacement functions with copies from OpenBSD and NetBSD, licensed under the ISC or 3-clause BSD licences, making them safe and compatible with any GPL. The alternative, of course, was to make use of gnulib, but I’m not ready to shoot at the bird with a BFG10K; for xine-ui it might be too much, although a proper fix in the xine-lib build system might help. The problem is that for now the replacement functions are built into libxine.so itself, rather than being compiled into every plugin. It might not get into xine-lib-1.2, but if I ever get around to adding a libxinecore in 1.3 (which would provide a “protected ABI” that only libxine and the plugins can use, but the frontends cannot), they might be moved there and be done with.

For what concerns vidix instead, Diego (the other Diego) informed me that MPlayer has a newer version of vidix with all the licence information cleared up. Unfortunately my Enterprise can’t really run vidix (at least the last time I tried), but I’ll certainly try to find time to work on it in the next week, so as to merge the code coming from MPlayer and replace the old one, at least for the 1.2 series.

The problems remain with Nosefart and Goom. What I intend to do (as soon as I find the time) is make both of them optional at runtime (right now nosefart is always built because it adds no external dependency, and the same applies to Goom), and then make sure that they are not built by binary distributions. At the same time, on Gentoo they should be disabled by the bindist USE flag, but that’s another story.

I hope to clear up the use of those as soon as possible, as I don’t really want to have a package that is inconsistent with its own licence.

On the theme of micro-optimisations, instead, I want to say that I’ve done “some” commits on xine-ui in the past few days. I decided to take a look at the code to see what could be fixed, as I found some pretty nasty things while I was checking its licences. The result is that the current CVS version of xine-ui should have a reduced vmsize, at the cost of some kilobytes more on disk. I achieved this by marking as constant as many global symbols as possible (they could also be made static to a unit, actually). If you have a non-stripped binary of an app you want to optimise this way, check the output of objdump -t file | fgrep .data and objdump -t file | fgrep .rodata, and make sure the first is almost empty. I’m still not sure why the on-disk file gets bigger rather than smaller, but it might depend on the fact that I have debug information available.

Unfortunately xine-ui really has been written over and over, and it could use a good rewrite. I tried to fix some stuff, but it’s just one bandaid over another.

The same technique I used to find the symbols to move from .data to .rodata (the former is the ELF section where writable variables end up, while the latter is where constants go; it is read-only and can be shared among processes) cannot be applied as easily to xine-lib, though: the -fvisibility tricks we use hide the symbols from the objdump output. Plus there is the problem that, for instance, by running it over the FAAD plugin I’d get all the libfaad symbols too, as with no recent libfaad in Portage I’m still building my local xine-lib without the system libfaad (Alexis, if you read this, I’d be very happy if you could at least provide an overlay with the ebuild for 2.6.1 :) ).

I don’t know why, but such micro-optimisations, which most likely don’t change the speed of the code, nor fix the biggest memory problems in xine, are something that I like doing; they relax me quite a bit.

Anyway, today I worked on libcdio too, but as I have yet to finish what I started during the night, I’ll blog about that tomorrow.

Finally on the new server

Okay, the blog has now moved to its new location. I have also moved the Git repositories, but not the website; that will come soon. In the meantime at least the blog is accessible on the new box. The rest of the content (website, downloads, and of course ServoFlame) will come in the next hours.

I wish to thank Reinhard Tartler and the staff of IOS Solutions who rented me the server.

And that’s the nice thing about Gentoo/FreeBSD: I migrated from FreeBSD to Linux (Gentoo Linux, of course) with the same configuration. I just needed to tweak a couple of things, as I want it a bit more reliable than before.

Now I won’t be cut off from the net if I get the Slashdot effect again ;)

The long story of xine and the bugs

You may remember some time ago I blogged about the need for the xine project to replace the SourceForge bug tracker with something more usable.

Today I felt this need even more acutely, when a change in last.fm’s servers (or rather, in their HTTP responses) caused xine 1.1.8 to stop reporting last.fm track changes to Amarok.

The problem here was that the bug on bugs.kde.org was handled directly, without submitting anything upstream (I suppose I was unavailable at the time). I’m not surprised: everybody in the Amarok team seems to agree that the SourceForge tracker is unusable.

But if you are one of the people who have ever looked for alternative bug trackers, you might know that it’s not an easy task: the most common choice for big projects is Bugzilla, but the resources it needs make it impossible to find a cheap host for it. The alternatives aren’t much better: Mantis is, in my opinion, just as bad as the SourceForge tracker, Roundup requires mod_python, and the other tracker I know of is Scarab, which is JSP-based.

Now, before the incidents of last summer I asked Siggi (the xinehq.de admin) if he could provide a tracker, and he agreed to try putting Roundup on it, but he hasn’t written anything to xine-devel in the meantime, so I asked him to bring me up to date on the status; I’m afraid it’s not going to be feasible in the short term, at this point.

Of the two alternatives, Roundup is probably the simplest to host, as it requires Apache, mod_python and a database, but it has the disadvantage of requiring a complex configuration, as it’s more of a framework for bug trackers than a bug tracker of its own. Although most of the work has already been done by Luca for MPlayer and FFmpeg, and I can ask him to pass on his configuration, it’s still a bit of a mess to configure, from what I’ve seen.

Scarab, on the other hand, has higher requirements, as it’s a JSP web application, but it seems to be more ready for consumption out of the box. The problem here is finding a proper hosting provider: JSP hosting is expensive, very expensive sometimes, and the alternative is a dedicated or virtual server, which isn’t cheap either (although it might be cheaper than the hosting).

Now, of course, I’ll have to wait for Siggi, as he might have good news for me, and then I won’t have to worry about it anymore; but in the meantime, I was wondering which options we have if he doesn’t.

Basic JSP hosting, without PostgreSQL or SSL support, seems to run at about $25 a month; with the current dollar value, that wouldn’t be much at all if I had a stable job, but I don’t have one. Plus, it might be desirable to get SSL support so that passwords aren’t sent in clear text (SSL support is available, together with PgSQL and other services, at twice the price where I’ve looked so far); that works out to roughly $300 a year, or $600 for the full package. If there were more interest in the xine project from its users (a lot of whom are Amarok users who use xine as a backend), it could be possible to try a yearly fundraiser to raise the needed money, but considering that I rarely see comments about xine on my blog, I doubt doing so would bring us the needed money every year.