Project Memory

Through a series of events whose start I'm not entirely sure of, I began fixing some links on my old blog posts that had broken, most of which I ended up having to find on the Wayback Machine. While doing that, I found some content from my very old blog. One that was hosted on Blogspot, written in Italian only, and frankly written by an annoying brat who needed to learn something about life. Which I did, of course. The story of that blog is for a different time and post; for now I'll focus on a different topic.

When I started looking at this, I ended up going through a lot of my blog posts and updating a number of broken links, either by pointing them to the Wayback Machine or by removing them. I focused on those links that can easily be grepped for, which turns out to be a very good side effect of having migrated to Hugo.

This meant, among other things, removing references to identi.ca (which came to my mind because of all the hype I hear about Mastodon nowadays), removing links to my old “Hire me!” page, and so on. And that’s where things started showing a pattern.

I ended up updating or removing links to Berlios, Rubyforge, Gitorious, Google Code, Gemcutter, and so on.

For many of these, it turned out I don't even have a local copy (at hand, at least) of some of my smaller projects (mostly the early Ruby stuff I've done). But I almost certainly have some of that data in my backups, some of which I actually have in Dublin and want to start digging into at some point soon. Again, this is a story for a different time.

The importance of the links to those project management websites is that, for many projects, those pages were all you had about them. And for some of those, all the important information was captured by those services.

Back when I started contributing to free software projects, SourceForge was the badge of honor of being an actual project: it would give you space to host a website, as well as source code repositories. And this was the era before Git, Mercurial and the other DVCSes, which meant either you had SourceForge, or you likely had no source control at all. But SourceForge admins also reviewed (or at least claimed to review) every project that was created, so creating a project on the platform was not straightforward; you would do it only if you really had the time to invest in the project.

A few projects were big enough to have their own servers, and a few were hosted on other "random" project management sites, which for a while appeared to sprout up because the Forge software used by SourceForge was (for a while at least) released as free software itself. Some of those websites were specific in nature, others more general. Over time, BerliOS appeared to become the anti-SourceForge, with a streamlined application process and, most importantly, with Subversion years before SF would gain support for it.

Things got a bit more interesting later, when Bazaar, Mercurial, GIT and so on started appearing on the horizon, because at that point proper source control could be achieved without needing special servers (without publishing it, at least, although there were ways around that). This at once made some project management websites redundant, and others more appealing.

But let's take a look at the list of project management websites that I have used and that are now completely or partly gone, with or without their history:

  • The aforementioned BerliOS, which teetered back and forth a couple of times. I had a couple of projects over there, which I ended up importing to GitHub, and I also forked unpaper there. The service and the hosting were taken down in 2014, but (all?) the projects hosted on the platform were mirrored on SourceForge. As far as I can tell they were mirrored read-only, so for instance I can't de-duplicate the unieject projects, since I originally wrote it on SourceForge and then migrated it to BerliOS.

  • The Danish SunSITE, which hosted a number of open-source projects for reasons that I'm not entirely clear on. NoX-Wizard, an open-source Ultima OnLine server emulator, was hosted there, for reasons that are even murkier to me. The site got renamed to dotsrc.org, but they dropped all the hosting services in 2009. I can't seem to find an archive of their data; NoX-Wizard was migrated to SourceForge during my time, so that's okay by me.

  • RubyForge used the same Forge software as SourceForge, and was focused on Ruby module development. It was abruptly terminated in 2014, and as it turns out I made the mistake of not explicitly importing my few modules from it. I should have them in my backups if I start looking for them; I just haven't done so yet.

  • Gitorious set itself up as an open, free software competitor to GitHub. Unfortunately it clearly was not profitable enough, and it got acquired, twice; the second time by the competing service GitLab, which had no interest in running the software. A brittle mirror of the project repositories only (no user pages) is still online, thanks to Archive Team. I originally used Gitorious for my repositories rather than GitHub, but I came around to the latter and moved almost everything over before they shut the service down. As it turns out, some of the LScube repos were not saved, because they were only mirrors… except that the domain for that project expired, so we lost access to the main website and GIT repository, too.

  • Google Code was Google's project hosting service, which started by offering Subversion repositories, downloads, issue trackers and so on. Very few of the projects I tracked used Google Code to begin with, and it was finally turned down in 2015, with all projects made read-only except for the ability to set up a redirect to a new homepage. The biggest project I followed on Google Code was libarchive, and they migrated it fully to GitHub, including the issues.

  • Gemcutter used to be a repository for Ruby gem packages. I actually forgot why it was started, but for a while it was the alternative repository where a lot of the cool kids stored their libraries. Gemcutter got merged back into rubygems.org, and the old links now appear to redirect to the right pages. Yay!

With such a list of project hosting websites going the way of the dodo, an obvious conclusion to draw is that hosting things on your own servers is the way to go. I would still argue otherwise. Despite the number of hosting websites going away, it feels to me like the vast majority of the information we lost over the past 13 years comes from blogs and personal websites badly used for documentation. With the exception of RubyForge, all the above examples were properly archived one way or another, so at least the majority of the historical memory is not gone.

Not using project hosting websites is obviously an option. Unfortunately it comes with the usual problems and with even higher risks of losing data. Even GitLab's snafu had a higher chance of being fixed than whatever your one-person project has when the owner gets tired, runs out of money, graduates from university, or even dies.

So what can we do to make things more resilient to disappearing? Let me suggest a few points of action, which I think are relevant and possible right now to make things better for everybody.

First of all, let's all make sure the Internet Archive stays around, by donating. I set up a €5/month donation which gets matched by my employer. The Archive provides, among other things, the Wayback Machine, which is how I can still fetch some of the content both from my past blogs and from the blogs of people who deleted or moved them, or even passed on. The Internet is our history; we can't let it disappear for lack of effort.

Then, for what concerns the projects themselves, things may be a bit less clear-cut. The first thing I'll be much more wary about in the future is relying on support sites when writing comments or commit messages. Issue trackers get lost, or renumbered, and so references to them break too easily. Be verbose in your commit messages, and if needed quote the issue, instead of writing just "Fix issue #1123".
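To make that concrete, here is a made-up sketch of the difference (the issue and the project are entirely hypothetical):

```
Fix issue #1123: don't crash when the config file is missing.

The reporter ran the tool on a fresh install, before any configuration
existed, and hit an unhandled exception on startup. Treat a missing
configuration file as an empty configuration instead.
```

Even if the tracker and its numbering disappear, the message still explains what was broken and why the change was made.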

Even mailing lists are not safe. While Gmane is supposedly still online, most of the Gmane links from my own blog are broken, and I need to find replacements for them.

This brings me to the following problem: documentation. Wikis made documenting things significantly cheaper, as you don't need to learn much, neither in the form of syntax nor in the form of process. Unfortunately, backing up wikis is not easy because a database is involved, and it's very hard, when taking over a project whose maintainers are unresponsive, to find a good way to import the wiki. GitHub makes things easier thanks to GitHub Pages, and that's at least a starting point. Unfortunately it makes the process a little messier than a wiki, but we can survive that, I'm sure.

Myself, I decided to use a hybrid approach. Given that some of my projects, such as unieject, managed to migrate from SourceForge to BerliOS, to Gitorious, to GitHub, I have now set up a number of redirects on my website, so that their official website reads as https://www.flameeyes.eu/p/glucometerutils, and it redirects to wherever I'm hosting them at the time.
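I won't go into the mechanics here, but as an idea of what such a redirect amounts to, a hypothetical nginx rule would look like this (the GitHub target is just an example of "wherever I'm hosting them at the time"):

```nginx
# The stable URL lives on my domain; the target can change freely on
# the next migration, without breaking any published link.
location = /p/glucometerutils {
    return 302 https://github.com/Flameeyes/glucometerutils;
}
```

The stable URL is the one to put in READMEs and commit messages; only the redirect needs updating when the project moves again.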

What’s wrong with release notifications?

Distributions like Gentoo have one huge issue with users: they all demand their updates the very moment they are released. This is why many people, including me, have ranted before about the meme of 0day bumps. Generally speaking, we tend to know about a new release of a package we maintain, because we follow its development, tightly or loosely. Unfortunately, it's quite possible that a new release passes into the background for whatever reason, and the result is, well, a package that doesn't get bumped. Note here: it's very well possible that some developer will forget to bump his own (upstream) package; shit happens sometimes.

Most of the time, to solve this kind of problem we can use one of the many tools at our disposal to receive release notifications… unfortunately this is not all that feasible nowadays: it used to be better, and it definitely got worse in the past months! Given that most upstreams barely have a release publishing procedure, most of us preferred notifications that are not "actively handled" by the developers, but rather happen as a by-product of the release itself: this way even sloppier releases had their notifications sent out.

The biggest provider of by-product release notifications was, once upon a time, SourceForge – I say "once upon a time" because they stopped. While I can understand that a lot of the services offered by SF were redundant, and that most projects ended up setting up better, if less integrated, software anyway (such as phpBB, Mantis – as Bugzilla wouldn't work – or various wikis), and I can appreciate that the old File Release System was definitely overcomplex, I can't see why they stopped allowing users to subscribe to notifications. The emails they used to send are now loosely replaced by the RSS feed of released files… the problem is that the feed is huge (as it lists all the files ever released for a project) and not sorted chronologically. Sure, there is still Freshmeat, but to have notifications working there you're asking the upstream maintainer to explicitly remember to bump the version on a different website, and that's a bit too much for most people.

You'd expect that the sites that took SourceForge's place got better at handling these things, wouldn't you? But that's definitely not true. Strangely enough, the good example here seems to come from the Ruby community (I say "strangely enough" because you might remember that I ranted so much about missing practises, mandatory procedures, metadata and so on).

First of all, RubyForge still sends release notifications by mail (good!), and second, kudos to Gemcutter, which allows subscribing to a (semi-)private RSS feed with the new releases of just a subset of all the gems published (the Gentoo Ruby team has a common one to which the gems present in Portage are subscribed; if I remember to add them, that is). It works, sorta. It still requires you to poll something, although in the end you're just swapping the mail reader for the feed reader, so it's not much of a change. The one problem is that you end up receiving a bit more noise than I'd like, as gems that are available in binary form for Windows and Java are listed more than once after an update. But it's good that it is actually integrated with the gem release procedure.

On the other hand, excluding all the packages that have no hosting facility at all (which sometimes, as in sudo's case, have better announcement systems), there are two sites that I count as major screw-ups, to different degrees: Google Code and Launchpad. The former is just partly screwed: starring a project does not subscribe you to updates, but at least there is a feed of all the files released by the project. What I find definitely strange is that there is no integrated "Subscribe in Google Reader" button, which would have been definitely more friendly.

Launchpad, instead, looks much worse. I recently picked up co-maintainership of a trio of projects, and not only is there no email notification, there is no feed of either releases or files! Which means that the only way to find out whether a project made a new release is to check its homepage. Yuppie. I opened a bug for that on Launchpad, but I have now lost the link: it was duped against something else, and is no longer visible through Launchpad's own interface, which is, in my book, yet another failure.

Why is it so difficult to accept that packagers need notifications? This gets even sillier considering that I'm sure the main argument against notifications is going to be "but users won't have to care, as the package will be available in their distribution".

The hard return to ruby-hunspell and rust

As you can easily imagine from what I wrote in the past few days, I've been busy trying to clean up after myself in old projects that are close to abandoned. I wrote about my resolution to spend more time working, starting with the new year, to save some money toward getting my driving license and a car, and in the past days I cleaned up both the rbot bugzilla plugin (as well as rbot's ebuild) and then nxhtml today, so it was quite obvious that I also had to take a look at long-ignored projects like ruby-hunspell and rust (and of course rubytag++).

I started with ruby-hunspell, as with that I can also fix the hunspell plugin for rbot (and thus put back the only feature left over from ServoFlame). The first problem I wanted to tackle was removing the cmake dependency. As I said yesterday, I've started feeling the power of GNU make, and I also have enough reasons not to use cmake, so if I could convert the build of the extensions (they are quite simple after all) to plain GNU make, I would gladly do it.

Indeed, switching the build system to plain GNU make, with some tricks here and there, was not difficult at all, and the result is good enough for me. It's not (yet) perfect, but it's nicer. I also hope to generalise it a bit so that I can reuse it for rubytag++ too, and hide some of the dirty tricks I use.

Thankfully there is a good note about it: in the five releases between the previous time I worked on ruby-hunspell and today (1.1.4 then, 1.1.9 now), hunspell added support for pkg-config files, making the build system quite a bit nicer. Also, thanks to git improvements, making the tarball is much easier. And thanks to the power of GNU make, instead of having a tarball.sh script, it's now simply make tarball (although I will probably switch to make dist; I only thought of this just now).
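To give an idea of why pkg-config makes a plain GNU make build nicer, here is a minimal sketch (file and variable names are illustrative, not the actual ruby-hunspell build):

```make
# Query hunspell's flags from pkg-config instead of hardcoding paths.
HUNSPELL_CFLAGS := $(shell pkg-config --cflags hunspell)
HUNSPELL_LIBS   := $(shell pkg-config --libs hunspell)

hunspell.o: hunspell.cc
	$(CXX) $(HUNSPELL_CFLAGS) -fPIC -c -o $@ $<

hunspell.so: hunspell.o
	$(CXX) -shared -o $@ $^ $(HUNSPELL_LIBS)
```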

The problems weren't lying too far away, though. First, it seems I changed something in rust some time ago, and the ruby-to-C type function changed name, so I had to update the ruby-hunspell description script to suit that change. Then there is the problem that Hunspell now hides some of its functions as experimental (and by the way, do they have any consideration for the object ABI? Dropping functions behind preprocessor conditionals inside a C++ class isn't the soundest of ideas, I'm afraid…), so I had to comment those out. The biggest problem came with the parsers extension, which used to provide bindings for the libparsers library installed by hunspell.

The libparsers library is now installed only in static form, and its headers are not installed at all. This is probably half intentional, as in they probably consider libparsers an internal library that other projects shouldn't use, so they removed the header files; the problem is that they still install the library, making its possible use a bit ambiguous. At any rate, for now I disabled the parsers extension; it wasn't very hunspell-related anyway, so I would certainly prefer if they stopped installing the library entirely. That extension was also the only one that had a unit test; I should write a testsuite for ruby-hunspell and the hunspell extension too, so that at least I have something to test with.

There is one big problem, though: to release a new ruby-hunspell, which is a requirement for rbot-hunspell, I need to do a release of rust too, but I don't remember much of rust's details; it has been almost a year since I last worked on it :( Additionally, my /tmp is mounted noexec now; it wasn't when I prepared the testsuite, so the tests fail, as the shared object built in /tmp can't be loaded into memory. I'll have to test tomorrow whether the TMPDIR environment variable is respected, in which case I'd use /var/tmp. I'll also add a make dist target to rust so that I don't need extra stuff to prepare the packages.
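If TMPDIR is indeed respected, the fix should be a one-liner along these lines (the test target name here is a guess, not the actual one):

```sh
# build and load the test extension somewhere not mounted noexec
TMPDIR=/var/tmp make test
```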

Finally, there is the problem of the git repositories: for some reason, pushing to the remote repository accessed through this trick fails as if there was nothing to push. Considering I now have my own (v)server, I'll probably just move rust and ruby-hunspell back together with the other git repositories I export. This will also simplify things when I put gitarella back, too.

Tomorrow will be dedicated to work for most of the time, but if I can squeeze in some time for this I'll try to address the issues, and I promise this time I'll write more comments and documentation.

Serving community

I'm not dead yet (if I continue using this phrase, when I die someone will have to write on my tombstone "he's dead, finally"), but I've been quite swamped with my job. It's quite nice when you work for two months on a thing without test data, and when you finally get test data and produce the test suite, you find that everything you've been doing was based on a wrong assumption… nice, really. Not.

Anyway, since I left Gentoo, I've decided to change a few other things in my style. Although I did report a few bugs on the bugzilla for a couple of issues I had, most of my work lately has been upstream. Yesterday accounted for a few fixes in xine, three patches for Valgrind (which you can find in my overlay, if you want to look at them), and one for FFmpeg (which will pass unnoticed, as usual). As working downstream is only nice if you can actually help users by taking action immediately, working on that side is less of a priority for me, although it's obviously not entirely gone: you can see my work on an init.d script for rbot (always in my overlay), which I needed so that ServoFlame doesn't have to be started by hand – and with the new connection handling code in SVN, it's actually coping nicely even with network failures – and on an updated xine-ui snapshot that is now in portage, solving the obnoxious bug with the double-click to go fullscreen. In all this, I stopped writing down the timetable for stabling packages, though.

Today I instead tried working on the Rust project a bit more, mainly by writing a new page (not yet linked on the site, so I won't link it here just yet) with a step-by-step guide on how to set up a developer's repository for Rust, complete with CIA-on-push and mail-on-push, so that the rest of the project (users and other devs) can review the changes. (This was suggested in a nice video from Google's TechTalks; you can watch it with your favourite media player, which I hope will be xine, by downloading it for the PSP/iPod. The MP4 H264/AAC file played nicely for me on xine, even with the green blobs… see later in this entry.) Unfortunately, as far as I can see, the default update script that git installs in the hooks directory does not mail the actual diffs, and it doesn't really provide a decent subject, so I'll have to work on it a bit tomorrow.
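Something along these lines is what I have in mind for the hook; a rough sketch, not the script I'll actually end up with (the list address is made up):

```sh
#!/bin/sh
# update hook: git invokes it with the ref name and the old/new SHA1s
refname="$1"
oldrev="$2"
newrev="$3"

# use the pushed commit's summary line as a meaningful subject
subject="$refname: $(git log -1 --pretty=format:%s "$newrev")"

# mail the actual diff of what was pushed
git diff "$oldrev" "$newrev" | mail -s "$subject" rust-commits@example.org
```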

I wish to thank David, who started working on Rust with me; he's probably the first person who was ever interested in working on a project of mine :) This is why most of my projects ended up dead: being the only one working on them, I often felt like they were of no interest to anyone else.

Anyway, let's move on a bit to the green blobs I named above. If you ever tried to play an H264 stream with xine, you might have experienced this problem: green rectangles touching the left border of the video, appearing throughout the stream. It doesn't happen on all streams, but it does on some. With Valgrind's help I was able to find the «leaf cause» of this problem, as it reports an "invalid read of size 4" in a section of the MMX code, and I can report that disabling the MMX code in FFmpeg itself solves the problem; but why this happens is still a mystery to me, and someone else should look into the issue… Mike, that would be you, possibly :)

Anyway, tonight I was finally able to stop worrying about Windows' crazy behaviour when a symlink is stored in a 7zip file (the extracted file is only a few bytes long, but being named .exe is enough for Windows to try running it… luckily it only produced an Invalid Instruction kind of error, rather than killing the system altogether), so tomorrow a new segment of work will start for me, and that might give me more time between one battery of tests and the next.

One of the things I'm considering, to relax, is replacing my current setup of xmodmap, xbindkeys and xvkbd with a new application that binds mouse button presses to keysym presses, so that I can finally stop patching evdev with Olivier's patch to use my LX700 keyboard, and finally close that odyssey. With XCB the task actually seems way easier than with Xlib; I could probably write an xbindkeys workalike in very little time, but there is more involved, as I want to replace three programs and not just one.

Right now I have evdev patched so that some of the "buttons" of the keyboard (the ones that are actually on the keyboard rather than on the mouse) are reported as keys again; then I xmodmap their keycodes to keysyms (which are also bound to KDE shortcuts), and I can use those. For the forward/back buttons on my mouse, I instead use xbindkeys to bind the mouse press to a call to xvkbd that sends the keysym; this works through a fork() call, which is far from lightweight. What I want is to use a vanilla evdev, with no xmodmap, no xbindkeys and no xvkbd, but an xmouse2sym background process that reads a simple file containing a series of «button = keysym» pairs (similar to .xmodmaprc), finds out whether each keysym is already registered, in which case the proper keycode would be used, and for the keysyms that are not already registered, takes a free slice of unused codes to allocate the key.
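To make the idea concrete, the configuration file I have in mind would look something like this (button numbers and keysym names are just examples):

```
# xmouse2sym configuration sketch, in the spirit of .xmodmaprc:
# bind mouse button presses to existing or newly-allocated keysyms
button8 = XF86Back
button9 = XF86Forward
```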

I've looked up the XCB methods to do this, and it's mostly straightforward; it should also be quite fast to implement with XCB compared to Xlib. The big problem is that the string form of the keysym cannot be used internally: you need to use the numeric form, which is usually returned by the XStringToKeysym() function, client-side… this function is not present in XCB itself, so I either have to use Xlib to get that function, or I'll need to write my own function to get this data. Such a function would be nice to have in xcb-util, so I might look into it… maybe I'll finally be able to learn lexers and parsers ^_^

Okay, it's now quite late and I should be sleeping… it's just that I don't really feel that good lately after sleeping, probably an aftermath of leaving Gentoo, or just my life trying to tell me I'm not immortal and I should employ it better, who knows…

Okay, I exaggerated

I previously wrote that Rust is at the same level as my previous Ruby extension generator. It was, and is: it's ready to build Ruby-Hunspell, to the point that the current GIT head of that project is based only on Rust and thus depends on it. But I did forget one thing: when I converted ruby-hunspell from a hand-crafted C source file to a generated extension, I indeed extended the generator to add a few more cases that were needed for that particular task, but RubyTag++ was already covering a lot more cases, which I still haven't covered in Rust.

But this is not bad news anyway; the work is starting to be less messy and more comfortable, now that a pattern is beginning to appear. While the original generator took a mostly top-down approach, slightly object-oriented but with a lot of if and case conditions to abstract the various differences between a method and a constructor, or between parameters, this time I've started using a more object-oriented approach, as it should happen with Ruby.

Just tonight I found a pretty good abstraction: everything that is described in the interface description file (besides the bindings object itself) is an Element, which can contain more child Elements. Every Element provides to the bindings a declaration (which also contains the declarations of its children), a definition (which also contains the definitions of its children) and an initialization (which also contains the initializations of its children). These are the three main components of Ruby bindings: the first two are simply instances of the respective C/C++ concepts, while the latter is simply the content of the initialisation function for the extension itself, where modules, classes and so on are registered with Ruby.
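In Ruby terms, the skeleton of that abstraction looks more or less like this (a simplified sketch, not Rust's actual code):

```ruby
# Every entity in the description file is an Element; containers simply
# aggregate what their children produce for each of the three outputs.
class Element
  def initialize
    @children = []
  end

  def add_child(child)
    @children << child
    self
  end

  # C/C++ forward declarations for this element and its children.
  def declaration
    @children.map { |c| c.declaration }.join("\n")
  end

  # C/C++ definitions (function bodies) for this element and its children.
  def definition
    @children.map { |c| c.definition }.join("\n")
  end

  # Code for the extension's Init_* function, where modules, classes
  # and methods are registered with Ruby.
  def initialization
    @children.map { |c| c.initialization }.join("\n")
  end
end
```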

So while I'm developing Rust and adding more complex features, I'm actually reducing the code and making it more easily readable. By moving the C/C++ code to separate files, I've made that more readable too, even if that means wasting some space to contain them.

What is missing right now to convert RubyTag++ to Rust? Well, quite a lot of code: constants I just added tonight, but attributes have to be added too, with their set/get structure; inheritance has to be tweaked, as right now it's likely not working; and I have to get better support for sub-modules and for classes within classes.

But I also implemented a few things that were missing or pretty much unusable in the previous generator, of course: for instance, you can now describe custom methods inline in the description, without needing YAML to describe a template, and if methods differ only slightly you can handle that directly: the description file is in Ruby, so you can use variables and treat them however you prefer. And I started adding support for classes wrapping C interfaces, together with the start of ruby-xine bindings (which will be my real-world scenario to keep in a working state).

When I started, I decided I wanted to write as many test cases as I could, to make sure that I didn't introduce regressions between versions; unfortunately I haven't been able to write more than a couple of testcases so far, and I've been using ruby-hunspell and ruby-xine as my test suite… this is not going to work well in the long run, so tomorrow I'll see about adding some more test cases describing the features I added tonight. Hopefully this way I can just run Rust's test directory (which uses a pretty old-style Makefile to run the tests, but I haven't been able to understand Rakefiles correctly… unless a Rakefile is just a way to invoke system from Ruby, and thus has no advantage over make other than not requiring make to be installed) rather than building two extensions and running their testsuites (actually, ruby-xine still doesn't have a testsuite, but it will tomorrow; it will also be quite easy, as all I need as a "testsuite", at least for now, is a single unit with a few tests).

I'll mention again, for those interested: the rust site is at RubyForge; there's a rust-devel mailing list you can subscribe to, to ask questions or propose patches, enhancements and similar; and there's a CIA project page that I just set up (thanks Micah for your site, which is improving a lot lately!) where you can follow the updates. As soon as I get something working with ruby-xine, you'll find that as a GIT repository under Rust, too.

And now the questions are: will I be able to write user documentation (as in, for the person using it, not the kind of final user one expects) for Rust? And how can I focus some attention on Rust, so that there will be someone other than me interested in it? :)
I know that by starting ruby-xine I'm likely to get the attention of Luis (metalgod), but that's hardly enough, logically.

Update (2017-04-22): as you may know, Rubyforge was shut down in 2014. Unfortunately that means that most of the Rust documentation and repository are probably gone (I may have backups but I have not uploaded them anywhere).

How to use GIT via SFTP only

GIT is pretty nice as SCM software; it's nice to be able to work locally on a project, without losing track of the changes, before publishing it on a proper server. One problem with it, though, is that you can't find much hosting space for it, as it requires either HTTP push capabilities or SSH access to a host where GIT is installed, since it's needed server-side to be able to push.

I've worked around this for most of my projects by hosting my own GIT repositories on Farragut, but this limits the scope of the projects quite a bit, and it also means that if I'm offline, my repositories are too. For this reason I moved my overlay to overlays.gentoo.org as soon as GIT support was added.

When I started doing some serious work on Rust and Ruby-Hunspell, I decided I couldn't just leave them to die with my server if my connection got revoked, or if something happened to it, so I looked for an alternative solution. I already had hosting on RubyForge for the ruby-hunspell site, but the only access to the site was through SFTP, and thus git wasn't able to handle it.

Thinking about it, I found a solution that was quite obvious: GIT can push to a local path too, so I just needed to get a copy of the pushed repository and upload it there; using SFTP would be slow, but it worked. I first did it that way, but it grew tiresome quickly.

The next step was helped by Timothy, who often talked about FUSE and who, together with genstef, handled the ebuilds for fuse to run on Gentoo/FreeBSD: using sshfs and fuse, I could push to a «local» path that was actually mounted from the RubyForge SFTP site. And that's what I'm using now to push both the ruby-hunspell and rust repositories, both of which are available to users via HTTP on their two respective sites.

I found one big showstopper with this approach today, but after wasting some of Ferdy's time, I found a pretty simple workaround for it too: during the second push, GIT tries to rename the master.lock file into master, without removing master first, of course; on SSHFS this does not work by default and returns an EPERM error (Permission denied). To fix it, you just need to enable the rename workaround while mounting the SFTP directory.

It is slow, but it works nicely if you don’t have a server where you can use GIT properly.

For those wondering, this is what I'm currently using to mount the RubyForge server:

```sh
sshfs -o workaround=rename rubyforge.org:/var/www/gforge-projects/ \
    /media/repos/flame/remote/rubyforge
```
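The git side then works as with any local path; something like the following (the repository paths are illustrative, not my actual layout):

```sh
# one-time setup: create a bare repository on the mounted path and
# register it as a remote
git clone --bare . /media/repos/flame/remote/rubyforge/rust/rust.git
git remote add rubyforge /media/repos/flame/remote/rubyforge/rust/rust.git

# from then on, publishing is just a local-path push
git push rubyforge master
```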

I hope this post can be useful to someone else too :)

Update (2017-04-22): as you may know, Rubyforge was shut down in 2014. Unfortunately that means that most of the Rust documentation and repository are probably gone (I may have backups but I have not uploaded them anywhere).

Rust is almost ready

Okay, so today I restarted eating almost normally; judging from what a few people have told me, I probably got the flu that seems to have travelled through Europe in the last few days. I wasn't sure I could sustain my usual workflow today, though, because I was still lacking strength after the two days I spent without eating at all, so I spent the day working half time on job stuff and half time on Rust. The latter came almost to completion in the meantime.

If you forgot about it, Rust is my bindings generator for Ruby, an evolution of the bindings generator I used for RubyTag++ and Ruby-Hunspell up to now, which allows you to generate a working Ruby extension binding a C++ library by just describing its interface. While the original bindings generator used a YAML description of the interface, this wasn't as extensible as I needed, so this time it uses Ruby itself, kind of like ActiveRecord and Rails. I'll show you the syntax tomorrow when I upload the code, as I'll need to write some documentation for it anyway.

So what is working in Rust, and what will still require work? First of all, Rust is now up to the same level my previous generator was at, which means it's far from as complete as I want it to be (the target is to be able to just as easily write bindings for C libraries that use OOP interfaces, like Avahi, PulseAudio or xine), but it is important to me that it reached this point, because this way I won't have to maintain both Rust and my other generator, and I can test its working state by using Ruby-Hunspell and RubyTag++ as regression tests until I finish the regression tests themselves (I've only written two so far).

I've now asked for hosting on RubyForge; if all goes well, in the next few days I'll put up a draft of a site there, and then start pushing my changes to the GIT repository there (sshfs is quite slow, but it works nicely for what I need to do). I'll also need a logo, as most Ruby projects have an appealing one; if anybody has an idea or a proposal, it is welcome (my graphics skills don't exist at all).

Hopefully, it will be possible through Rust to get bindings for the libraries I named above without too much work. I still wonder whether it makes sense to have them as separate projects (as Ruby-Hunspell currently is), or whether it would be simpler to have them all live under Rust itself; but there will be time to decide that.

For tonight, I can feel happy, and work on a few more testcases. I'd like to be able to watch some Anime too, but this depends on a series of factors, like which bed I'll sleep in tonight (while I wasn't feeling well I took possession of my mother's bed, as it is more stable than mine, and I had enough nausea without adding a mattress deforming every time I moved; but here I don't have a power socket to connect the laptop to while I watch Anime).