Why I do not like Hugo

Not even a year ago, I decided to start using Hugo as the engine for this blog. It has mostly served me well, except for the fact that it relies on me having some kind of access to a console, a text editor, and my Bitbucket account, which made posting stuff while travelling a bit harder. So I opted instead for writing drafts and staggering their publication — which is why you now see that, for the most part, I post something once every three days, except for the free ideas.

Hugo was sold to me as a static generator for blogs, and indeed when I looked into it, that's clearly what it was aiming to be. Sure, the support for arbitrary taxonomies makes it possible to use it in setups slightly different from a blog, but at that point it was seriously focused on blogs and a few other similar site types. The integration with Disqus was pretty good from the start, as much as I'm not really happy about that choice, and the conversion proceeded mostly smoothly, although it took me weeks to make sure the articles were converted correctly, and even months in I dedicated a few nights a month just to going through the posts to make sure their formatting was right, or through the tags to collapse duplicates.

All in all, while imperfect, it was not as horrible as having to maintain my own Typo fork. Until last week.

I finally decided that maintaining a separate website for my projects is a bad idea. Not just because the styles of the two had drifted out of sync, but most importantly because I barely ever update that content, as most of my projects are dead, have their own website already (like Autotools Mythbuster), or effectively just use their GitHub repository as the main description, even though that pains me. So the best option I found is to just build the pages I care about into Hugo, particularly using a custom taxonomy for the projects, and be done with it. Except.

Except that to be able to do what I had in mind, I needed a feature that was committed after the version of Hugo I had frozen myself at, so I had to update. Updates with Typo were always extremely painful because of new dependencies, and new features, and changes to the database layout, and all those kinds of problems. Certainly Hugo won't have these problems! Except it decided it could no longer render the theme I was using, as one function got renamed from RSSlink to RSSLink.

That was an easy fix; a bit less easy at first was figuring out that someone decided that RSS feeds should include, unconditionally, the summary of the article, not the full text, because, and I quote: «This is a somewhat breaking change, but is what most people expect from their RSS feeds.»

I'm not sure who these "most people" are. And I'd say that if you want to change such a default, maybe you want to make it an option, but that does not seem to be Hugo's style, as I'll show later. But this is not why I'm angry. I'm angry because changing the RSS feed from full content to summary is a very clear change of intent.

An RSS feed that carries the full article content is an RSS feed for a blog (or other site) that wants to be read. You can use this feed to syndicate on Planets (yes, they still exist), read it on services like Feedly or NewsBlur (no, they did not all disappear with the death of Google Reader), and have it at hand in offline readers on your mobile devices, too.

RSS feeds that only carry summaries are there to drive traffic to a site. And this is where the nasty smell around SEOs and similar titles comes back in from under the door. I totally understand that someone trying to make a living off their website wants to be able to bring in traffic, which includes ad views and the like. I have spoken about ads before, and though I recently removed them from the blog altogether for lack of any useful profit, I totally empathise with those who actually can make a profit and want people to see their ads.

But the fact that the tools decide to switch to this mode makes me feel angry and sick, because they are no longer empowering people to make their views visible; they are empowering them to trick users into opening a website, to either be served ads or (if they are geek enough to use Brave) give bitcoin to the author.

As it turns out, it's not the only thing that happens to have changed in Hugo, and the changes all sound like someone decided to follow the path of WordPress, which went from a blogging engine to a total solution for managing websites — which is kind of what Typo did when becoming Publify. Except that instead of going for a general website solution, they decided to one-up all of them. From the same release notes of the version that changed the RSS feed defaults:

Hugo 0.20 introduces the powerful and long sought after feature Custom Output Formats; Hugo isn’t just that “static HTML with an added RSS feed” anymore. Say hello to calendars, e-book formats, Google AMP, and JSON search indexes, to name a few ( #2828 ).

Why would you want to build e-book formats and calendars with the same tool you use to build a blog? Sure, if it were actually practical I could possibly make Autotools Mythbuster use this, but I somehow doubt it would have enough support for what I want to get out of the output, so I don't even want to consider that for now. All in all, it looks like they are widening the target field a little too much.

Anyway, I went and reverted the changes in my local build of Hugo. I ended up giving up on that, by the way, and just applied a local template replacement instead, since that way I could also re-introduce another fix I needed for the RSS that was not merged upstream (the ability to put the taxonomy data into the feed, so you can use NewsBlur's intelligence trainer to filter out some of my blog's content). Of course, maintaining a forked copy of the built-in template also means that it can break when I update, if they decide it should be FeedLink next time around.

Then I pushed the new version, including the updated donations page – which is not redirected from the old one yet, I'm still working on that – and stopped looking too closely into it. I did this (purposefully) in the 3-day break between two posts, so that if something broke I would have time to fix it, and it looked like everything was alright.

Until I noticed that I had somehow flooded Planet Gentoo with a bunch of posts dating as far back as 2006! And someone pinged me on Hangouts for the same reason. So I rolled back to the old version (which did not solve the flooding, unfortunately), regenerated, and started digging into what happened.

In the version of Hugo I used originally, the RSS feeds were fixed to 15 items. This is a perfectly reasonable default for a blog, as I didn't go anywhere near it even at the time I was spending more time blogging than sleeping. But since Hugo is no longer aiming to be a blog engine, that's not enough. "News" sites (and I use the term in quotes, because too many of those are actually either aggregators of other things, outright scammers, or fake news sites) would have many more than that per day, so 15 is clearly not a good option for them. So in Hugo 0.19 (the version before the one that changed to use summaries), this change can be found:

Make RSS item limit configurable #3035

This is reasonable. The default is kept at 15, but now you can change it in the configuration file to whatever you want it to be, be it 20, 50, or 100.
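
For reference, pinning the limit explicitly in the site configuration looks roughly like this (a minimal sketch; only the rssLimit line is the point, the other values are illustrative):

```toml
# config.toml — illustrative values, except for rssLimit,
# the option introduced by #3035.
baseURL = "https://blog.example.com/"
title   = "Example Blog"

# Cap generated RSS feeds at 15 items; setting it explicitly
# guards against the default changing underneath you on update.
rssLimit = 15
```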

What I did not notice at that point, was from the following version:

Raise the default rssLimit #3145

That still sounds good, no? It raises the limit. To what?

hugolib: Default rssLimit to unlimited

Of course this is perfectly fine for small websites that have a hundred or two pages. But this blog counts over 2400 articles, written over the span of 12 years (I have recovered a number of blog posts from previous platforms, and I'm still always looking to see if I can find the backups with the posts of my 15-year-old self). It ended up generating a 12MB RSS feed with every single page published up to then.

So what am I doing now? That, unfortunately, I'm not sure of. This is the kind of bad upgrade path that frustrated the heck out of me with Typo. Unfortunately, the only serious alternative I know of is WordPress, and that still does not support Markdown unless you use a strange combination of plugins, and I don't even want to get into that.

I am tempted to see what's out there for Ruby-based blog engines, although at this point I'm ready to pay for a solution that works natively on AWS EC2 [1], to avoid having to manage it myself. I would like to be able to edit posts without needing a console and a git client, and I would like an integrated view of the comments, instead of relying on Disqus [2], which at least a few people hate, and I don't particularly enjoy either.

For now, I guess I'll have to be extra careful when I want to update Hugo. But at least I should be able to avoid breaking things this easily again, as I'll be checking the output before and after the change.

Update (2017-08-14): it looks like this blog post got picked up by the Internet's own peanut gallery, who not only don't seem to understand why I'm complaining here (how many SEOs are there?), but also appear to have started suggesting, with more or less care, a number of other options. I say "more or less" because some are repeats, or aimed at solving different problems than mine. There are a few interesting ones I may spend some time looking into, either this week or the next, while on the road.

Since this post is over a month old by now, I have not been idle, and have instead started trying out WordPress, with not particularly stunning results either. I am still leaning towards that option, though, because WordPress has the best upgrade guarantees (as long as you're not using all kinds of plugins), and it solves the reliance on Disqus by having proper commenting support.


  1. Update (2017-08-14): I mean running the stack, rather than pushing and storing to S3. If there's a pre-canned EC2 instance I can install and run, that'll be about enough for me.
  2. Don't even try suggesting isso. Maybe my comments are not big data, but at 2400+ blog posts, I don't really want something single-threaded that accesses a SQLite file!

My horrible experience with Three Ireland

I have not ranted about the ineptitude of companies for a while, but this time I have to go back to it. Most of the people who follow me on Twitter are probably already fully aware of what's going on, so if you want to skip reading this, feel free.

When I moved to Ireland in 2013, I quickly evaluated the available mobile providers and decided to become a customer of Three Ireland. I was already a customer of Three back in Italy, and they had the same offer here as they had there, which involved the ability to be "Three like at home": roaming on foreign Three networks for free, using the same allowance of calls and data that you have in your own country. Since my expectation was to go home more often than I actually did, roaming to Three Italy sounded like a good deal.

Fast forward four years, and I ended up having to give up and move to a new provider altogether. This all came to a head because Three Ireland took effectively four months to fix up my account so I could actually use it, but let's take it one step at a time.

Back in January this year, my Tesco credit card got used fraudulently. Given I have been using Revolut for most of my recent trips to the States, I can narrow down where my card was skimmed to one of three places, and the MIT Coop store looks like the most likely culprit. That is a different story, though; luckily, Tesco Bank managed to catch the activity right away, cancelled my card and issued me a new one. This is something I talked about previously.

The main problem was migrating whatever was still attached to that card onto a different one. I managed to convert most of my automated debits onto my Ulster Bank MasterCard (except Vodafone Italy, that’s a longer story), but then I hit a snag. My Three Ireland account was set up to auto-top-up from my Tesco Bank card €20 every month. This was enough to enable the “All you can eat data” offer, which gave me virtually unlimited data in Ireland, UK, Italy, and a few other countries. Unfortunately when I went to try editing my card, their management webapp (My3) started throwing errors at me.

Or rather, not even throwing errors: whenever I went to list my payment cards to remove the now-cancelled one, it would send me back to the service's homepage. So I called them (and I'll remind you, this was January) to ask if they could look into it, and advised them that I wouldn't be able to take calls because I was about to leave for the USA.

The problem was clearly not solved when I got back to Ireland, so I called them again, and was told I would be contacted by their tech support with an update. They called me, of course always at awkward times, and the first thing they asked for was a screenshot of the error I was shown, except I was shown no error. So I had to go back and forth with them a couple of times, both on the phone and over Twitter (both publicly and in direct messages).

At some point during this exchange they asked me for my password. Now, I use LastPass, so the password is not actually sensitive information by itself, but you would expect them to have built something that lets them act as one of their customers for debugging purposes, or at least to be able to override the password and just ask me to change it afterwards. Since the second auto-top-up had failed and required me to make a manual payment, I decided to give up, took screenshots of both the loading page and the following landing page, and sent them over as requested.

An aside here: the reason these auto-top-ups are important is that without them, you get charged for every megabyte you use. And you don't get any notification that your all-you-can-eat expired; you only get a notification after you have spent between €5 and €10 on data, as that's what the law requires. So if the auto-top-up fails, you just end up burning through your credit. Since I used to spend that credit on Google Play instead (particularly to pay for Google Play Music All Access — my, what a mouthful!), this was not cool.

By the end of March the third auto-top-up failed, and I ended up wasting €15 by not noticing it. I called them again, and I managed to speak to the only person in this whole ordeal who actually treated me decently. She found the ticket closed because they had not received my screenshots, so she asked me to send them directly to her address, and she attached them to the ticket herself. This reopened the ticket, but turned out not to help.

At this point I had also topped up the €130 that was required to request an unlock code for my Sony Xperia XA phone, so I decided to request that in parallel while still fighting to be able to configure my payment cards. Since the phone is a Sony, the unlock code comes directly from them, and Three advises it is going to take up to 21 working days. When I sent the request, I got an email back telling me the unlock request was not successful, and to contact customer support. Since I was already bothering them on Twitter, I did so there, and they reassured me that they had taken care of it and sent the request through.

Also, this time I gave up and gave them my password, too. Which became even funnier, because as I was dictating it to them I got to "ampersand" and they replied "No, that's impossible, it's not a valid character for the password!" As it happens, it is indeed not a valid character now; it was when I set my password. I found this out after they fixed the problem, because of course by then I wanted to change my password, and LastPass generated another one with the & character.

It took another month for them to finally figure out the problem, and another three or four requests for screenshots, despite them knowing my password. And a couple more times they asked me to confirm my email address, despite it already being in the system and all. But at least that part got fixed.

Now, remember the unlock code request above? 21 working days in most cases means around a month. So a month after my unlock code request I called them, and they informed me that the 21 working days would expire the next day, a Friday; the reason being Easter and the bank holidays, which reduced the number of working days in the month. Fair enough; I still asked them what was going to happen if the 21-day promise was breached, and the guy on the phone denied it was even possible. Of course, the day after I got to chat with them again, and they realised that there had been no update whatsoever when there should have been at least one.

They decided to request an urgent unlock, since on the Thursday I would be leaving for China, and they promised me the unlock code would be there by Monday. It goes without saying that it didn't work. When I called on Monday they told me that only Sony can provide the unlock, and since it was a long weekend they were not going to answer until at least the day after (May 1st was a bank holiday, too). At this point I was pissed and asked to speak with a manager.

Unfortunately, the person on the phone was not acting human, but rather like one of those call-centre-script drones, and not only kept telling me that they had nothing personal against me (which I did not care whether they did, to be honest), but refused to redirect me to a manager when I pointed out that this was ludicrous after fighting four months to get the other problem solved. They kept saying that since the ticket was closed, there was nothing they could talk to me about. They also insisted that since the unlock code hadn't arrived, they couldn't even offer me a trade-in with an unlocked phone, as that is only available if the unlock code fails to work.

I ended up having to buy myself a new phone, because I could not risk going to China with a locked phone again. That turned out to be an interesting experience, as it looks like in Ireland the only places to buy unlocked phones are either corner shops selling Chinese phones, or Argos. I ended up buying an Xperia X from Argos, and I'm very happy with the result, although I had not intended to spend that money. But that's a story for another day, too. Of course, the unlock code arrived the day after I bought my new phone, or should I say the day after I gave up on Three Ireland and moved to Tesco Mobile.

Because at that point, the drone had got me so angry that I decided to just spend all of my credit (minus €20, because I hit the usage limit) buying movies and books on Google Play, and when I picked up the phone on Tuesday, I also picked up a SIM for Tesco Mobile. I found out that MNP (mobile number portability) in Ireland takes less than an hour and just involves a couple of confirmation codes, rather than having to speak with people and fill in forms. And I'm indeed happy on Tesco Mobile right now.

Why am I so riled up? Because I think Three Ireland lost a big opportunity to keep a customer the moment I expressed my dissatisfaction with the service and with the lack of an unlock code. They could have offered me a trade-in of the current phone. They could have given me back the credit I spent because of their issue. They could have even offered me a new phone, any new phone, locked to their network, to make it harder for me to leave them. Instead they went down the road of saying that since the problem had eventually been solved, there was never any problem at all.

I find this especially stupid compared to the way Virgin Media and Sky Ireland seem to have it down to an art: when I called Sky to ask them if they had any better offer than Virgin, back when I used a TV service, they told me they couldn't do better on broadband, but they would offer me a lower price on the TV package so that I could unbundle it from Virgin. When I called Virgin to remove the TV package (because at the time they were going to increase the monthly fee), they offered to lower their price for a year to make it still more convenient for me.

Computers and popular media

In my (quite harsh, admittedly) review of Trojan Horse I pointed out a number of "WTF" moments that I had while reading the book, most of which come down to technical details. On Twitter, the author was surprised that I preferred Zero Day – even considering my review of it wasn't much nicer – as both he and the reviewers found the new book more interesting.

What I'm afraid of is that, for the most part, the difference in reactions comes down to the difference between our day jobs. When you write something completely real about computers, most of the time it's tremendously boring. If you want some kind of thrill, you need to bump it a little bit out of "real" into "realistic" — often this is achieved by setting the book twenty minutes into the future (warning: the link points to TV Tropes!).

I've read and thoroughly enjoyed William Gibson's cyberpunk books, and even though they are definitely fiction, and fantasy, they give you the idea of "realism" by being far enough into the future that you can't expect them to be happening now.

Other times, even if it's not "real", something can be "realistic enough" if it doesn't seem entirely out there — just take a look at CSI or (better) NCIS. They obviously do a ton of stuff that's not really possible, but it still doesn't feel totally alien if you know a bit about technology — not now, but maybe in a matter of a dozen years. I honestly enjoy watching NCIS because, in the middle of the fantasy and the highly unlikely, they poke in enough "real" things that it feels like they are winking at me (and the other technical viewers), saying "yes, we know we exaggerate, but it would be boring otherwise". This includes, in the poorly disguised pilot for NCIS Los Angeles, the presence of a touch-screen device highly reminiscent of Microsoft Surface PixelSense (interesting how Microsoft re-branded their old product so they could use the name for a new one, isn't it?).

Then again, you get shows like CSI Miami, Numb3rs (damn them and their l33t speak!) and movies like Hackers (I know for many it's still a decent movie due to the presence of Angelina Jolie… I don't like her, so it doesn't even have that redeeming quality for me). In these, computers are completely alien technology compared to what we know, and they do everything; but because they are explained as if they were actual things that could happen, many people believe that's the case. Think of it as a CSI Effect, where people expect you and me (the computer guys) to be gods on earth because they've seen other computer guys doing that in movies. You know what I mean, don't you?

Interestingly, I know of two other authors who were involved with computers before writing: Patricia Cornwell, who worked as a computer analyst (at a coroner's office, no less), and (at least I'm pretty sure I read about it before, although now I can't find suitable references) Jim Butcher. Their approaches to dealing with computers in their books are quite interesting.

The former has actually written proper technical talk into her books, to the point that I'm always positively surprised by what she can pull off while keeping the story enjoyable for people who have no clue whether what's written is real or not. The latter has taken a rather drastic approach: wizards destroy computers just by being present, so there is no computer or technical talk involved in the books at all — although the amount of popular culture and geeky stuff referenced in the books shows how much of a real geek Butcher is.

Russinovich's stance is, for the most part, akin to that of CSI in my view: he makes it generally realistic and then stretches it a bit to make it more interesting. This worked in Zero Day, even though at that point the stretch was that everything, including the Dreamliner, was running Windows XP (or Vista, or 7)… but in Trojan Horse, by trying to abandon the too-technical talk to focus more on the story, we ended up with the big WTFs I noted. More to the point, I don't see much of interest in the story as it unfolds in Trojan Horse, as I've said, so…

One’s DoS fix is one’s test breach (A Ruby Rant)

What do you know, it did indeed become a series!

Even though I'm leaving for Los Angeles again on Monday, March 5th, I'm still trying to package up the few dependencies I'm missing to be able to get Radiant 1.0 in tree. This became more important to me now that Radiant 1.0.0 has finally been released, even though it might turn out easier to just target 1.1.0 at some later point, given that it would then use Rails 3. Together with that, I'm trying to find out what I need to get Typo 6 running, to update this blog's engine under the hood.

Anyway, one of the dependencies I needed to package for Radiant 1.0 was delocalize, an interesting library for dealing with localised date and time formats (something I once had to hack together for a project myself, so I'm quite interested in it). Unfortunately, the version I committed to the tree is not the one I should have packaged, which would have been 0.2.6 or so, since Radiant 1.0 is still on Rails 2.3. Unfortunately, there is no git tag for 0.2.6, so I haven't packaged that one yet (and the .gem file only has some of the needed test files, not all of them — it's the usual issue: why do you package the test files in the .gem file if you do not package the test data?), but I'm also trying to find out why the code is failing on Ruby 1.9.

Upstream is collaborative and responsive, but like many others is now more focused on Ruby 1.9 than on 1.8. What does this mean? Well, one of the recent changes to most programming languages out there involves changing the way keys are mangled in hash-based storage objects (such as Ruby's Hash); this changed the order in which Ruby 1.8 reports the content of those objects (Ruby 1.9 has ordered hashes, which means they don't appear to have changed), which caused headaches for both me and Hans for quite a while.
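
To make the breakage concrete, here is a minimal sketch (not from delocalize itself) of the kind of test assumption the ordering change invalidates:

```ruby
# Ruby 1.9 preserves insertion order in Hash; Ruby 1.8 makes no such
# guarantee, so any test comparing against a literal key ordering is
# fragile on 1.8.
h = { "b" => 2, "a" => 1, "c" => 3 }

h.keys == ["b", "a", "c"]        # passes on 1.9, may fail on 1.8

# An order-insensitive assertion behaves the same on both:
h.keys.sort == ["a", "b", "c"]   # => true everywhere
```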

This shows just one of the many reasons why we might have to step on the throttle and try to get Ruby 1.9 stable and usable soon, possibly even before Ruby 1.8 falls out of security updates, which is scheduled for June — that was what I promised in the FOSDEM talk, and I'm striving to make sure we deliver.

Gems make it a battle between the developer and the packager

It is definitely not a coincidence that whenever I have to dive into Gentoo Ruby packaging, I end up writing a long series of articles for my blog that should have the tag "Rant" attached, until I finally decide that it's not worth it and I should rather do something else.

The problem is that, as I have said many times before (and I guess the Debian Ruby team agrees), the whole design of RubyGems makes it very difficult to package gems properly, and at the same time provides developers with enough concepts to make the packaging even trickier than it would be merely due to the format.

As the title says, for one reason or another, RubyGems's main accomplishment is to pit extensions' developers and distributions' packagers against each other, with the former group insisting on doing things "fun", and the latter on doing things "right". I guess most of the members of the former group also never tried managing a long-term deployment of their application outside of things like Heroku (which is paid to take care of that).

And before somebody tells me I'm being mean by painting the developers as petty with their concept of fun: it's not my fault if, in the space of an hour after tweeting a shorter version of the first paragraph of this post, two people told me that "development is fun"… I'm afraid for most people that's what matters: it being fun, not reliable or solid…

At any rate… even though, as we speak, nobody has expressed interest (via Flattr) in the packaging of the Ruby MongoDB driver that I posted about yesterday, I started looking into it (mostly because I'm doing another computer recovery for a customer, and thus had some free time on my hands while I waited for antivirus scans to complete, dd_rescue to copy data over, and so on and so forth).

I was able to get some basic gems for bson and mongo working, which were part of the hydra repository I noted, but the problems started when I looked into plucky, the "thin layer" used by the actual ORM. It is not surprising that this gem is also "neutered" to the point of being useless for Gentoo packaging requirements, but there are more issues. First of all, it required one more totally new gem to be packaged – log_buddy, which also required some fixes – a dependency that is not listed on the RubyGems website (which is proper, if you consider that the tests are not executable from the gem file); but most importantly, it relied on the matchy gem.

This is something I have already had to deal with, as it was in another long list of dependencies last year or the year before (I honestly forget). This gem is interesting: while the package is dev-ruby/matchy, it was only ever available as person-specific gems on Gemcutter: jnunemaker-matchy and mcmire-matchy. The former is the original (0.4.0), while the latter is a fork that fixed a few issues; beyond those, there is the main problem: jnunemaker-matchy is available neither as a tarball nor as a git tag.

For the package that originally required matchy for us (dev-ruby/crack), mcmire's fork worked quite well, and indeed it was just a matter of telling it to use the other gem. That's not the case for plucky: even though jnunemaker hasn't released any version of matchy in two years, plucky only works with his version of it. Which meant packaging that one as well, for now.

Did I tell you that mcmire's version works with Ruby 1.9, while jnunemaker's doesn't? No? Well, I'm telling you now. Just so you know, with 2012 almost upon us, this is a big deal.

And no, there is still no tag for 0.4.0, two years after its release. The code has stagnated since then.

Oh, and plucky's tests will fail depending on how Ruby decides to sort a Hash's keys array; array comparison in Ruby is (obviously) ordered.

Then you look at the actual mongo_mapper gem, the leaf of the whole tree… and you find out that running the tests without Bundler pinning the dependencies is actually impossible (due to the three versions of i18n that we have to allow side-installation of). And the Gemfile, while never declaring a dependency on the official Mongo driver (it gets it through plucky), looks for bson_ext (the compiled C extension, which in Gentoo was not going to exist, since it's actually installed by the bson package itself — I'll have to create a fake gemspec for it just so the dependency can be satisfied).
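
A stub gemspec along these lines should be enough to satisfy that check (a hypothetical sketch; the version and metadata are illustrative, not what I actually committed):

```ruby
# Fake gemspec for bson_ext: it installs no code, it only tells RubyGems
# that the dependency is present, since the actual C extension is built
# and installed by the dev-ruby/bson package itself.
Gem::Specification.new do |s|
  s.name    = "bson_ext"
  s.version = "1.3.1"  # assumption: kept in lockstep with the packaged bson
  s.summary = "Stub: the C extension is provided by dev-ruby/bson on Gentoo"
  s.authors = ["Gentoo Ruby team"]
end
```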

And this brings us to a different problem as well: even though plucky was updated (to version 0.4.3) in November, it still requires the 1.3 series of the Mongo driver. Version 1.4.0 was released in September, and we're now at version 1.5.2.

And I haven't even mentioned the SystemTimer gem, which is declared a development requirement (but not by the gem itself, of course, since you're not supposed to run tests there) only for Ruby 1.8 (actually, only for mri18; what about Ruby EE?), and which lacks any indication of a homepage on the RubyGems website…

I love Ruby. I hate its development.

Keep on…

  • keep on ignoring requests coming from a QA team member;
  • keep on de-CCing QA on your bugs when said QA team member states that the fix is the wrong one, just saying "it's the wrong place, open a new bug" when your solution was decided right there;
  • keep on complaining when, out of literally thousands of bugs filed, less than 1% lack a log file;
  • keep on asking me not to use the f-word because it makes it bad for you to be associated with Planet Gentoo (but, on the other hand, feel no harm in being associated with people who repeatedly made Gentoo unusable for its users);
  • keep on spitting on me for pointing out that your unmask ideas are somewhere between reckless and totally stupid;
  • keep ignoring the bugs that are reported against your package;
  • keep bumping packages you don't maintain, without looking into the further QA-related issues, and without declaring yourself the maintainer;
  • keep repeating the same mistakes and, when asked to revise your attitude, play the "but he did it as well" card.

Keep it up this way, then look back to see if there is QA at all.

A Venetian story

I don't write in Italian often, and I know that most of the time it's to complain, but letting off steam every now and then does some good, I'd say. Today, in fact, I'd like to complain about the company that handles waste collection and the water supply in most of the province of Venice, including, of course, where I live: Veritas.

Last October, for a series of reasons too long to explain, I asked to take over the account as a customer, replacing my parents, in whose names the water and TIA waste bills were held, separately. At the same time I also decided to request direct bank debit, since otherwise the usual options apply: paying by wire transfer (€1 each time) or by postal payment slip (€1.20 and a trip to the post office, which of course is only open in the morning, or even more if you want to pay by credit card through the post office's website, assuming it works).

In any case, the request was taken on, and I realised it when the deposit paid by my grandfather (the original holder of the water bill, back when the company was still called Aspiv) was refunded. My bank's website also showed that the direct debit for utility payments had been accepted, so I stopped worrying about it.

The first bill arrived in March, and it showed as set up for direct debit, even though the bank details were missing from the invoice itself; I put that down to a printing error, and the same for the second bill. Until one day I found a threatening letter from Veritas ordering me to pay the two overdue bills. Well now! I checked, and indeed the payment had never been made. I called the next morning to find out what the problem was. "You did not provide your IBAN." "Look, I'm fairly sure I did, I have the paperwork right here." "No, look, if you had provided it we would have entered it, therefore you did not provide it."

After trying to make the not-exactly-kind operator understand that if my bank shows the direct debit as active, if their own system no longer sends me postal payment slips, and if the invoices state that I requested direct debit, then the problem must be upstream on their side, I decided to let it go and asked whether she could enter the IBAN for me. No, I had to send a fax (in 2010!)… or use the Sportello Online. Oh good, at least something that works, hopefully. And indeed, their site showed my IBAN as missing; since the IBAN only recently became mandatory for direct debits, I took that for a dedicated notice and filled in my IBAN without further doubts.

Until yesterday afternoon, when a new invoice from Veritas arrived, pointing out that the previous invoice had not been paid. And once again it carried the wording: "According to the instructions you provided, the amount will be debited, subject to collection, at: with ABI/CAB code /." Those exact words; everything was left blank.

Today I called again, and the operator, a bit kinder, but still irritated when I repeated that the problem was on their side, informed me that they had no record of any direct debit (RID) request from me (so why don't I even receive the payment slips? Who knows!), and that the only thing I could do was download the form from the website (which she insisted on pointing me to twice, even though I repeated that I was looking at the Sportello Online), fill it in, attach an identity document, and send it, of course, by fax.

In the meantime, I have two Veritas bills to pay by wire transfer, which is annoying enough in itself. At least they don't seem to have added late-payment interest (because if they had, the fact that the problem was theirs would definitely not have been brushed aside so easily).

And why all of this? Probably because some clerk saw fit to ignore the fact that the IBAN exists and filled in my file with only the ABI and CAB codes at the time of the takeover, which on one hand was enough to submit the direct debit request, but on the other, since the first invoice was issued in 2010, was not enough to actually carry out the debit.

And then the problem is supposedly that young Italians never move out of their parents' homes, eh? I wonder why.

You get what you ask for

I didn't want to blog about this, but it seems like I'm forced to.

Today, while I was reading Planet KDE on Google Reader, I read something quite worrisome: this blog post by Boudewijn Rempt. Worrisome because it seems to depict our ex-developer Andrea "lcars" Barisani as a software security newbie, and oCERT as a scam.

Now, I have worked with Andrea quite a bit in Gentoo, and oCERT is the security handler for the xine project as well as my first contact when I find interesting things. I wouldn't believe for an instant that Andrea would try to sneak a backdoor into the code. Still, it was worth checking, because I do have a responsibility for the xine project, so I don't think he'd be upset with me double-checking the facts.

I thus asked Robert about it, and he pointed out that our very own Ferris reported the failure! (And I would like to thank Ferris once again for always checking test failures, especially for security issues; it's not the first time he has caught something like that.) And as Boudewijn said in the post update, Marc Deslauriers from Kubuntu identified the problem in a change from upstream that was reverted.

Okay, so why am I writing this post? Well, I first protested in the blog comments to say that if the two of them have never heard of Andrea or oCERT, that's quite their problem, and that trusting upstream just because it's upstream is not always the right thing; as it turns out, it was an upstream mistake after all. I also noted that the post itself was FUD against Andrea and oCERT from a spiteful upstream that tried to put the blame on malice, and that if we are going to brand somebody untrustworthy because one patch, out of the many Andrea has coordinated before, fails, then we should start looking at every project's commits to see who introduced which security bug, and point them out as malicious too.

Interestingly enough, though, expecting a reply, I noticed that the post now has no comments at all. When I posted mine there was another in Andrea's defence, with the author replying that even if it was a mistake he was not to be trusted; one from me with a reply from another person; my reply to that; and a comment from "joe" pointing out it was upstream. That makes six comments, not zero. I checked a couple of times to make sure it wasn't a broken cached page, too.

I could think this was a bona fide mistake of the database, the blog admin panel or anything like that, but as my post's title says, you get what you ask for, and I am now to understand that Boudewijn Rempt has maliciously deleted the comments that pointed out he was just repeating a woeful reply full of FUD, and that he is, thus, not to be trusted.

And if I were to apply his own logic, the whole Krita project's code should not be trusted; it might be just one huge backdoor. But I know some of the people working on KOffice are pretty cool and nice guys, so I wouldn't want to say that. But sure as death, I'd wish that some of Boudewijn Rempt's peers in the KDE project would actually try to teach him that this type of post is just poison against the people who tried to help the community; maybe he'd be able to trust those. Or maybe he'll just feel angry that I'm reusing part of his own strategy against him.

You get what you ask for, as I said.

Some more about Summer of Code, on the Gentoo side.

First, a service announcement: I've taken a day off because yesterday I ended up in the ER after three days of headaches; don't worry, the (urgent) CAT scan showed everything is okay, and all the blood tests are fine too. I just have to consider this a signal telling me not to stress myself out too much.

Following yesterday's request about SoC, I started wondering about writing something on last year's handling of SoC in Gentoo. The part of me that tries not to be too critical of others told me not to write it, but I feel compelled to. I can already tell this is not going to be good PR for Gentoo, but for once we deserve it, and by "we" I include myself, as I didn't want to get involved with Google SoC last year (well, maybe it was better this way, as I would have disappeared as a mentor without notice, with what happened).

Up to yesterday, the Gentoo SoC page still showed the information about last year's SoC. Thanks to Joshua, the page is now updated to show the 2008 proposed projects for students to apply for; I invite all interested parties to look into it. More proposals and more mentors are most likely welcome, and if you're interested in one of these projects, it might be a good idea to start fleshing out some of the details already, so that when you submit the application you have a valid proposal ready.

The old page can still be accessed, even if it's not linked in the archives (even if the URL is not explicitly referenced, it's not a secret; anybody can find it in the viewcvs). By the time you read this entry, if you're not doing so right after I write it, it is possible that the page will have changed; if you can't understand what I am referring to, check 2007.xml revision 1.1.

Take a look at the page now. Yes, that's right: there is no information about which students took part in the SoC. There is no list of accepted projects, there is no result information. The page was not updated to reflect the outcomes of the SoC, nor even to reflect the actual projects that formed SoC last year.

I'm sorry, but this has a huge FAIL stamp all over it, in bold red characters. I don't want this to happen again. The first problem one can easily see is the very limited number of people the team consisted of last year; the page only refers to Christel and Alec. Luckily, this year we have quite a few more people involved in this task, which hopefully will avoid repeating that huge failure.

But what about the projects themselves? Luckily for us, Google archives the data, so you can find last year's projects on Gentoo's SoC 2007 page at Google. Unfortunately, when the time came to review the results of these projects, I was unavailable (remember, I didn't really come back as a developer until mid-to-end October 2007; while it wasn't my fault, I still feel sorry for having been unable to help the process). I suppose I would be able to judge whether a project brought good results by having heard about its completion after the fact; if I never heard of it again, I would suppose the results were pretty shaky…

  • the Collective Maintenance project I really don't see any trace of around; failed, I'd say;
  • BaseGUI I also didn't find any trace of; failed too. Sorry Luis, I'm not trying to pick on you, but you're the first one in the order of the list who is also a Gentoo developer, and this detracts from the idea that Gentoo developers have better chances of completing something for SoC;
  • GNAP cross compile support (and the same applies to the other GNAP project, as well as SCIRE): I can't tell about this; I admit GNAP is way out of my usual league of competence, so I'd like to ask the developers involved for a status update on these three projects; a post on Gentoo Planet would be appreciated;
  • archfs: heard nothing about this either;
  • equizApp: I'm not sure about this either, as I haven't heard it named in quite a while, but since I remember Betelgeuse discussing changes to the recruitment process again, I don't suppose this was successful either;
  • Python bindings for Paludis: this is probably the only project I heard of when I came back. The interesting notes are that this is a project that is not, strictly speaking, related to Gentoo (or at least to the Gentoo Foundation, which is the mentoring organisation) – Paludis is explicitly an external project – and the developer was already a Gentoo developer.

What does this say? Well, at first glance one might actually argue that this plays against my request: the only project that completed successfully was one handled by a Gentoo developer. But I don't think the main point is to judge by the rate of completed projects. As I said, my opinion of SoC is that it is helpful for finding good new developers.

I think this was a total failure in that regard. The year before, we had even two more applications accepted, and quite a few of those had results that can actually be looked at. In that edition, though, we had a lot more students who were developers already (and not even all of them succeeded, anyway). I don't think the better results should be attributed to the higher dev-to-newbie ratio, but rather to the higher feasibility of the accepted proposals.

SoC is great: it gives a student three paid months to work on something they wouldn't have worked on otherwise. But they are three months. People have to understand that there will be no more checks after those months, and it comes down to either implementing something that can be done and completed in that time, or showing intent to continue afterwards. That is also an important lesson to learn.

In the 2006 edition, the two Gentoo/FreeBSD-related projects were somewhat active, and the students seemed to stay around for a while even after the SoC finished. They didn't become devs, but I attribute that also to the not-very-inspiring environment that the Gentoo/BSD project has become again (and that is also my fault; I want to take the project back as soon as I'm a bit more free than I am now).

As for the people from the 2007 edition, I don't remember the names of any of the students (with peper's and araujo's exceptions, of course).

The projects, judging by the applications, were aiming quite high. Proposing to entirely reinvent the maintenance process of Gentoo seems like proposing to fix the Italian Parliament in a week (people who know Italian politics, this is your cue to laugh out loud). One slot was allocated to something that I can't see interesting Gentoo directly (archfs) and one to an external project (Peper's Python bindings for Paludis).

But how should one judge the intent of the students to stay around after SoC ends? Well, I admit that is the tricky part. When I referred to "new blood" I didn't specifically mean "somebody that has never seen, touched or contributed to Gentoo", but rather "somebody that hasn't been an active Gentoo developer"; people who have been active in the community, even by just reporting bugs, would probably stick around longer than people who never used Gentoo. FFmpeg solves the problem in a slightly different way, as you can see on their SoC wiki page: they set admission tasks that a student has to complete before their application is considered for acceptance; if somebody is in it only for the money, and counts on disappearing, they'll likely look for something else and abstain from wasting time and slots.

I don't think it is feasible to make students pass an ebuild quiz, for instance, but it might be worth interviewing them, or at least asking them "Since when have you been using Gentoo?". Not that using Gentoo has to be mandatory, but it certainly would allow us to prioritise people who have at least a clue about what they are proposing.

I'm sorry, Luis, for picking on you again; it's just a matter of who is at hand at the moment. You're not really an active developer: CIA stats for araujo count fewer than 400 commits, and they are shared with a FreeBSD developer who goes by the same username. So already being a developer is not, by itself, what makes one more active during the SoC timeframe.

I know this post is really a rant, I know it's not very constructive, and I'm sorry I can't be more constructive for SoC than by volunteering to be a mentor this time and pointing out the mistakes I think should be avoided. I can't go back in time, and I couldn't really help last year to clean up the mess after the fact.

I do hope that this year there won't be the same issues again. Having a stricter selection of students is, IMHO, a very important thing to do; that, and being able to judge the feasibility of a project. So, as a suggestion to anybody who wants to apply, I can say this: do the work beforehand. Even before submitting the application, try to flesh out the details, try to understand what is involved. Don't be afraid to ask questions, especially about the structure of a piece of software or the method used to maintain it.

Read the blogs, read the documentation and check the commit history of the project. Check who proposed a given project, and ask them for more details; I know I will be quite happy to give them to you (and maybe to others, through a blog). [Service request for Joshua and the other people working on SoC: if you're reading this, please add a contact entry to the table, so that prospective students can contact whoever proposed a project if there is not a proper project page they can refer to, like for sandbox.]

If you start early, at least investigating what you want to do, it is more likely that we'll believe your intention to continue working with the project after the SoC months.

On a different note, I wish that Jakub, some #gentoo operators and some of the forum moderators could volunteer as SoC admins. If you don't know what an admin is, it is a person who works for the mentoring organisation, who is not going to mentor a student directly, but who is involved in the process of selecting valid applications. I ask this because you are part of the user-facing interface of Gentoo, and you tend to know who is active (in Bugzilla, IRC and the forums). And previous commitment to Gentoo is another important piece of information for choosing wisely which applications to accept.

How could you force volunteers to do what they don't want to?

It seems like a lot of people don't grasp the (to me) easy concept that you can't really force volunteers to do something they don't want to do.

Gentoo is currently maintained by volunteers. I think this works quite fine for the technical part: we have improved our quality over the years, there are improvements to the tree every day, and since last night you can also find KDE 4 in the tree, ready to be used (if you're really daring).

What most of us have a problem with is public relations, and I admitted this before too. Donnie already wrote about this, and he’s an expert in the field so I’ll add nothing to what he wrote. I also think we should get more documentation in place, especially for the development parts; I tried to do my best with my maintainer’s guides, but I admit it’s a big time commitment, even more than the actual development.

People's complaints that the Gentoo Foundation trouble caused the GWN to disappear and the 2007.1 release to be skipped are totally outside the realm of sanity. Those two failures are technical failures; there's no way that the presence of the Gentoo Foundation would have changed anything in the way the GWN and releases are handled.

As I said before, I care very little about the Gentoo Foundation. I don't care whether it exists or not, because it doesn't change the way I'm going to continue my work as a developer; it is just a bureaucratic entity used to handle donations, copyrights and other menial tasks which have little or nothing to do with the technical side of Gentoo, which is what I have at heart.

So, as long as the Gentoo Foundation is only there to provide a means to handle copyrights and trademarks, and to take care of the donations Gentoo receives as a whole, I have nothing against a single person taking care of it. I have no problem with Daniel taking it over. To me, it's just the same whether it exists or not, and I think this is true for the majority of developers too, otherwise we wouldn't have ended up without enough candidates for the new trustees, and the whole problem with the Foundation lapsing wouldn't have come up in the first place. Another option is to get the project handled by the SFC or SPI; I don't care which one, and as long as they take the legal details away from the developers who don't care about them, I'm fine.

Again, as I said, what I don't want to see is the Gentoo Foundation taking a technical role. It's not there for that; it's there to keep the legal details away from the techs, so that the technical details remain the only thing the developers need to be aware of.

I'm not saying there's no space for improvement. I think the perfect setup would have three main "departments" in Gentoo: the Foundation handling the legal side; the developers doing what they do best, developing (be it ebuilds, documentation, or infrastructure); and Public Relations, which takes care of keeping users informed. Tech people aren't the best people you can find to take care of public relations, that's for sure. Even a comparatively smaller project like Amarok has tech and PR separated: the Amarok team takes care of writing the software, while the Rokymotion team takes care of public relations, fundraising and so on.

I do think the homepage needs a bit of an overhaul. We had to give up on the site restyle once last year, as Curtis disappeared; we might as well try a new restyle now. It would probably be a good idea to put at least the headlines from Planet Gentoo on the site, as Planet Gentoo is for most developers the main way to reach users, and we should focus a lot more on that in my opinion.

I'm not saying that Daniel should stay away from Gentoo as a whole. As far as I'm concerned, he's just not the kind of guy I'd like to take orders from, as in my opinion he's not the best manager of people I've seen. And mind you, I'm not a good manager of people myself – I know what I'm doing wrong on that front, and I don't care enough to change it, as I don't need to be a manager of people – so I can see what he was doing wrong when he returned for a day last year.

I don't care that he started the project; to me, he's just a person like any other once he left it. I don't expect his personal technical views to be treated any differently from mine, Mike's or Ferdy's, as I expect technical views to be judged only on their technical basis.

I don't judge Daniel for what he did when he left the project the first time; as one of the users pointed out in the comments to my last blog post, I wasn't around at the time. I can judge by what people say about him, especially people whose opinion I trust. But even ignoring what I've been told, I don't need any information about that to judge whether I want him as tech lead or not. I have already said what I think on that, and I won't repeat it.

As for judging based on someone else's opinion, I don't think it's a bad thing. We all tend to do that to some extent, as we delegate representatives to take action on things, at least in a good part of the world, by electing them. When I vote, say, for Antonio Di Pietro (an Italian politician of the Italia dei Valori party, which I voted for in the last political elections of April 2006 – yeah, I'm a commie :P), I'm accepting his opinions on matters I won't have my direct say on. The same happens when I accept the opinions of an older (in terms of development time) colleague on Daniel. I have no trouble with that, when I have faith in that colleague's capacity to judge things on a given plane (technical or otherwise).

It's more or less the same reason why, at the last council meeting, I was able to leave before the end, trusting Donnie to say and do the right thing about the CoC. I know he's way better than me at that stuff, and I trust him to make the right choice. And I won't be pissed off if I don't personally like the outcome (up to now I have no problem, by the way), for two reasons: he probably has good reasons for what he does, and since I decided by myself to leave every detail of that up to him, I can't complain.

As for what concerns developers ignoring users' wishes about Daniel's return… I think those users have no idea of what it takes to be a developer, plain and simple, and most times would wish for something that would actually not work. [Edit: I was made aware that this phrase sounds a bit harsh and overgeneralised, so I added the "those" part to it.]

For instance, I'm sure that a lot of users would like us to do 24/7 maintenance of the tree, taking at most a couple of hours to get a new version of a package into Portage. Sure, it would be nice, but there's the obstacle that most developers have a job, and Gentoo is barely part of it, if at all. Even I, being mostly a part-time contract worker, and being at home in front of my box most of the day almost every day, tend to have something else to take care of sometimes.

Users don't really know what will happen if Daniel comes back – heck, most of us devs don't know either! – but they are willing to push for it just because it would be a change, and they feel a change is needed; if the outcome is not what they wanted, they'll probably scream to get rid of him again, or simply decide Gentoo is not what they want anymore and go use something else, never mind whether the developers who committed a lot of time to the project feel the failure on their shoulders.

I don't think a radical change is really needed; I do think we need to change a little the way we interact with users, though: developers aren't good at volunteering information. It's not a stereotype or a cliché; it's as real as this blog. The problem is that technically-minded people tend not to go into the details of what they find obvious. This is probably why there is little documentation around for a lot of projects (xine, HAL, …), even when the users would need it. I'm afraid I also suffer from that issue, even if I've been trying to rid myself of it for at least a couple of years. Since the developers can't volunteer information, we need some staffers to hang around developers and ask them what they are doing, how stuff is proceeding, and so on. I for one wouldn't mind if somebody asked me "Hey, how's it going with PulseAudio maintenance? Did you add the glib USE flag yet? Any change ready to be done in the next weeks? Need help with anything?", to then publish an article in the GMN.

As Steve said already: my email is there for you to use, so feel free to enquire about what I'm going to do for something you care about. I tend to write a lot in my blog about what I'm doing, though, so the only thing I'd ask is that you first make sure you have read my last two blog entries on the topic you're going to ask about, before asking.